Controlling liquid splash on superhydrophobic surfaces by a vesicle surfactant
By adding a small amount of a vesicle surfactant, “unavoidable” splashing is considerably reduced on superhydrophobic surfaces.
INTRODUCTION
Pesticide spraying is "hard" agricultural work, where more than 50% of agrochemicals are lost because of undesired bouncing and splashing on crop leaves with "waterproof" properties (1)(2)(3). Superhydrophobic leaves are ubiquitous among natural plants, and they usually derive their nonwetting properties from waxy features on their surface. Owing to the combination of a low-surface-energy chemical composition and a microstructured/nanostructured surface morphology, superhydrophobic surfaces have been shown to make droplets, even those with surfactant additives (4), bounce and splash within a shortened contact time (5)(6)(7)(8)(9)(10)(11)(12)(13)(14)(15)(16)(17). This increases the difficulty of droplet retention (18), thus threatening ecological security and human safety: If pesticides cannot be properly deposited on crops, then pests might not be controlled and plant injury might occur (1). Redundant pesticides might contribute to soil, air, or water pollution, and human health might be negatively affected.
Although adding surfactants to the sprayed liquid is considered a simple way to reduce surface tension and improve drop retention on a smooth hydrophobic surface, surfactant liquids still slide or bounce off a tilted hydrophobic surface, let alone reduce liquid splashing on the superhydrophobic surface (19). In addition, according to the Kelvin-Helmholtz instability, the wave number k_max equals 2ρ_aU²/(3γ), where ρ_a is the air density, γ is the fluid surface tension, and U is the relative velocity between gas and liquid. Therefore, reduced surface tension was believed to play a major role in the increased instability of the spilt droplet (20,21). Enhancing liquid deposition on the superhydrophobic surface is complex and difficult work: selected surfactants need to diffuse from the bulk to the newly created interfaces quickly, because the contact time is typically only several milliseconds, and they need to decrease the retraction velocity to counteract the instability caused by the reduced surface tension. Several studies have demonstrated the difficulty of surfactant drop deposition on the superhydrophobic surface. On the basis of these results, it would seem that surfactant additives should not be capable of reducing high-speed liquid splashing on the superhydrophobic surface.
However, we found that this kind of "unavoidable" bouncing or splashing behavior was greatly inhibited or even completely eliminated on the superhydrophobic surface at varied angles by adding a small amount of a double-chain vesicle surfactant. Unlike the micelle surfactants discussed previously, the vesicle surfactant additive demonstrated here was able to diffuse from the bulk to the newly formed interfaces during the deformation process, enter the microstructured/nanostructured morphology, confine the motion of the liquid with the help of a wettability transition during the first inertial spreading stage of ~2 ms, decrease the retraction velocity to nearly zero, and reduce bouncing and splashing (Fig. 1). For the first time, we have found that a vesicle surfactant can markedly alter the surface wettability during the impact process. By taking advantage of this wettability transition, not only are liquid splashing and retraction completely suppressed but the maximum wetting area is also maintained after the impact process. This behavior is different from that observed in previous research, in which a micelle surfactant drop could bounce off the superhydrophobic surface (18), viscosity induced the splashing reduction (1), a surfactant drop partly reduced the liquid retraction on the hydrophobic surface (22)(23)(24), and nanoparticles or surface charges suppressed droplet rebound on the superhydrophobic surface (19,25). Figure 1 shows the impacts of droplets of pure water and aqueous solutions containing the same mass fraction of the micelle surfactant sodium dodecyl sulfate (SDS), trisiloxane molecules (TSs), and the vesicle surfactant sodium bis(2-ethylhexyl) sulfosuccinate [Aerosol OT (AOT)] on a Brassica oleracea L. leaf (Fig. 1A) at a velocity of 2.53 ± 0.11 m s−1, recorded using high-speed cameras (movie S1). In the experiment, the impact behavior was characterized by measuring more than 30 different positions of the same sample and surfactant solutions. The B. oleracea L. leaf surface, characterized by microstructured/nanostructured morphologies (Fig. 1, B and C), shows a water contact angle of 156.2 ± 4.3° (Fig. 1D), confirming superhydrophobicity. The impacting water droplet, after reaching its maximum spreading at ~1.8 ms, breaks up into numerous droplets (Fig. 1E). By adding a certain amount of surfactant, the receding splash can be inhibited to varying degrees (movie S1). Figure 1 (F and G) shows that aqueous droplets containing 1% SDS or 1% TSs partially reduced the receding splash but left several streams during retraction and finally broke up into several fragments. In contrast, by adding 1% AOT to the aqueous phase, the impacting drop first spread to a larger wetting area within 2 ms. Then, the liquid breakup and the receding laminar stream were substantially suppressed (Fig. 1H) even after the maximum spreading stage (movie S1). Finally, a large wetting area was achieved and maintained. Partial and scarce receding behavior at low-speed impact was found in the micelle and vesicle regions (fig. S1), respectively. Furthermore, on artificial superhydrophobic surfaces with varied structures and tilted angles, aqueous droplets containing 1% AOT spread properly (movies S2 to S4). Notably, this kind of liquid deposition behavior is in contrast to previous works, where surfactant drops have been shown to bounce off the superhydrophobic surface (18), although surfactants can help the liquid wet the hydrophobic surface (movie S5) (25).
AOT is therefore a unique surfactant that has a more pronounced effect than the others in controlling liquid deposition and in reducing unavoidable splash.
DISCUSSION
Compared with the mechanism of liquid deposition enhancement using polymer additives (1), surfactant additives cannot alter viscosity but can reduce the liquid's surface tension. Although surfactants can decrease the surface tension of the liquid, helping it spread on a hydrophobic surface under a low-speed impact (25), the reduction of surface tension also plays a major role in increasing instability and enhancing the droplet's splash (20,21). According to the Kelvin-Helmholtz instability, k_max = 2ρ_aU_r²/(3γ), the key to reducing instability is to lower the retraction velocity U_r, because the brief impact contact time leaves little opportunity for liquid droplets to wet the superhydrophobic surface (18). In our experiment, local pinning is observed for SDS (Fig. 1F) and TSs (Fig. 1G), and complete pinning is found for AOT in the peripheral area of maximum spreading (Fig. 1H), where the retraction velocity U_r slows to a low value, resulting in a small k_max. For the AOT drop, the motion of the spread liquid is greatly confined, leading to extremely low instability and thus retarding the splash (movie S1).
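To make the scaling explicit, consider the following illustrative estimate, which uses the formula above with representative values that are our assumptions rather than measurements from this work:

$$k_{\max} = \frac{2\rho_a U_r^2}{3\gamma}$$

Taking ρ_a ≈ 1.2 kg m−3 and γ ≈ 32 mN/m (the low DST reported here for 1% AOT), reducing U_r from 1 m s−1 to 0.1 m s−1 lowers k_max from about 25 m−1 to about 0.25 m−1, a hundredfold decrease, because k_max scales with the square of the retraction velocity.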
The exceptional molecular structure of AOT distinguishes it from the other two surfactants in reducing splash and enhancing liquid deposition. Cryo-TEM (transmission electron microscopy) imaging was used to test this assumption; samples were prepared by allowing a free-falling surfactant drop to impact a Cu mesh, followed by immersion in liquid nitrogen. This mimics the surfactant packing stage during the impact process. The significant differences among the surfactant aggregates are shown in Fig. 2 (A1 to C1): multilamellar vesicles were closely packed at the air/water interface for the AOT drop at a mass fraction of 1%, whereas micelles existed only randomly and loosely in the other two surfactant solutions at the same mass fraction. Compared with the molecular structures of SDS, TSs, and the previously reported surfactants used to reduce liquid bouncing (18,25), AOT has two alkyl chains and a relatively small hydrophilic head group. This particular molecular structure is the main reason for its compact and directed alignment and leads to the multilamellar vesicle structure (26). As shown in Fig. 2 (A2 to C2), among the three surfactants, the aqueous solution containing 1% AOT exhibits the lowest dynamic surface tension (DST) within a surface age of 80 ms. At the earliest bubble pressure measurement of ~10 ms, its DST decreases to a value as low as ~32 mN/m (Fig. 2C2), whereas both 1% SDS and 1% TSs have DSTs that begin at a high value of ~43 mN/m (Fig. 2, A2 and B2). Consistent with the diffusion coefficients obtained by ¹H nuclear magnetic resonance (NMR) spectrometry (fig. S2) and the dynamic contact angles (fig. S3), the DST results indicate that AOT diffuses fastest to the air/water interface and thus has the strongest ability to reduce the surface tension when new surfaces are created.
Besides reducing splashing on natural superhydrophobic leaves, AOT is the most effective of the three surfactants at inhibiting the bouncing and splashing of liquid drops on artificial superhydrophobic surfaces at both low (~1.2 m s−1) and high (~2.5 m s−1) impact speeds. The artificial superhydrophobic surfaces include a microstructured/nanostructured superhydrophobic surface composed of 20-nm hydrophobic SiO2 nanoparticle composites with a typical size and spacing of around 200 nm and a CuO nanosheet structured superhydrophobic surface with a typical size of about 3 to 6 µm in length and 200 to 600 nm in width. The water contact angles of these artificial superhydrophobic surfaces are 161.3 ± 0.5° and 159.1 ± 1.7°, respectively, which are much higher than those of the natural B. oleracea L. leaf. A fluorinated glass slide with a water contact angle of 112.8 ± 1.1° is also used for comparison. As shown in movie S5, although droplets containing SDS can properly deposit on a smooth hydrophobic surface, they barely reduce the rebound on a superhydrophobic surface, no matter how low or high the impact speed is, even for highly concentrated solutions. For TSs, as the concentration increases, the induction period shortens so that the low-speed impact behavior turns from bouncing to emission, whereas the drop still tends to rebound partially after a high-speed impact. In contrast, for AOT, the liquid droplets can deposit on the hydrophobic surface at a concentration as low as 0.1% and on any of the superhydrophobic surfaces at a concentration of 0.3% (movie S5).
A diagram is shown in Fig. 3 to explain how the receding splash can be substantially suppressed by AOT. Driven by inertia, the impacting liquid first spreads to a maximum diameter (27). As shown in the spreading state of the diagram, the liquid droplet experiences a large surface deformation during the high-speed impact, and the curved edge of the spreading drop is completely out of equilibrium when it reaches its maximum diameter. Then, surface tension acts on the liquid to retract the flow above the substrate. For water, the drop breaks up into multiple droplets during the receding state and splashes in the final state (Fig. 1E). For the surfactant drops, if the surfactant molecules cannot replenish the newly created air/water interface in time, typically when the DST is high, then the surface tension of the deformed drop cannot be uniform, and nonuniform receding behavior occurs (Fig. 3B). Examples can be found for the SDS drop (Fig. 1F), the TS drop (Fig. 1G), and the AOT drop in the micelle region (fig. S4). In contrast, if a surfactant with a low initial DST can effectively saturate the newly created surface within ~1.8 ms (corresponding to the spreading time) and maintain a homogeneous low surface tension at the air/liquid/solid interface, then the liquid can uniformly deposit on the superhydrophobic surface (Fig. 3C). As shown in Fig. 1H and fig. S4, a gentle and uniform receding contact line is obtained for AOT in the vesicle region. These results provide direct evidence for the role of AOT in controlling the receding splash.
The underlying mechanism for the abovementioned transient knockdown of the receding velocity at the pinning area can be ascribed to the wettability transition in the spreading phase. At the peripheral area of maximum spreading, the impacting water drop slides over air cushions trapped on or beneath the superhydrophobic surface, and it is difficult for the water to enter the nanostructures (Fig. 3A). The increased upward capillary force induced by the squeezed air entrapped in the nanostructures easily makes the water drop take off from the surface (28). Similar behavior is observed for the micelle surfactant drops (Fig. 3B). The final "floating" state of the micelle surfactant drop (the 0.1% AOT drop) indicates that micelle surfactant drops could not properly reverse the surface wettability, as can be seen from the cryo-SEM image in fig. S5. In contrast, the reduction of surface tension induced by the vesicle surfactant allows the liquid to move downward, reverses the capillary force, and enables an easier and deeper entry of the impacting drop into the nanostructure (side view of the spreading state in Fig. 3C). Cryo-SEM was used to confirm the reversal of surface wettability during impact, where the vesicle surfactant drop (1% AOT drop) is trapped between the gaps of the nanoneedles and fully wets the nanostructured superhydrophobic surface (fig. S5). The outward hydrophobic tails of the surfactants at the air/liquid interface act as bridges, connecting the drop to the nanostructure by hydrophobic force and changing the wettability of the superhydrophobic surface. Through this process, the surfactant droplets can be pinned, thus reducing the receding velocity via the wettability transition at the peripheral area of maximum spreading. As a result, the high-speed impacting AOT drops firmly and quickly deposit on the superhydrophobic surface.
The wettability transition at the central contact point is easier than at the peripheral area because capillary forces are overcome by inertial effects (29) in the high Weber number regime (We > 200). Both the water drop and the surfactant drop tend to become convex within the nanostructure of the superhydrophobic surface because of the downward hammer pressure and the dynamic pressure (8). However, the water repellency of the surface chemistry and the huge upward Laplace pressure induced by the squeezed air entrapped in the nanostructure rebound the water drop, as shown in Fig. 1E.
AOT also inhibits rebound and splash on tilted superhydrophobic surfaces. Superhydrophobic surfaces with tilt angles of 30°, 60°, and 75° were used. In the experiment, SDS showed little effect on liquid deposition within the tested concentration range (Fig. 2A3), although it is a good choice for inhibiting rebound on the hydrophobic surface, as shown in movie S5 (15, 16, 23). The TS drop shows impact behavior progressing from bouncing to emission to no rebound as the concentration increases from 0.01 to 1%, but it still rebounds partially on the oblique superhydrophobic surface (Fig. 2B3). The percentage of liquid that bounces off the surface was quantitatively measured with an analytical microbalance, and the impact processes of surfactant drops on horizontal and tilted superhydrophobic surfaces are shown in movie S3. Only AOT can suppress the rebound of aqueous droplets on both horizontal and oblique surfaces at a low mass fraction of 0.3% at any inclined angle (Fig. 2C3). Figure S1 depicts the impact behaviors of aqueous drops containing the AOT additive in three regions. At concentrations lower than the critical micelle concentration, complete bouncing occurs along with complete receding behavior. In the micelle region, partial receding of the contact line is accompanied by partial splashing, partial rebound, or no rebound. Scarce receding of the contact line takes place only in the vesicle region, indicating that both the rebound and the receding splash have been greatly inhibited.
CONCLUSION
In conclusion, although we have mainly focused on a specific microstructured/nanostructured superhydrophobic surface with varying tilt angles to elucidate the role of the vesicle surfactant (AOT) in inhibiting the receding splash, the scarce or gentle receding behavior can also be generalized to other artificially fabricated superhydrophobic surfaces and to other single-drop impact and spray processes at varied impact velocities (Fig. 4 and movies S6 to S9). In addition, AOT is shown to be a stable surfactant molecule (fig. S6). This work advances our understanding of how to control liquid deposition on superhydrophobic surfaces. Therefore, this approach can potentially be used to improve the efficiency of pesticide spraying and to reduce environmental pollution.
MATERIALS AND METHODS
Superhydrophobic silicon nanowire structures
A silicon wafer was cleaned with acetone, ethanol, and deionized water before it was immersed in a 5 weight % hydrofluoric acid solution for 1 min to remove the oxide layer. Then, it was put in a mixed solution of 4.8 M hydrofluoric acid and 0.5 mM silver nitrate for 1 min to deposit silver seeds on the substrate. It was subsequently immersed in a solution of 4.8 M hydrofluoric acid and 0.15 M hydrogen peroxide for 30 min for metal-assisted etching of the silicon, which produced the silicon nanowire structures. The typical length of the nanowires was ~1.2 µm, and the spacing between nanowires was about 50 nm. The as-prepared silicon plate was O2 plasma-treated at 150 W for 30 s and then put in a sealed container together with a piece of glass coated with 0.5 ml of (heptadecafluoro-1,1,2,2-tetradecyl)trimethoxysilane. The container was evacuated with a vacuum pump. After 3 hours at 80°C, the plate showed surface superhydrophobicity with a contact angle of 154.5 ± 3.2°.
Microstructured/nanostructured SiO2 surface
Commercial glass plates were cleaned with acetone, ethanol, and deionized water. In accordance with our previous research (30), the polymer-particle dispersion was prepared by adding 1 ml of Capstone ST-200 (DuPont Co.) solution and 1 g of hydrophobic fumed silica nanoparticles (average particle size of 14 nm; Evonik Degussa Co.) to 5 ml of acetone and 20 ml of ethanol. The solution was mixed and stirred for 30 min in a closed bottle. Precleaned glass plates were dipped into this solution at a speed of 80 mm s−1 and pulled out at a speed of 100 mm s−1. Owing to the rapid evaporation of the solvent, the semitransparent membrane quickly transformed into a white coating with extremely high water repellency. SEM observation showed that the aggregates of SiO2 nanoparticles had random features with a typical size and spacing of around 200 nm.
Superhydrophobic CuO nanosheets
The copper plate was first cleaned with acetone, ethanol, and purified water before modification. It was then immersed in a solution of 0.15 M ammonium persulfate and 2.5 M sodium hydroxide for 20 min and subsequently immersed in a 0.1 M perfluorodecanoic acid solution for 1 hour. The prepared CuO nanosheets were about 3 to 6 µm in length and 200 to 600 nm in width. After the copper plate was rinsed with distilled water and dried with N2, it showed high water repellency with a contact angle of 159.1 ± 1.7°.
Patterned pillar-structured silicon substrate
Silicon wafers (n-type, phosphorus-doped, <100>-oriented, 525 µm thick) were patterned using standard photolithography techniques. A thin layer of positive resist was spin-coated onto the silicon wafer at a rotational speed of 3000 rpm, followed by an ultraviolet (UV) exposure process (Karl Suss MA6). Then, the UV-exposed Si wafer was immersed in the resist developer to remove the exposed photoresist. Subsequently, deep reactive ion etching was performed. The micropillars have a diameter of 10 µm, a spacing of 10 µm, and a height of 20 µm. After the substrates were resist-stripped (Microposit Remover 1165), they were cleaned with ethanol and acetone before the chemical modification process was performed. The as-prepared silicon plate was O2 plasma-treated and then put in a sealed container together with a piece of glass coated with 0.5 ml of (heptadecafluoro-1,1,2,2-tetradecyl)trimethoxysilane for 2 hours at 80°C.
Characterization
Droplet deposition on the B. oleracea L. leaf surface and on the superhydrophobic substrates was recorded with an i-SPEED 3 (Olympus) high-speed camera from the oblique view and a FASTCAM Mini UX100 (Photron) from the side view. SEM images were obtained using a field-emission SEM at 10 kV (Hitachi S-4800). Cryogenic electron microscopy was carried out using a field-emission SEM (Hitachi S-4300) equipped with low-temperature equipment at 3 kV (cryo-electron microscope, Leica). Cryo-TEM images were obtained with FEI Tecnai Spirit BioTwin TEMs. Cryo-transfer holders were used to ensure low-temperature transfer and observation of frozen hydrated specimens. Contact angles were measured using a contact angle measurement device (OCA 20, DataPhysics), with 3-µl droplets removed dynamically. Each reported contact angle is an average of at least five independent measurements. The diffusion rates were determined with a Bruker AVANCE 600 NMR spectrometer. The DSTs were measured with an automatic maximum bubble pressure tensiometer (Krüss BP100), which characterizes the behavior of a surfactant over a wide speed range in a single, fully automatic measuring process and determines surface tension as a function of surface age. The measured time window ranges from 10 ms to 10 s. The capillary diameter is 0.210 mm.
fig. S6. NMR characterization of the stability of the AOT surfactant over time.
movie S1. Surfactant drop impact on the B. oleracea L. leaf.
movie S2. Influence from surface morphology and impact speed.
movie S3. Surfactant drop impact on microstructured/nanostructured superhydrophobic surfaces of different types, with varied concentrations, and at different tilt angles.
movie S4. Controlling liquid splash on superhydrophobic surfaces by a vesicle surfactant: comparison of the water drop's splashing and the 1% AOT drop's deposition on superhydrophobic surfaces with a long recording time.
movie S5. Surfactant drops' impact on hydrophobic surfaces, microstructured/nanostructured superhydrophobic surfaces, and nanostructured superhydrophobic surfaces at low and high speeds.
movie S6. Water spray impact on a horizontal superhydrophobic surface.
movie S7. Water spray impact on a tilted superhydrophobic surface.
movie S8. AOT (1%) spray impact on a horizontal superhydrophobic surface.
movie S9. AOT (1%) spray impact on a tilted superhydrophobic surface.
"year": 2017,
"sha1": "e057ce58dfc71a284afc53ce0349e783ff46239e",
"oa_license": "CCBYNC",
"oa_url": "https://advances.sciencemag.org/content/advances/3/3/e1602188.full.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e057ce58dfc71a284afc53ce0349e783ff46239e",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
Microencapsulated Iron Fortification and Flavor Development in Cheddar Cheese
This study was designed to examine the effects of microencapsulated iron fortification of Cheddar cheese, with L-ascorbic acid as an enhancer of iron bioavailability, on chemical and sensory characteristics. The coating material was PGMS, and ferric ammonium sulfate and L-ascorbic acid were selected as core materials. The highest microencapsulation efficiencies for iron and L-ascorbic acid were 72 and 94%, respectively, at a 5:1:50 (w/w/v) ratio of coating material to core material to distilled water. TBA absorbance was significantly lower in the microencapsulated treatments than in the uncapsulated treatments during ripening. The production of short-chain free fatty acids and neutral volatile compounds did not differ significantly among treatments during the ripening periods. In sensory terms, bitterness, astringency and sourness were higher in Cheddar cheese fortified with microencapsulated iron and uncapsulated L-ascorbic acid than in the others. The present study indicated that fortification with iron as well as L-ascorbic acid did not cause any quality defects in Cheddar cheese, suggesting the feasibility of iron fortification of Cheddar cheese. (Asian-Aust. J. Anim. Sci. 2003. Vol 16, No. 8 : 1205-1211)
INTRODUCTION
Today, cheese, a major product made from milk, enjoys widespread popularity around the world, even though its nutritional and culinary value has been known for a long time (Siggelkow, 1981). The consumption of cheese and cheese products has also been gradually increasing over the past few years. A great percentage of that increase has been in the consumption of cheese for snacks or lunches (Wendorff, 1981).
Although cheese is an excellent source of calcium and protein, it contains very little iron (Blanc, 1981). Fortification of cheese with iron would help meet this nutritional need. Using dairy foods as a vehicle for supplementing iron appears advantageous because people who consume diets with low iron density usually consume more dairy products (Hekmat and McMahon, 1997). Furthermore, iron-fortified dairy foods have relatively high iron bioavailability (Woestyne et al., 1991). However, before any such fortification is undertaken in cheese, the effects of iron fortification on milk fat oxidation and on sensory characteristics must be ascertained.
Iron in food is absorbed by the intestinal mucosa, and the absorption of nonheme iron, the major dietary pool, is greatly influenced by meal composition. It is well known that L-ascorbic acid is a powerful enhancer of nonheme iron absorption (Lynch and Cook, 1980). Its influence may be pronounced in meals of low iron availability. L-ascorbic acid facilitates iron absorption by forming a chelate with ferric iron at acid pH that remains soluble at the alkaline pH of the duodenum. However, the addition of L-ascorbic acid influences the quality of yogurt because of its high acidity. Therefore, iron and L-ascorbic acid need to be microencapsulated.
Microencapsulation, which shows potential as a carrier of enzymes in the food industry, could be a good vehicle for the addition of iron to milk (Bersen'eva et al., 1990; Jackson and Lee, 1991). Currently, there is considerable interest in developing encapsulated flavors and enzymes. Among the several factors to be considered, the choice of coating material is the most important and depends on the chemical and physical properties of the core material, the process used to form the microcapsules, and the properties ultimately desired in the microcapsules.
Although several researchers have used coating materials such as milk fat, agar, and gelatin for enzyme, flavor and iron microencapsulation in foods (Magee and Olson, 1981a, b; Braun and Olson, 1986), no study has measured the efficiency of iron microencapsulation using fatty acid esters, or the stability of the microcapsule itself and inside the body. Therefore, the objective of this study was to examine the effects of adding microencapsulated iron and/or L-ascorbic acid to Cheddar cheese on its chemical and sensory characteristics during ripening.
Materials
For the microencapsulation of the iron complex, polyglycerol monostearate (PGMS) was used as the coating material. It was purchased from Il-Shin Emulsifier Co., LTD. (Seoul, Korea). As core materials, a water-soluble iron complex, ferric ammonium sulfate (FeNH4(SO4)2·4H2O), and L-ascorbic acid were purchased from Sigma Chemical Co. (St. Louis, MO, USA) and Shinyo Pure Chemical Co. Ltd. (Osaka, Japan) and were food grade.
Preparation of microcapsule
Microcapsules of iron were made with PGMS, which was selected as the major coating material based on our previous study (Kwak et al., 2001). Likewise, ferric ammonium sulfate and L-ascorbic acid were selected (Kim et al., 2003). Other experimental conditions were as follows: the ratio of coating material to core material was 5 g:1 g, and 50 mL of distilled water was additionally added because the PGMS solution was highly viscous. The spray solution was heated at 55°C for 20 min and stirred at 1,200 rpm for 1 min during spraying. An airless paint sprayer (W-300, Wagner Spray Tech. Co., Markdorf, Germany) nebulized the coating material-iron emulsion at 45°C into a cylinder containing a 0.05% polyoxyethylene sorbitan monostearate (Tween 60) solution at 5°C. The diameter of the nozzle orifice was 0.33 mm. The chilled fluid was centrifuged at 2,490×g for 10 min to separate the unwashed microcapsule suspension. Microcapsules formed as the lipid solidified in the chilled fluid. The microencapsulations of iron and L-ascorbic acid were done in triplicate.
Efficiency of microencapsulation
For iron measurement, the dispersion fluid was assayed for untrapped iron during microencapsulation. One milliliter of the dispersion fluid was taken and diluted ten times, and the total iron content was measured at a wavelength of 259.94 nm by inductively coupled plasma (ICP) spectrometry. A Lactam 8440 Model spectrometer (Plasmalab, Victoria, Australia) was used. Each sample measurement was run in triplicate.
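The reported encapsulation efficiencies follow from a mass balance on the untrapped core material; a plausible form of the calculation (our reconstruction, since the paper does not write it out explicitly) is:

$$\text{Efficiency (\%)} = \frac{m_{\text{core,total}} - m_{\text{core,untrapped}}}{m_{\text{core,total}}} \times 100$$

where m_core,untrapped is the amount of iron (or L-ascorbic acid) measured in the dispersion fluid.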
Total L-ascorbic acid was analyzed spectrophotometrically using the DNP (2,4-dinitrophenylhydrazine) test as described in the Korea Food Code (2002). Samples were prepared immediately before analysis and kept cold and protected against daylight during analysis. An L-ascorbic acid stock solution was prepared daily by dissolving 10 mg of L-ascorbic acid in 100 mL of deionized water (100 µg/mL). It was diluted with deionized water to obtain final concentrations of 10, 20, 30, 40 and 50 µg/mL. Total L-ascorbic acid was determined using a calibration graph of concentration (µg/mL) versus absorbance, prepared daily by running fresh standard solutions.
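As a minimal sketch of how such a linear calibration can be applied, the following Python code fits a standard curve and inverts it for an unknown sample; the absorbance readings are hypothetical placeholders, not data from this study:

```python
import numpy as np

# Hypothetical standard series: concentrations (ug/mL) and measured absorbances
conc = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
absorbance = np.array([0.11, 0.21, 0.32, 0.41, 0.52])  # placeholder readings

# Least-squares fit of the calibration line: A = slope * C + intercept
slope, intercept = np.polyfit(conc, absorbance, 1)

# Invert the line to estimate the concentration of an unknown sample
a_unknown = 0.27  # hypothetical sample absorbance
c_unknown = (a_unknown - intercept) / slope
print(f"Estimated L-ascorbic acid: {c_unknown:.1f} ug/mL")
```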
Manufacture of Cheddar cheese
The cheese-making process was as described by Metzger and Mistry (1994). Regardless of microencapsulation, ferric ammonium sulfate and L-ascorbic acid were added right before rennet addition. After manufacturing, the cheeses were weighed, vacuum packaged in barrier bags and ripened at 5°C for 0, 1, 3, 5 and 7 mo. Cheese samples stored in the refrigerator for 12 h served as the 0 mo samples. The cheese-making experiment was performed in triplicate on different days using different batches of treatments.
Chemical composition and cheese yield
Cheese was analyzed for moisture, fat, protein and ash using the methods of the Association of Official Analytical Chemists (AOAC, 1990). Cheese yield was determined as (wt. cheese × 100)/wt. milk.
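As a worked example of this formula (the weights below are hypothetical and chosen only to reproduce the control yield reported in the Results):

$$\text{Yield (\%)} = \frac{\text{wt. cheese}}{\text{wt. milk}} \times 100, \quad \text{e.g.,} \quad \frac{1.06\ \text{kg}}{10\ \text{kg}} \times 100 = 10.6\%$$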
Thiobarbituric acid (TBA) test
Oxidation products were analyzed spectrophotometrically using the thiobarbituric acid (TBA) test (Hegenauer et al., 1979). The TBA reagent was prepared immediately before use by mixing equal volumes of freshly prepared 0.025 M TBA (brought into solution by neutralization with NaOH) and 2 M H3PO4/2 M citric acid. Reactions were terminated by pipetting 5.0 mL of sample containing iron microcapsules into a glass centrifuge tube and mixing thoroughly with 2.5 mL of TBA reagent. The mixture was heated immediately in a boiling water bath for exactly 10 min and then cooled on ice. Then, 10 mL of cyclohexanone and 1 mL of 4 M ammonium sulfate were added, and the mixture was centrifuged at 2,490×g for 5 min at room temperature. The orange-red cyclohexanone supernatant was decanted, and its absorbance at 532 nm was measured spectrophotometrically in a 1 cm light path. All measurements were run in triplicate.
Analysis of short-chain free fatty acid
Cheese samples (1 g) were removed periodically, extracted with diethyl ether and hexane for 2 h, and eluted through a 10 mm i.d. glass column containing neutral alumina as described by Kwak et al. (1990). A Hewlett-Packard Model 5880A GC equipped with a flame ionization detector was used. Separation of the FFA was achieved using a 15 m × 0.53 mm i.d. Nukol fused-silica capillary column (Supelco Inc., Bellefonte, PA, USA). The GC was operated with helium carrier gas at 2 ml/min, hydrogen gas at 37 ml/min, and air at 300 ml/min. The column oven was programmed with an initial hold at 110°C for 1 min, a first ramp to 180°C at 5°C/min with a 10 min hold, and a final hold for 20 min. Both injector and detector temperatures were 250°C. All quantitative analyses were done by relating each peak area of an individual FFA to the peak area of tridecanoic acid as an internal standard. Each FFA was identified by the retention time of its standard.
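Quantitation against an internal standard of this kind typically uses the peak-area ratio. A minimal single-point form, assuming a relative response factor of 1 (an assumption on our part, since response factors are not reported), is:

$$C_{\text{FFA}} = \frac{A_{\text{FFA}}}{A_{\text{IS}}} \times C_{\text{IS}}$$

where A denotes peak area and C_IS is the known concentration of the tridecanoic acid internal standard.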
Analysis of neutral volatile compounds
Cheese samples (40 g) were removed periodically, and 10 ml of distilled water was added. Two ml of each distillate was used to take a headspace gas sample as described by Bassette and Ward (1975). A Hewlett-Packard Model 5880A GC equipped with a flame ionization detector was used. Headspace gas samples were analyzed on a capillary column (Supelcowax 10, 30 m × 0.32 mm i.d.; Bellefonte, PA, USA). The GC was operated with nitrogen carrier gas at a flow rate of 1.2 ml/min; the hydrogen gas flow rate was 30.0 ml/min, and the air flow rate was 300.0 ml/min. The temperature of both the injector port and the detector was maintained at 230°C. The column oven was programmed at three temperature levels: an initial hold for 5 min at 35°C, heating to 140°C at 15°C/min, and a hold for 30 min. The concentrations of volatile compounds were estimated by analyzing cheese samples containing known concentrations of added standards and samples containing no added standards. The difference between the two was used to estimate the concentrations of the individual volatile compounds.
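The spiked-versus-unspiked difference described above amounts to single-point standard addition. A plausible reconstruction of the calculation (our notation, not the authors') is:

$$C_{\text{sample}} = \frac{A_{\text{unspiked}}}{A_{\text{spiked}} - A_{\text{unspiked}}} \times C_{\text{added}}$$

where A denotes the peak area of a given volatile compound and C_added is the known concentration spiked into the cheese.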
Sensory analysis
Seven trained sensory panelists evaluated randomly coded cheeses. Texture was evaluated on a 5-point scale (1 = poor to 5 = excellent). Typical Cheddar cheese flavor, acidity, and bitterness were scored on a 5-point scale (1 = low intensity to 5 = high intensity).
Statistical analysis
Data from the cheese experiments were analyzed by one-way ANOVA (SAS Institute Inc., Cary, NC, USA, 1985). The significance of the results was analyzed by the least significant difference (LSD) test. Differences at p<0.05 were considered significant.
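The paper used SAS; as a minimal sketch, an equivalent one-way ANOVA can be run in Python with SciPy. The group values below are hypothetical placeholders, and SciPy has no built-in LSD test, so a plain pairwise t-test stands in for the LSD comparison:

```python
from scipy import stats

# Hypothetical TBA absorbance replicates for three treatment groups
control = [0.12, 0.13, 0.12]
uncapsulated = [0.51, 0.53, 0.55]
microencapsulated = [0.15, 0.16, 0.17]

# One-way ANOVA across the groups
f_stat, p_value = stats.f_oneway(control, uncapsulated, microencapsulated)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# LSD-style pairwise comparison (illustrated here for one pair)
t_stat, p_pair = stats.ttest_ind(control, uncapsulated)
print(f"control vs uncapsulated: p = {p_pair:.4f}")  # significant if p < 0.05
```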
Microencapsulation
In the present study, the yields of iron and L-ascorbic acid microencapsulation were 72 and 94%, respectively. In our laboratory, PGMS appeared to be hard to spray; therefore, we determined the optimum ratio of PGMS to deionized water needed to reduce the viscosity of the PGMS solution.
In our previous study, when the ratio of PGMS to iron to distilled water was 5:1:50 (w/w/v), the efficiency of microencapsulation reached its highest value of 75% (Kwak et al., 2001). In addition, Kim et al. (2003) reported microencapsulation efficiencies of 73% for iron and 95% for L-ascorbic acid.
The size of the iron or L-ascorbic acid microcapsules made with PGMS was irregular, ranging from the nanometer to the micrometer scale, with an average size in the range of 2 to 5 µm (pictures not shown). Microscopic examination of the microcapsules revealed spherical particles. Microcapsules containing iron or L-ascorbic acid had smooth surfaces and evenly distributed pockets. The shape of the microcapsules was likely affected by the encapsulation conditions. Magee and Olson (1981a) and Braun and Olson (1986) found that the lipid and cooling fluid temperatures affected the shape of microcapsules by controlling the cooling rate of the lipid coatings. They observed that microcapsules were cylindrical when the lipid coating was rapidly cooled and spherical when the lipid was slowly cooled.
Chemical composition and yield of Cheddar cheese
The composition of the cheese is presented in Table 1. The moisture content of the cheese ranged from 35.0 to 36.0%, fat from 25.0 to 26.4%, protein from 18.0 to 19.1% and ash from 3.7 to 4.0%. No difference in composition was found between the control and the iron-fortified Cheddar cheeses. The yield of the treated cheese (8.8%) was lower than that of the control (10.6%). The ferric ammonium sulfate content was 13.14 mg/100 g cheese, and the L-ascorbic acid content was 77.3 mg/100 g cheese.
TBA test during ripening
The effect of iron fortification of Cheddar cheese on chemical oxidation (as measured by the TBA test) during 7 mo of ripening is shown in Figure 1. In the uncapsulated iron added group (I), the TBA value increased dramatically from 0.31 (0 mo) to 0.53 (7 mo). Among the microencapsulated iron added groups (MI, MIUC and MIMC), no difference was found, with values of 0.12 (0 mo) and 0.16 (7 mo). When microencapsulated iron with uncapsulated L-ascorbic acid (MIUC) was compared with microencapsulated iron with microencapsulated L-ascorbic acid (MIMC), the TBA values were not significantly different during the 7 mo ripening period.
In this experiment, TBA absorbance was significantly lower in the capsulated groups than in the uncapsulated group, regardless of iron and L-ascorbic acid, during storage. These data indicate that the oxidation process may be faster in cheese samples containing uncapsulated iron than in those containing microencapsulated iron.
Our previous study (Kwak et al., 2001) showed the effect of iron fortification of milk on chemical oxidation during 15 d of storage. We reported that TBA absorbance was significantly lower in the capsulated group than in the uncapsulated group at 15 d. A similar result was observed in another study (Kim et al., 2003), indicating that the oxidation process may be faster in yogurt samples containing uncapsulated iron than in those containing microencapsulated iron. Jackson and Lee (1991) indicated that samples containing uncapsulated iron (ferrous sulfate and ferric chloride) showed 2-3 times higher fatty acid production than those containing the microencapsulated iron complex when milk fat was used as the coating material. The reason why iron fortification causes several modifications in dairy products could be that added iron may interact with casein, resulting in iron-casein complexes, and the presence of O2 acts as a pro-oxidant; therefore, lipid oxidation in Cheddar cheese can be accelerated.
Production of short-chain free fatty acids (FFA)
It is well known that short-chain free fatty acids (C4 through C10) constitute the backbone of Cheddar flavor (Lin and Jeon, 1987). Therefore, the short-chain FFA profiles were considered an important aspect of this study. The production of short-chain FFA in the control and experimental cheeses ripened at 7°C for 7 mo is shown in Table 2. No difference was found between the control and the treatments (p>0.05) at any time point.
During the 7 mo ripening period, total short-chain FFA production was not significantly different between the 0 and 1 mo ripening periods; however, the release increased from 3 mo of ripening in groups C, MI, and MIUC. The total amount of short-chain FFA was in the range of 324.5 to 428.0 ppm. These results indicate that the lipolysis process, which contributes to the development of short-chain FFA, was not different in iron-fortified cheese than in the control.
Production of neutral volatile flavor compounds
The production of neutral volatile compounds in the iron-fortified cheeses is shown in Table 3. In the groups containing no L-ascorbic acid (C, I, and MI), acetaldehyde production increased steadily up to 0.50-0.62 ppm at 7 mo. In comparison, in the L-ascorbic acid containing groups (MIUC and MIMC), regardless of microencapsulation, acetaldehyde production was 0.26-0.35 ppm at 7 mo of ripening.
Ethanol production was the highest among the flavor compounds measured and showed a similar trend in all groups. The ethanol production also increased dramatically from 1 mo up to 7 mo in all groups.
Other neutral flavor compounds detected were acetone,
Sensory analysis
The sensory characteristics of the five treatments are shown in Table 4. Bitter taste was not significantly different among treatments during the 7 mo of ripening. However, Group I, the uncapsulated iron-fortified cheese, showed a significant increase in bitter taste at 1 mo and thereafter. Also, the microencapsulated iron-fortified Cheddar cheese (MI) showed a higher score at 3 mo of ripening compared with the other groups.
For acidic taste, the L-ascorbic acid added groups (MIUC and MIMC, regardless of iron microencapsulation) showed significantly higher scores at 3 mo and at 7 mo, respectively, than the others. For astringency, the uncapsulated iron-fortified group (I) and the microencapsulated iron and L-ascorbic acid-fortified group (MIMC) showed higher scores at 3 mo and thereafter than the other groups.
For metallic taste, the iron-fortified groups without L-ascorbic acid (I and MI), regardless of microencapsulation, showed higher scores. In particular, the uncapsulated iron-fortified group (I) showed a dramatic increase in metallic taste even at 1 mo.
The major difference between the control and the experimental groups was observed in color. The uncapsulated iron and L-ascorbic acid added groups (I and MIUC) showed a profound color change to yellowish green. Group I changed strongly from 0 mo up to 7 mo of ripening, while group MIUC showed a color change at 5 mo of ripening and thereafter. Interestingly, no difference was found between the control and the microencapsulated iron-added groups, regardless of L-ascorbic acid (MI and MIMC), throughout the ripening periods.
Cheddar flavor developed without differences across ripening periods in all groups, except for the uncapsulated iron-fortified cheese (I) at 5 and 7 mo of ripening, in which Cheddar flavor decreased with ripening time. The texture score increased with ripening period in the control; however, it decreased in all experimental cheeses.
In the overall preference test, the control (C) and the treatments containing microencapsulated iron (MI) and/or L-ascorbic acid (MIUC and MIMC) showed high consumer preference in all storage periods. However, the scores of the uncapsulated iron containing group (I) were dramatically lower than those of the other treatments in all ripening periods. This result indicates that microencapsulation was very effective at masking the off-taste and off-flavor of iron and L-ascorbic acid.
The sensory quality of iron-fortified dairy foods has been shown to be preserved by the microencapsulation of both iron and L-ascorbic acid. Two major off-flavors have been associated with dairy products: oxidized flavor resulting from the catalysis of lipid oxidation by iron, and sourness contributed by L-ascorbic acid.
Iron is known to catalyze lipid oxidation, resulting in rancidity with the development of an unpleasant odor and flavor. The TBA test has been extensively applied to foods, in which the absorbance of the TBA reaction products correlates positively with sensory evaluation. Fortification with an iron complex causes oxidized off-flavor and a high TBA number. To avoid oxidized and metallic flavors and color changes, microencapsulation techniques are needed (Gaucheron, 2000).
CONCLUSION
The present study demonstrated that a ratio of 5:1:50 (w/w/v) of coating material (PGMS) to core material (iron complex or L-ascorbic acid) to distilled water gave high microencapsulation efficiencies of 72% and 94%, respectively. Our results indicated that the lipid oxidation process, as measured by the TBA test, was significantly slower in Cheddar cheese fortified with capsulated iron than with uncapsulated iron. In terms of sensory quality, it should be pointed out that no significant adverse effects were found in Cheddar cheese fortified with microencapsulated iron and L-ascorbic acid during the 7 mo of ripening in this experiment. Therefore, the present study provides important evidence that microcapsules of iron and L-ascorbic acid are an effective means of fortification and can be applied to Cheddar cheese without any changes in sensory aspects.
Table 1. Mean chemical composition of iron-fortified Cheddar cheese
Table 3. The production of neutral volatile flavor compounds in iron and/or L-ascorbic acid fortified Cheddar cheese ripened at 7°C for 7 mo
Table 2. Concentrations of short-chain fatty acids in iron and/or L-ascorbic acid fortified Cheddar cheese ripened at 7°C for 7 mo
Table 4. Sensory characteristics in iron and/or L-ascorbic acid fortified Cheddar cheese ripened at 7°C for 7 mo
"year": 2003,
"sha1": "54961a8ac51626630dfeae06cb0ef8c672b80067",
"oa_license": "CCBY",
"oa_url": "https://www.animbiosci.org/upload/pdf/16_178.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "54961a8ac51626630dfeae06cb0ef8c672b80067",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
Tracheal Adenoid Cystic Carcinoma Presented with Chronic Asthma Diagnosed by Bronchial Washing Cytology
Adenoid cystic carcinoma is a tumor that mainly arises from the salivary glands and rarely presents in the airways, with nonspecific symptoms. Diagnosis based on bronchial washing cytology is rarely reported because this tumor is usually lined by normal mucosa. A 35-year-old woman was referred to our center as a case of unresponsive asthma and hemoptysis for the past year. A CT scan showed a tracheal mass. Bronchoscopy was performed, followed by bronchial washing cytology and biopsy. Cytology smears revealed sheets and three-dimensional clusters of small cells, some of them arranged around hyaline mucoid globules. The cell block and biopsy showed the classic pathological findings of adenoid cystic carcinoma. Adenoid cystic carcinoma of the airways can manifest with nonspecific symptoms and should be considered in the differential diagnosis of airway diseases and asthma. This tumor is rarely seen in bronchial washing specimens. Characteristic cytological findings and the use of cell block preparation differentiate adenoid cystic carcinoma from other tumors.
Introduction
The incidence of primary tracheal tumors is low. Adenoid cystic carcinoma (ACC) is a malignant tumor that mainly arises from the major and minor salivary glands [1].
ACC of the lung arises from the bronchial glands and mostly appears in the trachea and main bronchi [1,2]. It is a rare disease, constituting about 0.04-0.2% of primary lung tumors [1]. In the past, this tumor was called bronchial adenoma, which is a misnomer because ACC is not a benign tumor [2].
There are limited epidemiologic data about ACC of the lung. Based on a few studies, the mean age is 46 years (range, 22-73 years), and the male-to-female incidence ratio differs among studies [1,2]. Symptoms are nonspecific, including cough, shortness of breath, and occasionally hemoptysis. Some patients might be misdiagnosed with asthma [1,2]. No characteristic findings for ACC have been reported on CT scan or bronchoscopy, and thus the diagnosis is based on pathology.
Fine needle aspiration cytology is a routine method for sampling ACC of the salivary glands, but it is rarely used in the airways. Bronchial washing and brushing are the common cytology methods for the airways and the lung [3].
In the trachea and bronchi, ACC is usually covered by intact mucosa, and tumor cells are not found in bronchial cytology specimens. In rare cases, the tumor invades the overlying mucosa, and tumor cells can be seen in bronchial washing or bronchial brushing specimens [4][5][6][7].
Case Report
A 35-year-old female presented with cough and shortness of breath for the past year. She was treated for asthma but did not respond, and hemoptysis appeared. A CT scan was performed, and a soft tissue mass arising from the posterior and lateral wall of the trachea with mediastinal extension was reported. The patient underwent bronchoscopy, and a tracheal mass 3 cm above the carina with invasion of the lateral tracheal wall was detected. Bronchial washing and biopsy were performed. Alcohol-fixed smears were prepared from the bronchial washing specimen.
Smears were stained with the modified Papanicolaou method. Cytology showed hypercellular smears composed of loosely cohesive sheets, three-dimensional clusters, and dispersed cells. The cells were relatively small and uniform, with round nuclei, small nucleoli, and scant cytoplasm. Throughout the smears, acellular hyaline material with globule formation of varying sizes was seen, and some globules were enveloped by tumor cells (Figure 1).
A cell block was prepared using the thrombin method. In the cell block, nests and strands with tubule-like structures and a cribriform pattern containing homogeneous acidophilic material were seen (Figure 2).
Biopsy revealed bronchial mucosa with an infiltrative neoplastic lesion composed of tubular and cribriform structures with acidophilic material (Figure 3). The final diagnosis was adenoid cystic carcinoma. The patient underwent resection surgery, but unfortunately, complete resection was not possible, and she was then referred for radiotherapy.
Discussion
Adenoid cystic carcinoma of the lung is a rare disease that presents with nonspecific symptoms. It can be misdiagnosed as asthma [1]. Imaging and bronchoscopy cannot differentiate ACC from other tumors such as carcinoid tumor or squamous cell carcinoma, and the final diagnosis is based on pathology. In most cases, bronchial cytology is negative because the tumor is covered by intact mucosa. Sometimes the overlying mucosa is ulcerated, making diagnosis by bronchial washing cytology possible [4][5][6][7]. Cytology smears show monolayer sheets, three-dimensional clusters, and many isolated cells. Tumor cells are small and uniform, with round nuclei, small nucleoli, and minimal cytoplasm. In the background, acellular hyaline material and spherical globules are seen. Some of these hyaline globules are surrounded by tumor cells; this finding is a significant diagnostic feature of ACC. Use of the cell block reveals the characteristic features of ACC, including tubular structures and a cribriform pattern with acidophilic material, and facilitates the diagnosis of ACC.
Carcinoid tumor and small cell carcinoma are composed of small cells; therefore, they are in the cytologic differential diagnosis of ACC. Characteristic granular chromatin and the absence of hyaline globules favor carcinoid tumor [8]. Features of small cell carcinoma, including granular chromatin, crush artifact, necrosis, and apoptotic bodies, are absent in ACC [9].
Tumor cells surrounding hyaline material may be misinterpreted as true glands, so well-differentiated adenocarcinoma enters the differential diagnosis. The cytonuclear atypia of adenocarcinoma and the absence of hyaline globules differentiate it from ACC [5].
Conclusion
The incidence of ACC in the trachea is low, but it is crucial to consider this tumor if a patient does not respond to antiasthmatic therapy. In such cases, proper imaging, careful attention to the cytologic criteria, and preparation of a cell block make an accurate diagnosis possible.
Consent
A formal written consent was obtained from the patient.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
"year": 2020,
"sha1": "4bb7936719f243156cce086fd6740000c10df222",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/crim/2020/6543097.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b49201cf8ed906b76ed9c1e8d8458741b0389576",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
An Ecological Assessment of Isaria fumosorosea Applications Compared to a Neonicotinoid Treatment for Regulating Invasive Ficus Whitefly
A pilot study was conducted on a weeping fig, Ficus benjamina shrub hedge in a Florida urban landscape to determine the efficacy of a fungal biopesticide, PFR-97™, which contains blastospores of Isaria fumosorosea, and a neonicotinoid treatment (Admire Pro™) applied against the invasive ficus whitefly pest, Singhiella simplex (Singh). Post treatment, an ecological assessment of the study was conducted by observing the impact of the fungal biopesticide and the neonicotinoid treatment on natural enemies, e.g., predators, parasitoids and enzootic fungal pathogens occurring in the whitefly-infested hedge. Both treatments provided a significant reduction in the whitefly population compared to the control and were compatible with the natural enemies present. Various natural enemies, including fungal entomopathogens, were identified in association with the whitefly population infesting the weeping fig hedge. The parasitoids Encarsia protransvena Viggiani and Amitus bennetti Viggiani & Evans combined parasitized a similar mean number of whitefly nymphs in both treatments and the control; however, the number parasitized decreased over time. Natural enzootic fungi isolated from the ficus whitefly nymphs were I. fumosorosea, Purpureocillium lilacinum and Lecanicillium, Aspergillus and Fusarium species. Results from this pilot study suggest there is much potential for using repeated applications of the fungal biopesticide PFR-97™ as a foliar spray, compared to a neonicotinoid as a soil drench, for managing S. simplex on Ficus species for ≥28 days.
Introduction
The ficus whitefly, Singhiella simplex (Singh), is an invasive species that has become a major pest in Florida, feeding on Ficus shrubs and trees [1,2]. This exotic whitefly species, endemic to the South Asian region, i.e., Myanmar, China and India [3][4][5], was first discovered in the United States in 2007 in Miami-Dade County, Florida [1], and since then it has become a problem for homeowners, residential community managers, landscapers, growers, businesses and government officials throughout the State [6]. Within a few years of its introduction, it has been reported damaging Ficus sp. in 16 counties of Florida [7]. Feeding by this pest not only turns host leaves yellow, but heavy infestations often lead to leaf drop, branch dieback and complete defoliation.
In Florida, the ficus whitefly has been most commonly found infesting weeping fig (Ficus benjamina), but has also been seen on F. altissima, F. aurea, F. bengalensis, F. lyrata, F. macllandii and F. microcarpa [6,7]. Weeping figs are commonly used as hedges but are also grown as ornamental trees, particularly in the southern part of Florida. Because of the severity of the damage and the aesthetic or economic losses associated with this pest, current control strategies rely heavily on the use of chemical insecticides. Pest management professionals recommend soil or trunk applications of a neonicotinoid compound. Foliar sprays are also suggested to treat "hot spots" or to obtain quick knockdown in addition to the soil applications [6,7]. When applied appropriately, systemic insecticides in the neonicotinoid class can provide sufficient control of whitefly for 9-12 months; however, the use of chemical insecticides cannot ultimately be considered a sustainable management approach for any pest. Risks of chemical insecticide use in urban landscape areas may include: (1) insecticide drift from foliar sprays [8], (2) leaching and runoff of insecticides into water sources or drainage systems [9], (3) the possibility of insecticide resistance developing in the whitefly population due to prolonged use of the same chemical group [10,11], and (4) negative impacts on non-target organisms, e.g., humans, domestic animals, natural enemies and pollinators [8,12,13].
In the Florida landscape, several natural enemies, including enzootic entomopathogenic fungi, have been observed attacking the ficus whitefly and can play an important role in long-term control of this pest [2]. Awareness of these natural enemies is very important when making management decisions so as not to adversely affect them [14]. For instance, Torres-Barragán et al. [15] reported that a variety of fungal entomopathogens were responsible for managing the greenhouse whitefly, Trialeurodes vaporariorum (Westwood), population in the agricultural area at "El Eden" Ecological Reserve, Quintana Roo, Mexico. In another related study, Nielsen and Hajek [16] found that enzootic fungal entomopathogens were important in controlling invasive soybean aphid (Aphis glycines Matsumura) populations and that an epizootic of these insect pathogens was associated with the decline in the pest population. In addition, these authors reported that several species of parasitoids and predators (compatible with fungal entomopathogens) played a significant role and showed an additive effect in suppressing the aphid populations over time. Thus, pest management recommendations for the ficus whitefly must consider the potential long-term detrimental effects on naturally occurring biocontrol agents in the region, including fungal entomopathogens.
Previous studies have indicated that fungal entomopathogens are important ecological regulatory factors in managing insect populations [15][16][17][18][19][20]. Some of the commercially available entomopathogenic fungi in formulated products frequently used for whitefly control are Isaria fumosorosea Wize [21,22], Beauveria bassiana (Balsamo) Vuillemin [23], Lecanicillium muscarium (Petch) Zare and Gams [24], and Aschersonia aleyrodis (Webber) [25,26]. In 1986, a strain of I. fumosorosea named Apopka 97 was discovered and isolated from Phenacoccus sp. (Hemiptera: Pseudococcidae) in Apopka, Florida [27]; it is now registered in the USA with a tolerance-exempt residue status under the registered commercial name PFR-97™ 20% WDG [= Paecilomyces fumosoroseus Apopka strain 97 (ATCC 20874)] by the manufacturer Certis USA, Columbia, MD. This fungus has a worldwide distribution, its efficacy against many pestiferous arthropods, especially whiteflies, is well documented [17,[21][22][23][27][28][29][30], and it has also been demonstrated to be compatible with many beneficial arthropods, including parasitoids and predators [22,29,[31][32][33][34][35]. In addition to efficacy, the advantages of using fungal entomopathogens are numerous and include safety for humans and other non-target organisms, reduction of pesticide residues, preservation of other natural enemies, and increased biodiversity in managed ecosystems [17]. The fungus grows optimally between 25 and 28 °C in the southern USA and tolerates temperatures between 32 and 35 °C [27]. Thus, in Florida, the use of this formulated commercial product containing the fungus I. fumosorosea may be an alternative for the management of the ficus whitefly in the urban landscape.
A severe ficus whitefly infestation occurred in a residential community on a F. benjamina hedge in Fort Pierce, Florida and needed immediate attention because of the constant leaf drop (Figure 1A,B). In our efforts to support the residents, we planned a pilot study to evaluate the efficacy of the entomopathogenic fungus I. fumosorosea (strain Apopka-97), contained in the formulated bioinsecticide product PFR-97™ 20% WDG, and the neonicotinoid systemic insecticide (imidacloprid) against the ficus whitefly. We also evaluated the effect of these two insecticides on the natural enemies and the naturally occurring enzootic fungal entomopathogens present in the affected area. Thus, the objective of this pilot study was to determine the potential of the fungal biopesticide, PFR-97™, compared to a neonicotinoid treatment used for management of the ficus whitefly, and to assess its ecological impact on the natural enemies in an urban landscape residential field setting.
Figure 1. (B) view south; (C) eggs (left side: magnified 20×) and nymphs (right side: 16×) of the ficus whitefly found on the leaves; (D) plastic coverslips pinned to either the abaxial (ab) or adaxial (ad) side of randomly chosen leaves used for spore deposition studies; (E) leaf disks placed on moist filter paper in a Petri dish for counting; (F,G) recognition of parasitism of ficus whitefly nymphs by Encarsia protransvena (23×) and Amitus bennetti, with the parasitoid developing inside the translucent nymphal whitefly case (31×); (H) ficus whitefly pupa exuviae with an exit hole after emergence of the parasitoid A. bennetti (38×); (I) sample ficus whitefly nymph, flattened and infected with a naturally occurring enzootic fungal entomopathogen, Lecanicillium species (31×).
Study Area
The layout of the study area (northern 27°22′50.94″ N × 80°22′00.22″ W and southern 27°22′48.96″ N × 80°22′00.25″ W) was a randomized complete block design with four replications. Each plot comprised 5 m of a F. benjamina hedge (~10-15 plants, ~1 m tall) that ran along a concrete block wall (~1.2 m tall) at a residence in Fort Pierce, Florida. Each hedge segment, divided by a cement driveway, was naturally infested with ficus whitefly (Figure 1C), with the northern side more severely infested than the southern. Prior to application of the treatments, all plots were raked free of leaf litter, and 0.2 kg of granular fertilizer (Rite Green 6-6-6; Sunniland Corporation, Sanford, FL, USA) was spread (183-243 cm in diameter) around the base of each shrub. The shrubs were watered thoroughly by a drip irrigation system as needed. Field environmental conditions throughout the study were monitored using the weather data website (www.wunderground.com).
Treatment Application
Treatments were as follows: I. fumosorosea (PFR-97® 20% WDG; Certis USA, Columbia, MD) at 10⁹ colony forming units (CFUs)/g, a neonicotinoid (Admire Pro™), and an untreated check (where the same volume of distilled water was sprayed). For the neonicotinoid treatment plots, each shrub was drenched at the label rate with 120 mL of Admire Pro poured around the base of each shrub. The fungal suspension was prepared by mixing the dry powder formulation of PFR-97 20% WDG in a clean bucket (18.9 L) approximately 2 hours prior to application in order to initiate the germination process, and was then transported to the spray site. On site, the bucket cover was removed, and the fungal suspension was stirred again for two minutes. The suspension was then poured into a stainless-steel hand pump sprayer, and the pressure was established at 30 pumps per plot. The foliage was sprayed to runoff at a rate of 2.4 g L⁻¹ (2.2 × 10⁷ blastospores mL⁻¹) for the 1st and 2nd applications at 0 DAT and 15 DAT, respectively. The fungal suspension was applied in the evening at dusk, from ~6:15 to 6:30 p.m.
To determine the deposition of fungal blastospores mm⁻² in the fungal treatment (PFR-97) plots, five plants were chosen at random, and 10 plastic microscope cover slips (Fisherbrand® 22 × 22 mm, Fisher Scientific, Pittsburgh, PA, USA) were pinned to either the adaxial (5 coverslips) or abaxial (5 coverslips) side of a randomly chosen leaf per plant for the initial spray treatment (Figure 1D). The pin was secured by sticking it into a styrofoam packing peanut on the opposite side of the leaf. On 14 DAT, prior to the second application, eight plants were chosen at random to determine the blastospore density mm⁻² sprayed on the leaves, as described above. Sprayed cover slips were allowed to dry for ~12 h overnight and then brought back to the lab for assessment of spore deposition. The pin was removed, and each cover slip was placed upside down on a glass microscope slide in a 50 µL drop of 1% acid fuchsin stain. Blastospore density was assessed with a compound light microscope (400×) using a 10 mm reticule grid (Hunt Optic and Imaging, Pittsburgh, PA, USA).
The viability of the blastospores was assessed using two Fisherbrand® Petri dishes (Thermo Fisher Scientific, Waltham, MA, USA) containing potato dextrose agar (PDA), each sprayed at a rate of 2.2 × 10⁷ spores mL⁻¹. Plates were then sealed with Parafilm® (Bemis, Neenah, WI, USA) and incubated for 12 h at 25 ± 1.0 °C and 100% relative humidity (RH) under a 16 h photophase. After this duration, each plate was viewed under a compound microscope, and the percent viability was determined by observing a total of 200 spores (50 spores in each quadrant of the plate). Spores were considered to have germinated if a germ tube had formed. This procedure was repeated for each fungal spray application, and the percent viability ranged between 87% and 89%.
A total of 10 leaf samples/plot/treatment were placed into pre-labeled individual re-sealable plastic bags and brought back to the lab for examination under a binocular microscope (40×) to record the number of live and dead whitefly nymphs. Leaf disk samples were punched out with a #5 cork borer (50.3 mm²) in the center of each leaf on either side of the midrib (Figure 1E). One disk was used for the assessment of dead and/or parasitized ficus whitefly nymphs; the other, from the same leaf, was used for leaf washes (described below). Parasitism was recognized by observing the development of the parasitoid inside the translucent nymphal case, a blackened nymphal case due to melanization, or an exit hole (Figure 1F-H). Once the disks used for counting whitefly nymphs were observed and recorded, they were placed on moistened filter paper in a Fisherbrand® Petri dish (100 × 15 mm), covered, and sealed with Parafilm®. Sealed dishes were then placed in a growth chamber under the same conditions as described above for 14 days to allow for mycosis and to determine the percent mortality due to I. fumosorosea and other fungal species present (if any).
Fungal Identification on Leaf Phylloplane
Ten new leaf disks/plot/treatment, punched from the opposite side of the midrib (50.3 mm²) as described above, were placed together into a 15 mL plastic centrifuge tube containing 10 mL of Triton X-100 solution (0.01%) and then vortexed for 1 minute. Aliquots (100 µL) of each suspension were removed and spread on five plastic Fisherbrand® Petri dish plates (100 × 15 mm) containing PDA-dodine (modified as a selective medium for entomopathogenic fungi), streptomycin sulfate and chloramphenicol [36,37]. Plates were sealed with Parafilm®, placed in the same growth chamber under the conditions described above, and incubated for 14 days to allow the growth of CFUs on the plates. This procedure was repeated four times for a total of 20 plates per treatment. The CFUs were used for the identification of enzootic fungal entomopathogens present on the leaf phylloplane and to assess the viability of I. fumosorosea. Voucher fungal entomopathogen in vitro culture isolates were identified by Svetlana Gouli and deposited at the University of Vermont, Invertebrate Pathology and Microbial Pest Control Laboratory, Burlington, VT. In vitro voucher isolates of Isaria species were sent both to Dr. Richard Humber at the USDA-ARS Collection of Entomopathogenic Fungal Cultures, Ithaca, NY and to Dr. Rob Samson at the CBS-KNAW Fungal Biodiversity Centre, Utrecht, The Netherlands for identification and were deposited with each institution.
Identification of Enzootics Isolated from Ficus Whitefly
Five dead, flattened nymphs (Figure 1I) per plot/treatment were randomly chosen and removed from the semi-desiccated leaf disks using a sterile insect pin. A total of 20 cadavers per treatment, from leaves collected 0, 14 and 35 days post-treatment, were placed on 1% water agar, as described by Hall and Nguyen [38], in Fisherbrand® Petri dish plates (100 × 15 mm). All agar plates were sealed with Parafilm, placed in the same growth chamber under the conditions described above, and incubated for 7 days. After mycosis of the insect was evident, the fungal spores and/or hyphae were isolated and grown on fresh PDA plates (100 × 15 mm) for identification. These inoculated plates were sealed, transferred to the growth chamber and incubated as described above. Voucher fungal entomopathogen in vitro culture isolates were identified as described above.
Data Analysis
The treatment effect on the total number of whitefly nymphs and percent mortality per treatment on the leaf disk per sampling day were assessed using a one-way ANOVA (α = 0.05) with mean separation by an LSD test. The percentage of nymphs on the leaf disks infected with the fungus, I. fumosorosea and other fungal species was determined and the number of CFUs isolated from leaf washes for I. fumosorosea on PDA-dodine agar plates for each treatment over time was compared.
The effect of the treatments on the corrected mean percent mortality of the whitefly nymphs was determined using the Sun-Shepard formula [39]:

Corrected mortality % = [(Mortality % in treated plot ± Change % in control plot population) / (100 ± Change % in control plot population)] × 100

Change % in control plot population = [(Population in control plot after treatment − Population in control plot before treatment) / Population in control plot before treatment] × 100

The effect of the treatments on percent parasitism was assessed and compared using an ANOVA (α = 0.05) with mean separation by Tukey's HSD test. Data were square root (n + 0.01) arcsine transformed to remove zeros prior to analysis, and untransformed numbers are presented in the table. The total percent mortality of the whitefly nymphs due to fungal entomopathogens plus other biotic and/or abiotic factors and parasitization in the different treatment plots was determined and compared over time. All statistical analyses were conducted using SAS 9.4 for Windows 2012 (Cary, NC).
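For readers who wish to reproduce the correction, the following is a minimal Python sketch of the Sun-Shepard formula and the arcsine square-root transform described above; the example numbers are hypothetical and are not data from this study. Using a signed change term collapses the ± cases of the printed formula into a single expression.

```python
import math

def sun_shepard_corrected_mortality(treated_mortality_pct,
                                    control_before, control_after):
    """Sun-Shepard corrected mortality [39]: adjusts treated-plot mortality
    by the (signed) population change observed in the control plot."""
    change_pct = (control_after - control_before) / control_before * 100.0
    return (treated_mortality_pct + change_pct) / (100.0 + change_pct) * 100.0

def arcsine_sqrt_transform(proportion, offset=0.01):
    """Square root (n + 0.01) arcsine transform used before the ANOVA to
    remove zeros; returns the transformed value in radians."""
    return math.asin(math.sqrt(proportion + offset))

# Hypothetical example: 60% observed mortality while the control population
# declined from 50 to 45 nymphs (-10%), so corrected mortality drops below
# the observed value because part of the decline is natural.
print(f"{sun_shepard_corrected_mortality(60.0, 50, 45):.1f}%")  # 55.6%
print(f"{arcsine_sqrt_transform(0.0):.4f}")                      # zero handled
```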
Pre-Treatment
During the first application of PFR-97, the ambient temperature was 31 °C with a dew point of 20 °C; the wind was calm with an occasional slight SW breeze, RH was 52%, and the sky was partly cloudy. The humidity increased steadily overnight, reaching 78% RH by midnight at a temperature of 31 °C; RH increased to 93% by morning, and the temperature decreased to a low of 21 °C. During the second application, the temperature was 28 °C with a dew point of 23 °C. Again, the wind was calm, with an occasional ENE breeze; RH was 76%, and the sky was partly cloudy. The humidity increased steadily overnight, reaching 84% RH by midnight at a temperature of 26 °C; RH increased to 100% by morning (5:43 a.m.), and the temperature decreased to a low of 22 °C.
Post-Treatment
The humidity reached a high of 100% on days 9-11, 14-15, 18, 27, 30-31, and 34 and a low of 39% on day 0 post-treatment. The overall mean ± SEM humidity was 78 ± 2% for the duration of the study. The highest air temperature (34 °C) occurred on days 0, 6, 8, and 15-16, and the lowest air temperature (8 °C) occurred on day 19 post-treatment; the overall mean ± SEM air temperature was 25 ± 0.5 °C for the duration of the study. A total of 30 mm of rainfall occurred during the study, with the highest (11 mm) and second highest (10 mm) amounts occurring 7 and 34 days post-treatment, respectively. Rainfall of ≤2 mm was intermittent and occurred on days 10, 13, 17, 21-23, 26 and 35 post-treatment. The overall mean ± SEM daily rainfall for the duration of the study was 0.8 ± 0.4 mm.
Insect Pests and Natural Enemies
The insect pests observed infesting the F. benjamina hedge included the ficus whitefly, S. simplex; another whitefly that infests ficus in Florida, Tetraleurodes fici Quaintance & Baker; and the weeping ficus thrips, Gynaikothrips uzeli Zimmerman (Table 1).
Various natural enemies were identified managing the ficus whitefly populations that infested the weeping fig. The parasitoids Encarsia protransvena Viggiani and Amitus bennetti Viggiani & Evans were observed after parasitization or emergence from the whitefly pupal case (Figure 1F-H). The lady beetles Curinus coeruleus Mulsant and Harmonia axyridis (Pallas) were observed roaming on the leaves of the F. benjamina hedge, and eggs and larvae of a green lacewing, Chrysopid sp., were observed on the leaves from all plots. The natural enzootic populations of entomopathogenic fungi inhabiting the leaf phylloplane and/or infecting the ficus whitefly were identified as follows: I. fumosorosea; Purpureocillium lilacinum (Thom) Luangsa-ard, Houbraken, Hywel-Jones & Samson; and Lecanicillium (Figure 1I), Fusarium, and Aspergillus species.
Blastospore Deposition
The mean ± SEM deposition of I. fumosorosea blastospores mm⁻² for the initial spray application was higher on the adaxial (391 ± 63) than on the abaxial (168 ± 45) side of the leaves; the mean deposition per leaf was 280 ± 43. For the second spray application, the mean ± SEM deposition of I. fumosorosea blastospores mm⁻² was again higher on the adaxial side (376 ± 135) than on the abaxial side (215 ± 44); the mean deposition per leaf was 295 ± 72 blastospores mm⁻².
Population Density of and Treatment Effects on Whitefly Nymphs
The mean number ± SEM of live nymphs observed on the leaf disks pre-treatment (day 0) was not significantly different (p > 0.05) for PFR-97 (4.8 ± 2.0), Admire Pro (5.0 ± 1.0) and the control (4.4 ± 1.5) (Figure 2). The mean numbers of nymphs observed on the leaf disks pre-treatment and for the duration of the study were not significantly different amongst the treatments. The number of nymphs showed a downward trend over time.
The mean percent mortality of the ficus whitefly nymphs 7 days post-treatment after the 1st PFR-97 application was higher compared to the control treatment (F = 5.16; df = 2, 6; p < 0.05) (Figure 3). On day 14, nymphal mortality for both treatments was significantly higher (F = 14.4; df = 2, 9; p = 0.005) compared to the control. There were no significant differences in percent mortality for the next 2 weeks (F = 0.80; df = 2, 6; p = 0.493; F = 4.16; df = 2, 9; p = 0.074), until 35 days post-treatment. At day 35, nymphal mortality for both treatments was significantly (F = 7.41; df = 2, 9; p = 0.024) higher than the control. When corrected for the control using the Sun-Shepard formula [39], the nymphal percent mortality (89 ± 3.3) in the fungal treatment was 21% higher than that in the chemical treatment (68 ± 3.0) 7 days post-treatment. After that time, the percent mortality for both pesticide treatments was similar for the duration of the observation period.
Effects of Treatment on the Occurrence of Enzootic Fungal Species
The mean percent of enzootic entomopathogenic fungi isolated from nymphs per leaf disk varied over time (Table 2).
When dead nymphs were randomly removed from leaf disks collected pre-treatment (day 0) and then incubated under high humidity, Aspergillus sp. occurred 55%, 50% and 40% and Fusarium sp. 45%, 45% and 60% of the time from PFR-97, Admire Pro, and control treatments, respectively. In addition, in the Admire Pro treatment plots, 5% of nymphs were infected with Lecanicillium sp. On day 14, Aspergillus sp., I. fumosorosea and Fusarium sp. were isolated from 35%, 5% and 60% of the nymphs from PFR-97 treatments prior to the second spray application, respectively. In the control and Admire Pro treatments, 85%, 15%, 0% and 39%, 16%, 45% of the nymphs were infected with Aspergillus sp., P. lilacinum and Fusarium sp., respectively. At the end of the pilot study, nymphs in the PFR-97, Admire Pro and control treatments were infected by Aspergillus sp. 55%, 70%, and 65% of the time, and by Fusarium sp. 30%, 30% and 35% of the time, respectively. Nymphs in the PFR-97 treatment were infected 15% of the time with Lecanicillium sp.
The mean number of CFUs recovered from leaf disk washes varied considerably per fungal species for each treatment over time (Table 3).
Table 3. Fungal species isolated from ficus whitefly nymphs and leaves of Ficus benjamina various days post-application.
Overall, the highest numbers of CFUs days post-application (DPA) for Aspergillus sp., Lecanicillium sp., I. fumosorosea, P. lilacinum, Fusarium sp., Penicillium sp., and Trichoderma sp. isolated from leaves collected for all treatments were 190 ± 45, 90 ± 66, 1010 ± 1010, 750 ± 279, 320 ± 135, 80 ± 65, and 10 ± 10, respectively. In the PFR-97 treatment, the number of CFUs for Aspergillus, Lecanicillium, P. lilacinum and Fusarium species decreased to zero at 15 DPA, except for I. fumosorosea, which increased by 16.8 times from 1 DPA. In the Admire Pro treatment, the numbers of CFUs for Aspergillus sp. and P. lilacinum increased 2.7 and 8 times at 15 DPA compared to 1 DPA, respectively, while Fusarium sp. decreased by 1.5 times; none of the other fungal species were isolated from leaves collected at either 1 or 15 DPA. In the untreated control, the numbers of CFUs isolated for Aspergillus and Fusarium species decreased by 2 and 4.3 times from leaves collected at 15 DPA compared to 1 DPA, respectively, while P. lilacinum decreased to zero. No CFUs of I. fumosorosea, Lecanicillium sp., Penicillium sp. or Trichoderma sp. were isolated from leaves collected at 15 DPA in the untreated plots. Among leaves collected in the PFR-97 plots, the numbers of CFUs of Aspergillus sp., P. lilacinum, Fusarium sp. and Penicillium sp. increased from 0 at 15 DPA to 20 ± 11, 20 ± 20, 30 ± 20 and 80 ± 65 at 28 DPA, respectively; however, I. fumosorosea decreased by 50.5 times to 20 ± 11. From leaves collected in the Admire Pro plots at 28 DPA compared to 15 DPA, the numbers of CFUs of Aspergillus sp., P. lilacinum and Fusarium sp. decreased by 6.3, 2.7, and 3.1 times. Also, CFUs of Lecanicillium sp., I. fumosorosea and Penicillium sp. were isolated from leaves collected at 28 DPA. In the untreated control plots, CFUs of all fungal species except Penicillium sp. were isolated from leaves collected at 28 DPA. Trichoderma sp. CFUs were observed only in the untreated control plots at 28 DPA.
Isaria Fumosorosea: Ecological Assessment
No CFUs mm⁻² of a naturally occurring enzootic population of I. fumosorosea were observed on the leaf surface in any of the plots prior to spraying (Figure 4).
In the PFR-97 plots, the number of CFUs pre-treatment was 0, and it then increased 1 day post-treatment to 81 ± 39 CFUs mm⁻². After 7 days, the number of CFUs mm⁻² increased >5.5 times to 3130 ± 1880, and on day 14, no CFUs mm⁻² were isolated from the leaves collected in the PFR-97 plots. The mean number of CFUs mm⁻² for I. fumosorosea was 101 ± 101 the day after the second fungal spray application (day 15), and it increased to 150 ± 150 one week after application (day 21). On day 28, the mean number of CFUs isolated from leaf washes in the PFR-97, Admire Pro, and control plots was 220 ± 170, 80 ± 80 and 7 ± 7, respectively. No CFUs mm⁻² of I. fumosorosea were isolated from leaves collected in any of the plots after that time.
Effect of Treatments on Parasitism Rate of Parasitoids
There was no significant effect of treatment (F = 2.69; df = 2, 6; p = 0.1466) on the parasitism rate between the treatments and the control observed on leaves sampled pre-treatment (day 0) (Table 4). The total mean percent of whitefly nymphs parasitized per sampling day was not significantly different on day 14 (F = 3.97; df = 2, 6; p = 0.0798) or day 35 (F = 1.00; df = 2, 6; p = 0.4219) amongst treatments for the duration of the study.
The total percentage of mortality due to either infection by EPF plus other factors (biotic, i.e., septicemia and predation; abiotic, i.e., desiccation) and the nymphs parasitized per sampling day varied over the 35-day observation period of this study (Figure 5). The total mean percent mortality of the whitefly nymphs on day 7 due to fungal entomopathogens plus other factors (FE+) and parasitization was 79.4 and 20.6, 97.1 and 2.9, and 98.5 and 1.5 for the control, PFR-97 and Admire Pro treatment plots, respectively. On day 14, the percent mortality of the nymphs parasitized was higher (12.7%) in the control treatments compared to the fungal (5.2%) and neonicotinoid (1.8%) treatments. On day 21, the total percentage mortality due to parasitization for the control and fungal treatments was similar, being 3.8% and 3.1%, respectively, at least twice that observed in the Admire Pro treatment plots. The percent mortality due to parasitization in the control plots remained the same for the rest of the observation period of the pilot study. There was no parasitization of nymphs observed on the leaves collected in the PFR-97 and Admire Pro treatment plots on day 35; therefore, 100% of the whitefly nymph mortality assessed in the treatment plots was due to infection by the fungal entomopathogens present, which included the added I. fumosorosea propagules on the leaf surfaces, plus other factors. Of the fungal entomopathogens, mortality of the whitefly nymphs in the fungal treatment was primarily due to the dual application of PFR-97 containing I. fumosorosea. In the neonicotinoid treatment, whitefly nymphal mortality was also due to the toxic lethal effect of the systemic insecticide present inside the leaves.
Discussion
The fungal spray application with I. fumosorosea blastospores had a higher efficacy for managing the whitefly population for the first 7 days in comparison to the untreated control and the neonicotinoid treatment plots; however, after this time, the percent mortality of the ficus whitefly population did not differ significantly between either treatment and the control until day 35. Mortality of nymphs after the second application (day 35) in the fungal treatment plots was significantly higher compared to the untreated control, suggesting that two applications of the fungus were compatible with the natural enemies present and helped suppress the whitefly population for 14 days. In the fungal treatment, blastospore deposition and spray coverage were less uneven across the two sides of the leaves for the initial application than for the second spray. The concentration (10⁷ spores mL⁻¹) or deposition (~100-400 spores mm⁻²) used in this study was comparable to that used by other researchers for controlling aleyrodid insect pests [22,33,40,41]. In addition, the PFR-97 treatment applied against the ficus whitefly was found to be compatible with its natural biocontrol agents, including the parasitoids E. protransvena and A. bennetti. This compatibility finding is consistent with other studies of similar plant-pest/parasitoid-pathogen-predator or multi-trophic interactions involving fungal entomopathogens, including I. fumosorosea, and aleyrodid pests [22,33]. Parasitism of the ficus whitefly by E. protransvena and A. bennetti did occur in all the treated plots over time. This result suggests that the augmentation of I. fumosorosea or a neonicotinoid can be used compatibly with the observed parasitoids for management of the ficus whitefly under field conditions.
Throughout the duration of the study, a common trend became apparent in which the number of parasitized nymphs decreased over time in all the plots. This could be accounted for by a lower number of nymphs being available for E. protransvena or A. bennetti to parasitize due to leaf drop. Gerling et al. [42] indicated that all Encarsia sp. parasitize and emerge from dead 4th instar whitefly hosts but attack mainly the 2nd-4th host instars. In contrast, A. bennetti prefers to oviposit in the 1st and 2nd nymphal instars of another whitefly, Bemisia sp., but can also attack the 3rd and 4th instars [43][44][45]. However, this study did not determine the rates of parasitization per nymphal instar of the ficus whitefly preferred by each parasitoid, but only the effect of the enzootic fungal entomopathogens, including the application of I. fumosorosea and a neonicotinoid, on the overall parasitization by both parasitoids combined. The ratio of parasitization and the preference of each parasitoid for particular nymphal hosts of the ficus whitefly are unknown and warrant further research. Therefore, due to the lack of older or younger nymphal instars available because leaf drop occurred >14 days after spraying, the overall parasitization rate per treatment plot for the combined parasitoids would naturally decrease over time. In addition, as the fungal entomopathogens in both treatment plots increased the total percent mortality of the whitefly nymphs over time compared to the control, the number of susceptible nymphal hosts not infected by the fungal entomopathogens would subsequently decrease. Fransen et al. [26] reported that E. formosa could differentiate between greenhouse whitefly nymphs infected with the fungal entomopathogen A. aleyrodis and preferred to oviposit only in healthy, uninfected insects when given a choice. All these factors could have contributed to the rapid decrease in the number of nymphs being parasitized in both treatment plots over time.
It is apparent from the current study that neither treatment had a negative impact on enzootic entomopathogenic fungal growth and the subsequent infection of ficus whitefly nymphs over time. In fact, throughout this field pilot study, ~95-100% of the whitefly nymph mortality assessed in the treatment plots was due to natural causes such as fungal entomopathogens, predators and other factors, with ~5% or less being due to parasitization. The fungal species isolated from the mycosed nymphs were assumed to have caused the mortality of the insects; however, this hypothesis was not confirmed and warrants further elucidation. From the samples collected in this study, Avery et al. [2] isolated and recorded the following hypocrealean fungi: I. fumosorosea, P. lilacinum, and Aspergillus, Lecanicillium, and Fusarium species from ficus whitefly, S. simplex, nymphs. In addition, Penicillium and Trichoderma fungal species were identified and recorded from the leaf wash samples of this study. Torres-Barragán et al. [15] isolated Aspergillus, Penicillium, Paecilomyces, Lecanicillium, Aschersonia, and Fusarium species from insects collected, including whiteflies, in the agricultural area in Mexico. In another study, Scorsetti et al. [46] isolated I. fumosorosea, I. javanica and Lecanicillium species from whiteflies collected on organic and conventional horticultural crops in Argentina.
In this pilot study, the two fungal species with the highest percent occurrence infecting the ficus whitefly nymphs over time were Aspergillus and Fusarium. These fungal species are most likely saprophytic; however, a few authors have evaluated them as potential biocontrol agents for controlling various whitefly insect pest species [47][48][49][50]. Whether these fungal species were entomopathogenic to the whitefly is unknown and requires further testing. The other fungi, I. fumosorosea and Lecanicillium species, not isolated as often from the whitefly nymphs, are common entomopathogenic fungi used as fungal biopesticides [51] for controlling whitefly in many crop systems [40,41,46,52]. In addition, I. fumosorosea and Lecanicillium sp. are also compatible with predators and parasitoids used in IPM programs for control of whitefly insect pests [22,24,[31][32][33]. Another fungal species, P. lilacinum, which is more commonly found in the soil and used for the biological control of nematodes, was recorded from the leaf wash samples. Although it is rare for this fungus to infect the ficus whitefly, P. lilacinum has recently demonstrated much potential as a biocontrol agent of the greenhouse whitefly [41,53].
Although Aspergillus and Fusarium species were isolated from the majority of dead insects removed from the collected leaves, Penicillium and Trichoderma species were only isolated and identified from plated leaf wash samples. In addition, I. fumosorosea CFUs were counted on leaf wash sample plates after spraying, but the increase after the second application was much smaller compared to the first. In a concurrent laboratory study, Avery et al. (unpublished data) found in vitro that Aspergillus, Fusarium, Penicillium and Trichoderma species were antagonistic and/or pathogenic to I. fumosorosea. Furthermore, the germination and growth of I. fumosorosea could have been inhibited by the presence of secondary plant compounds produced by F. benjamina, which have antimicrobial properties [54][55][56]. Based on the in vitro bioassay results and these possible antimicrobial properties, we speculate that, due to interspecific and intraspecific competition with the antagonistic fungal pathogens present on the leaf phylloplane, the fungal spores or propagules of I. fumosorosea may have been inhibited from germinating and growing, which would subsequently affect the efficacy of this entomopathogenic fungus contained in PFR-97 after application. However, these hypotheses need further evaluation. Competition between entomopathogenic fungal species is now considered an important biotic aspect to understand in order to increase the efficacy of biopesticides for managing arthropod pests [19,20,57].
Based on the CFUs isolated, it was evident that after the I. fumosorosea blastospores were applied, their numbers increased from 0 to 81 CFUs mm⁻² on day 1 and then >5.5-fold to 3130 by day 7 post-treatment. It has been observed that I. fumosorosea blastospores, after application and contact with a leaf surface, will germinate and produce conidia in ~7 days under high humidity (>70% RH) and moderate temperatures (~25 °C) [58]. In addition, the environmental conditions, i.e., the mean temperature and RH, which remained at 24.4 °C and 75.6% RH, respectively, throughout this 35-day study, were conducive to the growth of I. fumosorosea [27]. For the second application, the CFUs mm⁻² were slightly higher (101) than for the initial application, but increased only ~1.5 times to 150 by day 7 post-treatment. However, the mean number of I. fumosorosea CFUs mm⁻² isolated from the leaf washes diminished to zero 14 days and 21 days post-treatment for the 1st and 2nd applications, respectively. The lack of CFUs mm⁻² observed may be accounted for by any of the following, or a combination: (1) increased rainfall during that time, (2) biodegradation of conidia due to exposure to ultraviolet rays or high diurnal temperatures [59,60], (3) intra- and interspecific competition with other fungal pathogens [57], (4) potential inhibition of fungal germination and hyphal growth due to the presence of secondary compounds on the leaf surface [54][55][56], (5) lack of nymphal density to trap the spores and keep them from being washed off the waxy leaf surface, and (6) removal of inoculum due to extreme leaf drop.
It is interesting that 14 days after the second application, I. fumosorosea CFUs mm⁻² were present in all the treatments. This suggests that the conidia were being spread to all the treatments by wind, rain or other organisms present in the ecosystem, e.g., pest insects, predators and parasitoids. Avery et al. [61] found that the fungal blastospores and conidia of I. fumosorosea could be spread from one leaf to another by a single adult Asian citrus psyllid, Diaphorina citri. Therefore, it is possible that if whitefly adults emerged and became contaminated with conidia on the leaf surface, they could potentially disperse and deposit I. fumosorosea spores and/or propagules onto other plants. The ladybird beetles H. axyridis and C. coeruleus observed in the PFR-97 plots could have dispersed the fungal spores as they searched for whiteflies in the other treatment plots. For instance, Sànchez Barahona et al. [35] observed that the ladybird beetle Thalassa montezumae Mulsant was able to disperse fungal spores or propagules of I. fumosorosea as it roamed and fed on green croton scale insects infesting croton plants previously sprayed with PFR-97. Both adult whiteflies and their specific parasitoids may also have been involved in horizontal transmission of the spores or propagules. Fransen and van Lenteren [26] found that the parasitoid E. formosa was responsible, to a limited extent, for transmitting the fungal spores of Aschersonia aleyrodis after probing greenhouse whitefly nymphs. This hypothesis is interesting and warrants further research. The use and role of non-target organisms in dispersing entomopathogenic fungi in integrated pest management systems is reviewed in Skinner et al. [62].
Conclusions
Based on the current study, it can be concluded that the endemic population of predators and parasitoids and the enzootic population of fungal entomopathogens must be considered part of a multi-trophic ecosystem, and that there may be an interaction after the application of any pesticide. The fungal biopesticide PFR-97 and the neonicotinoid Admire Pro were overall compatible with the natural enemies and more effective in managing the invasion of the ficus whitefly, S. simplex, compared to the untreated control. Therefore, it is important to assess the long-term impact that the application of any pesticide will have on the ecosystem when managing this pest, especially the ecological impact on all the natural enemies, which include the enzootic fungal entomopathogens. Although the Ficus hedge is a man-made ecosystem, it is still very important that this ecological concept be considered when determining the best long-term, sustainable strategy employed by the homeowner for managing the ficus whitefly. Therefore, in the landscape it is extremely important to use the appropriate insecticides, methods, and timing in order to obtain the best control with the least detriment to the natural enemies or the environment.

Funding: This study was funded by grants from the Floriculture and Nursery Research Initiative #58-6618-5-0248, "Management of Whitefly Biotypes on Floral and Nursery Crops" and partial support from CSREES/NIFA, CRIS #6618 22000 037 00D IPM Technologies for Subtropical Insect Pests.
"year": 2019,
"sha1": "64aced34914d3bc0ef5dc25035cfbbf457a0e99c",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/jof5020036",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "64aced34914d3bc0ef5dc25035cfbbf457a0e99c",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Performance Analysis of a Spectral-Efficient High-Speed Hybrid PDM-MDM Enabled Integrated MMF-FSO Transmission
This article proposes a novel 2 × 4 × 10 Gbps hybrid multi-mode fiber (MMF)/free-space optical communication (FSOC) system based on integrating two multiplexing techniques: polarization division multiplexing (PDM) and mode division multiplexing (MDM). Two polarization states are used; each state carries four different Hermite-Gaussian (HG) modes, and each HG mode carries 10 Gbps of data. The performance is analyzed by considering a fixed length of MMF cable with a varying FSOC range in the absence and presence of different atmospheric turbulences (weak turbulence (WT) and strong turbulence (ST)). It is also evaluated for a fixed FSO link range under ideal scintillation with different MMF lengths. Moreover, it is investigated under the rainy weather of Alexandria city in Egypt, Pune city in India, and Jeddah city in the Kingdom of Saudi Arabia (KSA). The link distance, beam angle, eye diagrams, and bit error rate are the parameters used for evaluating the system's performance. The results of simulating the model using OptiSystem reveal an 80 Gbps overall transmission capacity at 1500 m (100 m MMF + 1400 m FSO link) in the presence of ST, while for a fixed FSO link (100 m), the achievable transmission distance is 350 m. Overall ranges of 1300 m, 1200 m, and 1600 m are achieved for Alexandria, Pune, and Jeddah, respectively.
I. INTRODUCTION
Free-space optical communication (FSOC) links are important in data transfer applications. FSOC requires line-of-sight (LOS) transmission and uses the atmosphere as a medium for transferring data [1]. Immunity to electromagnetic wave interference, low power consumption, high-speed data transmission, implementation in urban areas, a license-free spectrum, and large bandwidth are advantages of FSOC [2], [3]. These advantages make FSOC a good solution to the problem of data traffic. However, the major factor that degrades the performance of the transmission link in an FSOC system is atmospheric turbulence due to variations in temperature [5], [6]. Nowadays, the improvements in optical fiber communication (OFC) technology have become remarkable. The exponential growth of internet technology and the variety of applications, such as gaming systems, have prompted increasing demand for transmission quality and data capacity [7]. Although several techniques are used in OFC, such as wavelength division multiplexing (WDM) [8], [9], [10], time division multiplexing (TDM) [11], [12], and frequency division multiplexing (FDM) [13], the data transmission rates and spectral efficiency are not satisfactory. Existing optical data transmission technologies using OFC, such as single-mode fibers (SMFs), will reach their capacity limits [14], [15], according to Shannon's limit [16]. Therefore, the need for multimode fiber (MMF) cables has increased, as they provide high bandwidth.
Thus, using MMF with FSOC will enhance the transmission capacity and make it reachable everywhere. Furthermore, for more capacity enhancement, different multiplexing techniques are used in FSOC systems, such as polarization division multiplexing (PDM) [17], orthogonal FDM [18], and mode division multiplexing (MDM) [1], [19].
PDM is a multiplexing technique in which one or more polarization signals are combined, leading to the generation of a new parallel state. PDM enhances the transmission data capacity by allowing distinct signals to transmit their data in orthogonal beams using the same wavelength [20], [21].
MDM is a recent technique used in optical communication networks, either MMF or FSOC, due to its ability to provide high data transport capacity [22]. It uses eigenmodes for transmitting independent channels at the same time. Modes like Laguerre-Gaussian [23], orbital angular momentum [1], [24], and Hermite-Gaussian [25] are used in MDM techniques for capacity enhancement.
The Hermite-Gaussian (HG) beams are solutions of the paraxial wave equation in the Cartesian coordinate system; together with the Gaussian beam, they form a complete and orthogonal set of functions known as modes of propagation [26].
Recently, there have been several studies using either MDM or PDM in FSOC transmission only, in OFC transmission only, and in hybrid OFC/FSOC transmission. In [27], PDM is used in an FSOC transmission system: ten channels coded by the random diagonal code of a spectral amplitude coding optical code division multiple access (SACOCDMA) system are transmitted on two polarization states, and a transmission capacity of 50 Gbps is achieved. In [28], six channels assigned the diagonal permutation shift code of a SACOCDMA system are used in a hybrid PDM/FSOC system; the conducted results reveal a capacity enhancement of 60 Gbps. In [1], MDM is used in an FSOC transmission system: two distinct Laguerre-Gaussian (LG) beams are used, each carrying three different channels encoded by the fixed right shift code of a SACOCDMA system, and an overall transmission capacity of 60 Gbps is achieved. Four different LG beams are used in MDM/FSOC transmission in [24], and a 40 Gbps transmission data capacity is achieved. In [29], two different modes are used in an MDM/FSOC transmission communication system, and a successful transmission of 10 Gbps is reported.
In [30], MDM is used in hybrid MMF/FSOC transmission: two distinct modes achieve a transmission capacity of 80 Gbps over a 100 m MMF length and a 2070 m FSOC range. In [31], MDM in an OFC system using code division multiplexing is proposed; the linearly polarized (LP) modes are used, and the results show that the system can transmit up to a 42 km distance with a transmission capacity of 80 Gbps. In [32], the LP modes are used in an MDM passive optical network system with OCDMA. Two fibers are used, an SMF and a two-mode fiber (TMF), and the obtained results reveal that the system achieves a transmission capacity of 80 Gbps with a 40 km SMF length and a 2 km TMF length.
In this article, for the first time to our knowledge, we use PDM with four HG modes in a hybrid MMF/FSOC system to enhance the transmission capacity. The main contributions are:
• Introducing a new high-speed MMF-FSOC transmission system based on combining PDM with MDM using four HG modes (HG00, HG01, HG10, and HG11).
• Investigating the effect of beam divergence and atmospheric turbulence, which are major factors that degrade the FSOC link.
• Evaluating the performance of the suggested model based on actual meteorological data from three cities located in different countries: Alexandria in Egypt, Pune in India, and Jeddah in KSA.
In this article, a new hybrid MMF/FSOC system based on combining PDM with MDM is proposed for capacity enhancement. Two polarization states are used. The first is used for transmitting the optical signals that carry the information data on the X-polarization (X-PL) state; these signals are transmitted using four different HG beams (HG00, HG01, HG10, and HG11), each of which transports 10 Gbps. The second polarization state is used for transmitting the information signal on the Y-polarization (Y-PL) state, also using four different HG beams. Thus, a capacity enhancement is achieved, as the total transmitted data rate is 2 PDM × 4 HG modes × 10 Gbps = 80 Gbps. The performance is investigated for two cases. In the first case, a fixed MMF cable length of 100 m is used with a varying FSOC range; here, the effects of different beam divergences and of turbulence, such as weak turbulence (WT) and strong turbulence (ST), on the performance of the proposed model are investigated. In the second case, a fixed FSOC range of 100 m is used while the length of the MMF is varied. Further, the performance is evaluated using real meteorological data from different cities in different countries.
The rest of the article is organized as follows. Sections II and III give brief explanations of the HG modes and the MMF cable. A description of the proposed PDM/MDM-based MMF/FSOC transmission is given in Section IV, followed by the performance analysis in Section V. Sections VI and VII present the simulation results with discussion and the main conclusions, respectively.
II. HG MODES
Lasers are used in different areas of optics. The transverse mode (TM) is the field distribution that is orthogonal to the direction of the laser's propagation; when there are different spatial modes, there are different TMs corresponding to them [33]. The HG modes are a full collection of spatial modes with orthogonal bases, and any spatially distributed pattern can be expanded using the HG mode basis [34]. They are higher-order modes of the Gaussian beam, generated from the higher-order solutions of the paraxial equation with Hermite polynomials in rectangular coordinates [35], [36], [37], [38]. The HGmn modes are distinguished by two indices: m, indicating the number of nodes along the horizontal axis, and n, indicating the number of nodes along the vertical axis. In the Cartesian coordinate system, the electric field of the HG beam, E_HGmn(x, y, z), can be expressed as [39]

E_HGmn(x, y, z) ∝ H_m(√2·x/w) · H_n(√2·y/w) · exp(−(x² + y²)/w²) · exp(−jk(x² + y²)/(2R(z)))   (1)
where H_m(·) and H_n(·) are the Hermite polynomials of order m in the x direction and order n in the y direction, respectively, k is the wave number, equal to 2π/λ, where λ is the optical wavelength, R(z) is the beam curvature, and w indicates the beam spot size. The HG beams are used in FSOC system applications due to their ability to enhance capacity using MDM [40]. In this article, the vertical-cavity surface-emitting laser (VCSEL) component from OptiSystem software version 19 is used for generating the different HG modes. The HG modes considered in this study are HG00, HG01, HG10, and HG11, and their intensities are given in Fig. 1. For more capacity enhancement, PDM is used with the HG modes. Fig. 2 illustrates how PDM combines with MDM using the HG01 beam: using the same HG01 beam, two information data streams are sent on a single wavelength (λ₁). The first data stream is sent on the X-PL signal, while the second is sent on the Y-PL signal. Both signals are then multiplexed using a PDM multiplexer and sent together at the same time.
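As an aside, the transverse HG profile of Eq. (1) is straightforward to evaluate numerically. The sketch below is a minimal Python illustration using SciPy's physicists' Hermite polynomials; the spot size w, grid extent, and resolution are arbitrary illustrative choices, not parameters from this article.

```python
import numpy as np
from scipy.special import hermite  # physicists' Hermite polynomials H_m

def hg_intensity(m, n, w=1.0, grid=256, extent=3.0):
    """Normalized transverse intensity |E_HGmn(x, y)|^2 of Eq. (1),
    evaluated at the beam waist (flat phase front)."""
    x = np.linspace(-extent * w, extent * w, grid)
    X, Y = np.meshgrid(x, x)
    field = (hermite(m)(np.sqrt(2) * X / w)
             * hermite(n)(np.sqrt(2) * Y / w)
             * np.exp(-(X**2 + Y**2) / w**2))
    intensity = np.abs(field) ** 2
    return intensity / intensity.max()

# The four modes used in the article: HG00, HG01, HG10, HG11.
# An HGmn mode shows (m + 1) x (n + 1) intensity lobes, as in Fig. 1.
for m, n in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    I = hg_intensity(m, n)
    print(f"HG{m}{n}: grid {I.shape}, lobes = {(m + 1) * (n + 1)}")
```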
III. MMF CHANNEL
Recently, the development of OFC has enabled different types of optical fibers, such as SMF and MMF, as shown in Fig. 3. Optical fiber is classified into two types according to the number of modes. The SMF has a small core diameter (8-10 μm), so it allows only one mode to propagate in its core. On the contrary, the MMF has a large core diameter (50-100 μm), so multiple modes can propagate along the fiber, leading to capacity enhancement [41].
Accordingly, in this work, the MMF cable is used, in which the refractive index n(r) is expressed as [42]

n(r) = n∞ · [1 − 2Δ·r^(α_P)]^(1/2)   (2)

where n∞, Δ, r, and α_P are the maximum refractive index of the core, the profile height parameter, the normalized radial distance from the center of the core, and the profile alpha parameter of n(r), respectively. At the input of the MMF, the total incident spatial electric field, E^i_qd(r, θ, t), is expanded over the fiber modes [42], where q and d are the azimuthal and radial mode numbers, respectively, c_qd is the power coupling coefficient, and e_qd is the transverse electric field of the HG modes.
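A minimal numerical sketch of the graded-index profile in Eq. (2) follows; the values of n∞, Δ, and α_P are illustrative assumptions (α_P = 2 gives the common parabolic profile), not the parameters used in the simulation.

```python
import numpy as np

def graded_index_profile(r, n_max=1.48, delta=0.01, alpha_p=2.0):
    """Graded-index MMF core profile per Eq. (2); r is the normalized
    radial distance (0 at the core center, 1 at the core-cladding edge).
    n_max, delta, and alpha_p are illustrative, assumed values."""
    r = np.asarray(r, dtype=float)
    return n_max * np.sqrt(1.0 - 2.0 * delta * r**alpha_p)

r = np.linspace(0.0, 1.0, 5)
print(graded_index_profile(r))  # index decreases from n_max toward the cladding
```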
As for the output electric field, it is given in [42] in terms of the time delay τ_w and the propagation constant β_w of the degenerate mode group w. The attenuation, γ_MMF, is expressed in [43] in terms of γ_MMF,o, I_p, and η, which are the basic attenuation experienced by all modes, the p-th order modified Bessel function of the first kind, and a scaling factor, respectively.
IV. PROPOSED PDM/MDM BASED MMF/FSOC TRANSMISSION
Fig. 4 shows the schematic layout of the MMF-FSOC system based on combining PDM with four HG modes. It consists of the central office (CO), the MMF, the FSO link, an optical wireless unit, and the users. The CO contains the information data that is to be transmitted to the users' premises. The data is first modulated onto optical wavelengths and then combined before being delivered to the propagation channels. Here, two channels are assumed, an MMF cable and an FSOC link, to be able to reach areas where the implementation of OFC is difficult, like mountains. Further, an optical splitter is used to split the data, and then the data reaches its destination. As any communication system consists of three main parts, the transmitter, the channel, and the receiver, our proposed PDM/MDM-based MMF/FSOC transmission has the same three parts, as shown in Fig. 5. At the transmitter, a VCSEL source centered at 1550 nm is used to generate four HG modes (HG00, HG01, HG10, and HG11), first on X-PL by setting the azimuth angle to 0° and second on Y-PL by setting the azimuth angle to 90°. Each HG mode carries 10 Gbps of data. This data is generated from a pseudo-random bit sequence generator (PRBSG) and then fed into a non-return-to-zero (NRZ) on-off keying electrical modulator. To transmit this data through the optical HG modes, it must first be converted to an optical signal using a Mach-Zehnder modulator (MZM). The overall 40 Gbps transmitted on the four different HG modes using either X-PL or Y-PL are multiplexed using an MDM multiplexer. A PDM combiner (PC) is then used to combine the data signals transmitted on X-PL and Y-PL before transferring them to the propagation channel. Here, two channels are used with two cases: in the first case, a fixed 100 m length of MMF cable is used with varying propagation ranges, while in the second case, increasing lengths of MMF cable are considered with a fixed FSO propagation range of 100 m. At the receiver, the received signal is first split by a PDM splitter (PS) into an X-PL signal and a Y-PL signal. The received signal on either X-PL or Y-PL is further separated into four HG modes through an MDM demultiplexer. A photodetector (PD) is used to detect the required information signal corresponding to each HG mode and to convert the optical signal to an electrical signal. A low pass filter (LPF) is then used for filtering the signal.
V. PERFORMANCE ANALYSIS
In the following analysis, B_op and ℜ denote the optical bandwidth and the responsivity of the PD, respectively. The parameter S_R indicates the received power and is expressed as [17]

S_R = S_T · [d_R / (d_T + Φ·L)]² · 10^(−β·L/10)

where S_T is the transmitted power, d_T and d_R are the transmitter and receiver aperture diameters, respectively, Φ indicates the beam divergence angle, L is the FSO range, and β is the atmospheric attenuation. As there is a variation in temperature during the propagation of the information signal in the FSOC channel, atmospheric turbulence arises, varying from weak turbulence (WT) to strong turbulence (ST). Models like gamma-gamma, log-normal, and K-distribution are used for modelling the channel under the effect of atmospheric turbulence [6], [45]. Under the effect of WT and CA weather conditions, the log-normal model is used [6], while for ST, the K-distribution model is used [45]. As for the gamma-gamma distribution, it is commonly used as it covers both WT and ST [46]. Accordingly, the gamma-gamma model is considered in this study.
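The received-power expression above can be checked with a short link-budget script. The sketch below uses illustrative values for the apertures, divergence, range, and attenuation (they are assumptions, not the Table I values) and converts explicitly between dBm and mW.

```python
import math

def received_power_dbm(p_t_dbm, d_t=0.05, d_r=0.2, phi_mrad=0.25,
                       l_m=1400.0, beta_db_km=0.43):
    """FSO link budget sketch for S_R = S_T [d_R/(d_T + Phi*L)]^2 10^(-beta*L/10).
    Aperture diameters in m, divergence in mrad, range in m, attenuation in
    dB/km; all default values are illustrative assumptions."""
    phi = phi_mrad * 1e-3                       # divergence in rad
    geometric = (d_r / (d_t + phi * l_m)) ** 2  # geometric spreading loss
    atmo_db = beta_db_km * (l_m / 1e3)          # attenuation over the path
    p_t_mw = 10 ** (p_t_dbm / 10)
    p_r_mw = p_t_mw * geometric * 10 ** (-atmo_db / 10)
    return 10 * math.log10(p_r_mw)

# A wider divergence angle spreads the beam and lowers the received power,
# matching the degradation with beam divergence reported in Section VI-A.
for phi in (0.2, 0.35, 0.5):
    print(f"Phi = {phi} mrad -> S_R = {received_power_dbm(10.0, phi_mrad=phi):.1f} dBm")
```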
In the gamma-gamma model, the normalized intensity of light is defined by α and γ, which are the large-scale and small-scale eddy parameters, respectively. The probability density function is given as [47], [48]

$$P_{DF}(I_s) = \frac{2(\alpha\gamma)^{(\alpha+\gamma)/2}}{\Gamma(\alpha)\Gamma(\gamma)}\, I_s^{\frac{\alpha+\gamma}{2}-1}\, K_{\alpha-\gamma}\!\left(2\sqrt{\alpha\gamma I_s}\right) \qquad (10)$$

where K_j(·) and Γ(·) are the j-th order modified Bessel function of the second kind and the Gamma function, respectively, and σ_R² is the Rytov variance, which differs according to the value of the refractive index structure parameter C_n² and is expressed as [49]

$$\sigma_R^2 = 1.23\, C_n^2\, K^{7/6}\, R_{FSO}^{11/6} \qquad (11)$$

where K is the wave number, equal to 2π/λ. The signal-to-noise ratio (SNR) is expressed as [17], [28]

$$SNR = \frac{I^2}{\sigma_{Sh}^2 + \sigma_{th}^2} \qquad (12)$$

where σ_Sh² is the shot noise, equal to 2eυ⟨I⟩ (e is the charge of the electron and υ is the electrical bandwidth), and σ_th² refers to the thermal noise, expressed as [28]

$$\sigma_{th}^2 = \frac{4 k_B T \upsilon}{r_L} \qquad (13)$$

where k_B and T are, respectively, the Boltzmann constant and the absolute temperature of the receiver, and r_L is the load resistance. Finally, the BER is given as [28]

$$BER = \frac{1}{2}\,\mathrm{erfc}\!\left(\frac{\sqrt{SNR}}{2\sqrt{2}}\right) \qquad (14)$$

where erfc refers to the complementary error function.
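The turbulence-to-BER chain above can be traced numerically. The sketch below implements the Rytov variance, the gamma-gamma parameters (using the standard plane-wave expressions, an assumption here since the paper does not print them), and an OOK BER from shot- and thermal-noise-limited SNR; all numeric inputs are illustrative.

```python
import math

K_B, E_CH = 1.380649e-23, 1.602176634e-19  # Boltzmann constant, electron charge

def rytov_variance(cn2, wavelength_m, l_fso_m):
    k = 2 * math.pi / wavelength_m  # optical wave number
    return 1.23 * cn2 * k ** (7 / 6) * l_fso_m ** (11 / 6)

def gamma_gamma_params(sigma_r2):
    # Standard plane-wave expressions for the large-/small-scale eddy parameters
    alpha = 1 / (math.exp(0.49 * sigma_r2 / (1 + 1.11 * sigma_r2 ** 1.2) ** (7 / 6)) - 1)
    gamma = 1 / (math.exp(0.51 * sigma_r2 / (1 + 0.69 * sigma_r2 ** 1.2) ** (5 / 6)) - 1)
    return alpha, gamma

def log_ber_ook(i_ph_a, bw_el_hz, temp_k=300.0, r_load_ohm=50.0):
    sigma_sh2 = 2 * E_CH * bw_el_hz * i_ph_a               # shot noise variance
    sigma_th2 = 4 * K_B * temp_k * bw_el_hz / r_load_ohm   # thermal noise variance
    snr = i_ph_a ** 2 / (sigma_sh2 + sigma_th2)
    ber = 0.5 * math.erfc(math.sqrt(snr) / (2 * math.sqrt(2)))
    return math.log10(ber)

sigma_r2 = rytov_variance(cn2=5e-14, wavelength_m=1550e-9, l_fso_m=1400)
print("Rytov variance (ST, 1400 m):", round(sigma_r2, 3))
print("alpha, gamma:", [round(v, 2) for v in gamma_gamma_params(sigma_r2)])
print("log(BER) at 10 uA photocurrent, 7.5 GHz:", round(log_ber_ook(10e-6, 7.5e9), 2))
```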
VI. RESULTS AND DISCUSSION
The proposed PDM/MDM based MMF/FSOC transmission system using 4 HG modes is simulated using Optisystem Software ver. 19 with the parameters given in Table I [4], [17], [27], [30]. The performance of the proposed model is evaluated in terms of beam divergence, MMF length, FSO propagation range, eye diagrams, and log(BER). The simulation results are divided into five parts. The effect of different beam divergence angles on system performance is discussed in the first part. In the second part, the performance is investigated for the proposed PDM/MDM based MMF/FSOC transmission at a fixed MMF cable length of 100 m [4], [17], [27], [30] while varying the FSO range under CA weather conditions. The third part shows the impact of atmospheric turbulence on the performance of the received information signal, followed by the effect of different MMF lengths on the performance of the proposed model at a constant FSO range of 100 m in the fourth part. Finally, the fifth part shows the performance of the proposed model under rainy weather for Alexandria city in Egypt, Pune city in India, and Jeddah city in KSA.
A. Effect of Beam Divergence on the Performance of the Proposed System
Beam divergence affects the information signal during transmission in the FSO channel. Ideal scintillation (no atmospheric turbulence), a fixed MMF length of 100 m, and a fixed FSO range of 800 m are considered. Fig. 6 shows the log(BER) performance of the proposed PDM/MDM based MMF/FSOC system versus different angles of beam divergence. The performance of users 1-4, transmitted on X-PL using four different HG modes, is shown in Fig. 6(a), while the performance of users 5-8, transmitted on Y-PL using the same HG modes, is displayed in Fig. 6(b). It is clear that as the divergence angle increases, the performance degrades for all users. As an example, at a beam divergence angle of 0.2 mrad, the log(BER) of user 1 is −17.19, which increases to −6.6 when the beam divergence angle is increased to 0.5 mrad. Also, the large eye openings for all users transmitted using four HG modes on two different polarization states at a beam divergence angle of 0.5 mrad indicate reliable received data. Table II tabulates the log(BER) values for all users at a beam divergence angle of 0.5 mrad.
B. Effect of Different FSO Propagation Range on the Performance of the Proposed System
In this part, a fixed MMF of 100 m, CA weather conditions, ideal scintillation, and various FSO ranges from 800 m to 1400 m are considered. The measured log(BER) for users 1, 2, 3, and 4, transmitted using the HG00, HG01, HG10, and HG11 modes, respectively, on the X-PL signal of the proposed model versus FSO range is given in Fig. 7(a), while Fig. 7(b) depicts the log(BER) for the other users (5-8) using the same four HG modes transmitted on Y-PL. One can notice that as the FSO range increases, the log(BER) also increases, and users 1 and 5, which use the HG00 mode while propagating in the FSO channel, achieve the best performance compared to other users using higher-order HG modes. Users 4 and 8, which propagated using HG11, achieved a higher log(BER) than users who propagated using lower-order HG modes. The log(BERs) of users 1, 4, 5, and 8 are −8, −7.6, −8.49, and −7.94, respectively, at the FSO link of 1400 m. As the acceptable value of log(BER) in FSO is approximately −6 [30], all the information streams transmitted using different HG modes and two polarization states were successfully received at the transmission distance of 1500 m (100 m MMF + 1400 m FSO link), as all users had log(BER) less than −7. Additionally, the wide eye openings for all users at 1500 m (100 m MMF + 1400 m FSO link) reveal reliable received data. Table III shows the log(BER) values for all users at 1500 m (100 m MMF + 1400 m FSO link) under CA weather conditions.
C. Effect of Weak and Strong Turbulence on the Performance of Proposed Model
The variations in temperature cause atmospheric turbulence, which degrades the performance of the information signal during its transfer in the FSOC channel. The turbulence varies from WT, with C_n² of 5 × 10^-16 m^-2/3, to ST, with C_n² of 5 × 10^-14 m^-2/3 [30], [48]. The effect of WT and ST on the performance of the proposed PDM/MDM based MMF/FSOC transmission in terms of log(BER) over different FSO ranges is depicted in Fig. 8. A fixed MMF of 100 m is considered. It is clear that when the turbulence is strong, all the users transmitting on either X-PL or Y-PL using four different HG modes (HG00, HG01, HG10, and HG11) achieve a propagation range of 1400 m with log(BER) of approximately −4. On the other hand, the performance for the eight users of the proposed model becomes better under WT, as at the same FSO range of 1400 m the log(BER) improves to ∼−6. As the acceptance limit for the log(BER) is −6, all the information data transmitted by the eight users (80 Gbps) is successfully received under WT.
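As a quick check of how far apart these two regimes sit, the snippet below evaluates the Rytov variance for the WT and ST structure constants over the simulated FSO ranges; it reuses the rytov_variance helper from the earlier sketch and assumes λ = 1550 nm.

```python
# Reuses rytov_variance() from the earlier sketch; wavelength of 1550 nm assumed.
for label, cn2 in [("WT", 5e-16), ("ST", 5e-14)]:
    sigmas = {l: rytov_variance(cn2, 1550e-9, l) for l in (800, 1100, 1400)}
    print(label, {l: round(v, 4) for l, v in sigmas.items()})
```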
D. Performance of Proposed Model for Varying MMF Cable Length
In this part, the effect of the MMF cable length on the performance of the proposed PDM/MDM based MMF/FSO transmission is discussed. A fixed FSO link of 100 m, ideal scintillation, and CA weather conditions are considered in this case. Fig. 9 presents the relation between the log(BER) performance of all users and the MMF length. For all eight users, a shorter MMF length gives better performance than a longer MMF length. As an example, at an MMF length of 150 m, user 1, transmitted using the HG00 mode on X-PL, has log(BER) = −15.28, while this value increases to −7.30 when the length of the MMF is prolonged to 250 m. Also, the eye diagram for user 1 at an MMF length of 250 m still shows a clear eye opening, indicating reliable received data.
E. Performance of Proposed Model for Cities Located in Three Different Countries
To demonstrate the feasibility of implementing the proposed model in a real environment, the simulation parameters are chosen based on practical studies, and real meteorological data for cities with different geographical conditions are considered. The average rainfall intensities from 2014 to 2018 for Alexandria city in Egypt, Pune city in India, and Jeddah city in KSA are 1.14 mm/hr, 3.33 mm/hr, and 0.28 mm/hr, respectively, according to meteorological data taken from www.worldweatheronline.com (accessed in January 2023) and refs. [1], [45], [50]. The relation between the attenuation coefficient of rain, β_r, in dB/km and the rainfall intensity, R_f, is [50]

$$\beta_r = 1.076\, R_f^{0.67}$$

By using this relation, the attenuations are 1.17 dB/km for Alexandria, 2.4 dB/km for Pune, and 0.45 dB/km for Jeddah. Figs. 10-12 depict the log(BER) performance versus different FSO ranges after a fixed 100 m length of MMF cable for Alexandria, Pune, and Jeddah cities, respectively. As Jeddah has the lowest attenuation, the eight users propagating on different HG modes and two polarization signals of the proposed model achieve the longest range, 1500 m, when the model is implemented there. This range decreases to 1200 m and 1100 m under the weather of Alexandria and Pune, respectively, which is expected, as Pune has the highest attenuation. At all these ranges, the log(BER) is below −6.
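The quoted attenuations can be verified directly from the relation above; this short check reproduces the paper's 1.17, 2.4, and 0.45 dB/km figures from the stated rainfall intensities.

```python
def rain_attenuation_db_km(r_mm_hr):
    """Rain attenuation relation used in the text: beta_r = 1.076 * R_f**0.67 (dB/km)."""
    return 1.076 * r_mm_hr ** 0.67

for city, r_f in [("Alexandria", 1.14), ("Pune", 3.33), ("Jeddah", 0.28)]:
    print(f"{city:10s} {r_f:4.2f} mm/hr -> {rain_attenuation_db_km(r_f):.2f} dB/km")
```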
A comparison between recently published works and the present work is shown in Table VI.
VII. CONCLUSION
In this article, a novel 2 × 4 × 10 Gbps PDM/MDM based MMF/FSOC transmission system is proposed for high-speed, high-capacity transmission networks. Two polarization states, X-PL and Y-PL, are considered. Each polarization signal carries four HG modes (HG00, HG01, HG10, and HG11). 10 Gbps of data is transmitted on each HG mode, resulting in an overall data transmission of 80 Gbps (as eight users are used). Hybrid channels, an MMF cable and an FSOC link, are used. The performance is investigated for two cases: in the first case, a fixed MMF cable length of 100 m and various FSO ranges in the absence and presence of WT and ST are considered. Different MMF cable lengths and a fixed FSO link of 100 m are considered in the second case. Moreover, the performance is evaluated for the weather conditions of Alexandria, Pune, and Jeddah. The obtained simulation results show that all users can transmit their data successfully over a propagation range of 1500 m (100 m MMF cable + 1400 m FSOC link) in the presence of WT with log(BER) less than −7 for the first case. In the second case, the nonlinearities in the MMF lead to performance restrictions, so the overall transmission distance achieved by the eight users is 350 m (250 m MMF cable + 100 m FSO link). Finally, the transmission ranges achieved by the eight users are 1300 m for Alexandria (100 m MMF cable + 1200 m FSO link), 1200 m for Pune (100 m MMF cable + 1100 m FSO link), and 1600 m for Jeddah (100 m MMF cable + 1500 m FSO link). Subsequently, our suggested model is recommended for use in high-capacity hybrid wired/wireless networks and in next-generation PON applications like smart homes and smart healthcare. It is imperative to conduct experimental demonstrations of our model to account for real-time losses and intermodal crosstalk, and we suggest integrating other multiplexing techniques, like OCDMA and OFDM, with the suggested model to enhance transmission capacity. | 2023-07-12T05:46:21.884Z | 2023-08-01T00:00:00.000 | {
"year": 2023,
"sha1": "53867414ef251c490a0a8c1d7cc511f8ab554ab5",
"oa_license": "CCBY",
"oa_url": "https://ieeexplore.ieee.org/ielx7/4563994/4814557/10176353.pdf",
"oa_status": "GOLD",
"pdf_src": "IEEE",
"pdf_hash": "eb58f704784e9656bbc01514f046e1f50285a5a9",
"s2fieldsofstudy": [],
"extfieldsofstudy": []
} |
9011481 | pes2o/s2orc | v3-fos-license | Activation of Acid Sphingomyelinase by Interleukin-1 (IL-1) Requires the IL-1 Receptor Accessory Protein*
The cytokine interleukin-1 (IL-1) plays an important role in inflammation and regulation of immune responses, but the mechanisms of its signal transduction and cell activation processes are incompletely understood. Ceramide generated by sphingomyelinases (SMases) is known to function as an important second messenger molecule in the signaling pathway of IL-1 and tumor necrosis factor. To investigate the activation of SMases by IL-1, we used an IL-1 receptor type I (IL-1RI)-positive EL4 thymoma cell line, which is defective in IL-1R accessory protein (IL-1RAcP) expression. In this cell line (EL4D6/76), tumor necrosis factor induced ligand/receptor internalization, NFκB nuclear translocation, IL-2 production, and the activation of neutral (N)-SMase and acid (A)-SMase. In contrast, stimulation with IL-1 resulted only in the activation of N-SMase whereas ligand/receptor internalization, NFκB translocation, IL-2 production, and activation of A-SMase were not detected. Transfection of this functionally defective EL4D6/76 with IL-1RAcP cDNA restored these functions. These data suggest that A-SMase activity is strongly linked with the internalization of IL-1RI mediated by IL-1RAcP and that A-SMase and N-SMase are activated by different pathways.
Interleukin-1 (IL-1) and tumor necrosis factor (TNF) belong to a group of pro-inflammatory cytokines with overlapping biological activities, which might be brought about by common signaling mechanisms (1)(2)(3). In the past few years, several groups have reported the involvement of sphingomyelin breakdown in the signaling of IL-1 and TNF (4-6). Ceramide generated by sphingomyelinases (SMases) is an important second messenger molecule in signal transduction pathways of IL-1 (7,8), TNF (9), and CD28 (10). Ceramides appear to be involved in cell differentiation, apoptosis, and cell cycle arrest (11)(12)(13); e.g., ceramide was able to mimic interferon-γ and TNF effects in the differentiation of the monocytic cell line HL60 (14). Different types of cell-permeable ceramides induced apoptosis in various cell systems (15,16). In cell cycle studies, C6-ceramides have been demonstrated to induce growth suppression by dephosphorylation of Rb (17,18). For IL-1, the involvement of sphingomyelin hydrolysis to ceramide and stimulation of a ceramide-activated protein kinase has been reported (19,20). Synthetic cell-permeable ceramides or exogenous SMase have been shown to bypass IL-1 receptor activation in EL4 cells and mimic biologic activities of this cytokine (8). The activity of ceramide-activated protein kinase is directed to c-Raf-1 and appears to be activated by TNF and IL-1. Other targets of downstream signaling processes of ceramides are ceramide-activated protein phosphatase (21,22) and protein kinase C (23). Additional events in the signaling cascade of ceramides are the phosphorylation of mitogen-activated protein kinase and activation of the c-Raf-1 kinase (24,25).
Binding of TNF to the 55-kDa TNF receptor activated two different types of SMases, a membrane-associated neutral (N)-SMase and an endosomally located acid (A)-SMase (9). Structure-function analyses of the p55 TNF receptor revealed that the SMases are activated independently through different cytoplasmic domains of the receptor (26). Diacylglycerol (DAG) generated by a phosphatidylcholine-specific phospholipase C (PC-PLC) has been reported to serve as an important factor in the activation of A-SMase, which, through the generation of ceramide, is a co-factor for the activation of NFκB (27). A key event in NFκB activation is the rapid degradation of the inhibitory protein IκB. In a cell-free system, SMase and synthetic ceramide could directly induce IκB degradation, strongly indicating the involvement of SMase in NFκB activation (28). On the other hand, N-SMase seems to exert its signaling capacity via proline-directed protein kinases, like ceramide-activated protein kinase and mitogen-activated protein kinase, which act in turn on phospholipase A2 (6).
IL-1 activity is represented by three structurally related molecules (3,29,30). IL-1α and IL-1β act in an agonistic manner and are internalized after binding to the receptor. The IL-1 receptor antagonist (IL-1Ra) blocks the binding of the agonists and inhibits the internalization of the receptor (31). Two types of receptors have been described, with molecular masses of 85 kDa for the type I receptor (IL-1RI) and 65 kDa for the IL-1 type II receptor (IL-1RII) (32), but binding to only IL-1RI has been shown to induce cell activation. IL-1RII does not trigger a signaling cascade and presumably inhibits IL-1 activity by acting as a decoy target for IL-1 (33,34). Binding of IL-1 to the IL-1RI leads to the association of serine/threonine kinases (35,36).
Recently an IL-1RI accessory protein (IL-1RAcP) was described which does not bind IL-1 but associates with and increases the affinity of IL-1RI (37). We have previously described an IL-1RI-positive subclone of EL4 cells, EL4D6/76, which binds IL-1 with high affinity but does not respond with IL-1RI internalization or IL-2 production (38,39). This defect can be overcome by intracellular delivery of IL-1 (40) or by transfection with IL-1RAcP, which reconstituted the IL-1RI internalization and functional defects (41,42).
In the present study, therefore, we investigated the activation of SMase by different components of the IL-1R complex. Evidence is provided that IL-1RI internalization is required for the activation of the endosomal A-SMase. Ceramide produced by A-SMase provides an important signal for further downstream events like NFκB activation or IL-2 production. The lack of A-SMase activation may thus explain the unresponsiveness of the IL-1RI internalization-defective cells.
EXPERIMENTAL PROCEDURES
Cell Culture and Biological Reagents-EL4 cells and corresponding transfectants were cultured in RPMI 1640 containing 10% fetal calf serum, 100 IU/ml penicillin, and 100 µg/ml streptomycin at 37°C in air with 5% CO2. For stimulation, 2.5 × 10^6 cells were seeded in 48-well plates at a density of 1 × 10^6 cells/ml. Human (h) recombinant (r) IL-1α (rhIL-1α) was kindly provided by Drs. A. Stern and P. Lomedico (Hoffmann-La Roche, Nutley, NJ). The specific activity was 5 × 10^6 units/mg as determined by the lymphocyte activating factor assay, and it was used at a concentration of 10 units/ml, representing a concentration of 150 pg/ml. Recombinant mouse (m) and human (h) TNF were a kind gift of Knoll AG (Ludwigshafen, Germany).
Cytokine Assay-IL-2 activity of the culture supernatants was quantified by enzyme-linked immunosorbent assay with the IL-2 Mini Kit (Biozol, Eching, Germany). The assay was performed according to the manufacturer's instructions. The IL-2 detection limit was ≥50 pg/ml.
Internalization Assay-To measure the internalization of 125I-IL-1α, 2 × 10^6 cells were incubated for 4 h at 37°C or 4°C in 200 µl of medium, pH 7.4, containing 500 pM 125I-IL-1α (Amersham-Buchler, Braunschweig, Germany). Nonspecific binding was determined by adding a 100-fold excess of unlabeled rhIL-1α. Cell surface-bound radioactivity was removed by washing the cells in medium, pH 3.0, for 2 min. Subsequently, the cells were centrifuged through a mixture of dibutyl phthalate and bis(2-ethylhexyl) phthalate (3:2) (Merck, Darmstadt, Germany). To determine the total cell-associated 125I-IL-1α, the cells were passed through the mixture of dibutyl phthalate and bis(2-ethylhexyl) phthalate without washing. Radioactivity in the cell pellets was measured using a γ-counter.
For detection of internalized 125I-TNF, 2 × 10^6 cells were incubated for 1 h at 4°C with 1 ng/ml 125I-TNF (recombinant TNF, NEN Life Science Products, specific activity 2160 kBq/µg) to saturate cell surface receptors. Nonspecific binding was determined by adding a 200-fold excess of unlabeled TNF together with 125I-TNF. After washing the cells three times in cold phosphate-buffered saline, the temperature was shifted to 37°C to allow receptor internalization, or the cells were kept at 4°C. To determine the amount of internalized 125I-TNF receptor complexes, noninternalized ligand was removed by centrifuging (50 × g) the cells through serial pH 3.0 gradients consisting of (a) 0.5 ml of culture medium supplemented with 20% Ficoll; (b) a second layer of 0.5 ml of 50 mM glycine-HCl, pH 3.0, 100 mM NaCl supplemented with 10% Ficoll; and (c) a third layer of 0.5 ml of culture medium containing 5% Ficoll. To determine the total amount of cell-associated 125I-TNF, a second aliquot of cells was passed through a gradient in which the second layer was replaced by phosphate-buffered saline, pH 7.3, containing 10% Ficoll. Radioactivity of the cell pellets was determined by counting in a γ-counter.
Specific binding was calculated by subtracting nonspecific from total binding, and the amount of internalized 125I-ligands was calculated as percent of specific binding determined at neutral pH.
Electrophoretic Mobility Shift Assay-Following stimulation of cells (5 × 10^6 at 10^6 cells/ml density) for the times indicated in the figures, nuclear extracts were prepared according to Schreiber et al. (43). The protein concentration of the nuclear extracts was measured using a BCA assay (Pierce, Hamburg, Germany) with bovine serum albumin (Sigma, Deisenhofen, Germany) as standard protein. The double-stranded NFκB-specific oligonucleotide, containing two tandemly arranged NFκB binding sites of the HIV long terminal repeat enhancer (5′-ATCAGGGACTTTCCGCTGGGGACTTTCCG-3′), was end-labeled with [γ-32P]ATP (Amersham-Buchler, Braunschweig, Germany) using T4 polynucleotide kinase (Boehringer Mannheim, Mannheim, Germany) and purified with Nick columns (Pharmacia, Freiburg, Germany). Nuclear extracts (10 µg) were incubated for 15 min at room temperature in binding buffer (5 mM HEPES, pH 7.8, 5 mM MgCl2, 50 mM KCl, 5 mM dithiothreitol, 10% glycerol, 50 mM poly(dI-dC), final volume 20 µl). The 32P-labeled double-stranded oligonucleotide was then added, and the reaction mixture was incubated for another 15 min. In competition experiments, a 200-fold excess of the unlabeled κB oligonucleotide and the Oct2A site (5′-GTACGGAGTATCCAGCTCCGTAGCATGCAAATCCTCTGG-3′) was added. For supershift experiments, the binding reaction mix was incubated with the indicated amounts of antibodies for an additional 1 h. The samples were fractionated on a low ionic strength (0.25× TBE), 6% nondenaturing polyacrylamide gel, and the bands were detected by autoradiography.

FIG. 1. A, IL-2 production by EL4 5D3 and EL4D6/76 cells. Cells were incubated in culture medium or stimulated with 10 ng/ml PMA, 10 ng/ml PMA + 150 pg/ml rhIL-1α, or 10 ng/ml PMA + 100 ng/ml rmTNF. After 18 h, supernatants were collected and IL-2 production quantified by enzyme-linked immunosorbent assay. No IL-2 was detected after stimulation with 150 pg/ml rhIL-1α or 100 ng/ml rmTNF alone. Stimulation index = PMA+IL-1/PMA or PMA+TNF/PMA. B, internalization of 125I-IL-1α by EL4 5D3 and EL4D6/76 cells. Cells were incubated with 500 pM 125I-IL-1α for 4 h at 37°C or 4°C. For determination of total cell-associated radioactivity, cells were centrifuged through an oil mixture (see "Experimental Procedures"). In a parallel reaction, surface-bound 125I-IL-1α was removed by a pH 3 washing step, and the internalized radioactivity was measured in the cell pellet after centrifugation through the oil mixture. C, internalization of 125I-TNF in EL4 5D3 and EL4D6/76 cells. Cells were incubated in 1 ng/ml 125I-TNF for 1 h at 4°C. After removing excess radioactivity, internalization of 125I-TNF was allowed for 1 h at 37°C or 4°C. Subsequently, for determination of total cell-associated radioactivity, the cells were centrifuged through a pH 7.3 Ficoll gradient; for determination of internalized 125I-TNF, cells were centrifuged through a pH 3.0 Ficoll gradient. 100% equals total specific cell-associated radioactivity.
Assays for Neutral and Acid SMase-EL4 cells (3 × 10^6) in 1 ml of RPMI 1640 were stimulated in triplicate cultures with 100 units/ml rhIL-1α, 100 ng/ml rmTNF-α, or medium (to determine basal activity) for the times indicated in the figures. SMase activities were expressed as percent of the basal SMase activities determined for each time point separately. At the indicated times, treatment was stopped by immersion of the culture vials in a methanol-dry ice bath. Cells were centrifuged for 5 min at 4°C and washed with ice-cold phosphate-buffered saline. To measure neutral SMase, pellets were resuspended in a buffer containing 20 mM HEPES, pH 7. The produced radioactive phosphorylcholine was measured as described for the neutral SMase assay.
RESULTS
To investigate the role of the different SMases in IL-1 signal transduction and their activation via the IL-1RI complex, we used two sublines of the murine thymoma cell line EL4, EL4 5D3 and EL4D6/76, which differ in their response to IL-1 (38-40). On their cell surfaces, both lines express normal numbers of IL-1RI, which show comparable affinity to IL-1 (39). Cloning of the IL-1RI cDNA from both cell lines and subsequent sequencing revealed that the nucleotide sequences of the IL-1RI of both cell lines are identical and correspond to the published sequence (data not shown). As shown in Fig. 1A, after binding of IL-1, only EL4 5D3 but not EL4D6/76 reacted with increased IL-2 production in the presence of the tumor promoter PMA (39). To investigate whether this defect of EL4D6/76 was restricted to IL-1, cells were stimulated with TNF and/or PMA. In contrast to IL-1, co-stimulation with TNF and PMA led to increased IL-2 production in both cell lines (Fig. 1A), indicating the specificity of the functional defect in EL4D6/76 cells. The higher TNF responsiveness of EL4D6/76 cells compared with EL4 5D3 cells may be explained by the fact that EL4D6/76 cells bind more TNF under our assay conditions (data not shown). No differences in IL-2 production were observed when the cells were stimulated with saturating concentrations of rmTNF or rhTNF, suggesting that activation of IL-2 production occurred via the 55-kDa TNF receptor (data not shown).
This defect in IL-1 responsiveness was shown to correlate with a defect in internalization of receptor-bound IL-1. Only EL4 5D3 cells were able to internalize IL-1, whereas EL4D6/76 cells were deficient (Fig. 1B). To investigate whether this defect of EL4D6/76 cells is also specific for IL-1, we tested both cell lines for their capability to internalize TNF. Fig. 1C shows that TNF was internalized in both cell lines to the same extent. Activation of the IL-1RI by IL-1 leads to the rapid activation of the transcription factor NFκB. To examine whether the functional defects in IL-1 responsiveness correlated with defective NFκB activation, cells were stimulated with saturating concentrations of IL-1 and TNF. As shown in Fig. 2, TNF but not IL-1 was able to trigger the rapid activation of NFκB in EL4D6/76 cells, whereas the IL-1-responder EL4 5D3 responded to both stimuli equally well. Taken together, these data show that the defect in internalization and function is specific for the IL-1RI. Furthermore, the binding of IL-1 to the IL-1RI is not sufficient for triggering the nuclear translocation of NFκB.
Induction of SMase activity was shown to be a very early event after TNF receptor or IL-1R triggering. To address the question whether the activity of the N- and A-SMase is coupled to a functional receptor, cells were stimulated with TNF or IL-1, and the activities of the N- and A-SMase were measured. In IL-1-stimulated, IL-1-responsive EL4 5D3 cells, the activity of the N-SMase peaked after 90 s (Fig. 3A), whereas the maximum of A-SMase activity was detected after 3 min (Fig. 3B). Interestingly, IL-1 stimulated N-SMase activity in IL-1-nonresponsive EL4D6/76 cells (Fig. 3C), whereas no increase of A-SMase activity was detected (Fig. 3D). Again, stimulation with TNF led to activation of both N-SMase and A-SMase in both sublines (Fig. 3, A-D). As we have shown before, both EL4 5D3 and EL4D6/76 cells were able to internalize and respond to TNF. These data suggested that IL-1R internalization and activation of A-SMase, but not N-SMase, are functionally coupled. The enzymatic activities of the crude preparations of A- and N-SMase were analyzed. The enzymes showed classical Michaelis-Menten kinetics, with IL-1 not significantly affecting K_m but increasing V_max of both SMases (Fig. 4).
Recently, we found that expression of IL-1RAcP in EL4D6/76 reconstituted IL-1 responsiveness with respect to internalization of IL-1 and IL-2 secretion (41). To investigate whether activation of A-SMase is linked to a functionally competent IL-1 receptor that is capable of receptor internalization, we used transfectants of the IL-1-nonresponding line EL4D6/76, which stably expressed the IL-1RAcP. The ability to activate A-SMase in EL4D6/76 cells was also reconstituted in IL-1RAcP transfectants. As shown in Fig. 5, IL-1 did not stimulate A-SMase in EL4D6/76 cells. The corresponding IL-1RAcP transfectants, however, showed the typical A-SMase activation pattern, with maximum activity at 3 min after IL-1 stimulation. A-SMase, through production of ceramide, provides an important cofactor for NFκB activation. When the IL-1-responsive EL4 5D3 and IL-1-nonresponsive EL4D6/76 were stimulated with IL-1, NFκB was activated in EL4 5D3 but not in the nontransfected EL4D6/76 (Fig. 6A). Four corresponding IL-1RAcP transfectants, however, clearly showed activated NFκB after stimulation with IL-1 (Fig. 6A). The identity of NFκB was confirmed in competition experiments with a 200-fold excess of unlabeled κB oligonucleotides in two representative IL-1RAcP transfectants (Fig. 6B). In contrast, a 200-fold excess of cold Oct2A oligonucleotide had no inhibitory effect on the formation of the NFκB complex. In supershift experiments, an anti-RelA antibody specifically inhibited the formation of the NFκB complex. An anti-RelB antibody was not able to replace the complex, indicating the involvement of the p65 rather than the p68 subunit in the formation of the NFκB complex (Fig. 6B).

FIG. 2. Activation of NFκB in EL4 5D3 and EL4D6/76 induced by IL-1α (A) and TNF (B). Cells were left untreated or incubated with 150 pg/ml IL-1α and 100 ng/ml TNF for the indicated times. Nuclear extracts were prepared, and NFκB binding was analyzed by electrophoretic mobility shift assay using a 32P-labeled NFκB binding site from the HIV long terminal repeat.
DISCUSSION
During the last few years, the importance of ceramides as second messenger molecules generated by the breakdown of sphingomyelin has become evident (5,6). Previous studies indicated that the TNF signal activates two forms of sphingomyelinases, a membrane-bound N-SMase and a DAG-dependent endo/lysosomal A-SMase (27). These two forms are triggered independently of each other and lead into different signaling pathways (9). A-SMase has been identified as a candidate for NFκB activation. Raising the pH of the endo-/lysosomal compartments with monensin or ammonium chloride resulted in a selective loss of A-SMase activity and NFκB activation; neither N-SMase activity nor PC-PLC was affected (27).
As IL-1 is another potent activator of SMases and NFκB, we therefore investigated the activation of A- and N-SMase by IL-1 and the relation to different components of the IL-1RI in two sublines of the EL4 thymoma cell line. The subline EL4 5D3 can be activated by IL-1, whereas EL4D6/76 cannot be activated although high-affinity IL-1 binding sites are present (39). The defects in internalization of the IL-1R complex, in activation of A-SMase, and in nuclear translocation of NFκB were all shown to be specific for IL-1R-mediated stimulation, since the IL-1-nonresponsive EL4D6/76 cells readily responded to TNF stimulation.
IL-1, however, activated N-SMase and A-SMase differentially. N-SMase was activated by IL-1 in both the IL-1-responder and IL-1-nonresponder lines and thus did not correlate with activation of NFκB and IL-2 production. In contrast, A-SMase was not activated by IL-1 in IL-1-nonresponsive EL4D6/76 cells, although TNF was readily able to activate A-SMase. Therefore, in the IL-1-signaling cascade, N-SMase activation is not sufficient for NFκB activation. Thus, ceramide per se might not be an activator of NFκB, but only ceramide generated by A-SMase in a distinct cellular compartment. The importance of compartmentalization is underlined by investigations of Liu et al. (44). They have shown that IL-1 stimulated DAG and ceramide production only in caveolae fractions of fibroblasts. DAG induced by IL-1 in other cellular fractions was not coupled to ceramide production. In our experiments (not shown in this paper), incubation with C2- and C8-ceramides did not activate NFκB in either cell line. Therefore, since A-SMase appears to be required for IL-1-induced NFκB activation and ceramide analogs do not induce this event, ceramide might be a necessary but not sufficient co-signal for NFκB activation. We also found previously that exogenous sphingomyelinases or sphingosine were not able to co-stimulate IL-2 production in EL4 cells (45). Therefore, it might be possible that small amounts of A-SMase-derived ceramide in specialized compartments contribute to activation of NFκB or IL-2 production. In A-SMase-deficient Niemann-Pick fibroblasts, however, NFκB activation is induced by IL-1, indicating that A-SMase activity is not essential for NFκB activation (46).
In addition to internalization of IL-1 and IL-2 production, we found that IL-1-stimulated A-SMase activity was also reconstituted by transfection in four independent stable transfectants of EL4D6/76 cells (Fig. 5). Simultaneously, the activation of NFκB was restored, strongly supporting the existence of a link between A-SMase activity and NFκB activation. Thus, A-SMase activity, in contrast to N-SMase activity, correlated with internalization of a functional IL-1RI complex. The data suggest that A-SMase activation requires a functional receptor complex that is capable of receptor internalization. The need for internalization in cytokine action is discussed controversially in the literature. Endocytosis is reported to play a critical role in TNF-induced gene expression and induction of cytolysis (47)(48)(49). In EL4 cells, IL-1 signaling and internalization correlate (38,39), and an intracellular activation loop of IL-1 seems to be operative in EL4 (40). On the other hand, in Jurkat cells, internalization and nuclear localization of IL-1 were not sufficient for activation of the IL-2 promoter (50). Andrieu et al. (51) suggest that cytokine-receptor internalization is not required for activation of the sphingomyelin pathway because they found similar degradation of sphingomyelin and generation of ceramide in cells when endocytosis was blocked by low temperature and hypertonicity. These data, however, do not exclude the requirement of internalization of the receptor complex to activate A-SMase, as no distinction was made between ceramide produced by A- or N-SMase. It is therefore possible that the increased ceramide level results from N-SMase activity, which in our experiments did not require IL-1R internalization.
A possible mechanism for the activation of A-SMase is indicated by data obtained from studies with caveolae fractions. The caveola is a membrane domain that can undergo an internalization cycle. Invagination of the membrane is followed by the formation of plasmalemmal vesicles, which provide an optimal microenvironment for activation of A-SMase. IL-1 stimulated the production of DAG in a caveola-rich membrane fraction of whole fibroblasts. This was followed by a degradation of sphingomyelin and a concomitant increase of ceramide. Additionally, A-SMase activity could be detected in the caveolae fractions (44). In TNF signaling, the activation of A-SMase by 1,2-DAG generated by membrane-located PC-PLC has been reported (27). Thus, activation of A-SMase by IL-1 may occur via co-internalization of 1,2-DAG with the caveolae-associated IL-1/IL-1RI complex, if PC-PLC activation occurs in close vicinity to the membrane receptors.
In conclusion, the present study shows that the IL-1-induced increase in A-SMase activity and concomitant activation of NFκB are dependent on the presence of IL-1RAcP. Ceramide produced by A-SMase, therefore, might represent the functional link between IL-1RI internalization and activation of NFκB and IL-2 production.

FIG. 5. IL-1RAcP transfectants (EL4 1F4, EL4 1G6, EL4 10B5, and EL4 10G12) expressing IL-1RAcP mRNA were stimulated for the indicated periods of time with 1.5 ng/ml IL-1α or left untreated as control, and A-SMase activity was assayed as described under "Experimental Procedures." A-SMase activity is expressed as percentage of control. The standard errors were always lower than 4% of the mean.

FIG. 6. Reconstitution of IL-1-mediated NFκB activation in EL4D6/76 when transfected with IL-1RAcP. The NFκB complex was confirmed in competition experiments and supershift analyses. A, EL4 5D3, EL4D6/76, and stably IL-1RAcP-transfected EL4D6/76 cells (EL4 1F4, EL4 1G6, EL4 10B5, and EL4 10G12) were either left untreated or stimulated for 1 h with 150 pg/ml IL-1. After the indicated periods of time, nuclear extracts were prepared and NFκB binding activity was detected by EMSA using the 32P-labeled NFκB binding site from the HIV long terminal repeat. B, two transfectants (EL4 10B5 and EL4 10G12) were left untreated or stimulated with 150 pg/ml IL-1 for 1 h before nuclear extracts were prepared. Again, NFκB binding activity was detected by EMSA. Additionally, competition experiments with unlabeled κB and Oct2A oligonucleotides, respectively, were performed. The nuclear extracts were incubated with the radiolabeled NFκB binding oligonucleotide and either a 200-fold excess of unlabeled κB or Oct2A site before the reaction mix was separated by gel electrophoresis. In supershift experiments, 1 µg of anti-RelA or anti-RelB antibody was added to the reaction mix for an additional 1 h before separation by gel electrophoresis. | 2018-04-03T00:05:57.086Z | 1997-10-31T00:00:00.000 | {
"year": 1997,
"sha1": "96a00b0c1cdf6eefa4bb929ae3519bf061fd4df2",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/272/44/27730.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "045588b6a583b2dbd64b811f6b92a2e644b41e54",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
225044307 | pes2o/s2orc | v3-fos-license | SGOT and SGPT level of Wistar rat after the administration of Channa micropeltes extract
Amy Nindia Carabelly*1, Deby Kania Tri Putri2, Nadya Rezki3, Maharani Laillyza Apriasari4 1Departement of Oral Pathology, Faculty of Dentistry, Lambung Mangkurat University, Banjarmasin-Indonesia 2Departement of Oral Biology, Faculty of Dentistry, Lambung Mangkurat University, Banjarmasin-Indonesia 3Faculty of Dentistry, Lambung Mangkurat University, Banjarmasin-Indonesia 4Departement of Oral Medicine, Faculty of Dentistry, Lambung Mangkurat University, Banjarmasin-Indonesia
INTRODUCTION
Channa striata (Haruan fish) is one of the local peatland species (Huwoyon and Gustiano, 2013). The majority of South Kalimantan society believes that Channa striata consumption may accelerate the wound healing process due to its albumin content. Albumin is the most abundant protein found in plasma, reaching 60%, and it may accelerate the wound healing process through its antioxidant property (Nicodemus et al., 2014); (Alamsjah et al., 2014); (Agustin et al., 2016). Channa striata capsules at a 0.7-gram dosage, which may accelerate the wound healing process, have been widely distributed (Tawali et al., 2012). However, the high price and the complexity of Channa striata cultivation create the need for alternative species such as Channa micropeltes (Toman fish) (Audina et al., 2018).
Channa micropeltes may be produced in the form of a capsule and used as an alternative herbal drug to accelerate wound healing of the oral mucosa. Yet, a further study of its safety is required to analyze its toxicity prior to consumption. One of the toxicity analyses used is the sub-chronic toxicity test (Hulla et al., 2014). This test should be performed for 28-90 days to identify hepatotoxic effects by observing the influence of a compound on the change in Serum Glutamic Oxaloacetic Transaminase (SGOT) and Serum Glutamic Pyruvic Transaminase (SGPT) levels of the liver (Singh et al., 2011); (Wahyuni et al., 2017). The liver is the main organ for drug metabolism. Several drugs may induce the destruction of liver cells due to their hepatotoxic property (Indahsari and Histopatologi, 2017). The term hepatotoxicity refers to liver dysfunction due to an overdose of drugs or xenobiotics (Singh et al., 2011).
Serum Glutamic Oxaloacetic Transaminase (SGOT) is an enzyme found inside the body which is immediately detected in the peripheral circulation when necrosis occurs in a tissue. The SGOT enzyme is commonly found in the heart and liver, while Serum Glutamic Pyruvic Transaminase (SGPT) is frequently detected in the liver and effectively diagnoses the presence of hepatocellular destruction. This enzyme is secreted by the liver when there is destruction of liver cells, which is reflected in an increase of the SGPT level in blood plasma (Nasution et al., 2015); (Qodriyati et al., 2016). SGOT and SGPT level tests should be performed to identify the presence of liver abnormality or destruction due to drug consumption. The average SGOT level is 6-30 IU/L, while the normal SGPT level is 6-45 IU/L (Nurminha and Gambaran, 2013); (Reza and Rachmawati, 2017). When the SGOT and SGPT levels are higher than normal, necrosis of hepatocytes in the liver can be detected. An increase in both enzyme levels indicates that the compound should not proceed to further production as an alternative drug (Sari et al., 2015). Based on this background, it is pivotal to conduct this study to analyze the effect of Channa micropeltes extract capsules at a 0.7-gram dosage per oral upon the level of SGOT and SGPT of Wistar rat liver.

MATERIALS AND METHODS

At the beginning of the study, the experimental animals were adapted at the animal laboratory of the Veterinary Centre (BVET) Regional V Banjarbaru for seven days by feeding them with BR2 and aqua dest ad libitum. A total of 12 rats were divided into three treatment groups, with four rats in each group. The groups comprised a negative control without any treatment, a positive control with the administration of Channa striata extract at a 0.7-gram dosage, and a treatment group with the administration of Channa micropeltes extract capsule at a 0.7-gram dosage. The administration of drugs was performed for 28 days each morning and noon per oral using a nasogastric tube.
Channa micropeltes extraction
Channa micropeltes was obtained from the traditional market of Martapura, Kalimantan Selatan; the fish used in this study had a total weight of 11 kg. The part utilized for the study was the flesh of Channa micropeltes. The extract was made at the Pharmaceutical Laboratory, Faculty of Mathematics and Science, ULM. The fish were cleaned of scales, blood, heads, and guts, and the flesh was then weighed at 9.84 kg. The flesh was steamed inside a pan for 30 minutes at 70-80°C. A light yellow liquid secreted from the flesh was collected and separated, totaling 750 ml. The Channa micropeltes flesh was later covered with flannel fabric and Whatman paper no. 1 and inserted into a hydraulic press for pressing. The Channa micropeltes extract was then put into reaction tubes, 7.5 ml each, and centrifuged for 15 minutes at 6000 rpm. The supernatant was collected from the centrifuged extract: a total of 700 ml of liquid was obtained, separated from 50 ml of sediment. Further, the Channa micropeltes extract was evaporated in a rotary evaporator for 8 hours until thickened. The extract was evaporated a second time in a water bath until dried in the form of granules.
Channa striata extract capsule
In this study, the 0.7-gram Channa striata extract capsule distributed in the market was employed.
The formulation of the 0.7-gram Channa striata extract capsule is shown in Table 1.
Formulation of Channa micropeltes extract capsule
Dried Channa micropeltes extract was placed into a mortar and mixed with Aerosil, talc, Mg stearate, and amylum. All compounds were crushed using a stamper until homogeneous. The granules were then weighed using an analytical scale and put on parchment to be inserted inside a gelatinous capsule shell. The capsules were then stored inside a dark glass bottle. The formulation of the Channa micropeltes extract capsule is presented in Table 2 (Nurani et al., 2017).
Animal treatment
Experimental rats were randomly selected and administered standard dosages given orally for 28 days every morning and noon. The dosage was calculated from the human dose, converted by multiplying it by 0.018 (Togubu et al., 2013). The capsule of Channa micropeltes extract was divided into two, in which one capsule contained 500 mg of granules. Thus, the dosage conversion obtained for the rat was: 500 mg × 0.018 = 9 mg/g BW.
One capsule of 0.7-gram Channa striata extract, with a weight of 750 mg, gives a dosage conversion of: 750 mg × 0.018 = 13.5 mg/g BW.
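The two conversions above amount to one multiplication; here is a trivial sketch for reproducibility, using the 0.018 factor quoted from Togubu et al. (2013).

```python
def rat_dose_mg(human_dose_mg, conversion_factor=0.018):
    """Human-to-rat dose conversion used in the study (factor 0.018)."""
    return human_dose_mg * conversion_factor

print(rat_dose_mg(500))  # half a Channa micropeltes capsule -> 9.0 mg
print(rat_dose_mg(750))  # one 0.7-gram Channa striata capsule -> 13.5 mg
```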
The treatments given to each group were as follows: Group A (negative control), in which four rats were given BR2 feed for 28 days each morning and noon; Group B (positive control), in which four rats were given Channa striata extract capsule at a 13.5 mg/g BW dosage dissolved in aqua dest for 28 days each morning and noon using a nasogastric tube; Group C (treatment group), in which four rats were given Channa micropeltes extract capsule at a 9 mg/g BW dosage dissolved in aqua dest for 28 days every morning and noon using a nasogastric tube.
Collection of blood plasma
On day 29, the rats were sacrificed to collect their blood. Each rat was sacrificed by putting it in a container with cotton fumed with 5 ml of diethyl ether. The container was covered tightly so that the diethyl ether would not evaporate. After several minutes, once the rat was unconscious, blood was obtained by an intracardiac technique using a syringe. The blood was centrifuged until the blood plasma separated. The plasma was then transferred into a microtube.
Identification of SGOT and SGPT level
SGOT and SGPT analysis was conducted at the Toxicology Laboratory of the Veterinary Centre (BVET) Regional V Banjarbaru with IFCC methods and interpreted using a Genesis 20 spectrophotometer at a 365 nm wavelength. Blood plasma was mixed with the reagent kit at 37°C: a total of 100 µL of blood plasma was mixed with 1000 µL of the reagent kit. After mixing homogeneously, the absorbance was read at minutes 1, 2, and 3. Data were recorded as absorbance (A) values; the SGOT and SGPT activities (IU/L) were then obtained by multiplying the average of the absorbance differences between minutes 1, 2, and 3 by the factor 3235. The measurement of activity employed this formula:

Activity (IU/L) = [(ΔA(minutes 1-2) + ΔA(minutes 2-3)) / 2] × 3235

The results were then entered into the computer software SPSS 23.0 for Windows.
Statistical analysis
Data were analyzed using the Shapiro-Wilk test and then proceeded to Levene's test for homogeneity of variance. It was revealed that the data were normally distributed and homogeneous (p>0.05); thus, the parametric one-way ANOVA test with a 95% confidence level (α=0.05) was performed. Data analysis was then followed by the post-hoc Bonferroni test.
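For readers retracing the pipeline, the sketch below chains the same tests in SciPy. The group values are hypothetical placeholders (the real data appear in Figure 1 and Table 3), and the Bonferroni step is approximated here with corrected pairwise t-tests, which is an assumption about the exact post-hoc procedure.

```python
import numpy as np
from scipy import stats

# Hypothetical SGOT values (IU/L), four rats per group; placeholders only.
groups = {
    "A (negative control)":   np.array([18.2, 21.5, 19.8, 20.1]),
    "B (Channa striata)":     np.array([25.3, 27.9, 26.1, 28.4]),
    "C (Channa micropeltes)": np.array([20.7, 22.3, 21.1, 23.0]),
}

# Normality (Shapiro-Wilk) and homogeneity of variance (Levene's test)
print("Shapiro p:", {k: round(stats.shapiro(v).pvalue, 3) for k, v in groups.items()})
print("Levene p:", round(stats.levene(*groups.values()).pvalue, 3))

# One-way ANOVA (alpha = 0.05), then Bonferroni-corrected pairwise t-tests
_, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA p = {p_anova:.4f}")
if p_anova < 0.05:
    names = list(groups)
    pairs = [(a, b) for i, a in enumerate(names) for b in names[i + 1:]]
    for a, b in pairs:
        p_raw = stats.ttest_ind(groups[a], groups[b]).pvalue
        print(f"{a} vs {b}: corrected p = {min(p_raw * len(pairs), 1.0):.4f}")
```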
RESULTS AND DISCUSSION
The average SGOT level is 6-30 IU/L, while the normal SGPT level is 6-45 IU/L. A graph of the mean SGOT and SGPT levels in Wistar rats is presented in Figure 1. Based on Figure 1, it can be concluded that the average SGOT and SGPT levels in groups A, B, and C are within the normal range. Data were then examined using the Shapiro-Wilk and Levene's tests, which showed normal distribution and homogeneity among the three groups (p>0.05). Data were further analyzed using the one-way ANOVA test; the significance value obtained for the SGOT level was 0.006 (p<0.05), while for the SGPT level it was 0.308 (p>0.05), thus presenting a significant difference among treatments for SGOT but not for SGPT. The data were then subjected to post-hoc Bonferroni analysis, which can be observed in Table 3.
Serum Glutamic Oxaloacetic Transaminase (SGOT) and Serum Glutamic Pyruvic Transaminase (SGPT) are two enzymes which may detect the destruction of liver cells (Nasution et al., 2015). In this research, there is no significant difference in SGOT level between the negative control and the Channa micropeltes extract treatment group, nor between the positive control (Channa striata extract) group and the Channa micropeltes extract treatment group. This reveals that Channa striata extract may increase the SGOT level, but still within the normal range; thus, no toxic effect resulted in the liver. No significant difference was observed in the effect of Channa striata and Channa micropeltes extract on the SGPT level among all groups, which indicates that Channa striata and Channa micropeltes extract do not induce hepatocellular destruction.
Channa micropeltes contains omega-3 fatty acids, omega-6 fatty acids, zinc, vitamin C, and albumin (Nicodemus et al., 2014); (Firlianty, 2016); (Irwanda et al., 2015). The albumin content in Channa micropeltes reaches 5.35% (Fajriani et al., 2018). This albumin content will undergo distribution and metabolism (Throop et al., 2004). At the metabolic stage, albumin is synthesized in liver cells, specifically hepatocytes, and it is converted into preproalbumin (Arroyo et al., 2014). Preproalbumin will then be imported into the endoplasmic reticulum, and fission of the N-terminal prepropeptide will occur, assisted by serine protease, before release into the interstitium of the liver, the sinusoids, and the hepatic vein (Arroyo et al., 2014); (Kebamo and Tesema, 2015). The aerobic route of albumin metabolism in the liver cell will form oxygen-molecule by-products classified as Reactive Oxygen Species (ROS) (Lee and Wu, 2015); (Li et al., 2015). Albumin possesses an antioxidant property, binding free radicals produced by ROS and stimulating antioxidant enzymes such as superoxide dismutase (SOD) through the activation of nuclear factor-erythroid-2 related factor 2 (NRf2) (Widayati et al., 2012); (Ma, 2013); (Cahyani and Rustanti, 2015). NRf2 functions as the first defence against oxidative stress in the cytoplasm. Unless induced by the presence of oxidants and electrophiles, NRf2 is present in an inactive form (Vriend and Reiter, 2015); (Layal, 2016). It binds with a receptor molecule, Kelch-like ECH-associated protein 1 (Keap-1), to form the NRf2-Keap1 complex. In the presence of an oxidant, NRf2 is translocated to the nucleus, where it binds the Antioxidant Response Element (ARE) and stimulates antioxidant enzyme activity such as superoxide dismutase (SOD), catalase (CAT), and glutathione peroxidase (GPx), which can neutralize ROS components (Widayati et al., 2012); (Vriend and Reiter, 2015).
The increase of the superoxide dismutase (SOD) level plays a role against free radicals inside mitochondria by converting the superoxide anion (O2−) into hydrogen peroxide (H2O2), a form of free radical (Widayati et al., 2012); (Fukai and Ushio-Fukai, 2011). Hydrogen peroxide (H2O2) will then be transformed into water (H2O) and oxygen (O2) by GPx and CAT (Tsutsui et al., 2011); (Werdhasari, 2014); (Qu et al., 2016). The decrease of the intracellular or extracellular ROS level will affect biochemical processes, including the protection against microorganisms and the function of the liver cell. When ROS decrease, the occurrence of oxidative stress can be prevented; thus, liver cells remain viable and free from radicals (Hardiningtyas et al., 2014). Liver cells which are free from radicals will halt cell destruction. Thus, the SGOT and SGPT enzymes, as identifying markers for cytoplasm and mitochondria destruction in the liver cell, will be present at normal levels (Rachmawati and Ulfa, 2018); (Giannini, 2005).
Channa micropeltes contains several hepatoprotective compounds other than albumin, such as zinc, omega-3 fatty acids, and vitamin C. Omega-3 fatty acids are proven to heal liver injury, stabilize, and also decrease SGOT and SGPT levels (Sukarsa and Studi, 2004); (Chavan et al., 2013). Zinc is shown to reduce SGOT and SGPT levels, thus attenuating the liver cell destruction effect (Unsal et al., 2008). Vitamin C possesses an antioxidant property, which provides a hepatoprotective effect by binding free radicals and thereby decreasing oxidative stress in the liver cell (Sabiu et al., 2015).
CONCLUSIONS
It can be concluded from this study that there is no effect of the Channa micropeltes extract capsule at a 0.7-gram dosage per oral upon SGOT and SGPT level changes in the Wistar rat liver. This result should serve as the foundation for the development of the Channa micropeltes extract capsule as an alternative herbal drug to accelerate wound healing of the oral mucosa with no destructive effect upon the liver. | 2020-10-22T22:23:46.029Z | 2020-09-26T00:00:00.000 | {
"year": 2020,
"sha1": "47fe9743f2ed6616dbd4eca72ba16bec1352df5a",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.26452/ijrps.v11i4.3169",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "47fe9743f2ed6616dbd4eca72ba16bec1352df5a",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
248441954 | pes2o/s2orc | v3-fos-license | Fibrosis of Peritoneal Membrane as Target of New Therapies in Peritoneal Dialysis
Peritoneal dialysis (PD) is an efficient renal replacement therapy for patients with end-stage renal disease. Even if it ensures an outcome equivalent to hemodialysis and a better quality of life, in the long-term, PD is associated with the development of peritoneal fibrosis and the consequents patient morbidity and PD technique failure. This unfavorable effect is mostly due to the bio-incompatibility of PD solution (mainly based on high glucose concentration). In the present review, we described the mechanisms and the signaling pathway that governs peritoneal fibrosis, epithelial to mesenchymal transition of mesothelial cells, and angiogenesis. Lastly, we summarize the present and future strategies for developing more biocompatible PD solutions.
Introduction
All over the world, it is estimated that 2 million people suffer from end-stage renal disease (ESRD), and this number continues to increase every year, representing an important economic problem [1]. The ideal treatment for ESRD would be kidney transplantation, but, in the absence of this availability, most patients undergo dialysis. Peritoneal dialysis (PD) is a well-established renal replacement treatment that several studies have shown to be safe and as efficacious as hemodialysis [2][3][4].
With respect to hemodialysis, PD has a series of advantages: it is home-based and thus cost-saving [5], it allows a superior quality of life, it better preserves residual renal function, and at the same time it produces a gradual and continuous solute and fluid exchange with minimal cardiac stress [6][7][8].
Although PD has a strong potential, the proportion of ESRD patients treated with this technique in developed countries is consistently lower compared to hemodialysis (about 13% in Europe and 10% in the USA) [6,9,10], well below the optimal estimated utilization rate of 25-30% [11]. On the whole, the sub-optimal utilization of PD might be due to financial and economic reasons that favor hemodialysis, lack of patient information about this renal replacement therapy option, or fear of complications and side effects [12]. Another reason is the concern about the durability of the technique as it may be limited by peritoneal membrane integrity and capacity to sustain the treatment over time. It has been proved that peritoneal membrane dysfunction is responsible for about 30% of technique failure [13], and clinical studies showed that peritoneal ultrafiltration (UF) gradually declines 2-4 years after The unphysiological characteristics of PD solutions and the uremic status are considered as main factors leading to the functional decline of the peritoneal membrane [25]. These factors induce a chronic peritoneal inflammation that can be worsened by episodes of peritonitis. Structural changes of the peritoneal membrane, including loss of mesothelial cells monolayer, sub-mesothelial fibrosis, angiogenesis, and hyalinizing vasculopathy, are the consequence of reparative processes to inflammatory insults [26,27].
Indeed, inflammation induces neoangiogenesis that increases the surface area available for solute diffusion, and on the other hand, fibrotic thickening of the peritoneum increases flux resistance and consequently reduces water flow. Therefore, initial UF decline is related to increased solute transport and consequent dissipation of the osmotic gradient. Moreover, the onset of fibrosis and neovascularization contribute to increased small-solute transport and UF failure [28].
Peritoneal fibrosis is a slow process, but functional alterations are detectable well before structural changes [29]; moreover, some authors reported that sub-mesothelial thickening and vascular changes could be present even without signs of mesothelial cell layer loss [26].
The first features of peritoneal fibrosis in PD patients were described in the 1980s [28] and subsequently, it has been proved that uremic condition and PD duration are responsible for the development of peritoneal deterioration [26,30]. Macroscopically, the peritoneum exposed to dialysate is brownish or tanned, and it displays texture alterations such as the loss of surface moisture [31]. Histologically, the first alterations occur in the mesothelial layer with distinctive cytoplasmic inclusion and signs of focal defoliation [32]. The subsequent alteration involves the sub-mesothelial compartment. Importantly, sub-mesothelial thickness and vascular alteration are associated with the duration of PD and UF failure [26].
Over the last twenty years, it has been proved that fibrosis, inflammation, angiogenesis, and epithelial-to-mesenchymal transition (EMT) are tightly interconnected in the pathogenesis of UF failure [23,33]. EMT is a common process in physiological situations such as development and wound healing, but also in pathological events such as cancer and organ fibrosis [34]. In the peritoneum, the process is more precisely termed mesothelial-to-mesenchymal transition (MMT).
MMT represents a complex phenomenon of cellular trans-differentiation that converts the mesothelial phenotype into a mesenchymal one, with the loss of epithelial characteristics and the acquisition of mesenchymal features [35]. MMT was initially thought to be irreversible, but several studies have shown that it is potentially reversible [36]. During MMT, mesothelial cells lose cell polarization, undergo the disassembly of cellular contacts such as adherens and tight junctions, and, at the same time, acquire a fibroblastic shape characterized by higher motility and the capacity to produce and secrete extracellular matrix (ECM). Given these characteristics, mesothelial cells that have undergone MMT can migrate to the sub-mesothelial zone and secrete ECM, thus contributing to fibrosis [35]. However, there is still an ongoing debate about the individual contribution of MMT-derived fibroblasts to the pool of sub-mesothelial activated fibroblasts, relative to activated resident stromal fibroblasts [23]. Moreover, endothelial-to-mesenchymal transition (EndoMT) may also contribute to the pool of activated sub-mesothelial fibroblasts [37], as occurs in the onset of fibrosis in other tissues [38].
The earliest event in MMT involves the loss of cell-to-cell contact, which is associated with the downregulation of epithelial markers such as E-cadherin, cytokeratin, and zonula occludens-1 (ZO-1) [39,40]. Tight junction proteins such as claudins and occludins are paracellular components that regulate transport across the peritoneal mesothelium, and their expression and localization are altered in PD patients [41]. In addition, loss of mesothelial layer integrity brings the sub-mesothelial tissue into contact with bio-incompatible PD solutions as well as inflammatory cytokines [42]. However, it must be kept in mind that, as mesothelial cells are of mesodermal origin, they co-express both epithelial and mesenchymal markers under basal conditions. This may explain their higher plasticity. Regarding epithelial markers, these cells express a high amount of epithelial cytokeratins, such as cytokeratins 8-18, and proteins of tight and adherens junctions, such as junctional adhesion molecule 1 (JAM1) and ZO-1. E-cadherin is expressed on the membrane and in the cytoplasm of mesothelial cells [43]. Like mesenchymal cells, mesothelial cells constitutively express the intermediate filaments vimentin and desmin [35,44]. E-cadherin downregulation is due to the induction of Snail, a master regulator of EMT that directly inhibits E-cadherin transcription [35].
Other possible causes of the bio-incompatibility of PD solutions are the hypertonicity required to generate crystalloid osmosis [45], glucose degradation products (GDPs) formed during heat sterilization [46,47], and advanced glycation end products (AGEs) formed in the peritoneal cavity [48,49].
UF failure is associated with an increased vascular surface area due to neo-angiogenesis; vascular wall thickening and augmented permeability increase small-solute permeability [50-52]. Experimental studies proved that increased VEGF production is associated with the use of standard PD solutions and with dialysis vintage [53]. Interestingly, VEGF levels decreased when patients were switched from a glucose-based to a glucose-free PD solution (icodextrin, glycerol, and amino acid-based dialysis solutions), suggesting a central role of high glucose concentration in the upregulation of peritoneal VEGF production [54].
The connection between angiogenesis and EMT is well recognized. EMT in mesothelial cells is associated with increased levels of peritoneal VEGF [55-57]. Expression of VEGF is tightly controlled at several steps: transcription, mRNA stabilization, alternative splicing, and translation [58,59]; in addition, factors and cytokines that are usually upregulated during PD (IL-1β, IL-6, IL-17, oxidative stress) can regulate its production [60,61]. In particular, TGF-β, a master supervisor of EMT, increases VEGF expression in mesothelial cells and fibroblasts. Moreover, TGF-β inhibition decreased peritoneal fibrosis and VEGF production in a murine model [62]. VEGF signaling is also regulated by the expression of VEGF receptors and co-receptors [58], which are modulated during mesothelial cell EMT [61].
TGF-β/Smad/Non-Smad/Glucose
TGF-β is part of a superfamily that includes different signaling proteins such as bone morphogenic proteins, activins, and the TGF-β isoforms [62], which are involved in several physiological and pathological processes, including proliferation, apoptosis, embryonic development, and organ fibrosis [63]. TGF-β signaling represents a common mediator of peritoneal fibrogenesis induced by the glucose, GDPs, and AGEs in bio-incompatible PD solutions [64]. Exposure of mesothelial cells to a high-glucose dialysate is associated with a higher synthesis of TGF-β [65]. Moreover, TGF-β signaling is amplified after glucose exposure due to the up-regulation of TGF-β receptor types I and II (TGFR1, TGFR2) in mesothelial cells [66]. Protein kinase C-α (PKC-α) is the common signaling pathway driving TGF-β upregulation in mesothelial cells [67].
GDPs have also been implicated in altering mesothelial cell function and proliferation [68], increasing TGF-β expression and extracellular matrix deposition in the peritoneal wall [69]. Clinical studies reported that TGF-β production correlates with PD vintage [70,71], and in-vivo studies prove that exogenous TGF-β overexpression induces peritoneal fibrosis, increases vessel density, and deteriorates solute transport as well as UF capacity [72,73].
TGF-β1 can transduce signals through Smad-dependent and Smad-independent pathways, even though most profibrotic actions of TGF-β1 run via Smad signaling. In the classical pathway, Smad2/3 are phosphorylated by activated TGFR1 and activin receptor I-β (ACTR1B). Subsequently, they are released from the receptor complex to form a heterotrimeric complex with Smad4 and translocate into the nucleus. Here, they regulate the transcription of target genes in collaboration with various coactivators and corepressors [74,75].
Smad7 is an inhibitory Smad, which inhibits Smad2/3 phosphorylation by blocking access to the TGFRs. Some works have highlighted the protective role of Smad7 against peritoneal failure, showing attenuation of PD-induced peritoneal fibrosis, angiogenesis, and inflammation [76-78]. On the other side, BMP-7 exerts antagonistic effects on TGF-β: in PD fluid-instilled rats, co-administration of BMP-7 ameliorated peritoneal fibrosis and increased capillary density [79]. Besides, Smad3 inhibition in uremic PD rat models treated with recombinant BMP-7 decreased peritoneal fibrosis and sub-mesothelial capillary density and increased UF capacity [80,81]. It has also been proved that mesothelial cells constitutively express BMP-7 and that BMP-7-dependent Smads 1/5/8 are reduced in response to conventional PD solutions [79].
Moreover, NF-κB inhibition has been linked to TGF-β signaling inhibition [87]. Finally, the high glucose concentration in PD solutions is tightly connected with TGF-β signaling and UF failure. It has been proposed that the degradation of taken-up glucose induces changes in the intracellular NADH/NAD+ ratio, as in hypoxia. Exposure to high levels of glucose stimulates the formation of mediators such as TGF-β and plasminogen activator inhibitor-1. This effect is also associated with a higher expression of glucose transporter 1 (GLUT-1). The increased amount of GLUT-1 further enhances intracellular glucose uptake and thereby sustains a vicious loop of dialysate glucose exposure, peritoneal fibrosis, and UF failure [88].
Other Signaling Pathways: CTGF, NLRP3/IL-1β, and Cytokines
Connective tissue growth factor (CTGF) is a downstream mediator of TGF-β [89] and induces similar effects: ECM production, cell proliferation, adhesion, and migration [90]. In detail, CTGF expression is activated by TGF-β via a responsive element in the promoter region of the CTGF gene [91] and mediated by Smad3 and Smad4 [92]. Its profibrotic properties have been shown in multiple mesenchymal cells, in which CTGF is a downstream effector of TGF-β [93].
Clinical data demonstrate that CTGF is upregulated in PD patients with UF failure [94,95]; its expression is regulated by glucose [96] and correlates with peritoneal membrane thickness in PD patients with and without encapsulating peritoneal sclerosis (EPS) [97].
Studies in mouse models proved that AGEs and GDPs also act via CTGF in peritoneal fibrosis, angiogenesis, and inflammation [96,98,99]. Although CTGF is involved in peritoneal fibrosis, additional studies will be necessary to characterize its potential as a pharmacological target, as it lacks a specific receptor, has several isoforms, and interacts with multiple factors (bone morphogenic factors, VEGF, Wnt, integrins, heparan sulfate proteoglycans, and the epidermal growth factor receptor) [100].
Recent data suggest that the NOD-like receptor protein 3 (NLRP3) inflammasome is involved in peritoneal inflammation and consecutive fibrosis. The NLRP3 intracellular complex is a component of the innate immune system that mediates caspase-1 activation and regulates the release of the pro-inflammatory cytokines IL-1β and IL-18 in response to microbial infection and cellular damage [101]. It has been shown that high-glucose PD solution activates NLRP3/IL-1β signaling in peritoneal mesothelial cells [102,103] and that genetic deficiency of the NLRP3 complex or IL-1β reduces inflammation and peritoneal fibrosis in mouse models [104].
IL-6 is a crucial mediator of peritoneal inflammation. Intraperitoneal IL-6 is associated with an increased peritoneal solute transport rate [110], and intraperitoneal IL-6 production is proportional to the dialysate glucose concentration [111]. IL-6 and soluble IL-6 receptors induce the synthesis and secretion of MCP-1, which attracts monocytes and lymphocytes [112]. A recent study proved that IL-6 leads to peritoneal inflammation and fibrosis development via a STAT3-dependent pathway [113]. IL-6 inhibition ameliorated EMT in human peritoneal mesothelial cells in vitro and reduced high glucose-mediated peritoneal fibrosis development in vivo by inhibiting STAT3 phosphorylation [113].
Another cytokine involved in peritoneal inflammation is IL-17, which strongly stimulates mesothelial cell production of cytokines such as CXCL1 [114]. Moreover, IL-17 is present in the peritoneum of PD patients and correlates with both the duration of PD and the extent of peritoneal inflammation and fibrosis [115]. Recent studies showed that treatment with alanyl-glutamine in rats and mice exposed to PD fluids resulted in a reduction of peritoneal fibrosis associated with reduced peritoneal IL-17 expression [116].
The Role of Metabolism
Glycolysis, glutaminolysis, and fatty acid oxidation are metabolic processes that supervise the deposition and breakdown of collagen and other ECM components, and may thereby result in fibrosis [117]. Hyperglycemia upregulates TGF-β and hypoxia-inducible factor 1 subunit alpha (HIF1α) expression [118], which, by increasing the glycolytic rate and inhibiting the pyruvate dehydrogenase complex (PDH), promotes the production of lactic acid [119]. Glycolytic intermediates are important in the synthesis of amino acid substrates for collagen synthesis [120], and lactic acid promotes the lactylation of lysine residues in extracellular proteins, favoring the conversion of macrophages to an inflammatory phenotype [121]. In summary, hyperglycemia increases TGF-β and HIF1α expression, which in turn elevates the rates of glycolysis and lactic acid production, with consequently increased collagen synthesis, acidification, expansion, and lower degradation of the ECM: in essence, a pathway promoting and sustaining fibrosis. The substrate glutamine also has a role in EMT. This amino acid is important for collagen synthesis, but it can also be converted to glutamate and then to α-ketoglutarate, which provides a substrate for the generation of NADH and FADH2 and consequently ATP through oxidative phosphorylation [122-124] (Figure 1).
PD Technique
In PD, the peritoneum, the membrane covering the entire peritoneal cavity, is used as a dialysis membrane because it is highly vascularized and has a large surface area. The parietal peritoneum comprises a single layer of mesothelial cells and a sub-mesothelial area. The mesothelial cells line the peritoneal cavity. Below, the sub-mesothelial zone contains the interstitium, a gel-like matrix containing fibroblasts, mast cells, collagen, and other extracellular matrix material. The third layer contains a network of capillary endothelium, the endothelial basement membrane, and a capillary fluid film overlying the endothelium [19,20].
PD removes excess water by osmosis, and electrolytes as well as metabolic waste products by diffusion across a concentration gradient between the capillary blood and the PD fluid infused into the peritoneal cavity via an implanted intra-abdominal catheter. Usually, two liters of PD solution are infused into the peritoneal cavity and the effluent is drained after some hours (4 to 8 h are typical dwell times). This procedure is then repeated manually about four times daily (continuous ambulatory PD; CAPD) or using a cycler during the night (automated PD; APD). Solute and water transport across the peritoneal membrane is explained via the three-pore model [21]. In brief, solute and water transport occurs across the vascular endothelium through three pores of varying sizes: large pores, small pores, and ultrasmall transcellular pores corresponding to the aquaporin-1 water channels.
Figure 1. Graphical representation of metabolic control of fibrosis. Several metabolic processes, such as glycolysis, glutaminolysis, and fatty acid oxidation, contribute to the deposition of collagen and other ECM components. High glucose levels activate glycolysis directly by increasing HIF-1α expression, which in turn amplifies the production of TGF-β1. The latter not only further sustains high glycolytic rates but also fibrogenesis. The increased production of lactate sustains macrophage polarization toward an inflammatory phenotype, which worsens the fibrosis. In addition, the increased glycolytic rate makes glycolytic intermediates available in larger quantities, contributing to the synthesis of amino acid substrates for collagen synthesis. Moreover, glutamine has a role in collagen synthesis but can also contribute to ATP production through oxidative phosphorylation. A key metabolic switch in maintaining a chronically activated fibrogenic state is the pyruvate dehydrogenase complex (PDH).
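To make the three-pore picture concrete, the short Python sketch below evaluates the Starling relation Jv_i = α_i · LpS · (ΔP − σ_i · Δπ) for each pore population and sums the contributions. This is a minimal illustration in the spirit of the model; the fractional hydraulic conductances, reflection coefficients, LpS, and pressure gradients are all assumed, illustrative values, not figures taken from this review or from [21].

```python
# Minimal sketch of water flux in the three-pore model of peritoneal transport.
# All parameter values are illustrative assumptions, not data from this review.

# Fractional hydraulic conductance (alpha) and glucose reflection coefficient
# (sigma) for each pore population, in the spirit of the three-pore model.
PORES = {
    "ultrasmall (aquaporin-1)": (0.02, 1.0),   # water-only route, rejects solutes
    "small":                    (0.90, 0.04),  # main route for small solutes
    "large":                    (0.08, 0.0),   # convective route for macromolecules
}

def water_flux(lps, delta_p, delta_pi):
    """Per-pore and total water flux (mL/min) from
    Jv_i = alpha_i * LpS * (delta_p - sigma_i * delta_pi)."""
    per_pore = {name: a * lps * (delta_p - s * delta_pi)
                for name, (a, s) in PORES.items()}
    return per_pore, sum(per_pore.values())

# Hypothetical dwell: LpS = 0.07 mL/min/mmHg, net hydrostatic gradient +8 mmHg,
# effective osmotic gradient of -22 mmHg (glucose in the cavity pulls water in).
per_pore, total = water_flux(0.07, 8.0, -22.0)
for name, jv in per_pore.items():
    print(f"{name:>25s}: {jv:+.3f} mL/min")
print(f"{'total':>25s}: {total:+.3f} mL/min")
```

With these assumed numbers, the aquaporin route, despite carrying only ~2% of the hydraulic conductance, contributes disproportionately to osmotic UF because its glucose reflection coefficient is 1.0 — which is also why AQP1-mediated free water transport (the sodium sieving discussed later in this review) is a useful functional readout.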
Effluent Biomarkers to Monitor PD Efficiency
Prognostic biomarkers have been proposed in PD patients to evaluate peritoneal membrane deterioration. The ideal PD biomarker should be directly accessible in PD effluents, to allow identification of PD patients at high risk of complications. The main biomarkers currently used in PD are IL-6, a marker of chronic peritoneal inflammation, and cancer antigen-125 (CA-125), an expression of mesothelial cell mass [125]. IL-6 increases in the effluent of patients with acute bacterial peritonitis and may be used to evaluate bacterial clearance during the infection. Furthermore, IL-6 in PD effluent correlates with subclinical infections (e.g., biofilms on the PD catheter) [126]. Notably, experimental studies suggest that persistent peritoneal IL-6 is associated with membrane change/fibrosis and angiogenesis. Other interleukins (IL-8, IL-17) are being investigated for a potential role as inflammatory markers [125].
The peritoneal membrane undergoes progressive remodeling over PD time, resulting in the accumulation of extracellular matrix and fibrosis as part of the complex process of peritoneal MMT, in which mesothelial cells are transformed into fibroblast-like cells, leading to inflammation, fibrosis, and angiogenesis. Peritoneal levels of CA-125 have been proposed as an estimate of mesothelial cell mass and a surrogate parameter for peritoneal membrane status. The change over time in CA-125 has been proposed as a marker of MMT, though the findings are not conclusive [127].
MicroRNAs (miRNAs) are small non-coding RNA molecules (18-24 nucleotides) that act as post-transcriptional regulators in several cellular processes. In PD, microRNA-21 and microRNA-31 were recently proposed to evaluate MMT, but their role is still debated [128].
Recently, PD effluent biomarkers identified by "omics" technologies, especially proteomics and metabolomics, have been proposed to predict the onset of peritoneal membrane dysfunction. The metabolic profile in PD effluent might be the expression of a healthy membrane, and its change over time may predict technique survival [129].
Interestingly, the water channel Aquaporin 1 (AQP1), released by the mesothelium, has more recently been studied as a biomarker in PD effluent. AQP1 levels in the effluent correlate with ultrafiltration and free water transport (sodium sieving) evaluated by the peritoneal equilibration test [130].
However, there is no evidence yet of an association between any PD biomarker and relevant clinical outcomes, and their use in clinical practice remains modest.
Low GDPs and Neutral pH
New glucose-based solutions with a neutral or physiological pH and low GDP content (using multi-chamber bags) have been developed to increase the biocompatibility of PD dialysate [131]. The use of lactate or bicarbonate as a pH buffer has significantly reduced systemic GDPs and AGEs. However, the clinical superiority of neutral-pH, low-GDP PD solutions has been questioned [132,133]. In detail, neutral-pH, low-GDP PD solutions seem to better preserve the peritoneal endothelial glycocalyx compared to conventional acidic solutions during prolonged PD [134]. However, biopsies in children showed early peritoneal inflammation, hypervascularization, fibroblast activation, and epithelial-mesenchymal transition, which affected PD membrane-transport function [135].
Glucose Sparing
The high glucose content of the PD solution is the main culprit for peritoneal damage over time. In addition, exposure to high glucose concentrations leads to systemic adverse effects such as hyperglycemia, insulin resistance, diabetes, and cardiovascular diseases [136,137]. Numerous compounds have been tested as alternatives to glucose, but only two osmotic agents are currently available in clinical practice: icodextrin and amino acids. Unfortunately, these compounds can only be used in a single daily peritoneal exchange [138,139], reducing the daily glucose load by only 30-50% [140]. Icodextrin is a water-soluble glucose polymer derived from starch. The use of icodextrin-containing PD solution is associated with improved peritoneal UF and fewer episodes of fluid overload [141]. However, the low pH of the icodextrin solution may induce increased local and systemic inflammation and activation of the EMT process [142].
PD solutions containing amino acids (e.g., Nutrineal®) have a pH of 6.7 and are free of GDPs. This PD solution may improve the nutritional status of some malnourished PD patients by increasing muscle amino acid uptake [143]. Peritoneal ultrafiltration rate and small-solute clearance over a 6-h dwell did not show any major difference between amino acid-based PD fluid and equimolar glucose-based solutions [144,145]. However, the biocompatibility of these PD solutions, which influences peritoneal function over time, is debated. Indeed, while some experimental studies showed a better biocompatibility profile compared to standard glucose-based PD solutions, others reported increased generation of nitric oxide in human mesothelial cells cultured with a PD solution containing amino acids, a finding that may have pathophysiological relevance [146].
Other compounds that have been tested for a potential use in the PD solution as osmotic agents to replace glucose include taurine and hyperbranched polyglycerol, but they are under experimental development.
Use of Metabolically Active Osmolytes
The osmo-metabolic approach uses osmolytes in the PD solution that may offer bioactive glucose sparing: reducing the intraperitoneal glucose load without compromising UF, while mitigating the systemic negative metabolic effects caused by the glucose load.
L-carnitine (LC) and xylitol may be used as osmo-metabolic agents in PD dialysate. LC is a naturally occurring compound involved in fatty acid oxidation [147]. The mode of action of LC relates to its ability to modulate intra-mitochondrial acetyl-CoA levels, a key metabolic intermediate able to affect both muscle glucose disposal and liver glucose production [148]. Xylitol, a five-carbon sugar alcohol, is a physiologic metabolic intermediate of the glucuronate-xylulose cycle, a pathway very active in the liver and intimately interconnected with the pentose monophosphate shunt at the level of D-xylulose-5-phosphate [149,150]. Interestingly, xylitol is a very poor insulin secretagogue compared to glucose [151]. A key attribute of xylitol is that it does not undergo a Maillard reaction as it usually happens between traditional reducing sugars (i.e., glucose) and amino acids/proteins, a reaction also commonly responsible for the formation of AGEs [152].
LC and xylitol are characterized by a molecular weight similar to glucose, high water solubility, and osmotic properties [148]. The good biocompatibility of LC- and xylitol-containing solutions has been demonstrated in several in-vitro and in-vivo models [153,154]. In addition, clinical studies have demonstrated excellent tolerability and feasibility of xylitol [155] and L-carnitine solutions [156], as well as better preservation of urine volume compared to controls (treated with standard glucose-based PD fluids) over a 4-month period [157]. Clinical use of xylitol- or LC-containing dialysate in CAPD patients was associated with positive metabolic effects such as improved glycemic control.
A PD solution containing LC, xylitol, and low glucose has been designed to achieve a favorable synergistic combination of the two osmo-metabolic agents. In-vitro studies provide further evidence that this novel formulation of PD solutions better preserves the integrity of the mesothelial cell layer compared to conventional PD solutions, reducing fibrogenic features and inflammation [158,159].
Preliminary results obtained from a phase II, prospective, open, multicenter study investigating the tolerability and efficacy of osmo-metabolic agent-based PD solutions in CAPD patients (NCT04001036) confirmed that these novel solutions are well tolerated; no serious adverse reactions were reported. Non-inferiority of the osmo-metabolic agent-based PD solutions compared to standard solutions in terms of peritoneal transport and adequacy targets was also demonstrated [160].
Use of Pharmacological Agents Added to Conventional PD Solutions
To counteract the adverse effects of conventional PD solutions, several compounds have been tested in-vitro and in-vivo. Unfractionated heparin, low-molecular-weight heparins, and sulodexide showed differing responses in clinical trials, probably due to their different capacities to inhibit complement [161-164]. With the same objective of inhibiting complement, sodium citrate has been tested in association with heparin [165].
A recently tested strategy is the addition of pharmacological doses of alanyl-glutamine (Ala-Gln) to glucose-based PD solutions. Phase II clinical trials indicate that Ala-Gln supplementation improves biomarkers of peritoneal membrane integrity, immune competence, and systemic inflammation when compared to a non-supplemented PD solution with neutral pH and low glucose degradation products, probably via an antioxidant mechanism [166-168]. However, its use in clinical practice remains debated.
Another element that has been tested as a possible pharmacological agent to add to the PD solution is molecular hydrogen (H2) [169]. Its antioxidant and anti-inflammatory properties have been tested in various animal models [170]. Molecular hydrogen, added to a standard PD solution, has also been tested in humans (6 patients), confirming a reduction in oxidative stress at both the peritoneal and systemic levels in the absence of adverse events [171]. In addition, recent in-vivo studies indicate that molecular hydrogen could preserve mesothelial integrity and reduce the progression of glucose-induced fibrosis [172,173]; thus, future clinical studies will be necessary to evaluate the efficacy and safety of this therapeutic solution.
Finally, a recent study proposes the addition of lithium chloride to conventional PD solutions to preserve peritoneal membrane integrity [174]. In detail, lithium chloride could reduce apoptosis, peritoneal membrane fibrosis, and angiogenesis by regulating the activity of some kinases, such as glycogen synthase kinase 3 and protein kinase 2. Although these findings are promising, the real benefits remain to be demonstrated, considering that lithium chloride is a psychotropic and potentially nephrotoxic agent.
Even if the addition of pharmacological agents can improve the characteristics of traditional dialysis solutions, the glucose concentration remains very high, and consequently the harmful effects associated with it would remain present [132,146].
Glycolytic and Pyruvate Metabolism as Targets to Control Peritoneal Fibrosis
An alternative pharmacological strategy to control fibrosis could be based on promoting fatty acid or pyruvate oxidation, or even on inhibiting glycolysis. For instance, an increase in PDH activity, the key step coupling glycolysis with the Krebs cycle, can be achieved through the inhibition of pyruvate dehydrogenase kinase (PDHK), a potent inhibitor of PDH, using dichloroacetate (DCA) [175]. DCA has been shown to be very effective in inhibiting fibrosis in various experimental models [175-179]. Another means of keeping PDH more active is reducing the intramitochondrial pool of acetyl-CoA, a potent allosteric activator of PDHK, with supraphysiological concentrations of L-carnitine [148]. The latter mechanism involves the freely reversible reaction catalyzed by carnitine acetyltransferase, which transfers the acetyl residue esterified to Coenzyme A onto L-carnitine to form acetyl-carnitine. Indeed, as this enzymatic reaction is very sensitive to the mass-action effect of L-carnitine, the intramitochondrial concentration of acetyl-CoA will be significantly reduced, translating into a less active PDHK and, hence, a more active PDH [180]. L-carnitine administration has been shown to mitigate the induction of fibrosis in various experimental models. A third option may be inhibiting glycolysis with 2-deoxyglucose, a glucose derivative that acts as an inhibitor of hexokinase 2 and hence of glycolysis [181]. As TGF-β1 is a key facilitator of the EMT by switching cellular energy provision from oxidative phosphorylation to substrate-level phosphorylation through aerobic glycolysis [182], the reduction of high glycolytic fluxes with 2-deoxyglucose could reduce peritoneal fibrosis [183,184]. However, given the mode of action of DCA and L-carnitine, their anti-fibrotic effects may not necessarily require a reduction of glycolytic flux but rather an efficient coupling of that flux with an active PDH. In addition, it remains to be established whether the inhibition of glycolysis is a safer strategy compared to the diversion of pyruvate metabolism towards oxidative phosphorylation [185] (Figure 2).
Figure 2. Metabolic strategies to control fibrosis could be based on inhibiting glycolysis or on promoting fatty acid and pyruvate oxidation. Glycolysis can be modulated by inhibiting hexokinase 2 with 2-deoxyglucose, though the safety of this approach must be proved. The alternative strategy is coupling glycolysis with the Krebs cycle by inhibiting PDH kinase using DCA, or increasing PDH activity by reducing the intramitochondrial acetyl-CoA pool using L-carnitine.
"year": 2022,
"sha1": "84470b18c5f84e0190c4194ef584191db0c76dd2",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/23/9/4831/pdf?version=1651134991",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d1e1789f134c33a9e3cdfdfa01dd3999941aa365",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Nutritional and mineral analysis of the ultimate wild food plants of Lotkuh, Chitral, the Eastern Hindukush, Pakistan
Wild food plants (WFPs) are designated as functional foods owing to their nutritional potential and as a source of bioactive compounds vital for human health. In times of geopolitical upheaval and nutritional imbalance in mountainous areas of the world, the contribution of WFPs is extraordinary. Lotkuh is a remote mountainous region in the Eastern Hindukush that supports distinctive global plant biodiversity. The documentation and nutritional analysis of its wild edible plants have not yet been subjected to scientific investigation, even though WFPs make up a significant component of the inhabitants' diet. The current study is the first scientific investigation of the nutritional profile of 16 WFPs in the Hindukush region of Pakistan. Plants were collected from different parts of the study area and were subjected to proximate analysis adhering to the standard protocols of AOAC International. Proximate analysis revealed the highest moisture in Rheum webbianum (91.5 g/100 g FW) and Oxyria digyna (90.5 g/100 g FW), while Elaeagnus angustifolia had the lowest (25.4 g/100 g FW). Mentha longifolia and Pinus gerardiana had 23.2 g/100 g and 14.0 g/100 g protein, whereas Berberis lyceum contained 3.6 g/100 g. Pinus gerardiana had the highest lipid (56.50 g/100 g), followed by Hippophae rhamnoides (45.50 g/100 g), with Berberis lyceum the lowest (0.91 g/100 g). Crataegus songarica, with high carbohydrate (87.50 g/100 g), was followed by Eremurus stenophyllus (80.83 g/100 g), whereas Berberis lyceum had the least (18.51 g/100 g). High crude fiber (19.33 g/100 g) was found in Ziziphora clinopodiodes, followed by Cotoneaster nummularia with 15.50 g/100 g. Pinus gerardiana and Prunus prostrata had low fiber of 1.387 and 1.377 g/100 g. Vitamin C was high in Mentha longifolia (90.63 mg/100 g), Ziziphora clinopodiodes (90.45 mg/100 g), and Eremurus stenophyllus (86.96 mg/100 g). Ca concentration was highest (948.33 mg/100 g) in Oxyria digyna, followed by Cotoneaster nummularia, whereas the lowest Ca (20.03 mg/100 g) was recorded in Diospyros lotus. Mg was highest in Oxyria digyna (994.00 mg/100 g) and lowest (10.01 mg/100 g) in Diospyros lotus. Berberis lyceum (54.30 mg/100 g), Oxyria digyna (34.33 mg/100 g), and Rheum webbianum (26.04 mg/100 g) had the maximum iron. Mn was high in Berberis lyceum (14.33 mg/100 g), Pinus gerardiana (6.33 mg/100 g), and Elaeagnus angustifolia (4.60 mg/100 g). Prunus prostrata (12.16 mg/100 g), Oxyria digyna (10.30 mg/100 g), and Pinus gerardiana (4.16 mg/100 g) were the leading plants in Zn concentration, whereas Ziziphora clinopodiodes had the lowest (0.22 mg/100 g). The current study establishes the hitherto unidentified nutritional profile of the WFPs in the area and lays the groundwork for nutritional research on WFPs in the Eastern Hindukush.
Introduction
The inhabitants of the Hindukush region along the Pakistan-Afghanistan border have a distinctive food system of wild food plants (WFPs) that sustains human life in the rugged, high-altitude terrain. People in this territory collect edible plants from the wild to use them directly as food, to sell them in the market, or both. The documentation of this food system, particularly that pertaining to WFPs, is severely lacking, and the nutritional status of the wild edible plants in this region has yet to undergo scientific inquiry [1]. The ecosystems in the tribal belt of the Hindukush reflect absolute diversity, being composed of alluvial fans, forests, pastures, and grazing lands offering diverse wild palatable fruits and vegetables [2]. These WFPs have significant therapeutic potential in addition to being vital from a caloric and nutritional standpoint [3]. In many parts of the world, WFPs have been investigated for their potential pharmacological benefits and future medicinal perspectives [4,5].
Owing to the nutritional vacuum they can fill, a variety of WFPs have been labeled as "functional food" in recent nutritional studies [6]. These plants not only serve as a healthy diet but also help cure ailments [7]. Most WFPs, according to recently conducted studies, provide a sustainable source of biologically active compounds including complex sugars, vitamins, and vital fatty acids, and as a result significantly contribute to addressing the growing problem of malnutrition [2]. Numerous studies have highlighted their significance in earning revenues, reducing poverty, improving nutritional balance, ensuring food security, and diversifying agriculture [8]. One of the landmark contributions of WFPs is their role in food security, providing a broad spectrum of food diversity and alternative food sources to local communities [9]. WFPs have played a vital role in ensuring human survival during times of geopolitical instability and famine [10].
Pakistan is a lower-middle-income country and the sixth most populous in the world. It has a wide variety of wild resources, particularly wild plants, and experiences all four seasons of the year, yet it ranks as the 11th most food-insecure country in the world [11,12]. About 60% of the country's population is under the stress of food insecurity. Due to the distance from cities, the semi-arid climate, and border disputes, food insecurity in the distant Hindukush regions is significantly worse. Owing to its geopolitical location, the Pakistan-Afghanistan border in the Hindu Kush mountain range has historically been a source of conflict. The other main causes of food insecurity and poverty in the tribal belt include man-made disasters, the sharp rise in the human population, restricted access to food, and local livelihood practices. For local communities of mountainous origin, wild food plants serve as an important natural resource to alleviate hunger and malnutrition, if managed on a sustainable basis.
In the current global food system, increased food scarcity is accompanied by a lack of nutritious food availability, which leads to a vulnerable human health system. Therefore, in some parts of the world, WFPs can be a crucial part of people's diets and offer increased dietary diversity to the residents of the mountains. Some species of food plants are consumed for their medicinal benefits, and many others are frequently used in traditional phytotherapy to treat a variety of illnesses [13].
WFPs have historically served as the primary dietary components in rural communities and continue to be an essential component of the world diet today. In addition to being a vital component of the regional cultural heritage, wild plants are frequently used for their socioeconomic viability and environmental sustainability. To put it another way, their consumption and collection can provide cultural ecosystem services [13]. When evaluating the ecosystem services provided by wild food plants in a region, it is essential to understand the nutritional makeup of these plants. Enhanced attention has been paid, in recent decades, concerning the use of wild food plants and their products among rural communities on a sustainable basis. Many floristic inventories and nutritional profiling regarding the use of wild food plants have been produced in Europe, the Americas, Africa, and Asia [14].
Little research has been conducted on the wild food system of Pakistan and, hence, on its WFPs [15]. It is critical to investigate potential WFPs and their nutrient composition and primary bioactive compounds, since they are regarded as a potential source of natural health products [16]. Uncovering the mountain ecosystem services provided by the WFPs of the Eastern Hindukush is one of the objectives of the current research. Well-established knowledge of WFPs has a potential impact on the agricultural systems of marginal areas like Lotkuh, where it is crucial to develop food crops tolerant to extreme environmental conditions in the sensitive scenario of climate change. It is noteworthy that the research's conclusions can be crucial in achieving the United Nations' 2030 Agenda for Sustainable Development. Becoming familiar with the nutritional profiles of the species in question helps in understanding how to combine wild foods with other ingredients to improve the diet. This study will highlight nutrient-dense wild plants that can become the focus of conservation and propagation efforts to satisfy the sustainable development goals in the escalating situation of food insecurity.
Geographical location of the study area
The Lotkuh region, which serves as the research area, occupies the northwest of Pakistan's Khyber Pakhtunkhwa province. Geographically, the study area stretches from 35°47′52″ to 36°29′10″ N latitude and from 71°11′52″ to 71°54′42″ E longitude. The valley has a rugged landscape and is located next to the Wakhan Corridor. The vast biodiversity of the majestic Eastern Hindukush is reflected in the territory. Terich Mir (7692 m a.s.l.), the highest peak in the Hindu Kush range, is located on the eastern side of the research area. Throughout the year, these huge mountain ranges are blanketed in perpetual snow and glaciers. The elevation of the study area ranges from 1600 m to 7000 m. The research area is subdivided into three sub-valleys, viz., Karim Abad, Arkari, and Garam Chashma [17]. The geographic location is further illustrated by Fig. 1.
Plant sample collection and identification
A field survey was conducted from the beginning of March through the end of December, and the plant specimens were collected in 2021. For identification, the collected samples were dried and mounted on herbarium sheets. The specimens were identified by consulting the Flora of Pakistan [18] and the World Flora Online (http://www.worldfloraonline.org/). The collected plants were assigned voucher specimen numbers and submitted to the Herbarium, Department of Botany, University of Peshawar. The voucher specimen numbers are given in Table 1.
Sample collection plan for nutritional analysis
Samples were collected according to the following procedure.
Collection sites
Specimens were collected from different regions of the study area. Individuals of a species were gathered from those regions where it was in abundance. Thus, different wild species were obtained from different parts and different locations of the study area. The geographic coordinates are provided for each species in Table 1.
Time of collection
Samples were collected based on the parts utilized. The parts were harvested throughout the year and collected only when fully mature, so as to capture their maximum nutritional value.
Quantity of collection
Depending on the plant parts used, different amounts were collected for various plants. The crude samples were gathered in textile bags and reached the dryer within 6 h of collection. For seeds, 500 g of fresh biomass was collected, whereas 1000 g was taken for leaves, stems, and fruits.
Analytical sample preparation
From healthy individuals of the same species, the edible sections, including leaves, stems, fruits, and seeds, were collected. The edible parts obtained from the wild plants were dried in an oven (CARBOLITE GERO-301) at 70 °C for 48 h. The dried plant parts were ground using a Silver Crest electric grinder. The resulting powder was weighed for each sample using an analytical balance (Mettler Toledo ME204E). The powdered material was then subjected to various nutritional analyses.
Replication of nutritional analysis
From the same homogenized samples, three different sub-samples were obtained and the process was repeated; thus, three replicates were analyzed for each determination.
Proximate composition analysis
The estimation of proximate composition adhered to the standard protocols of AOAC International [19]. For proximate analysis, the various edible portions of the WFPs were used. The following methods were used to investigate the various biochemical components of the edible parts.
Moisture content
The oven-dry method was used to obtain dry plant samples. The fresh biomass of each sample was recorded first. The samples were subsequently dried for 24 h at 70 °C in an ISOTHERM® laboratory oven, and the dry biomass was quantified. The total water content of the material was computed using the equation given below.
Moisture (%) = (Fresh weight of the sample − Dry weight of the sample) / (Fresh weight of the sample) × 100 (1)
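As a minimal illustration of eq (1), the following Python helper (a sketch of ours, not part of the original protocol) converts fresh and oven-dry biomass readings into percent moisture:

```python
def moisture_percent(fresh_weight_g: float, dry_weight_g: float) -> float:
    """Eq (1): percent moisture from fresh and oven-dry biomass."""
    if not 0 < dry_weight_g <= fresh_weight_g:
        raise ValueError("dry weight must be positive and not exceed fresh weight")
    return (fresh_weight_g - dry_weight_g) / fresh_weight_g * 100.0

# Example: 100 g of fresh material drying down to 8.5 g
print(moisture_percent(100.0, 8.5))  # -> 91.5, cf. Rheum webbianum in Table 2
```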
Crude protein
With the use of Kjeldahl analysis, total nitrogen, and hence protein quantity, was investigated [19]. The percent nitrogen was converted to crude protein using the conventional factor of 6.25:
Crude Protein (%) = Nitrogen (%) × 6.25 (2)
Crude lipid
To analyze the percentage of crude fat in the samples, a Soxtec® 8000 system was used. The submersion method, also called the gravimetric method, was employed for the assessment of the ether extract. This method is the most efficient and recommended method for the assessment of crude lipids in animal forage and feed [20]. In this method, the extraction of the non-polar moiety of the fats was carried out in three steps: immersion, washing, and drying [21]. The percent crude fat (lipid content) of the samples was calculated gravimetrically as:
Crude Fat (%) = (Weight of extracted fat / Weight of sample) × 100 (3)
Crude fibers
The percentage of crude fiber was determined by the protocol of AOAC International [19]. In this method, the sample was digested in sulfuric acid for about 30 min. The sample was then allowed to react with NaOH. The insoluble fraction of the plant material was dried and weighed for further calculations.
Crude Fiber (%) = (Reduction in weight after ignition / Weight of sample) × 100 (5)
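The remaining proximate fractions follow the same arithmetic. The sketch below collects eqs (2)-(5) in Python; the Kjeldahl conversion factor of 6.25 and the carbohydrate-by-difference form assumed for eq (4) are the conventional AOAC defaults and are our assumptions here, since the article abbreviates those details:

```python
# Sketch of the proximate calculations, eqs (2)-(5). The 6.25 Kjeldahl factor
# and the by-difference form of eq (4) are conventional defaults, assumed here.

def crude_protein(nitrogen_pct: float, factor: float = 6.25) -> float:
    """Eq (2): crude protein from Kjeldahl nitrogen."""
    return nitrogen_pct * factor

def crude_fat(fat_weight_g: float, sample_weight_g: float) -> float:
    """Eq (3): gravimetric ether extract as a percentage of the sample."""
    return fat_weight_g / sample_weight_g * 100.0

def crude_fiber(loss_after_ignition_g: float, sample_weight_g: float) -> float:
    """Eq (5): reduction in weight after ignition over sample weight."""
    return loss_after_ignition_g / sample_weight_g * 100.0

def carbohydrate_by_difference(protein, fat, fiber, ash, moisture=0.0):
    """Assumed form of eq (4): 100 minus the other fractions (all in %)."""
    return 100.0 - (protein + fat + fiber + ash + moisture)

# Example: 2.24% Kjeldahl nitrogen -> 14.0 g/100 g protein (cf. Pinus gerardiana)
print(crude_protein(2.24))  # -> 14.0
```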
Vitamin-C
The vitamin C content of the wild edible plants was determined using the HPLC-UV method following sample extraction with 4.5% phosphoric acid, as described in [23]. A liquid chromatograph (Micron Analytica, Madrid, Spain) with a Sphereclone ODS (2) 5 μm Phenomenex column, an isocratic pump (model PU-II), an AS-1555 automatic injector (Jasco, Japan), and a UV-visible detector (Thermo Separation Spectra Series UV100) operating at 245 nm for ascorbic acid (AA) and 215 nm for organic acids was used. Vitamin C was determined in milligrams per 100 g of dry weight.
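A hedged sketch of the external-calibration step implied by this assay is shown below: fit a line to the peak areas of ascorbic acid standards, then back-calculate a sample's content per 100 g dry weight. Every number (standard concentrations, peak areas, extract volume, sample weight) is hypothetical:

```python
# Sketch of external calibration for the HPLC-UV vitamin C assay.
# All numbers are hypothetical; the original work cites [23] for the protocol.
import numpy as np

std_conc_mg_per_l = np.array([5.0, 10.0, 20.0, 40.0])    # ascorbic acid standards
std_peak_area     = np.array([1.2e4, 2.4e4, 4.9e4, 9.7e4])

slope, intercept = np.polyfit(std_conc_mg_per_l, std_peak_area, 1)

def vitamin_c_mg_per_100g(peak_area, extract_volume_l, sample_dw_g):
    """Back-calculate mg ascorbic acid per 100 g dry weight."""
    conc_mg_per_l = (peak_area - intercept) / slope
    return conc_mg_per_l * extract_volume_l / sample_dw_g * 100.0

# 0.5 g dry sample extracted into 25 mL, giving a peak area of 4.4e4
print(vitamin_c_mg_per_100g(4.4e4, 0.025, 0.5))  # ~91 mg/100 g DW here
```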
Mineral analysis
The mineral content of the samples was determined using the wet digestion method with perchloric acid as the solvent. The digested samples were filtered through glass filters and diluted to a final volume of 100 ml with distilled water. The samples were then subjected to Atomic Absorption Spectrometry (AAS) to obtain the mineral concentration of each sample [24]. The mineral content was then back-calculated as:
Mineral content (mg/100 g) = (C × V) / (10 × W) (6)
where C is the element concentration in the digest (μg/ml), V is the final dilution volume (ml), and W is the sample weight (g).
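As a worked illustration of eq (6), the Python helper below (with parameter names of our choosing) converts an AAS digest reading into mg/100 g of sample:

```python
def mineral_mg_per_100g(aas_ug_per_ml: float, final_volume_ml: float,
                        sample_weight_g: float, dilution_factor: float = 1.0) -> float:
    """Eq (6): scale the total element in the digest (ug) to mg per 100 g."""
    total_ug = aas_ug_per_ml * dilution_factor * final_volume_ml
    return total_ug / sample_weight_g / 10.0   # ug/g -> mg/100 g

# Example: 47.4 ug/mL Ca in a 100 mL digest from 0.5 g of sample
print(mineral_mg_per_100g(47.4, 100.0, 0.5))  # -> 948.0, cf. Oxyria digyna
```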
Statistical analysis
Statistical analyses (ANOVA and LSD tests) were performed at the 0.05 probability level using MSTAT-C and GenStat software [25].
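The original analysis was run in MSTAT-C/GenStat; an equivalent open-source sketch of a one-way ANOVA followed by Fisher's LSD at p = 0.05 is given below, with placeholder data standing in for three replicates per species:

```python
# One-way ANOVA + Fisher's LSD at p = 0.05; replicate values are placeholders.
import numpy as np
from scipy import stats

groups = {
    "Oxyria digyna":          [940.1, 951.2, 953.7],
    "Cotoneaster nummularia": [872.0, 880.5, 882.5],
    "Diospyros lotus":        [19.8, 20.0, 20.3],
}

f_stat, p_value = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4g}")

# Fisher's LSD with equal replication n per group: pooled within-group MS.
data = list(groups.values())
n, k = len(data[0]), len(data)
df_error = k * (n - 1)
ms_error = np.mean([np.var(g, ddof=1) for g in data])
lsd = stats.t.ppf(1 - 0.05 / 2, df_error) * np.sqrt(2 * ms_error / n)
print(f"LSD (p = 0.05) = {lsd:.2f}; larger mean differences are significant")
```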
Wild edible flora
According to the floristic analysis of edible wild plants, a total of 16 wild plants were used as food in the research region. The plants were distributed among 10 families, including Rosaceae, Berberidaceae, Elaeagnaceae, Asphodelaceae, Apiaceae, Lamiaceae, Polygonaceae, and Pinaceae (Table 1).
Limitations of the study
The investigation was carried out in the isolated, arduous, high-altitude Eastern Hindukush region. It was challenging to reach every portion of the study area due to the topography, which consists of high mountains and cliffs with narrow, difficult pathways. Owing to the remoteness of the collecting sites, specimen collection and storage were demanding tasks that might affect the study: the nutritional status of the samples could alter slightly during storage. Another difficult task was maintaining edible plant tissues under specified conditions so that their real nutritional value was retained.
Proximate composition
Table 2 shows the proximate composition of the WFPs. The findings on moisture, protein, lipid, carbohydrate, crude fiber, and vitamin C are discussed below.
Moisture content
The moisture content was calculated following eq (1), with the results displayed in Table 2. The average moisture content exhibited substantial variation among plant species, ranging from 25.4 to 91.5%. The highest moisture content (91.5 g/100 g FW) was found in Rheum webbianum, followed by Oxyria digyna (90.5 g/100 g FW), while Elaeagnus angustifolia contained the lowest (25.4 g/100 g FW). The moisture content of a plant depends on the tissue, species, and habitat. The moisture content of the studied plants is consistent with earlier studies on WFPs [26-29]. The moisture content determines the amount of water that plant tissues can hold. Foods with relatively high water contents support many vital biological processes and protect the body from dehydration [30]. Some of the WFPs used in this investigation have enough moisture to meet the water requirements of human tissues. Our results are also in line with other researchers' findings on the moisture content of edible wild plants [31]. Leafy cultivated vegetables, on the contrary, have shown slightly higher moisture content than their wild relatives [32].
Crude protein
Crude protein content was calculated based on eq (2). Mentha longifolia was found to have the highest protein content (23.2 g/100 g) of the WFPs, with Pinus gerardiana having the next-highest level (14.0 g/100 g). Berberis lyceum possessed the lowest total protein (3.6 g/100 g). In general, most of the studied plants had protein levels above 3.6 g/100 g. As a general principle, plants with higher protein contents are preferred in the edible basket; in this study, Mentha longifolia and Pinus gerardiana were potent sources of protein. The amount of protein can differ depending on the species, the climate, the edaphology, and other environmental conditions. The wild plants in this study showed more protein than reported in most earlier studies, indicating that wild plants of the Hindukush region are protein-rich [26,29,33]. Our results, however, are closer to the findings of [28,34]. The crude protein level of these wild species was found to be higher than that of important vegetables such as lettuce, cabbage, spinach, and pepper.
Crude lipid
The lipid fraction was calculated following eq (3) for the WFPs is presented in Table 2. The highest lipid (56.50 g/100 g) was obtained from Pinus gerardiana followed by Hippophae rhamnoides (45.50 g/100 g) total lipid content. The lipid contents of the 6 plants in the group were significantly different from each other. A minimum lipid content (0.91 g/100 g) was observed in Berberis lyceum. The nuts of Pinus gerardiana are rich in lipid contents and were collected profoundly by the Hindukush inhabitants throughout history. Hippophae rhamnoides berries are a rich source of plant oil and are mostly collected in autumn and early winter. Lipid contents varied with plant and different parts of the plant. In our study 11 plant species had more than 2% of crude lipid with the highest reaching 56.50 g/100 g. In dietary analysis, the content of lipids keeps a pivotal role because lipids are a vital source of energy. Oil contents ranging from 6.12 to 67% have been reported in 32 species of oil-containing wild edible plants of the Himalayan region [35]. Comparably, a lipid content of 44.3% and more have been reported in seed kernels and berries of wild plants [36]. Table 2 depicts the total carbohydrate contents of the WFPs calculated based on eq (4). Analysis of the carbohydrate fraction of the wild edible plants showed significant differences. The highest level of carbohydrate (87.50 g/100 g) was obtained in Crataegus songarica followed by Eremurus stenophyllus carrying (80.83 g/100 g) of total carbohydrate. Both species differ in the parts used. Fruits are the edible parts in the former while leaves are cooked to eat in the latter. The lowest value of carbohydrate (18.51 g/100 g) was detected in Berberis lyceum. The leaves of this plant are edible in all stages of growth. The values obtained suggested that most of the wild plants in this study have a sufficient carbohydrate proportion. Similar results have been shown by other investigators dealing with wild edible plants [37].
Crude fiber
Dietary fiber is a type of non-starch polysaccharide that is a member of the carbohydrate family. It cannot be digested in the small intestine but may be fermented into short-chain fatty acids in the colon [38]. Dietary fibers have important health implications, ranging from common indigestion issues to more serious risks of cardiovascular diseases, cancer, and chronic diabetes mellitus [31,39]. Table 2 shows the crude fiber of the wild food plants, calculated as per eq (5). A significant difference was seen among plants: the highest level of crude fiber (19.33 g/100 g) was found in Ziziphora clinopodiodes, while Cotoneaster nummularia was the next-highest fiber-containing plant (15.50 g/100 g). Pinus gerardiana and Prunus prostrata contained 1.387 and 1.377 g/100 g crude fiber, respectively. The fiber contents of edible plants vary based on several factors such as the type of plant, variety, growth stage, and seasonal and environmental factors [40].
Vitamin-C
The vitamin C contents of the wild plants showed that Mentha longifolia, Ziziphora clinopodiodes, and Eremurus stenophyllus have the highest levels (90.63 mg/100 g, 90.45 mg/100 g, and 86.96 mg/100 g, respectively). The edible wild plants revealed relatively high levels of vitamin C and could be used as a vital component of the diet, especially for rural families. According to earlier research, the intake of vitamin C for adults should be 95 mg/day for women and 110 mg/day for men; 150 g of the fresh wild plant can make an adequate contribution to the daily need for the vitamin [41].
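To make the arithmetic behind the 150 g claim explicit, the short sketch below converts a fresh-weight intake into a dry-weight vitamin C contribution. Since the vitamin C figures above are reported per 100 g of dry weight, a moisture correction is required; the 80% moisture used here is our assumption for a leafy species, not a value from Table 2:

```python
# Back-of-envelope check of the fresh-intake contribution to the vitamin C RDA.
# The moisture fraction is an assumed value for a leafy species.

vit_c_mg_per_100g_dw = 90.63   # Mentha longifolia, dry-weight basis (Table 2)
moisture_fraction = 0.80       # assumed fresh-weight moisture of the leaves

fresh_intake_g = 150.0
dry_matter_g = fresh_intake_g * (1 - moisture_fraction)    # 30 g dry matter
vit_c_mg = dry_matter_g * vit_c_mg_per_100g_dw / 100.0     # ~27 mg

for rda_mg, who in ((95, "women"), (110, "men")):
    print(f"{vit_c_mg:.0f} mg covers {vit_c_mg / rda_mg:.0%} "
          f"of the {rda_mg} mg/day requirement for {who}")
```

Under these assumptions, a single 150 g fresh serving of a species like Mentha longifolia would supply roughly a quarter to a third of the adult requirement.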
The presence of a sufficient quantity of vitamin C in the Hindukush wild plants is quite encouraging, suggesting that the inhabitants do not face the risk of low vitamin C in their food. It has been observed that L-ascorbic acid increases the uptake of iron in the intestine; this enhanced intestinal iron absorption is one reason the RDA of vitamin C was raised from 45 to 60 mg. Vitamin C is also needed for wound repair, the healing process, and collagen synthesis for healthy hair and skin [42].
Mineral analysis
An analysis of five minerals (Ca, Mg, Fe, Mn, Zn) was made for the 16 WFPs, expressed as mg/100 g of dry weight. The quantification of the minerals was made using eq (6), and the results are depicted in Table 3. The ANOVA revealed significant variation among the mineral contents of the plants.
Calcium (Ca)
The analysis showed that calcium was present in all wild edible plants. The maximum value of calcium (948.33 mg/100 g) was recorded in Oxyria digyna; the plant is utilized raw or cooked. The next-highest level (878.33 mg/100 g) was seen in Cotoneaster nummularia. The minimum value (20.03 mg/100 g) was recorded in Diospyros lotus. The quantity of calcium ranged from 20.03 to 948.33 mg/100 g, which shows that these plants are efficient sources of calcium. Significant variations in the calcium composition of these plants may be due to differences at the genus and species level, geographical variation, growth stage, soil, and weather conditions [43,44]. Previous studies of wild plants have shown calcium contents ranging from 27.0 to 75 mg/100 g [29,33,45]. The calcium content of the wild plants in this study has shown a similar trend to the previous works mentioned above. The role of calcium is pivotal in the coagulation of blood, nerve impulse transmission, cell permeability, and the disposal of cellular toxins. A high level of calcium in food is recommended during infancy, pregnancy, and lactation. The DRI (Dietary Reference Intake) value of calcium for adults is 1000 mg/day [46]. The study revealed that wild plants are a better source of calcium than some conventional vegetables.
Magnesium (Mg)
Magnesium content varied considerably among species; the contents of Mg for the different plants are depicted in Table 3. The highest magnesium content (994.00 mg/100 g) was determined in Oxyria digyna, whereas the lowest (10.01 mg/100 g) was found in Diospyros lotus. WFPs like Cotoneaster nummularia, Mentha longifolia, Oxyria digyna, Prunus prostrata, and Ziziphora clinopodiodes carry Mg contents of more than 200 mg/100 g, which indicates that they have as much magnesium as, or more than, most typical commercial vegetables [32].
Magnesium takes part in multiple metabolic reactions in the body and is crucial in cardiovascular and nerve activities. Cellular metabolism, such as protein synthesis and other vital activities, also requires an adequate quantity of magnesium. The DRI value recommended for magnesium is 310-420 mg/day for adults [33]. It is thus evident that most of the plants analyzed are promising for fulfilling the daily magnesium requirement of the body.
Iron (Fe)
Berberis lyceum, Oxyria digyna, and Rheum webbianum had the highest iron concentrations of all the plants studied (54.30, 34.33, and 26.04 mg/100 g, respectively). Iron content varied widely among the plants, with significant differences observed (P ≤ 0.05). Six plants in the group showed relatively high iron contents of more than 10 mg/100 g. The iron fraction of the investigated plants was higher than that of most commercially grown vegetables [47]. Previous studies have reported iron contents of wild edible plants in the ranges of 4.3-119.1 mg/100 g [45], 0.17-4.88 mg/100 g [33], 18.33-48.86 mg/100 g [48], and 2.51-55.62 mg/100 g [27].
Iron is a trace element and an essential part of hemoglobin, which transports the oxygen needed for the oxidation of carbohydrates, proteins, and fats. Millions of people worldwide face anemia and other blood-related disorders because of iron deficiency [49]. An adequate amount of iron in the diet of nursing mothers is essential to meet the iron needs of the feeding baby. The DRI (Dietary Reference Intake) value is 18 mg for women, 8 mg for men, and 27 mg for pregnant and nursing women [50]. From this study of wild plants, it becomes evident that the inhabitants of the Hindukush using these plants may not encounter iron deficiency in their diet; the plants can thus be used to reduce iron deficiency in much of the rural population.
Table 3. Mineral analysis of wild food plants of Lotkuh, Chitral, the Hindukush region of Pakistan.
Manganese (Mn)
A large variation in the manganese content of the different plants was seen, with Mn contents ranging from 0.23 to 14.33 mg/100 g (Table 3). The highest Mn content was found in Berberis lyceum (14.33 mg/100 g), followed by Pinus gerardiana (6.33 mg/100 g) and Elaeagnus angustifolia (4.60 mg/100 g). The plants showed significant differences in Mn content. Similar results in wild plants were obtained by Refs. [27,29,51]. However, the manganese values obtained in this study are appreciably higher than those of some cultivated vegetables and wild edible plants (0.04-1.27 mg/100 g) reported by Ref. [33].
Manganese is a microelement crucial for human health. It acts as an activator of many enzymes and plays a pivotal role in energy production and in protecting the body by supporting the immune system. It also helps in blood clotting and blood sugar regulation [52,53]. The manganese DRI for adults is 2.3 mg for men and 1.8 mg for women. All of these wild plants are good suppliers of this trace mineral.
Zinc (Zn)
The zinc contents of the wild plants are shown in Table 3. The three highest Zn-containing plants in the group were Prunus prostrata (12.16 mg/100 g), Oxyria digyna (10.30 mg/100 g), and Pinus gerardiana (4.16 mg/100 g); the lowest zinc content (0.22 mg/100 g) was recorded in Ziziphora clinopodiodes. Zinc contents for this group of wild plants thus ranged between 0.22 and 12.16 mg/100 g. In earlier investigations, zinc values in several wild edible plants ranged from 0.1 to 9.7 mg/100 g [27,33,45] and from 0.08 to 0.9 mg/100 g in some commercial vegetables [29].
Zinc is required for protein synthesis, genomic DNA metabolism, glucose metabolism, immune system function, recovery from disease, and normal growth and development [54]. Growth failure, malnutrition, diarrhea, pneumonia, immunological impairment, increased child mortality, disrupted neurophysiological performance, and prenatal developmental anomalies are among the symptoms of zinc insufficiency [55], which affects up to one-third of the world's population. The DRI value of zinc for adults is 11 mg for men and 8 mg for women [46]. Regular and sufficient consumption of these plants in the diet may help prevent the negative effects of zinc deficiency, such as anemia.
Conclusions and recommendations
The effort necessary for a comprehensive analysis of a specific food plant is entirely justifiable. If wild food plants are to be used to diversify diets during times of hunger and food insecurity, researchers must have access to reliable empirical data. With the rising worldwide population, people are becoming more interested in adding wild plants to their meals, and the importance of dietary minerals in disease prevention is clear. Most of the wild plants in this study were comparable to most commercial vegetables, and some were more nutrient-dense. Mentha longifolia and Pinus gerardiana, followed by Hippophae rhamnoides, had the highest protein and lipid contents. Crataegus songarica and Eremurus stenophyllus are rich in carbohydrates. Ziziphora clinopodiodes and Cotoneaster nummularia make up the high crude fiber group. Mentha longifolia, Eremurus stenophyllus, and Ziziphora clinopodiodes have the highest levels of vitamin C. Oxyria digyna and Cotoneaster nummularia can contribute to the supply of calcium, while Oxyria digyna can also contribute magnesium. Berberis lyceum, Oxyria digyna, and Rheum webbianum had the highest iron concentrations, and the highest Mn contents were found in Berberis lyceum and Pinus gerardiana. Prunus prostrata, Oxyria digyna, and Pinus gerardiana are valuable sources of zinc. The WFPs studied here provide sufficient mineral nutrition to be included in the human diet; they are inexpensive and can be harvested seasonally. Finally, it is recommended to conserve, propagate, and sustainably use these plants. Future research should focus on their medicinal and pharmacological effects.
Declarations
Author contribution statement Hafiz Ullah: Performed the experiments; contributed reagents, materials, analysis tools or data; wrote the paper. Lal Badshah: Conceived and designed the experiments; analyzed and interpreted the data.
Funding statement
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
Data availability statement
Data will be made available on request.
Declaration of interests statement
The authors declare no conflict of interest. | 2022-11-13T16:02:31.263Z | 2023-03-01T00:00:00.000 | {
"year": 2023,
"sha1": "8bc17a2de55b5b976122711cdfaf52f549f9d369",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.heliyon.2023.e14449",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5f6e827d2fee56744bbce4ffb36aedc38311af1d",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |
16293894 | pes2o/s2orc | v3-fos-license | Interactions of malnutrition and immune impairment, with specific reference to immunity against parasites
KEY POINTS Clinical malnutrition is a heterogeneous group of disorders, including macronutrient deficiencies leading to depletion of body cell mass and micronutrient deficiencies, and these often coexist with infectious and inflammatory processes and environmental problems. There is good evidence that specific micronutrients influence immunity, particularly zinc and vitamin A. Iron may have both beneficial and deleterious effects depending on circumstances. There is surprisingly little good evidence that immunity to parasites is dependent on macronutrient intake or body composition.
INTRODUCTION
It is well recognized that the relationship between malnutrition and infection is an intimate one, and it is often assumed that this is because of impaired immune function. Management guidelines for treatment of malnutrition in children explicitly recognize that treatment of overt and occult infection is a first step in breaking the cycle of infection, malnutrition, and immune impairment. In this review, we shall explore one direction of this complex interaction by trying to answer the question 'what is the effect of malnutrition on immunity?' We will deal only with undernutrition, not with the immunological consequences of overnutrition. We must also point out that there are simply too few data to permit us to analyse the impact of each type of nutritional deficiency on the many pathways involved in immunity against parasites. Instead, we will try to draw broad conclusions from such information as does exist.
We can restate the above question by considering some recent observations on the pathogenicity of two protozoa. In the course of a randomized controlled trial of the effect of an elemental diet on the outcome of severe diarrhoea-malnutrition in Zambian children (1), we submitted faecal samples for parasitological analysis at the beginning and at the end of the trial. For 1 month, 200 children were treated with either routine nutritional rehabilitation or an elemental diet (i.e. a diet in which all the macronutrients are broken down to amino acids, oligosaccharides and simple lipids). At the beginning of the trial all these children had persistent diarrhoea, which was an entry criterion. At the end of the trial all 161 survivors were free of diarrhoea. But the prevalence of pathogenic protozoa was only modestly reduced at the end of the trial compared to the baseline coprological analysis. Initially the prevalences of Cryptosporidium parvum and Giardia intestinalis were 24% and 6%, respectively, but after treatment they were 13% and 8%, respectively (M. Mwiya and S. Sianongo, unpubl. obs.). In other words, children with persistent diarrhoea who had had pathogens at the beginning of the trial became convalescent carriers. This recovery from diarrhoea was very likely due to a nutritional intervention even though the protozoa were still present. There is very good evidence that these species are pathogenic, and in this and other studies C. parvum has been shown to be an independent predictor of mortality. We are led to conclude that improving nutrition restored some aspect of host defence, and this somehow improved the barrier function of the intestinal mucosa against potential pathogens. Thus, the expression of virulence is to some extent determined by host defences, and this can be modulated by nutritional status.
So our question becomes three. First, what are the major immunological defects in malnutrition that might increase susceptibility to parasitic infection? Second, what is it in the immune response that improves on nutritional rehabilitation? Third, which nutrients are most important for any of these effects? We will begin with a sketch overview of immunity against parasites and what we mean by 'malnutrition', then consider these three questions in turn.
OVERVIEW OF PARASITE IMMUNITY
While other articles in this edition will cover much of this subject in more detail, we will sketch out the salient features of immunity against parasites in order to provide a framework in which failure in malnutrition can be considered. Prevention of infection relies predominantly on barrier function and innate immunity, whereas clearance of an established infection requires either a successful humoral response (e.g. trypanosomiasis) or a successful cell-mediated immune response (e.g. schistosomiasis).
Parasite immunity builds up gradually, with the most severe complications generally apparent in the youngest and immunologically naïve. As immunity develops through repeated exposure, disease and infection become less common. For example, in malaria, immunity leads to protection from death by 5 years, but infection leading to asymptomatic parasitaemia occurs well into adult life (2). The mechanisms of immune-mediated resistance to disease and to infection remain an area of great interest but of limited understanding. The initial encounter between host and parasite usually involves inoculation or penetration into the bloodstream, or contact with a mucosal surface. In the first of these, the blood-or lymph-dwelling forms are exposed to soluble molecules and phagocytic cells. In the second, penetration into mucosal cells, for example in the intestine, immediately exposes the penetrating stage of the parasite to epithelial cells and to dendritic cells (DCs) in the Peyer's patch.
The initiation of immune responses requires recognition of ligands on the parasite by receptors of the innate immune system or of the adaptive immune system. The sensory arm of innate defence pathways includes receptors on cells such as macrophages, neutrophils, and natural killer (NK) cells, including the macrophage mannose receptor, scavenger receptors and Toll-like receptors (TLRs), and soluble receptor molecules (mannose-binding lectin, MBL, and complement). MBL deficiency has been associated with cryptosporidiosis (3) and probably with malaria (4). The sensory arm of the adaptive immune system includes the B-cell receptor (immunoglobulin) and the T-cell receptor, for which the ligand is antigen presented by HLA molecules on antigen-presenting cells. Ligands of innate immune receptors (i.e. of T-cell-independent immune responses) are molecules characterized by repeated motifs that are recognized in a class-specific manner, so-called pathogen-associated molecular patterns (PAMPs). Thus, innate immune receptors recognize prokaryotic and viral molecules that exhibit these repetitive patterns. The receptors that recognize these motifs are referred to as pattern recognition receptors, and they are 'hard-wired' into the genome.
Ligands of adaptive immune receptors do not display such molecular conformity and can distinguish more subtle nonself molecular characteristics such as specific protein sequences. Dendritic cells possess multiple innate receptors and interact with multiple cell types before committing specific T cells to activation. DCs are thus at the interface of innate and adaptive immunity and their signalling to T cells determines the type of immune response that will be generated. Their capacity to receive signals from both systems largely explains how adjuvants (which are ligands for innate receptors) augment adaptive immunity. There is emerging evidence that parasite components can interact with TLRs and thus with innate immunity. Schistosoma egg antigen regulates DC activation in response to TLR activation (5), Plasmodium haemozoin activates TLR9 (6), and C. parvum interacts with TLRs 2 and 4 (7).
Innate effector mechanisms include phagocytosis, NK cell killing, complement-mediated lysis and opsonization, and antimicrobial peptides. Much more remains to be learned about all these elements in defence against parasites, as they have received much less attention from immunologists than adaptive immune responses. What is beyond doubt is that the capacity to overcome or subvert innate host defences (for example, subversion of complement by parasites (8)) is an important element of pathogenicity (9). At mucosal surfaces, this can be recognized as crossing the epithelial barrier. With certain exceptions, such as toxin-secreting organisms, a pathogen could be defined as an organism that has escaped the compartmentalization of host and commensal by bypassing innate defences.
Adaptive immune elements include antibody responses, particularly IgG, which has a modest role in clearance of and protection against Plasmodium, Toxoplasma and Trypanosoma, and CD4 T-cell responses, which are the dominant responses in clearance and protection against the above and against Leishmania and Schistosoma spp. IgE-dependent mast cell and eosinophil responses play a major role in expulsion of helminths. CD4 T cells in small intestinal epithelium have been shown, at least in adoptive transfer experiments in mice, to play a major role in clearance of C. parvum (10). Both innate and adaptive systems can lead to inflammation, but inflammation is not the dominant response to metazoan parasites, which relies on specific pathways of clearance.
MEANING OF 'MALNUTRITION'
The term 'malnutrition' describes any disorder resulting from an inadequate or unbalanced diet, or from a failure to absorb or assimilate dietary elements. It is a broad term and can even refer to overnutrition. But in terms of the health of populations in tropical countries and their susceptibility to infectious disease, we are interested in the effects of inadequate intake or absorption/assimilation of macronutrients and micronutrients. Macronutrients, present in the diet in gram or kilogram quantities, are the constituents of body tissues: carbohydrates, proteins, fats and nucleic acids (though deficiency of nucleic acids has not been described). Micronutrients are present in much smaller quantities (milligrams or micrograms) and are required for specific metabolic functions. Examples of micronutrients are vitamins, and minerals such as calcium, iron, zinc, copper, selenium, and iodine.
Assessment of nutritional status is a complex subject beyond the scope of this article, but it can be divided into three elements: assessment of diet, assessment of body composition, and assessment of micronutrient status. As malnutrition has such a profound effect on functional performance, many nutritionists would also add that assessment of function (such as muscle strength, cognitive ability, quality of life) should be included.
Separate syndromes of severe malnutrition are recognized: severe wasting (marasmus), oedematous malnutrition (kwashiorkor), and the coexistence of oedema with severe wasting (marasmic kwashiorkor). Severe malnutrition is a result of two dominant processes: primary malnutrition (food deprivation), usually as a result of conflict or famine, and secondary malnutrition resulting from infectious, inflammatory, or malignant disease leading to anorexia and/or increased nutrient demand. In both of these situations, peripheral oedema may supervene, but its pathogenesis is not understood and it is not clear if the presence of oedema has any implications for host defence. As medical/scientific research is carried out in peacetime in fairly stable settings, most of the work on host defence in malnutrition in humans has failed to dissect out the consequences of macronutrient and/or micronutrient depletion from those of the infectious and inflammatory processes that gave rise to it. This is a serious problem in the literature.
WHAT ARE THE MAJOR IMMUNOLOGICAL DEFECTS IN MALNUTRITION?
There is a very large body of literature that attempts to define immunological dysfunction in malnourished patients, which we will deal with here, though probably the best evidence comes from intervention studies (see below). Studies reporting the findings in cohorts of children with severe malnutrition are difficult to integrate, as the studied groups are often incompletely described and, even when described well, clearly far from homogeneous. There are difficulties in the definition of malnutrition, the identification of cause, and the comprehensive description of concurrent infections, which are often hidden yet are critical confounders. With more complex testing procedures, problems also exist with the definition of normal ranges for age-matched and infection-matched controls. Future studies will need to describe very carefully the groups studied, with particular attention to infectious diseases, and have control data clearly identified to reduce bias and aid interpretation.
Our work in Lusaka addresses children and adults with malnutrition, HIV, and a broad spectrum of infectious diseases and gastrointestinal pathologies. In one recent study, we were not able to confidently identify a single case of primary malnutrition in a cohort of 84 severely malnourished children, as all had presented with a history of lengthy diarrhoeal disease or pulmonary disease, or were found to be HIV infected.
The most compelling evidence that malnutrition is associated with immunodeficiency comes from the descriptions of the infections in severe malnutrition. However, it must be remembered that infections are as much a cause of malnutrition as a consequence, and errors can be made ascribing cause and effect. Infection itself is known to have a negative effect on immunocompetence. Regarding infections in the severely malnourished, two additional facts must be considered. First, infection plays a very major role in the clinical presentation of severe malnutrition. Second, infection is often silent, as the febrile response to infection is often inadequate.
Many authors have aimed to study primary malnutrition, yet this is difficult. As an alternative, the most 'pure' form of human undernutrition amenable to study is anorexia nervosa. Although susceptibility to infectious disease is lower in anorexia nervosa than in other forms of undernutrition (11), IL-2 synthesis was reduced by 49% in one study (12). However, T-lymphocyte populations were normal, and lymphocyte proliferation in response to phytohaemagglutinin and concanavalin A was, if anything, increased (11). High circulating levels of IL-1β and TNF-α were observed in another study, together with reduced T-cell activation as expressed by CD2 and CD69 (13). These findings and others listed below leave us with considerable uncertainty as to whether it is the malnutrition per se that leads to the immune defects we describe, and in later sections we ask whether nutritional treatment can restore immune function.
Before examining the impact of malnutrition on elements of the immune system, it is important to first recognize that susceptibility to infection and associated mortality depends on other host factors also. Various aspects of barrier function become deranged in malnutrition, for example gastric acid secretion is reduced, leading to increased susceptibility to intestinal infection (14).
There is agreement that malnutrition worsens prognosis in AIDS patients (15), and in Lusaka we have confirmed that low body mass index is an independent predictor of death in the short term in patients with AIDS-related diarrhoea (16). Macronutrient support can improve survival in severely malnourished AIDS patients (17). However, it is not clear if this is an effect on immunological function.
Substantiation of malnutrition-related immunodeficiency is assembled from three distinct evidence bases.
1. Increased incidence or severity of infections. It should be noted that without evidence of increased susceptibility to, or severity of, infectious disease, abnormalities in laboratory assessments do not constitute an immunodeficiency.
2. Markers of immunodeficiency (laboratory or clinical; some of these are well validated, others much less so).
3. In vitro functional analysis of immune processes, i.e. dynamic assays.
Selected evidence from studies of malnourished human subjects, for and against a malnutrition-associated immunodeficiency, is presented in Table 1. Where possible, data have been collected from longitudinal nutrition intervention studies; where this has not been possible, observational studies have been cited. The data represent well the breadth and depth of published findings, though the list of citations is not comprehensive.
It is clear that there is much circumstantial evidence in support of malnutrition-associated immunodeficiency, but some evidence against it, and much uncertainty regarding cause and effect. Given the maelstrom of immune defects, it is tempting to consider these many elements of immune dysfunction as evidence of dysregulation rather than immunodeficiency; however, the disturbed processes remain to be uncovered. Early data from our work in Lusaka suggest that DC function, which has not previously been addressed, may also be important and may underlie some of the dysregulation described above. One child, a girl aged 20 months, presented with a 3-month history of diarrhoea and a 5-day history of sores in the mouth, fever, and cough. She was emaciated and had pedal oedema; her weight-for-height z score on admission was −3·61. She made a rapid recovery from her malnutrition and her diarrhoea ceased during her admission. Laboratory examination of her DCs on admission and then on recovery (Figure 1) identified a low DC count initially, which had risen by the time of her discharge. In concert with this finding was the discovery of an unusual phenotype in her cultured DC population: on stimulation with lipopolysaccharide (LPS), in contrast to the normal phenotype, her DCs downregulated HLA-DR and CD86. Downregulation of HLA-DR expression reduces DC capacity to support a protective T-cell response to threat, thereby disabling the immune response.
WHICH ELEMENTS OF THE IMMUNE RESPONSE RESPOND TO GLOBAL NUTRITIONAL REHABILITATION?
Standard nutritional rehabilitation for severe malnutrition now begins with blind antibiotic therapy, though in the past this was not routine. We have selected studies in which primary malnutrition was treated with nutritional therapy alone, though in many studies we cannot be certain that antibiotics were not given. Studies confirm that the initial finding of thymolymphatic atrophy resolves with renutrition (53), and in parallel, T-lymphocyte function, as examined by cell proliferation and the tuberculin test, improves (28,45,52). Note, however, that repeated tuberculin tests will improve responses through the process of 'vaccination' alone. In addition, described defects of the innate immune system, such as complement levels (43) and neutrophil microbicidal activity, also improve with renutrition.
Figure 1. Evidence of depletion of dendritic cell (DC) numbers and dysfunction of DCs in the child whose case is described in the text. (A and B) FACS (fluorescence-activated cell sorter) plots of peripheral blood mononuclear cells (PBMCs) after initial selection by side scatter and CD45 expression. Each point represents one PBMC, and the intensity of staining with lineage markers (CD3, CD14, CD16, CD19, CD56) is shown on the x axis; DCs have little or no staining for these markers. HLA-DR staining is shown on the y axis; DCs have high DR staining, and the box therefore includes those cells that are likely DCs. (A) FACS plot on admission; (B) just prior to discharge, after good nutritional recovery, when DC numbers had increased from 0·32% to 0·84% of PBMCs. (C and D) Histograms of cultured DCs at rest (blue shading) and after stimulation with lipopolysaccharide (open histogram delineated by a black line), which is expected to stimulate DCs; the isotype control is shown as a green histogram. (C) Cells from the admission sample fail to upregulate HLA-DR, but (D) after nutritional recovery the capacity to upregulate HLA-DR is restored.
Macronutrients
Clinical trials of nutritional rehabilitation and immune function are few. Although there are many trials of nutrition interventions and their effect on infectious disease, trials that show a successful improvement in nutritional status and a subsequent effect on measures of immunological function are very few. In one of the studies in anorexia nervosa referred to above, nutritional rehabilitation returned the increased mitogen responsiveness towards normal (11). Cytokine perturbations also returned to normal after re-feeding (13), but in both of these instances it is not possible to dissect out the influence of macro- and micronutrients. In a trial in which Kenyan school children were randomized to several different food supplementation regimens (meat-based, milk-based, vegetable oil-based or none), antibody titres to Helicobacter pylori, rotavirus, tetanus toxoid and malaria merozoite surface proteins showed very little change (55).
The effect of nutritional therapy on malaria has been unclear ever since the Murray team found in the 1970s that undernutrition protected against morbidity and mortality (24). This unexpected finding was borne out by studies in protein-deprived animals. Subsequent work has not really supported this contention, and a recent WHO analysis has, characteristically, attempted to quantify the proportion of malaria attributable to malnutrition (23). This more comfortable reading suggests that micronutrient deficiency plays a more significant role in immunity to malaria than macronutrient deficiency. However, we believe that the earlier work cannot be ignored, especially as it was carried out in a famine situation, which is a more 'pure' form of primary malnutrition.
It is well established that survival in AIDS is determined to a considerable degree by nutritional status (both macronutrient and micronutrient), and if this is through an effect on immune function one would expect to see improvements in CD4 count if weight gain can be achieved. Despite a careful search of several databases, no evidence for an effect of treatment using macronutrients on immune function in AIDS could be found (see also Macallan (15)). For example, parenteral nutrition improved nutritional status (body composition) compared to controls, but no assessment was made of immune function (17). There is no evidence that lipid supplementation is of benefit (56,57).
If there is a relationship between body composition and immune function, it might be mediated by leptin (58). Leptin is a 16 kDa protein hormone that was discovered as the missing gene product in the ob/ob obese mouse. Leptin, produced by adipose tissue, acts as a satiety signal: high levels are associated with body fat, and levels decrease as fat tissue is lost during starvation. The leptin receptor has structural similarities to the IL-6 family of cytokine receptors, and leptin signalling is inhibited by SOCS-3, which regulates other cytokines. Macrophages from leptin-deficient mice are constitutively activated and over-react in response to LPS, but their killing of Escherichia coli is impaired. Leptin-deficient mice also have lymphopenia and impaired delayed-type hypersensitivity (DTH). It is thus tempting to speculate that low circulating leptin in humans with wasting would lead to the T-cell and macrophage defects seen in both the ob/ob mouse and in starved mice. There is evidence in ob/ob and starving mice that the immune dysfunction is mediated by leptin, as leptin reverses the dysfunction (59). However, this has not been shown in humans, and the link between macronutrient depletion and immune dysfunction remains tentative.
Vitamin A
It has been clear that vitamin A has important anti-infective properties since 1932, when it was shown to reduce case fatality from measles. Large studies in Ghana, Indonesia and elsewhere have confirmed that vitamin A has important effects in reducing adverse outcomes of infectious disease in underdeveloped countries, particularly diarrhoea and measles (60). There are also two relevant clinical trials of the effect of vitamin A supplementation on malaria. The first, in Ghana, found no benefit on malaria morbidity (61), but the second, in Papua New Guinea (62), showed reduced malaria morbidity in children supplemented with vitamin A compared to placebo (relative hazard 0·70, 95%CI 0·57-0·87). Vitamin A supplementation may also reduce placental infection (63). However, the outstanding question is: is this an effect on immune function or on some other aspect of host defence, such as epithelial integrity?
In laboratory animals, vitamin A polarizes the immune response towards Th2 (64,65), acting through retinoic acid, its principal oxidative metabolite. Retinoic acid also boosts the anti-tetanus antibody response (66). However, evidence of an immune-boosting effect in humans is much less clear. This evidence has recently been thoroughly reviewed (67). To summarize (67): there is evidence that intestinal epithelial integrity is improved by vitamin A (68), but not of improved antimicrobial properties in breast milk, and no evidence of improved barrier function in the vagina. There is very preliminary evidence of reduced secretion of TNF-α and IL-6 on challenge with specific pathogens. There is some evidence of a beneficial effect in raising CD4 counts in HIV-infected children but not in adults. Neither is there conclusive evidence of effects on cytokine production or lymphocyte function, but antibody responses to tetanus toxoid may be enhanced if the vitamin A is given before the vaccine (67). When contrasted with the highly significant effects of vitamin A in reducing childhood morbidity and mortality, particularly from measles and diarrhoea, the very uncertain evidence of effects on immune competence is striking. It seems likely, on the basis of current evidence, that epithelial or barrier integrity is an important part of the effect of vitamin A. Furthermore, addition of a vitamin A supplement to a supplement of vitamins B, C, and E given to HIV-infected pregnant women detracted from the benefit attributable to the supplement (69), so the effects of vitamin A, even if mediated by augmented cell-mediated immunity, are complex and can be disadvantageous.
Zinc
There is abundant clinical evidence that zinc is a critically important nutrient for proper functioning of the immune system. Zinc is effective in the prevention of diarrhoea: a recent review of nine trials showed significant reductions in diarrhoea incidence, and all nine showed a reduction of some magnitude (70). Similar benefits were also found for pneumonia and malaria, though fewer trials are available for analysis. Zinc also gives a 42% (95%CI 10-63%) reduction in treatment failure or death from diarrhoea (71), though high doses may be detrimental (72). A meta-analysis of clinical trials of zinc supplementation in the prevention of malaria concluded that a reduction in incidence of 36% (95%CI 9-55%) might be possible (23), but in one trial there was no benefit on malaria incidence or severity at all (73). Thus, zinc supplementation appears clinically effective in reducing morbidity and mortality due to diarrhoeal disease and malaria in children. But is this an effect on immunity, on host defence, or on something else?
There are two lines of evidence suggesting that zinc deficiency adversely affects immune function and that supplementation improves it. First, in humans there are data from the 1970s which, though not conclusive, support this contention. Children with acrodermatitis enteropathica, a congenital defect of zinc absorption, have thymic atrophy, lymphopenia, reduced lymphocyte responses to mitogens, reduced DTH, and reduced immunoglobulin responses (74). Many other reports of immune defects in zinc-deficient patients are difficult to interpret because of comorbid processes (e.g. renal failure) which could themselves impair immunity. But in an important study in Indian children with diarrhoea, zinc supplementation increased numbers of circulating CD3 and CD4 cells, but not CD8 cells, B cells or NK cells (75). In terms of innate immunity, Paneth cells, which synthesize antimicrobial molecules for innate defence of the small intestine in humans, are also dependent on zinc (76,77). Second, zinc deprivation of mice for as little as 30 days reduced cell-mediated immunity, DTH, antitumour immunity, and antibody responses by up to 80% (78). Challenging zinc-deficient animals with low doses of Trypanosoma cruzi or intestinal nematodes resulted in death (79). The deficiency state was associated with reduced numbers of lymphocytes due to impaired lymphopoiesis, but the production of antibody by each cell was not impaired. Furthermore, while zinc deficiency had marked effects on lymphoid cells, there was no effect on myeloid cells, and this led Fraker et al. (78) to advance a fascinating and potentially very important theory: that maintenance of lymphocyte populations is very expensive in terms of zinc and other nutrients, and that in the face of nutritional stress innate defence is maintained at the expense of adaptive immune responses. The Fraker theory is very attractive and deserves much further work. If true, the ramifications for management of infectious disease in malnourished patients could be considerable.
However, there is evidence that NK cell function and phagocytosis by macrophages are also impaired in zinc deficiency, and this may be a consequence of reduced oxidative burst capacity, for example in trypanosomiasis (80). Zinc supplementation of mice during Plasmodium berghei infection reduced markers of oxidative stress (81), but the significance of this is not clear. Zinc itself induces release of IL-1, IL-6, TNF-α, and IFN-γ in macrophages but not T cells, and high supraphysiological concentrations suppress T-cell functions (82). Early data suggest that zinc is important for maintenance of antimicrobial peptide delivery in the small intestine (77,83).
The most definitive evidence that zinc is critical for immune function in humans comes from experimental zinc deficiency induced by dietary restriction in human volunteers (84). Deficiency reduced thymulin levels in blood and reduced the CD4/CD8 ratio. Zinc deficiency also reduced synthesis of the Th1 cytokines IL-2 and IFN-γ, but not the Th2 cytokines IL-4, IL-6, and IL-10. NK cell activity was also reduced in the volunteers on a zinc-deficient diet.
Iron
In studies of iron-deficient humans, iron deficiency has been associated with defects in both adaptive and innate immunity, and these are reversible with iron therapy (85). Adaptive immune defects include reduced T-cell numbers, reduced T-cell proliferation, reduced IL-2 production by T cells, reduced MIF production by macrophages, and reduced tuberculin skin reactivity. Innate immune defects include reduced neutrophil killing, probably due to reduced myeloperoxidase activity, and impaired NK cell activity.
However, the picture is far from simple. Lactoferrin in human milk chelates iron and inhibits bacterial proliferation by depriving the bacteria of an essential nutrient. The bacteriostatic effect of human milk is abolished by iron therapy (86), so that iron therapy would be expected to increase neonatal intestinal infectious disease. In milk-drinking nomads, iron therapy was associated with an increase in Entamoeba histolytica infection, possibly due to saturation of the milk transferrin that overcomes the protective effect (87).
The same group had previously noted recrudescence of malaria and schistosomiasis in nomads treated with iron (88). An overview of iron supplementation studies in malarious regions included 11 trials (85). Five of nine trials in which clinical malaria was assessed showed a deleterious effect, and no trials showed benefit. Respiratory infections and other infectious morbidity were also, if anything, increased (though diarrhoeal disease was not). A recent observational study in Kenya (89) indicated that the incidence of clinical malaria was lower among iron-deficient children (IRR 0·70, 95%CI 0·51-0·99). This deleterious effect of iron supplementation on infectious disease has not been observed in clinical trials in nonmalarious regions (85), though there is evidence that dialysis and multiply transfused patients with iron overload have immune defects (90). Finally, and fairly conclusively, a recent large study seems to confirm previous observations that iron supplementation worsens infectious disease morbidity and mortality by 11-15% (91) and, taken together, the evidence is that this effect is real and important.
In summary, it is difficult to draw a firm conclusion as to whether iron status contributes to the impaired immunity to parasites seen in malnutrition. There is evidence of T-cell and innate immune impairment in iron deficiency, but supplementation (i.e. supraphysiological intakes of iron) seems to worsen susceptibility to malaria and possibly to other infectious diseases.
Other antioxidant molecules
Selenium is an important antioxidant that has been shown to have wide-ranging immunostimulant effects on macrophages and on T and B cells in humans (92). However, the evidence for this rests on a very small number of primary publications (93,94). The most compelling recent evidence is an example of the sort of functional immunological testing that is all too rare in this field (95): twenty-two British volunteers with low plasma selenium concentrations were given modest doses of a selenium supplement (up to 100 µg/day) or placebo, and were then challenged with oral polio vaccine and their immune responses to the vaccine determined (95). Selenium-supplemented volunteers showed increased T-cell proliferation and higher interferon-γ and IL-10 production by T cells 7 days after vaccination. They also showed more rapid clearance of the virus from stool. The situation is very similar for vitamin E, in which there is much interest but for which the evidence base is narrow. In one clear-cut study in elderly people, vitamin E supplementation for 4 months increased DTH responses and increased antibody titres to clinically relevant vaccines (hepatitis B, tetanus), but not immunoglobulin levels or T- or B-cell numbers (96). There are, to our knowledge, no data on the effect of these micronutrients specifically on immune responses to parasites, but these findings suggest that antioxidant nutrients are likely to be important in maintaining immunity.
CONCLUSIONS
There is evidence that malnutrition impairs elements of adaptive and innate immunity which would be important for defence against parasitic infections, although evidence of increased incidence or severity of parasitic infections in malnourished humans is fairly limited. The evidence that this immune dysfunction is attributable to deficiency of protein or other macronutrients is weak; we find it unconvincing and conclude that it has been overstated in the past on the basis of poorly controlled studies. On the other hand, there is good evidence of links between micronutrient deficiencies and immune impairment. This evidence is strongest for zinc, deficiency of which leads to impairment of both innate and T-cell responses. The evidence that antibody responses are impaired in any malnourished state is much less convincing. Given the very heavy burden of infectious disease around the world, and its massive contribution to illness and premature death, this field warrants much greater attention. As primary malnutrition is usually associated with famine, conflicts and population displacement, and confounding factors in secondary malnutrition are inevitable, observational studies are difficult to interpret. Study of patients with anorexia nervosa could still give much useful information on the impact of macronutrient depletion. However, the most useful information will be derived from specific controlled interventions in volunteers and in patients. | 2018-04-03T04:42:48.433Z | 2006-11-01T00:00:00.000 | {
"year": 2006,
"sha1": "5a6b3be1e5e08803fe507cadf60c8d98f651b4c0",
"oa_license": "CCBY",
"oa_url": "https://europepmc.org/articles/pmc1636690?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "5a6b3be1e5e08803fe507cadf60c8d98f651b4c0",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
18132522 | pes2o/s2orc | v3-fos-license | The Yamabe invariant of simply connected manifolds
We prove that the Yamabe invariant of any simply connected smooth manifold of dimension n greater than four is non-negative; equivalently, that the infimum of the L^{n/2} norm of the scalar curvature, over the space of all Riemannian metrics on the manifold, is zero.
Introduction
A classical problem in differential geometry is the determination of the family of functions that can be obtained as the scalar curvature of a Riemannian metric in some fixed smooth manifold M. As one might expect, the features of the problem in low dimensions are very different from the high-dimensional case. The classical Uniformization Theorem assures that any Riemannian metric on a 2-dimensional manifold is conformal to a metric of constant scalar curvature (in dimension two, this is the same as constant sectional curvature). For a compact surface, the Gauss-Bonnet formula then shows that the sign of such a constant is determined by the topology of the surface. Moreover, if we restrict to metrics of unit volume, then the value of the constant is completely determined by the topology (it is 4πχ, where χ is the Euler characteristic of the surface).
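For the reader's convenience, here is the one-line computation behind that value, using the two-dimensional convention that the scalar curvature s is twice the Gaussian curvature K:

$$\int_M s \, dA = 2 \int_M K \, dA = 4\pi\chi(M),$$

so a unit-area metric of constant scalar curvature on M must have s = 4πχ(M).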
In higher dimensions, we can still deform any metric to a metric of constant scalar curvature. Namely, given any Riemannian metric g on a compact smooth manifold M, there is a metric conformal to g which has constant scalar curvature. This is the well-known Yamabe problem. Yamabe first stated this result in [25], but his proof was not correct, as pointed out by Trudinger in [24]. The argument was completed in several steps by Trudinger [24], Aubin [3] and Schoen [20]. Note that in dimensions greater than two the obstruction to the existence of metrics with a determined sign is much weaker. For instance, every compact manifold of dimension at least three admits a metric of constant negative scalar curvature. There are well-known obstructions, though, to the existence of metrics of positive or vanishing scalar curvature.
If a function is expressed as the scalar curvature of a metric on a manifold M, any positive multiple of the function can be obtained by rescaling the metric. In order to avoid these "trivial" variations, and therefore to get a meaningful measure of the possible "size" of the scalar curvature function, it is reasonable to restrict to metrics of unit volume. This was first considered by O. Kobayashi in [8], where he introduced what we will call the Yamabe invariant of a compact smooth manifold M. First consider a fixed conformal class of metrics C on M, and let the Yamabe constant of (M, C) be

$$Y(M, C) = \inf_{g \in C} \frac{\int_M s_g \, dvol_g}{\mathrm{vol}(M, g)^{(n-2)/n}},$$

where $s_g$ denotes the scalar curvature of g. The solution to the Yamabe problem is precisely achieved by showing that this infimum is always attained by a smooth metric, which necessarily has constant scalar curvature. The Yamabe invariant is then defined by

$$Y(M) = \sup_C Y(M, C),$$

where the supremum is taken over all conformal classes of metrics on M. Note that this invariant is also frequently called the sigma constant of M [21].
Note that the invariant is readily computable in dimension two from the Gauss-Bonnet formula. In dimension three, the computation of the invariant (of manifolds for which the invariant is non-positive) would follow from Anderson's program for the hyperbolization conjecture [1]. Computations of the invariant in dimension four have been carried out by LeBrun in [11]; he computed the invariants of all compact complex surfaces of Kähler type which do not admit metrics of positive scalar curvature. See also [5,12,17] for other computations of the invariant in dimension four.
We will be concerned in this paper with simply connected manifolds of dimension greater than four. The study of manifolds which admit metrics of positive scalar curvature has proved to be very interesting and deep. See for instance the work by Gromov and Lawson [4], Schoen and Yau [19] and Stolz [22]. In this last paper, it is completely determined which simply connected compact manifolds of dimension greater than four admit metrics of positive scalar curvature. Recall that the Yamabe invariant of M is positive if and only if M admits a metric of positive scalar curvature (see for instance [21]).
We will prove: Theorem 1: Every simply connected smooth compact manifold of dimension greater than four has non-negative Yamabe invariant.
Note that for a manifold M which does not admit positive scalar curvature metrics, the Yamabe invariant can also be computed by the formula

$$Y(M) = -\left( \inf_{g \in \mathcal{M}} \int_M |s_g|^{n/2} \, dvol_g \right)^{2/n},$$

where $\mathcal{M}$ is the space of all Riemannian metrics on M (see for instance [11]).
It is also well-known that if a compact manifold (of dimension at least three) admits a metric of positive scalar curvature then it also admits a scalar-flat metric. We can therefore rephrase the previous theorem as: for every simply connected smooth compact manifold M of dimension greater than four,

$$\inf_{g \in \mathcal{M}} \int_M |s_g|^{n/2} \, dvol_g = 0.$$

It follows from these results that dimension four is quite exceptional in terms of the Yamabe invariant. Note that Theorem 1 is obviously true in dimension two and clearly expected to be true in the 3-dimensional case (either from the Poincaré Conjecture or from Anderson's program for the Hyperbolization Conjecture [1]). But it is definitely not true in dimension four (see the work of LeBrun in [10,11]); moreover, it seems likely, from LeBrun's computations, that the generic simply connected compact four-manifold has strictly negative Yamabe invariant.
The Spin Cobordism Ring
The aim of this section is the proof of Theorem 2 below. We will need to recall some basic facts about the spin cobordism ring. Details can be found in [9,23]. Of course, we will need to consider in this section manifolds with boundary. And hence, in this section, we will call a manifold X closed if it is compact and without boundary (in the other sections we always assume that the manifolds have no boundary).
For a spin manifold we will mean a smooth oriented manifold with a fixed spin structure on its tangent bundle. Let X be a spin manifold with boundary. The spin structure on X induces a canonical spin structure on the boundary of X (see [14]), and two closed spin manifolds X_1 and X_2 of dimension n are called spin cobordant if there is a compact spin manifold X (of dimension n + 1) so that ∂X is, as a spin manifold, the disjoint union of X_1 and −X_2 (the minus meaning that the orientation is reversed). The set of equivalence classes of n-dimensional closed spin manifolds under this relation is called the n-dimensional spin cobordism group, Ω^Spin_n. It is an Abelian group, with the sum given by the disjoint union or equivalently by the connected sum (if X and Y are connected spin manifolds, then X#Y inherits a spin structure so that X#Y is spin cobordant to the disjoint union of X and Y).
Throughout this section we will denote by [X] the class of the closed spin manifold X in the spin cobordism group.
The product of manifolds gives a ring structure to Ω^Spin_*. This is called the spin cobordism ring. Its structure has been determined by D.W. Anderson, E.H. Brown and F.P. Peterson in [2], where it is proved that two closed spin manifolds of dimension n are spin cobordant if and only if they have the same Stiefel-Whitney and KO characteristic numbers. A particular example of these characteristic numbers, which will play an important role in this article, is the α-homomorphism, α : Ω^Spin_* → KO_*(pt), first introduced by Atiyah and Milnor (see [14]). It is important for us that α is a ring homomorphism; i.e. for any connected closed spin manifolds X and Y, α[X × Y] = α[X] · α[Y]. Now recall that KO_n(pt) vanishes for n = 3, 5, 6 and 7, while KO_1(pt) and KO_2(pt) are isomorphic to Z_2, and KO_4(pt) and KO_8(pt) are isomorphic to Z. Recall also that multiplication by a generator of KO_8(pt) gives an isomorphism between KO_n(pt) and KO_{n+8}(pt). Moreover, α is an extension of the Â-genus in the sense that, after picking suitable generators of KO_{8k} and KO_{8k+4}, α is exactly the Â-genus in dimensions 8k and one half of the Â-genus in dimensions 8k + 4. The homomorphism α plays a central role in the study of the scalar curvature of compact spin manifolds. Hitchin proved in [6] (generalizing the now classical work of Lichnerowicz [13]) that if the compact spin manifold X admits a metric of positive scalar curvature, then α[X] = 0. The converse of this result is also true for simply connected manifolds. It was conjectured by Gromov and Lawson in [4], and proved in different cases by Gromov and Lawson [4], Miyazaki [15], Rosenberg [18] and finally (in general) by Stolz [22]. More precisely, Stolz proved that any cobordism class in the kernel of the homomorphism α can be represented by a connected spin manifold which admits metrics of positive scalar curvature (the result then follows from the arguments in [4]). Similarly, to prove Theorem 1, we will need to show that every element in the spin cobordism groups can be represented by a manifold with non-negative Yamabe invariant. We will prove this statement for every dimension in Theorem 2, although we will only need the result for dimensions greater than four. We will prove Theorem 2 now and then use it to prove Theorem 1 in the following section.
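For reference, the coefficient groups just listed can be collected into a single display (the value KO_0(pt) ≅ Z is included for completeness), together with Bott periodicity:

$$KO_n(pt) \cong \mathbb{Z},\ \mathbb{Z}_2,\ \mathbb{Z}_2,\ 0,\ \mathbb{Z},\ 0,\ 0,\ 0,\ \mathbb{Z} \quad (n = 0, 1, \ldots, 8), \qquad KO_{n+8}(pt) \cong KO_n(pt).$$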
We will need to use the following obvious fact: Lemma 1: Suppose that M and N are closed smooth manifolds and that the Yamabe invariant of M is non-negative. Then the Yamabe invariant of M × N is non-negative.
Proof: Given any ǫ > 0 we will show that there is a metric on M × N of unit volume and constant scalar curvature bounded below by −ǫ. This implies the lemma.
Every compact smooth manifold admits a metric of constant scalar curvature. After rescaling, we can then pick a metric g on N of volume V whose scalar curvature is a constant greater than −ǫ/2. The fact that the Yamabe invariant of M is non-negative assures that there is a metric h on M of volume 1/V and whose scalar curvature is a constant greater than −ǫ/2.
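Both facts needed to conclude are standard for Riemannian products: volume is multiplicative and scalar curvature is additive. As a worked check, with the choices above,

$$\mathrm{vol}(N \times M, \, g + h) = \mathrm{vol}(N, g) \cdot \mathrm{vol}(M, h) = V \cdot \frac{1}{V} = 1,$$

$$s_{g+h}(x, y) = s_g(x) + s_h(y) > -\frac{\epsilon}{2} - \frac{\epsilon}{2} = -\epsilon.$$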
The product metric g + h on N × M therefore has unit volume and scalar curvature bounded below by −ǫ, as required. ✷
We can now prove: Theorem 2: Every element in the spin cobordism group Ω^Spin_n can be represented by a connected spin manifold with non-negative Yamabe invariant.
Proof: The statement is trivial in the case n = 1. When n = 2, we have that Ω^Spin_2 is isomorphic to Z_2, and its two elements are represented respectively by a torus (the product of two copies of the spin structure on S^1 that is not a spin boundary), whose Yamabe invariant is zero, and by the 2-sphere with its canonical spin structure (the Yamabe invariant of the 2-sphere is 8π). When n = 3, 5, 6 or 7, the spin cobordism group Ω^Spin_n is trivial, and its only element can be represented by the sphere of the corresponding dimension (which of course has positive Yamabe invariant). For n = 4, the spin cobordism group is isomorphic to Z, generated by the class of a K3 surface. Note that the Yamabe invariant of the K3 surface is zero, since it admits a scalar-flat metric (the Ricci-flat metrics constructed by Yau [26]) but no metric of positive scalar curvature (since its Â-genus does not vanish). The same is true for the connected sum of any (positive) number of copies of the K3 surface. The zero element in the group can be represented by the 4-sphere. Now recall that D.D. Joyce constructed examples of 8-dimensional compact Riemannian manifolds with holonomy Spin(7) (see [7]). These manifolds are then necessarily simply connected, spin, Ricci-flat and have Â-genus 1 (actually D.D. Joyce shows explicitly that his examples have Â-genus 1, and uses this to prove that they have holonomy Spin(7) and not a proper subgroup of it). Call J^8 one of these examples. Then α[J^8] is a generator of KO_8(pt). Hence, multiplication by α[J^8] gives an isomorphism between KO_n(pt) and KO_{n+8}(pt). Note also that the Yamabe invariant of J^8 is zero (it cannot be positive since the Â-genus is non-zero). Now consider any class [X] ∈ Ω^Spin_n with n ≥ 8. Let P be a closed spin manifold of dimension n − 8 so that α[P] is a generator of KO_{n−8}(pt) (α is an epimorphism in every dimension), and let Q = P × J^8. Note that it follows from Lemma 1 that Y(Q) ≥ 0. Note also that α[Q] is a generator of KO_n(pt). There exists then an integer k so that α[X] = kα[Q]. This means that [X] − k[Q] is in the kernel of α and therefore, from Stolz's Theorem (see [22]), it can be represented by a closed connected spin manifold S of strictly positive Yamabe invariant. Finally, [X] can be represented by the manifold X′ obtained as the connected sum of S and k copies of Q. But the condition that the Yamabe invariant is non-negative is closed under connected sum (see [8]), and hence Y(X′) ≥ 0. This completes the proof of Theorem 2. ✷
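As a worked check of the bookkeeping in the last step, note that k was chosen so that α[X] = kα[Q], and α is a homomorphism of cobordism groups, so

$$\alpha\big([X] - k[Q]\big) = \alpha[X] - k\,\alpha[Q] = 0;$$

thus [X] − k[Q] lies in the kernel of α, which is exactly the hypothesis of Stolz's theorem invoked above.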
Proof of Theorem 1 and final remarks
We will now prove Theorem 1. We will use the following result of Gromov and Lawson [4, proof of Theorem B]: Theorem 3: Let N be a compact simply connected spin n-dimensional manifold. Suppose that n ≥ 5 and that the manifold X is spin cobordant to N. Then N is obtained from X by performing surgery (on X) on spheres of codimension greater than two.
Let M be a compact simply connected manifold of dimension at least five. It was proved by Gromov and Lawson in [4] that if M is not spin then it admits a metric of positive scalar curvature. This means that the Yamabe invariant of M is strictly positive. We can therefore assume that M is spin.
We have proved in Theorem 2 that M is spin cobordant to a spin manifold X with non-negative Yamabe invariant. It follows from Theorem 3 that M is obtained from X by performing surgery on spheres of codimension greater than two. Finally, it is proved in [16] that if Ñ is obtained from a compact smooth manifold N by performing surgery on spheres of codimension greater than two, then Y(Ñ) ≥ Y(N). Applying this result to our situation, we see that Y(M) ≥ Y(X) ≥ 0, and we have therefore finished the proof of Theorem 1.
✷ Remark: It is clear from Theorem 1 that for any simply connected compact manifold X of dimension greater than four such that α[X] ≠ 0, the Yamabe invariant of X is zero (non-vanishing α obstructs positive scalar curvature, so Y(X) ≤ 0, while Theorem 1 gives Y(X) ≥ 0).
Remark: One of the main motivations for the study of the Yamabe invariant is the minimax method to construct Einstein metrics [21]. Assume for simplicity that Y(X) ≤ 0. Then the Yamabe invariant of X is the supremum of the scalar curvature over the family of metrics on X of constant scalar curvature and unit volume. Moreover, if the supremum is achieved by a metric g (as before), then g is Einstein. Unfortunately, it is not always the case that the supremum is achieved. One can deduce many examples of this from Theorem 1: if a simply connected manifold X (compact, of dimension greater than four) has non-vanishing α-genus, then Y(X) = 0, and the minimax procedure would provide a Ricci-flat metric. But, as has been well known since the work of Lichnerowicz, when the scalar curvature vanishes the Weitzenböck formula for the Dirac operator shows that every harmonic spinor is parallel. And the existence of non-trivial parallel spinors is a very restrictive condition (we know that in our examples there are non-trivial harmonic spinors since the α-genus is not zero). It implies that our simply connected manifold must be the product of certain 8-dimensional manifolds and a Ricci-flat Kähler manifold (see [6, Theorem 1.2 and footnote p.54]). This, of course, implies that for most of the examples we consider the minimax method to construct an Einstein metric does not work. In particular, it does not work for any of the exotic spheres S such that α[S] ≠ 0 (see [6,14]).
Remark: One of the main motivations for the study of the Yamabe invariant is the minimax method to construct Einstein metrics [21]. Assume for simplicity that $Y(X) \leq 0$. Then the Yamabe invariant of X is the supremum of the scalar curvature over the family of metrics on X of constant scalar curvature and unit volume. Moreover, if the supremum is achieved by a metric g (as before), then g is Einstein. Unfortunately, it is not always the case that the supremum is achieved. One can deduce many examples of this from Theorem 1. Namely, if a simply connected manifold X (compact, of dimension greater than four) has non-vanishing α-genus, then $Y(X) = 0$. Hence the minimax procedure would provide a Ricci-flat metric. But, as has been well known since the work of Lichnerowicz, when the scalar curvature vanishes the Weitzenböck formula for the Dirac operator shows that every harmonic spinor is parallel. And the existence of non-trivial parallel spinors is a very restrictive condition (we know that in our examples there are non-trivial harmonic spinors since the α-genus is not zero). It implies that our simply connected manifold must be the product of certain 8-dimensional manifolds and a Ricci-flat Kähler manifold (see [6, Theorem 1.2 and footnote p. 54]). This, of course, implies that for most of the examples we consider the minimax method to construct an Einstein metric does not work. In particular, it does not work for any of the exotic spheres S such that $\alpha[S] \neq 0$ (see [6, 14]). | 2014-10-01T00:00:00.000Z | 1998-08-14T00:00:00.000 | {
"year": 1998,
"sha1": "17fd66025c926ec4aa982f43d0d7847608d31bfe",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/math/9808062v1.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "bbf7524fe98e8196db6696a5eed3b67828658b7c",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
210332662 | pes2o/s2orc | v3-fos-license | First Russian Experience of Composite Facial Tissue Allotransplantation
The facial allotransplantation technique was first introduced to the general public in 2005. The definition of the face as a complex system of organs that perform social functions made possible the adaptation of this operation into clinical practice. The year 2010 was the starting point for initial research in the Russian Federation. Based on previous achievements and existing world experience in this field, facial allotransplantation was used for the first time in 2015 in St. Petersburg. The goal of this operation was to reconstruct a soldier's central facial area after an electric burn; he was injured in the military line of duty. This article describes the challenges faced in preparing for this operation and the issues encountered in facial tissue removal, as well as the donor selection criteria. Each stage of the composite facial allotransplantation, the complications that can occur during the operation, milestone results, and the subsequent rehabilitation and immunosuppressive therapy during the 4-year patient observation period following surgery, including the description of a single episode of cell-humoral rejection of the transplanted tissue, are described in detail. The experience gained from the first facial allotransplantation performed in Russia shows the possibility of using a new composite allograft to correct deformities in the central area of the face with the achievement of a successfully functioning and aesthetically pleasing result after the operation. After 4 years of dynamic observation and individual rehabilitation programs, the main goal of the facial transplantation, that is, social re-adaptation of the patient, was achieved.
INTRODUCTION
Facial allotransplantation is an experimental procedure and is now at the stage of accumulation and systematization of acquired knowledge. The first successful partial facial transplantation was performed by Dubernard in France in 2005. Facial allotransplantations have been performed in the United States since 2009, where a procedure involving the largest volume of transplanted tissues was performed in 2015. 1,2 Currently, global experience includes almost 40 successful allotransplantations, 3 both full and partial; the 32nd surgery, according to the registry, was carried out by the authors in St. Petersburg, Russia, in May 2015.
Most authors describe the face as a separate area or structure consisting of different zones, each of which has its own characteristics. 4 Siemionow 5 introduced a concept of the "face as an organ," which allowed for amendment of the legal regulations. In our view, the face is made up of a variety of tissues performing as a whole to fulfill a social function. The social function of the face is crucial, which is why recipients' resocialization after the face transplantation is the main indicator of success and the essence of the whole operation.
MATERIALS AND METHODS
Patient E, a 19-year-old man, was admitted to hospital for treatment of third- to fourth-degree electrical burns to 17% of the surface of the head, neck, and right upper and lower limbs. He was injured on August 09, 2012, in the line of duty while performing military service. Over a period of 3 years, he received >30 reconstructive surgeries, performed with the aim of closing the defects, restoring vision, and regaining function of the affected limbs. However, the results of treatment were not sufficient to allow the patient to achieve social adaptation and did not eliminate the major self-identification disorder which he developed after the facial injury. This caused him to attempt suicide on 2 occasions. All possibilities for plastic reconstructive surgery were exhausted, and in view of the deficiency of the covering tissues, it was decided to perform allotransplantation of a complex facial tissue allograft for patient E.
It is important to note that after the traditional surgical treatment, functional disorders of the damaged right half of the face were insignificant. The main and most difficult target of the reconstruction was the central zone of the face (Fig. 1).
During preparation of the potential recipient for face allotransplantation, informed consent was obtained for the surgery. The patient was fully examined according to the international pretransplantation protocol.
For the purposes of detailed assessment of the damage, the patient underwent several series of 3-dimensional (3D) laser scans of the face, followed by printing out the areas of defect using a 3D printer. This allowed us to carry out more accurate modeling and anatomical positioning of the allograft, both during preparation for the surgery and during the procedure itself (Figs. 2 and 3).
Stages of Preparation for the Face Allotransplantation

Experimental Stage
In preparation for the face allotransplantation in Russia, the authors completed all stages of the world protocol: legal groundwork for the process, and experimental and anatomical studies.
In the period from 2010 to 2014, >40 experimental allotransplantations were performed, followed by cyclosporine monotherapy. Study of the immune response was carried out using a method of quantitative determination of the following subpopulations of lymphocytes: CD3+, CD4+, CD8a+, CD161+, CD25+, CD45RA+, and CD20+. 7 One of the conclusions drawn from the allotransplantation of a revascularized fascial composite allograft was the identification of the most immunologically favorable model. A low load on the recipient's immune system was identified when the volume of transplanted tissues was not >25% of the surface of the face and neck, compared with hemifacial and full transplantations. In this group of animals, there were no fatal cases, and the follow-up period was 200 days. 8 The experimental data obtained were taken into account while planning the surgery.
Anatomical Stage
Simultaneously with the experimental research, studies were carried out on unfixed cadaveric material for understanding the anatomical features and practicing technical aspects of withdrawal of a face tissue complex from the donor. The work was performed on 46 cadaver models, during which both a full-face model and partial face allografts were considered.
It should be noted, however, that nearly every tissue complex model is unique, especially when planning partial facial allotransplantation. The criteria for an ideal tissue complex model were as follows: minimal immunological load, complete restoration of lost functions of the face, and the most aesthetically beneficial result for the patient.
Most favorable in all 3 of these aspects was a model of allotissues that was designed specifically for this surgery. This included the anterior wall of the frontal sinuses; soft tissues of the forehead, including the frontal muscle; fasciae, subcutaneous fat; skin; whole nose, including bone tissue, nasal cartilages; mucosa; muscles; and part of the soft tissues of the midface with 2 facial arteries and facial veins (Fig. 4).
Donor Stage
The donor matched the defined criteria developed for a potential facial tissue donor: an identified person, diagnosed brain death, age 18-55 years, preferably male sex, no damage of the facial skeleton, no skin diseases, no inflammatory processes in the area of operation and in the ENT organs, no atherosclerosis of external and common carotid arteries, artificial ventilation for not >96 hours, stable hemodynamics, and a match with the recipient's anthropometric data.
Immunological examination of the recipient and the donor included assessment of blood group, degree of immunological sensitization of the recipient, recipient's phenotype, donor's phenotype, and a cross-match test. The donor was also examined for markers of infectious diseases (human immunodeficiency virus, syphilis, hepatitis B and C, and cytomegalovirus).
Information about the presence of potential donors in medical organizations from 28 regions of the Russian Federation was collected by the Organ Donation Coordination Center. The search for a suitable donor took 9 months.
In May 2015, the Coordination Center received information about a potential male donor of 51 years of age with a traumatic brain injury and started the procedure for ascertaining brain death. Laboratory, instrumental and immunological studies determined compatibility with the recipient.
After the pronouncement of brain death, intensive therapy of the donor continued, aimed at prevention of purulent-septic complications, correction of water and electrolyte balance, hypotension, hyperglycemia, polyuria, and hypothermia. Consent was obtained from the relatives of the deceased to explant facial tissues.
The facial allotransplantation algorithm was divided into 3 consecutive stages, with repeated training in the operating theater using cadaveric material.
Stage 1. Explantation of the Facial Tissue Complex and Closure of the Defect with a Death Mask
The allograft explantation was performed according to the "full-face" model, with involvement of the facial artery and vein. After complete dissection of the facial allograft, the mucous membrane of the ethmoid sinus and external wall of the frontal sinuses were removed, and the external carotid artery was cannulated, followed by conservation with Custodiol HTK Solution (Essential Pharmaceuticals, LLC) cooled to 2°C. Perfusion efficiency was determined by the change in the color of the graft and the outflow of the solution through the venous system. Duration of the explantation was 7 hours 15 minutes.
In our opinion, explantation of full-face tissues made it possible for us to have a reserve of plastic material, to perform effective perfusion and preservation of the tissues, and to train on the "full-face" explantation technique in real conditions.
According to the principles of humane treatment of the body of a deceased person, the donor's facial tissue defect was closed with a death mask (Fig. 5). At the final stage of manufacturing the death mask, the finished silicone model of the face was cleaned of artifacts, eyebrows and eyelashes were fixed, and makeup was applied. Preparation of the mask took about 8 hours.
Stage 2. Preparation of the Recipient
Preparation of the recipient bed consisted of removing granulating tissues from the cavity of the frontal sinuses, excision of affected tissues in the upper and middle zone of the face within the defect area, and isolation of the external jugular vein and external carotid artery on the right and on the left.
Stage 3. Allotransplantation of the Facial Tissue Complex
This surgical stage included the following tasks: modeling, inclusion of the allograft into the bloodstream, and adaptation of its bone and soft-tissue complex within the donor-recipient system.
Inclusion of the allograft into the bloodstream was by means of end-to-side anastomosis of the recipient's external carotid artery with the donor's facial artery on the right, and of the recipient's external jugular vein with the donor's facial vein on the left. Standard methods were used to check the competence of the anastomoses and to obtain a sufficient capillary response from donor tissues, including at distal level (Figs. 6 and 7).
The osteoplastic stage included tamponade of the frontal sinuses with a free-muscle autograft (Fig. 8), anatomical positioning of bone and soft-tissue structures of the allograft, and their fixation to the recipient bed with miniplates (Figs. 9-11). During the surgery, the recipient underwent allotransplantation of a "signal" Chinese flap in the lower third of the left forearm, to obtain biological material for step-by-step biopsies for histological and immunohistochemical studies throughout the lifetime of the donor tissues (Fig. 12).
Immunological Protocol
The immunosuppressive therapy protocol was divided into initial and supporting stages. In accordance with the adopted protocol, a control biopsy of the skin signal flap was performed on days 3, 7, 14, 21, and 30, and every month after that (Fig. 13).
The following complications were observed after the face allotransplantation: disseminated intravascular coagulation syndrome, acute respiratory distress syndrome, moderate posthemorrhagic iron-deficiency anemia, moderate thrombocytopenia, systemic inflammatory reaction syndrome, and false aneurysm of the facial artery on the right. Standard infusion-transfusion, respiratory, and antibacterial therapy was administered, supplemented with hemodiafiltration as needed. This allowed us to eliminate the fluid overload, which was inevitable after a 16-hour intervention. Analgosedation was performed using dexmedetomidine, which, under stable hemodynamics, allowed us to give the recipient the required level of sleep and pain relief.
The most serious complication was thrombosis of the donor vein, which occurred during the first day after the surgery. The possible cause of the thrombosis was damage to the facial vein incurred during its isolation, which resulted in narrowing of the vessel's lumen. Sequential thrombectomies were performed on the first, second, and third days to restore the blood flow. A catheter was introduced into the right external carotid artery for perfusion of the flap with low-molecular-weight heparin; however, this did not have the desired effect, and on the fourth day a 12 cm segment of the patient's own lower-leg vein was transplanted. The volume of tissue loss due to ischemia was insignificant and localized in the columella.
False aneurysm of the donor artery on the right was diagnosed at 52 days, situated where the artery emerged from the subcutaneous tunnel in the projection of the nasal bone of the allograft. The cause of this complication was associated with 2 factors: the different pressure of the surrounding soft tissues on the artery in the tunnel, and denervation of the vascular wall, which resulted in decreased tone. Regression of the symptoms was observed on the 74th day after the face allotransplantation.
Rehabilitation
The tracheostoma was removed 12 days after the surgery. By 4 months, complete restoration of nasal breathing was noted, and tactile sensitivity appeared in the allograft; despite the absence of sutures of the sensory nerves, sensation in the skin allograft was slightly different from that in the surrounding tissues of the recipient.
Psychological Adaptation
Based on information obtained from the study of clinical cases that are similar to this one, experts assume that postsurgical reactions may be due to inconsistencies between the patient's ideas about his new appearance and the real appearance, and include strong emotional reactions to the new face, subsequent negative feelings and depression. 8,9 In contrast to the predicted emotional manifestations caused by potentially stressful conditions at the moment the patient saw the transplantation results for the first time, our patient showed calmness in relation to the new face. There was also no delayed reaction to the stress. The patient did not show behavioral changes, and his emotional condition did not change.
After the operation, psychologists were able to establish contact with the patient, which was greatly facilitated by long-term psychological conversations. The experts were able to assess changes in emotional condition, the cognitive and behavioral ways of reacting and interacting with the world, as well as the internal picture of the disease. Based on psychodiagnostics, a complex of psychocorrectional measures was developed, and body-oriented psychotherapy, techniques aimed at relaxation, and therapeutic art approaches were applied.
Condition of the Patient at the Present Time
One year after the partial face allotransplantation, the patient underwent rhinoplasty aimed at reducing the volume of tissues at the tip of the nose and restoring the columella (Fig. 14).
In June 2017, an episode of graft rejection of mixed humoral-cellular type occurred. In the course of treatment, the immunosuppression regimen was corrected, with a switch to the tacrolimus-group medicine Advagraf. The current immunosuppression protocol is as follows: Advagraf 8 mg/day, oral Myfortic 720 mg twice a day, and Solu-Medrol 4 mg once a day. Complete social rehabilitation of our patient has now been achieved. He attends an institution, works, and has a family (Fig. 15).
DISCUSSION
There are defined indications for facial transplantation, the first significant systematization of this knowledge being the monograph by Siemionow. 5 In our case, the general and target indications for the partial facial allotransplantation were as follows: motivation; stabilization of the long-term outcome of the injury; complete understanding of the consequences of the injury; stable emotional-volitional and intellectual-mental characteristics of his personality; age; the limitations of autoplasty (posttraumatic changes in the soft tissues of the right hand, forearm, upper arm, scapular areas, right and left thighs, right lower leg, sacrum, and combined deformity of toes I, II, and III on both feet); the extensive area of the facial defect (about 65%); an almost total defect of the nose and external wall of the frontal sinuses; a soft-tissue defect of the forehead; and cicatricial deformity of the eyelids and of the right half of the face and neck.
Our experience of performing the stages of the international protocol for preparation for facial allotransplantation, including composite hemifacial transplantations and anatomical studies, will allow us to apply this knowledge effectively in future clinical practice.
In the course of the clinical implementation in this case, we have found that explantation of the allograft should be performed according to the "full-face" model, regardless of the variant of the transplanted allotissue complex. This approach allowed us to adapt the resulting complex of tissues maximally to the recipient, as well as to save the donor vascular system for carrying out adequate perfusion at the stages of allograft explantation and subsequent conservation. In addition, it helped us to have a reserve of plastic material.

Fig. 15. Schematic of immunosuppressive therapy at different stages of partial face allotransplantation.
The "full-face" model is the most favorable from an aesthetic point of view; however, in our case, it had extremely limited indications due to the high risk of an unfavorable functional result and allograft rejection.
In view of the characteristics of the donor-recipient system, the simulated composite alloflap in our case included the external wall of the frontal sinuses, soft tissues of the forehead, and the whole nose with adjacent tissues of the midface. Blood supply to the allograft was ensured by using the right facial artery and left facial vein system. This model was satisfactory for achieving the functional and aesthetic goals of face reconstruction in our patient and allowed us to minimize the immunosuppressive load.
CONCLUSIONS
Consistent compliance with the international protocol for preparation for facial allotransplantation, including the performance of experimental allotransplantation, cadaver studies, and 3D modeling, together with improvement of organizational and legal regulations, allowed us to successfully perform the first case in Russia of facial allotransplantation in a patient who had suffered electrical burns, and to achieve restoration of the aesthetic and functional parameters of the face and the social rehabilitation of the patient.
Facial allotransplantation is not only a viable alternative to conventional reconstructive techniques; in some clinical situations, it is the only choice. | 2019-11-28T12:48:31.148Z | 2019-11-01T00:00:00.000 | {
"year": 2019,
"sha1": "1a12769ac8d611ece51bf7fa783a5dab8ad36cc7",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1097/gox.0000000000002521",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9cdaff21c3e6bf6b1f9ce4c53ade77fc533f09d2",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
221140049 | pes2o/s2orc | v3-fos-license | Quantitative Structure Property Analysis of Anti-Covid-19 Drugs
Inspired by recent work on anti-Covid-19 drugs \cite{2}, here we study the quantitative structure-property relationships (QSPR) of phytochemicals screened against SARS-CoV-2 $3CL^{pro}$ with the help of topological indices such as the first Zagreb index $M_{1}$, the second Zagreb index $M_{2}$, the Randić index $R$, the Balaban index $J$, and the sum-connectivity index $SCI(G)$. Our study has revealed that the sum-connectivity index $(SCI)$ and the first Zagreb index $(M_{1})$ are two important parameters for predicting the molecular weight and the topological polar surface area of phytochemicals, respectively.
Various clinicians and researchers are engaged in investigating and developing antivirals using different strategies combining experimental and in-silico approaches; see [1, 3, 5, 7-14, 16, 17, 21-23]. The replication cycle of SARS-CoV-2 can be broadly divided into three processes: viral entry, viral RNA replication, and, lastly, viral assembly and exit from the host cell, which is depicted in Figure 1. Recent studies revealed that the genome sequence of SARS-CoV-2 is very similar to that of SARS-CoV. Recently, Qamar et al. [14] reported the following phytochemicals screened against SARS-CoV-2 3CL pro, which are depicted in Figure 2.
The main aim of this study is to develop a quantitative structure-property relationship between two-dimensional (2D) topological indices and calculated physicochemical parameters of phytochemicals screened against SARS-CoV-2 3CL pro. Experimental data used in this study were taken from [14]. In this paper we have considered five topological indices, viz., the first Zagreb index $M_1(G)$ [6], the second Zagreb index $M_2(G)$ [6], the Randić index $R(G)$ [15], the Balaban index $J(G)$ [2,3], and the sum-connectivity index $SCI(G)$ [25]. The formulae for these topological indices are given below:

$$M_1(G)=\sum_{u\in V(G)} d(u)^2, \qquad M_2(G)=\sum_{uv\in E(G)} d(u)\,d(v), \qquad R(G)=\sum_{uv\in E(G)} \frac{1}{\sqrt{d(u)\,d(v)}},$$

$$SCI(G)=\sum_{uv\in E(G)} \frac{1}{\sqrt{d(u)+d(v)}}, \qquad J(G)=\frac{m}{m-n+2}\sum_{uv\in E(G)} \frac{1}{\sqrt{w(u)\,w(v)}},$$

where d(u) denotes the degree of the vertex u, m and n denote the number of edges and vertices of G, respectively, and w(u) (resp. w(v)) denotes the sum of distances from u (resp. v) to all the other vertices of G.
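For illustration, these five descriptors can be computed directly from a molecular graph; the following minimal sketch (not part of the original study) uses the networkx library, and the cyclohexane test graph is a hypothetical example:

```python
import math
import networkx as nx

def topological_indices(G):
    d = dict(G.degree())                      # vertex degrees d(u)
    # w(u): sum of shortest-path distances from u to all other vertices
    w = {u: sum(nx.single_source_shortest_path_length(G, u).values())
         for u in G.nodes()}
    m, n = G.number_of_edges(), G.number_of_nodes()
    mu = m - n + 1                            # cyclomatic number
    M1 = sum(d[u] ** 2 for u in G.nodes())
    M2 = sum(d[u] * d[v] for u, v in G.edges())
    R = sum(1 / math.sqrt(d[u] * d[v]) for u, v in G.edges())
    SCI = sum(1 / math.sqrt(d[u] + d[v]) for u, v in G.edges())
    J = (m / (mu + 1)) * sum(1 / math.sqrt(w[u] * w[v]) for u, v in G.edges())
    return {"M1": M1, "M2": M2, "R": R, "SCI": SCI, "J": J}

# Example: the hydrogen-depleted graph of cyclohexane (a 6-cycle)
print(topological_indices(nx.cycle_graph(6)))
```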
The predictive ability of the above-mentioned topological indices was tested using a data set of phytochemicals, found at [2] and https://pubchem.ncbi.nlm.nih.gov/. The data set consists of the following data: docking score, binding affinity, molecular weight, and topological polar surface area, which are given in Table 1. Note: the molecular weight and topological polar surface area of NPACT00105 could not be found. Therefore, we do not include this molecule in the QSPR analysis.
The topological index values of the phytochemical structures are given in Table 2.
Regression Models
The following regression models have been used for the study:
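As a sketch of the general form such QSPR models take (simple linear regressions of a physicochemical property on a topological index), the following fits molecular weight against SCI with scipy; the numerical values are hypothetical placeholders, not the study's data:

```python
import numpy as np
from scipy import stats

sci = np.array([10.2, 12.8, 15.1, 18.4, 21.0])      # hypothetical SCI values
mw = np.array([280.0, 354.0, 402.0, 489.0, 561.0])  # hypothetical MW values

fit = stats.linregress(sci, mw)
print(f"MW = {fit.slope:.2f} * SCI + {fit.intercept:.2f}  (r^2 = {fit.rvalue**2:.3f})")
```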
Conclusion:
The QSPR study has revealed that the molecular descriptors are good candidates for predicting the physicochemical properties of phytochemicals. In particular, the sum-connectivity index (SCI) and the first Zagreb index ($M_1$) are two important parameters for predicting the molecular weight and the topological polar surface area of phytochemicals, respectively. Our study may help researchers in the field of life science in finding anti-Covid-19 drugs.
"year": 2020,
"sha1": "248ccf7da6f1e404bb78f4fedd8351ba65b8a604",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "248ccf7da6f1e404bb78f4fedd8351ba65b8a604",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Biology",
"Physics"
]
} |
267067308 | pes2o/s2orc | v3-fos-license | A Study of Soil-Borne Fusarium Wilt in Continuous Cropping Chrysanthemum Cultivar ‘Guangyu’ in Henan, China
Cut chrysanthemum, known as a highly favored floral choice globally, experiences a significant decline in production due to continuous cropping. The adverse physiological effects on cut chrysanthemums result from the degradation of a soil’s physical and chemical properties, coupled with the proliferation of pathogens. The “Guangyu” cultivar in Xinxiang, Henan Province, China, has been specifically influenced by these effects. First, the precise pathogen accountable for wilt disease was effectively identified and validated in this study. An analysis was then conducted to examine the invasion pattern of the pathogen and the physiological response of chrysanthemum. Finally, the PacBio platform was employed to investigate the dynamic alterations in the microbial community within the soil rhizosphere by comparing the effects of 7 years of monocropping with the first year. Findings indicated that Fusarium solani was the primary causative agent responsible for wilt disease, because it possesses the ability to invade and establish colonies in plant roots, leading to alterations in various physiological parameters of plants. Continuous cropping significantly disturbed the microbial community composition, potentially acting as an additional influential factor in the advancement of wilt.
Introduction
Chrysanthemum morifolium, which belongs to the genus Chrysanthemum, is one of the oldest ornamental and medicinal flowers and the second-largest cut flower in the world [1]. It was first cultivated in China and is loved by people all over the world for its attractive colors and shapes. Although the implementation of a noncropping or rotation system has the potential to decrease disease incidence and ensure the high quality of chrysanthemums, the increasing demand for cut chrysanthemums has led to an increase in chrysanthemum cultivation areas, and monocropping is usually the most effective method for maximizing economic benefits. However, during long periods of cultivation, chrysanthemums in continuous cropping areas showed yellowing and wilting leaves, stunted growth, and sharp yield declines. As the cropping time increased, the diseases worsened, resulting in complete failure of harvest in some areas, which seriously affected the economic income of growers and caused huge economic losses [2]. Soil microorganisms, specifically soil-borne pathogens, are believed to be the major cause of this decline in productivity [3].
The rhizosphere represents a dynamic soil region that is regulated by intricate interactions between plants and organisms closely associated with roots. These regions are characterized by a high abundance of microorganisms, often referred to as the plant's second genome [4], which plays a crucial role in promoting plant health and enhancing crop yield. These microorganisms considerably influence mineral element availability, carbon and nitrogen cycling, and the development of soil structure [4][5][6]. The practice of long-term continuous cropping has a profound effect on the structure of soil microbial communities, leading to the emergence of soil-borne diseases and a decline in crop yield. In typical soil conditions, certain native microorganisms possess the ability to suppress the growth of pathogens. However, in the context of continuous cropping soil, pathogens exhibit the capability to swiftly infiltrate and propagate, potentially leading to a reduction in the abundance of beneficial bacteria. Research conducted on the continuous cropping of Chrysanthemum morifolium Ramat revealed significant alterations in the abundance of bacteria, disrupting the delicate equilibrium of microorganisms within the rhizosphere soil [7].
The Fusarium genus encompasses several economically significant plant-pathogenic species that induce wilt disease in various plants [8], including vegetables, grasses, fruit trees, and flowers. The majority of research efforts have predominantly concentrated on F. oxysporum, the primary causative agent of Fusarium wilt on a global scale [9]. This fungal disease poses a substantial threat to crop production, inflicting severe economic losses, particularly in regions characterized by elevated temperatures and humidity levels. In the early stage, the infected plant shows slow growth in comparison with a healthy plant, as the disease symptoms begin to appear from the lower leaves. Subsequently, curled and yellow leaves are observed from the bottom to the top, ultimately leading to complete withering and demise. Other species within the Fusarium genus, such as F. solani, F. incarnatum, and F. falciforme [10], have been identified as pathogens of chrysanthemums. Wilt disease in chrysanthemums has been reported globally [11], and it has broken out in various areas of China, progressively emerging as a primary constraint on the advancement of the local chrysanthemum industry.
The advent of third-generation sequencing in recent years, specifically PacBio SMRT sequencing, to enhance the study of soil microbial community structure has had a significant effect on the disciplines of genomics and microbiology [12]. This technology has successfully addressed the limitations of conventional culture methods in identifying microorganisms that are challenging to cultivate or have become inactivated. By enabling a comprehensive exploration of the microflora's composition in various environments, PacBio sequencing offers a novel and efficient approach to investigating microbial community structure, thereby facilitating substantial advancements in the field of microbial research. Pootakham et al. [13] employed full-length ITS and 16S rRNA genes to categorize and examine the symbiotic algal family and bacterial communities present in Indo-Pacific corals located in the Gulf of Thailand. The findings indicated that environmental factors exerted an influence on the composition of plant structures and the diversity of bacterial communities associated with corals. Furthermore, the study demonstrated the efficacy of PacBio SMRT sequencing in accurately classifying coral-related microbiota at the species level.
Limited comprehensive research has been conducted on the factors that contribute to the occurrence of chrysanthemum's continuous cropping wilt. Studies pertaining to pathogen infection in chrysanthemums often adopt a descriptive approach, primarily examining pathogen species within host populations rather than exploring the fundamental mechanisms that drive these interactions. Thus, in the present study, the pathogen that may be associated with Fusarium wilt was first isolated and purified from the rhizosphere soil of the cultivar "Guangyu". The invasion pattern of the pathogen was examined, and the physiological response of chrysanthemum plants to stress induced by the pathogen was assessed. Finally, the PacBio platform was utilized to analyze complete 16S rRNA and ITS sequences to investigate the attributes of microbial communities in the rhizosphere of soil from the local cut chrysanthemums that had been subjected to continuous cropping for a duration of 7 years and in soil from initial cropping. This study aimed to offer a comprehensive understanding of the potential mechanisms that underlie wilt disease in "Guangyu". The findings could contribute to the establishment of a vital theoretical basis for the sustainable advancement of diverse crop varieties.
Field Investigation and Soil Sample Collection
A field investigation on Fusarium wilt was conducted in a cut chrysanthemum planting base located in Xinxiang, Henan Province, China (35°24′ N, 114°55′ E). There were five districts for each continuous cropping and healthy cropping field (first-year cropping) of chrysanthemums, from which 20 chrysanthemum samples were randomly selected. The height and leaf width of each plant were measured. Several chrysanthemum samples exhibiting severe disease symptoms were randomly selected from each district, and their root and stem characteristics were compared with those of healthy plants. Samples of rhizosphere soil were collected and analyzed from 7-year continuous cropping and first-year cropping fields. Five sampling points in each field were selected, and the collected samples were subsequently merged. The samples were promptly placed into sterile bags, transported to the laboratory under low-temperature conditions, and stored at −80 °C [14].
Isolation and Identification of Pathogen
By using the dilution-plate method, fungi were isolated from soil samples collected from a Fusarium wilt field. The strains were isolated and purified by single-spore isolation, inoculated into a PDA solid medium, and incubated at a temperature of 30 °C for 3-7 days [15]. When the hyphae fully covered a 90 mm plate, the colony morphology, color, texture, and growth rate were carefully observed and recorded. The purified fungi were subsequently cultured on PDA for 7 days. Afterward, the surface conidia were washed with aseptic water and filtered using four layers of lens paper. Finally, 20 µL of the filtered solution was placed under a light microscope (HFX-IIA, Nikon, Tokyo, Japan) to observe the morphology of the conidia.
The fungal genomic DNA was extracted from fresh fungal cultures following the methodology described by Al-Sadi et al. [16]. The rDNA-ITS region was amplified using the universal primers ITS1 (5′-TCCGTAGGTGAACCTGCGG-3′) and ITS4 (5′-TCCTCCGCTTATTGATATGC-3′) as outlined by White et al. [17]. Subsequently, the amplified DNA fragments were purified and ligated into the pMD19-T vector. The resulting constructs were then submitted to Sangon Biotech Co., Ltd. (Shanghai, China) for sequencing. The obtained sequencing results were compared and analyzed using the BLAST program on the NCBI website (http://blast.st-va.ncbi.nlm.nih.gov/Blast.cgi (accessed on 21 August 2022)), and a phylogenetic tree was constructed using Clustal X software version 1.83 [18] and MEGA version 6.0.6 [19].
Pathogenicity Test
Healthy chrysanthemum plants ("Guangyu" cultivar) that remained asymptomatic after three generations of consecutive cutting and transplanting in an indoor environment were chosen as the inoculated hosts [20]. The method described by Getha et al. [21] was employed to determine which Fusarium species were responsible for inducing plant disease. In the treatment group, each plant was subjected to 20 mL of a fungal spore suspension at a concentration of 10⁶/mL. The control group (CK) was treated with sterile water. The treated plants were then placed in an illuminating incubator and cultured at 30 °C in a 14 h light/10 h dark cycle. The plants were observed until the appearance of disease spots. Subsequently, the diseased stem tissues were collected, and the pathogen was isolated and purified from the tissues. Finally, the isolated fungal specimen was compared with the inoculated pathogen to determine whether it was the causative agent of chrysanthemum wilt.
Observation of Colonization Process and Disease Assessment
Root dipping was employed to investigate the pathway of pathogen invasion [22]. The treatment group was immersed in the prepared spore solution for 30 min, whereas CK was immersed in aseptic water for the same time. The morbidity and mortality rates of the chrysanthemums were then diligently monitored and recorded every 10 days. The disease index (DI) was assessed on the basis of a 0-4 grading scale by referring to Alkher et al. [23]. Grade 0 indicated the absence of symptoms in leaves; grade 1 denoted yellowing or curling of single leaves at the basal region, affecting <30% of leaves; grade 2 indicated yellowing, curling, or wilting of approximately 30-50% of leaves, accompanied by a slight reduction in plant height; grade 3 denoted yellowing, curling, or wilting of approximately 50-75% of leaves, resulting in leaf abscission; and grade 4 indicated 75-100% yellowing, curling, or wilting of all leaves or the demise of the plant. The disease grade for each plant was documented at 10 and 20 days post inoculation (dpi). Subsequently, DI was computed using the following formula: disease index = Σ (number of diseased leaves at each grade × corresponding grade)/(total number of leaves examined × highest grade).
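For illustration, the DI formula above can be implemented as follows (a minimal sketch; the leaf counts in the example are hypothetical):

```python
def disease_index(grade_counts, highest_grade=4):
    """grade_counts: dict mapping grade (0-4) -> number of leaves at that grade."""
    total = sum(grade_counts.values())
    weighted = sum(g * n for g, n in grade_counts.items())
    # DI = sum(n_g * g) / (total leaves * highest grade)
    return weighted / (total * highest_grade)

# Hypothetical example: 10 leaves at grade 0, 5 at grade 1, 3 at grade 2, 2 at grade 4
print(disease_index({0: 10, 1: 5, 2: 3, 4: 2}))  # -> 0.2375
```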
The root tissues of chrysanthemum were collected at specific time intervals (12 h, 5 days, 10 days, 15 days, and 20 days). For each period, 20 fresh root segments were carefully selected and prepared for scanning electron microscopy (SEM) in accordance with the method described by Boamah et al. [24]. The root samples were subjected to WGA-AF488 and PI co-staining to observe the colonization of pathogens in the root system by laser confocal scanning microscopy (LCSM) [25].
Physiological Response of Plants after Infection by Pathogen
The concentrations of photosynthetic pigments, including carotenoids and chlorophylls (chla, chlb), soluble sugars, and soluble proteins in plants were quantified using spectrophotometry [27], anthrone colorimetry [28], and the Coomassie brilliant blue method [29], respectively. The levels of ash content, potassium (K), phosphorus (P), and calcium (Ca) in leaves were measured using inductively coupled plasma-mass spectrometry [30].
The content of salicylic acid (SA) and jasmonic acid (JA) in the supernatant was measured using SA and JA ELISA kits (RXJ1401587PL, Quanzhou Ruixin Biological Technology Co., Ltd., Quanzhou, China) [34].
Soil DNA Extraction, PCR Amplification, and Sequencing
The soil samples from mixed continuous cropping were subjected to parallel sequencing and labeled as LZ1861a-a,b,c, whereas the healthy soil samples were labeled as XT1861a-a,b,c. The DNA extraction process for fungi and bacteria in each soil sample followed the standard protocol of the OMEGA DNA isolation kit (Omega, Honolulu, HI, USA). The quality of the extracted DNA was assessed using 1% agarose gels. The purity of the DNA was assessed using a NanoDrop One UV-Vis spectrophotometer (Thermo Fisher Scientific, Waltham, MA, USA). Subsequently, the DNA concentration was determined using a Qubit 4.0 fluorometer (Invitrogen, Waltham, MA, USA). The full-length 16S rRNA gene was amplified using primers 27F/1541R [35]. The ITS1/ITS4 primers were used to amplify the full-length ITS rRNA gene [17]. Following purification, the amplified product was submitted to Grandomics Co., Ltd. (Wuhan, China) for DNA sequencing using the PacBio RS II sequencer (Pacific Biosciences, Menlo Park, CA, USA).
Microbiome Profiling
The initial dataset was partitioned on the basis of barcode information by using SMRT Portal version 2.3.0, resulting in the acquisition of high-quality circular consensus sequencing (CCS) sequences. Subsequently, the split sequences were filtered to eliminate extraneous data, including fragments outside the length range of 1400-1600 bp, reads containing "N" bases, reads containing homopolymers exceeding six base pairs, and sequences with an average quality below 90. The set of filtered high-quality data obtained was subjected to a query against the GenBank nonredundant nucleotide database (nt) to identify taxa in the National Center for Biotechnology Information (NCBI). Subsequently, it was clustered using the operational taxonomic unit (OTU) methodology. The representative sequences of OTUs at the 97% similarity level were subjected to classification and analysis using the UCLUST algorithm in QIIME2 software (version 2020.6). The species present in each sample were classified and quantified at various taxonomic levels, including kingdom, phylum, class, order, family, genus, and species. The relative abundance of species at the phylum and genus levels was visualized using R software (version 3.3.2). Bacterial and fungal communities were compared against the Silva (version 138.1) and Unite (version 8.2) databases, respectively. The α diversity, β diversity, and species differences among the samples were assessed using QIIME and R software.
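For illustration, the read-filtering criteria described above can be expressed as a minimal sketch in Python (the helper below is an assumption for exposition, not the actual pipeline code):

```python
import re

def passes_filter(seq, mean_quality):
    # Keep CCS reads of 1400-1600 bp
    if not (1400 <= len(seq) <= 1600):
        return False
    # Discard reads containing "N" bases
    if "N" in seq:
        return False
    # Discard reads with a homopolymer longer than 6 bp
    if re.search(r"(A{7,}|C{7,}|G{7,}|T{7,})", seq):
        return False
    # Keep reads with mean quality of at least 90
    return mean_quality >= 90

print(passes_filter("ACGT" * 375, 92))  # 1500 bp toy sequence -> True
```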
Statistical Analysis
Statistical analysis was performed using Excel (version 2016) and GraphPad Prism (version 8.0.2) for Windows. The data were expressed as mean ± standard error of the mean and assessed through two-way ANOVA of three biological replicates, followed by the least significant difference test. Statistical significance was determined at p < 0.05, p < 0.01, and p < 0.001.
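For illustration, a two-way ANOVA of this kind can be run with statsmodels as follows (a minimal sketch; the data frame, column names, and values are hypothetical):

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical measurements: 2 treatments x 2 time points x 3 replicates
df = pd.DataFrame({
    "treatment": ["CK", "CK", "Fs", "Fs"] * 3,
    "day":       [5, 10, 5, 10] * 3,
    "value":     [1.0, 1.1, 1.4, 2.0, 0.9, 1.2, 1.5, 2.1, 1.1, 1.0, 1.6, 1.9],
})
model = ols("value ~ C(treatment) * C(day)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # two-way ANOVA table
```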
Chrysanthemum Disease Symptoms in the Field
Negative physiological characteristics were observed in chrysanthemum plants following continuous cropping. Initially, the lower leaves exhibited wilting and yellowing, whereas no evident browning was observed at the stem base. As the disease progressed, the plants exhibited stunted growth, blackening and rotting of the diseased roots, and increased susceptibility to uprooting. Ultimately, the entire plant withered and perished (Figure 1A,B). According to the statistical analysis conducted on healthy and diseased plants in the field (Figure 1C), the leaf width and plant height of diseased plants were significantly lower than those of healthy plants (p < 0.001). This observation suggested that the presence of Fusarium wilt disease had a substantial detrimental effect on the overall quality of chrysanthemum plants.
Isolation and Identification of Fusarium Pathogens
A sample of rhizosphere soil from wilted chrysanthemum plants was taken, and a total of 53 strains of fungi were isolated and purified using gradient dilution plate culture. Among these fungi, 37 were initially identified as Fusarium species based on morphological observation. The 37 strains exhibited two distinct colony forms, referred to as type A and type B. Type A colonies displayed an intermediate raised villous structure, with a neat edge and dense aerial mycelium (Figure 2A,B). The color of the colonies changed from white to light yellow, and wheel lines appeared after continued cultivation. After 7 days of cultivation, the entire plate became completely covered with colonies. Two distinct types of conidia were produced by the type A fungus. On the one hand, the large conidia exhibited a sickle-shaped morphology, were colorless and transparent, and possessed 2-4 septa. They measured approximately 15-20 µm in length and 2-3 µm in width. On the other hand, the small conidia were oval-shaped, possessed 0-1 septum, and had dimensions of 4-5 µm in length and 2-3 µm in width (Figure 2C,D). The type B colony displayed protruding villi in the central region, although its colony edges were irregular. The aerial hyphae of this colony were longer than those of type A hyphae. As the culture time progressed, the colony gradually transitioned in color from red to purple. The colony lacked a ring pattern and exhibited slower growth than type A. After 10 days of culture, the entire plate became covered (Figure 2E,F). The conidia of the type B fungus were characterized by their small, rounded, and colorless morphology, with 0-1 septum and a size ranging from 2 µm to 9.5 µm. Notably, no large conidia were visible (Figure 2G,H).
The amplified rDNA-ITS fragments of the 37 strains had a length of approximately 600 bp. Subsequently, the fragments were ligated into pMD19-T and sent to Sangon Biotech Co., Ltd. (Shanghai, China) for sequencing. Comparison of the sequences with the NCBI BLAST database showed that among the 37 strains, the sequences of the 16 type B strains exhibited complete consistency with F. oxysporum, with a homology of 100% (designated as Fo). Conversely, the remaining 21 type A strains displayed a homology of 99.8% with F. solani in their ITS sequences (designated as Fs). The phylogenetic tree further proved that the isolated strains were conclusively identified as F. oxysporum and F. solani (Figure 2I).
Pathogenicity Determination of Pathogen
The strains Fo and Fs were subsequently reintroduced to healthy plants under identical growth conditions. The findings indicated that chrysanthemums in the aseptic water treatment group (CK, Figure S1A,B) and the Fo treatment group (Figure S1C,D) exhibited normal growth without any signs of disease. Conversely, the chrysanthemums in the Fs treatment group displayed symptoms consistent with those observed in naturally occurring chrysanthemum wilt in the field (Figure S1E,F). Specifically, the lower leaves began to curl and yellow from 30 dpi, with an incidence rate exceeding 80%. The pathogen was subsequently reisolated from the stem base of the affected plants, and the colony and conidia were consistent with those of the previously isolated strain.
Invasion and Colonization of F. solani in Chrysanthemum Root
The plant cell wall serves as the initial barrier against pathogen invasion. Pathogens are capable of generating certain substances that break down polymers, including pectin and cellulose, within the plant cell wall to disrupt the structural integrity of plant cell tissue and reduce cell adhesion. These substances were observed to be CWDEs, such as Cx, pectinase, hemicellulase, and others. The secretion of CWDEs by pathogens plays a vital role in the process of infection. According to the data presented in Figure S2, F. solani exhibited the capacity to synthesize a range of plant CWDEs, including Cx, βG, PG, PMG, and xylanase. These enzymes may play a crucial role in enabling F. solani to penetrate plant tissue [36].
Following a growth period of 20 days, the entire plant's leaves assumed a yellow-brown hue, and the plant became susceptible to easy uprooting. Furthermore, the roots underwent rotting and blackening, accompanied by an unpleasant odor. Ultimately, the entire plant withered and perished, resulting in a disease severity of grade 4. The incidence and disease index statistics at 10 and 20 dpi are shown in Table S1. The symptoms exhibited by the F. solani treatment group closely resembled the naturally occurring symptoms of chrysanthemum wilt observed in the field. Conversely, the potted chrysanthemum plants in CK remained symptom-free and exhibited healthy growth (Figure 3). The pathway by which F. solani infiltrated the plant root is depicted in Figure 4. Following a 12 h invasion period, a limited quantity of conidia adhered to the root hairs and intercellular spaces, with some conidia initiating germination to generate germ tubes (Figure 4B). However, no invasive structures were observed. At 1 dpi, the pathogen's hyphae attached to the root surface began elongating and intertwining (Figure 4D), although they remained confined to the cell gaps, seeking opportunities for invasion. At 3 dpi, a notable increase was observed in pathogen abundance within the root system, accompanied by the infiltration of certain hyphae through the intercellular spaces and the attachment of numerous conidia in the intercellular spaces (Figure 4F). At 5 dpi, pathogen propagation and subsequent conidia production commenced on the root surface, coinciding with the partial destruction of cellular structures (Figure 4H). At 10 dpi, the plant had transitioned into the initial phase of the disease, characterized by the presence of hyphae and conidia covering the surface of the root system, leading to a substantial proliferation of the pathogen, and the morphology of the root cells appeared abnormal (Figure 4J-L). By contrast, the CK root consistently exhibited intact and well-organized cell morphology, with cells appearing full, smooth, and closely aligned (Figure 4A,C,E,G,I). The invasion and colonization of pathogens in plant roots were visualized using LCSM, as depicted in Figure 5. In CK, the plant cells were consistently arranged in a dense and orderly manner, with no evidence of pathogen hyphae or conidia observed within the tissue (Figure 5A-Q). Conversely, in the pathogen treatment group (Fs), a limited number of pathogen conidia were observed to adhere to the root hairs and aggregate in the intercellular space at 12 h post inoculation (Figure 5F,K,P), whereas the cellular structure remained intact. At 1 dpi, the conidia that accumulated in close proximity to and within the root hairs initiated the process of hyphal formation (Figure 5G,L,Q). Consequently, the
mycelium structure became discernible within the plant roots, exhibiting growth and extension along the intercellular space. However, the colonization of pathogens within the roots remained limited. At 3 dpi, the hyphae within the root continued to proliferate along the intercellular space, and the germinated conidia that had attached themselves externally to the root commenced invasion through the intercellular spaces, with observable apical growth ends (Figure 5H,M,R). At 5 dpi, the pathogens present in the root system transitioned into the asexual reproductive phase (Figure 5N-P), resulting in the generation of a significant quantity of conidia. Subsequently, these pathogens initiated the upward transportation of conidia through the intercellular space and vascular bundles (Figure 5S). By 10 dpi, the hyphae had successfully colonized the entire root system of the chrysanthemum plant (Figure 5J,O,T). Thus, the aboveground leaves of the chrysanthemum plant began to exhibit signs of water loss and subsequent shrinkage.
Effects on Plant Nutrition and Growth
In the intermediate stage of infection, which occurred at 15 dpi, a substantial decline in the growth indices of plants, encompassing fresh weight, plant height, and photosynthetic pigments (chla, chlb, and carotenoids), was observed in the treatment group (Fs). The ash content in tissues exhibited a notable increase in comparison with that in CK (Figure 6A). The contents of inorganic P, K, and Ca began to accumulate and reached their peak during the early stage of pathogen infection (0-10 dpi) and subsequently decreased. Furthermore, an increase in soluble sugar and protein concentrations was observed, but as the severity of the plant disease intensified, the levels of inorganic substances and nutrients gradually diminished in the Fs group. Conversely, the inorganic substances and nutrients in CK did not show any noteworthy alterations throughout the experiment (Figure 6C).
Oxidative Stress on Plants
Within 5 dpi of F. solani (Fs) inoculation, the H₂O₂ content in the infected tissue exhibited minimal variation. However, after 5 dpi, a rapid increase in H₂O₂ levels was observed, peaking at 10 dpi. This increase was significantly higher than that in CK (p < 0.001), suggesting the occurrence of oxidative stress in plants. Consequently, upon pathogen infection and the subsequent buildup of H₂O₂, the concentration of malondialdehyde (MDA) in plants exhibited a rapid increase starting at 5 dpi, and the increase was sustained and reached its peak at 10 dpi, remaining consistently high thereafter (Figure 6B).
Changes in Defense-Related Enzyme Activity
The antioxidant enzyme activity in plant leaves during infection was measured, and the findings are presented in Figure 6E. Within 3 dpi, the CAT activity exhibited a rapid increase, reaching its peak, and then gradually declined. However, even after 7 dpi, the CAT activity in Fs remained higher (p < 0.01) than that in CK. Subsequently, a further decline was observed, with no discernible difference compared with CK during the middle and late stages of infection. The POD activity in Fs exhibited an initial increase from 0 dpi to 5 dpi, followed by a subsequent decline until it no longer significantly differed from that in CK at 15 dpi. Conversely, the SOD activity in the plants did not display any significant differences between the CK and Fs groups (Figure 6E), suggesting that it may not have been activated following pathogen inoculation. The activation of defense enzymes in leaves varied following pathogen inoculation, as depicted in Figure 6D. The findings indicated that the activity of PPO in Fs started increasing from 0 dpi, reaching its peak at 3 dpi, and subsequently declining. However, it remained higher than that in CK until 7 dpi (p < 0.01). The PAL activity exhibited a progressive increase from 3 dpi, corresponding to the gradual progression of infection, and it was higher than that in CK (p < 0.01). The CHI activity displayed an ascending pattern from 5 dpi, reaching its zenith at 10 dpi and subsequently declining.
Plant Hormone Level
The data presented in Figure 6F demonstrate that F. solani infection of chrysanthemum roots induced the synthesis of the endogenous hormones JA and SA. The concentration of JA exhibited an initial increase upon infection, reaching its peak at 5 dpi, which was higher than that in CK (p < 0.01); however, this increase was transient. Conversely, the concentration of SA in Fs showed a rapid increase from the early stages of infection and remained significantly higher than that in CK (p < 0.001) until 7 dpi. Subsequently, it gradually decreased but remained higher than that in CK.
Changes in Bacterial Community Composition
The Chao1, Richness, Shannon, and ACE indices were utilized to evaluate the abundance and diversity of bacteria in the soil samples. The results reveal a significantly higher bacterial species abundance and diversity in soil samples from continuous cropping compared with healthy soil. The sequencing details and α diversity indices of the samples are presented in Table S2. The rarefaction and Shannon diversity curves are depicted in Figure S3A; they offer an evaluation of the sequencing depth for each sample. The curves provide evidence that the sequencing depth employed facilitated a thorough depiction of the bacterial diversity within the samples while enabling the identification of a substantial proportion of the microbial composition.
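As an aside for readers reproducing these summary statistics, the Shannon and (bias-corrected) Chao1 indices named above can be computed directly from a vector of OTU read counts. The following minimal Python sketch uses invented counts, not data from this study:

```python
import numpy as np

def shannon_index(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over observed OTUs."""
    counts = np.asarray(counts, dtype=float)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p))

def chao1_index(counts):
    """Bias-corrected Chao1: S_obs + F1*(F1 - 1) / (2*(F2 + 1)),
    where F1 and F2 are the numbers of singleton and doubleton OTUs."""
    counts = np.asarray(counts)
    s_obs = np.count_nonzero(counts)
    f1 = np.sum(counts == 1)
    f2 = np.sum(counts == 2)
    return s_obs + f1 * (f1 - 1) / (2.0 * (f2 + 1))

otu_counts = [120, 85, 40, 7, 3, 2, 1, 1, 1]  # hypothetical OTU table column
print(f"Shannon H' = {shannon_index(otu_counts):.3f}")
print(f"Chao1      = {chao1_index(otu_counts):.1f}")
```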
Figure 7A presents the bacterial community structure and dominant species at the phylum level for continuous cropping soil and healthy soil. The phyla Planctomycetota, Bacteroidota, Proteobacteria, Acidobacteriota, and Chloroflexi exhibited the highest relative abundances in the bacterial communities of continuous cropping soil (LZ1861 a, b, c) and healthy soil (XT1861 a, b, c), ranging from 19.73% to 22.82%, 16.10% to 17.08%, 13.01% to 16.54%, 6.98% to 11.23%, and 8.48% to 10.28%, respectively. Collectively, these phyla accounted for 64.30-77.95% of the total bacterial community in the soil. The relative abundances of phyla such as Patescibacteria, Verrucomicrobiota, and Methylomirabiota in continuous cropping soil exhibited statistically significant increases compared with those in healthy soil, with increases of 44.32%, 48.62%, and 33.33%, respectively. Conversely, the relative abundances of Actinomycetota, Cyanobacteria, and Firmicutes in continuous cropping soil significantly decreased by 54.79%, 97.97%, and 85.47%, respectively, compared with those in healthy soil. At the genus level, the differences in bacterial community structure between continuous cropping soil and healthy soil are shown in Figure 7B. In comparison with the bacterial community observed in healthy soil, the relative abundances of beneficial bacteria, namely Luteimonas, Nitrosospira, Pirellula, Terrimonas, Actinomyces, and Steroidobacteria, exhibited a significant decrease in continuous cropping soil. Conversely, numerous unclassified genera, such as Bacteroidota env. OPS17, and unknown genera, such as AKYG587 and Rokubacteriales in Planctomycetota, experienced a substantial increase.
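The percent-change statements above follow from simple relative-abundance arithmetic. A hedged sketch of that bookkeeping (the read counts are invented and the phylum subset is illustrative only):

```python
import numpy as np

phyla = ["Patescibacteria", "Verrucomicrobiota", "Actinomycetota"]
healthy = np.array([180.0, 210.0, 950.0])  # hypothetical reads, healthy soil
cropped = np.array([265.0, 318.0, 438.0])  # hypothetical reads, continuous cropping

healthy_rel = healthy / healthy.sum()
cropped_rel = cropped / cropped.sum()
pct_change = 100.0 * (cropped_rel - healthy_rel) / healthy_rel

for name, change in zip(phyla, pct_change):
    print(f"{name:18s} {change:+6.1f}% in continuous cropping vs. healthy soil")
```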
Principal coordinate analysis (PCoA) was employed to investigate dissimilarities in the composition of the sample communities. Additionally, hierarchical clustering analysis was conducted to construct a dendrogram (Figure S3B) that visually represents the influence of continuous cropping on the structure of the soil bacterial communities. The findings revealed that the distances between all items on the coordinate axis of the continuous cropping soil samples (LZ1861 a, b, c) were greater than those of the healthy soil samples (XT1861 a, b, c), indicating that perennial continuous cropping exerted a more substantial effect on the bacterial community structure within the soil. LEfSe analysis was conducted on the sequencing samples, and the communities or species that exhibited noteworthy disparities in the samples are depicted in Figure S3B. The findings indicated that the species belonging to Planctomycetes, Actinobacteria, Bacilli, Chitinophagales, and Cyanobacteria in healthy soil and the species in Phycisphaerae, Patescibacteria, Pedosphaeraceae, and Geosphere bacteria in continuous cropping soil were the primary contributors to significant variations in community structure.
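The PCoA used above is classical scaling of a between-sample dissimilarity matrix. A self-contained Python sketch is given below; the 6 x 6 matrix is made up to mimic three continuous-cropping (LZ) and three healthy (XT) replicates and is not the study's Bray-Curtis data:

```python
import numpy as np

def pcoa(D, n_axes=2):
    """Classical scaling: double-center the squared dissimilarities,
    then use the leading eigenvectors as sample coordinates."""
    D = np.asarray(D, dtype=float)
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    evals, evecs = np.linalg.eigh(B)
    order = np.argsort(evals)[::-1]            # largest eigenvalues first
    evals, evecs = evals[order], evecs[:, order]
    scale = np.sqrt(np.clip(evals[:n_axes], 0.0, None))
    return evecs[:, :n_axes] * scale

D = np.array([[0.00, 0.15, 0.18, 0.70, 0.72, 0.68],
              [0.15, 0.00, 0.16, 0.71, 0.69, 0.70],
              [0.18, 0.16, 0.00, 0.69, 0.71, 0.72],
              [0.70, 0.71, 0.69, 0.00, 0.20, 0.22],
              [0.72, 0.69, 0.71, 0.20, 0.00, 0.21],
              [0.68, 0.70, 0.72, 0.22, 0.21, 0.00]])
for label, (x, y) in zip(["LZ-a", "LZ-b", "LZ-c", "XT-a", "XT-b", "XT-c"], pcoa(D)):
    print(f"{label}: PCo1 = {x:+.3f}, PCo2 = {y:+.3f}")
```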
Changes in the Fungal Community Composition
The Chao1, Richness, Shannon, and ACE indices were measured to evaluate fungal abundance and diversity in the soil samples. The findings indicated a significant decrease in the species abundance and diversity of fungi in soil samples subjected to continuous cropping compared with healthy soil. The sequencing information and α diversity indices of the samples are presented in Table S3. The rarefaction and Shannon diversity curves of the continuous cropping and healthy soil samples demonstrated that the fungal diversity in the samples was adequately represented at this sequencing depth (Figure S4A), meeting the prerequisites for subsequent bioinformatics analysis.
The fungal community structure and dominant species of continuous cropping soil and healthy soil are depicted at the phylum level in Figure 8A. Both soils exhibited three main phyla, namely Ascomycota, Basidiomycota, and Chytridiomycota, with Ascomycota being the most abundant phylum in the soil samples. Notably, the relative abundance of Ascomycota significantly decreased in continuous cropping soil samples, whereas the relative abundance of Basidiomycota was significantly greater than that in healthy soil samples. Significant differences in the fungal community composition were observed at the genus level between the soil subjected to continuous cropping and the healthy soil (Figure 8B). In comparison with the fungal community found in healthy soil, the continuous cropping soil exhibited a significant decrease in the relative abundance of beneficial species belonging to Microdochium, Aspergillus, Ceratobasidium, and Torula. Conversely, a noteworthy increase was found in the relative abundance of species affiliated with Ascobolus, Myriococcum, Rhizophlyctis, Iodophanus, and Piskurozyma. The findings indicated that continuous cropping practices had a pronounced influence on the composition and distribution of fungal communities within the soil. Notably, the pathogen responsible for the occurrence of chrysanthemum wilt in the local area was identified as F. solani, which differs from previous reports. Consequently, a detailed analysis of the Fusarium community structure was conducted at the species level, and the corresponding results are presented in Figure 8C. The findings indicated that the practice of continuous cropping resulted in a notable increase in the relative abundance of F. solani, aligning with previous research outcomes [3,11]. Furthermore, the previously documented pathogen F. oxysporum constituted less than 1% of the Fusarium present in the local soil samples, potentially attributable to differences in geographical conditions.
The fungal community composition in the samples was examined using PCoA (Figure S4B). The findings indicated that perennial continuous cropping had a substantial impact on the fungal community structure in the soil. LEfSe analysis was conducted to further analyze the sequencing samples, revealing the fungal species that exhibited significant differences among the samples (Figure S4B). In healthy soils, the community structure displayed notable variations primarily due to the presence of Ascomycota, Sordariomycetes, Dothideomycetes, Xylariales, Microdochium, and Aspergillaceae. Conversely, Chytridiomycota, Pezizales, Atheliales, Botryoderma, and Cantharellus were found to be predominant in soils subjected to continuous cropping.
Discussion
Various pathogens, including F. incarnatum, Dickeya chrysanthemi, Rhizoctonia solani, Erwinia chrysanthemi, and F. oxysporum, have been identified as potential causes of wilt in chrysanthemums [37]. Through pathogenicity detection and verification, the results of the present research demonstrate that F. solani displayed pathogenicity, with pathogenic characteristics consistent with those observed in the natural field. Therefore, it was recognized as the primary pathogen accountable for chrysanthemum wilt in "Guangyu". However, in another study, F. oxysporum and F. solani were proven to be the major pathogenic species in continuously monocropped chrysanthemums [3], illustrating that different cultivars of chrysanthemum may exhibit varying degrees of susceptibility to distinct species within the Fusarium genus. The strain's capacity to produce Cx, βG, PG, PMG, and xylanase facilitated fungal penetration of the plant cell wall. Furthermore, the infection induced by this pathogen exerted a substantial inhibitory effect on the growth and metabolic activities of the afflicted plants. Soluble proteins and sugars have proven to be valuable indicators for evaluating the physiological metabolism of plant cells. Pathogen infection can trigger the defense response in plants, resulting in the production of pathogenesis-related proteins. Sugars play a crucial role in the material and energy metabolism of plants; an increase in soluble sugar levels facilitates the maintenance of cell osmotic pressure, the augmentation of metabolic capabilities, and adaptation to stressful conditions. The findings of this study indicated that the infection of chrysanthemum by F. solani resulted in a significant increase in the levels of soluble protein and sugar in the leaves, and this increase could play a crucial role in preserving the integrity of the plant cell membrane structure and enhancing the plant's resistance against diseases. However, as the invasion was prolonged, the plant's autoimmune response became insufficient to counteract the pathogens, leading to a decline in its physiological indices and eventual death.
Pathogen-induced infection in plants can result in oxidative bursts, characterized by the rapid generation of reactive oxygen species (ROS) [38]. The excessive accumulation of ROS triggers an oxidative stress response in plants, leading to detrimental effects on nucleic acids, proteins, lipids, and other macromolecules, and ultimately disrupting cellular function [39]. Plants have developed various coping mechanisms to counteract the oxidative damage caused by excessive ROS accumulation. An effective approach to detoxification involves enhancing the activity of antioxidant enzymes to safeguard plants [40]. Within plants, the key antioxidant enzymes include SOD, CAT, and POD, which serve as a strategy to combat pathogen invasion [41]. SOD, functioning as the primary defense against oxidation, is a vital protective enzyme in plant tissues, primarily engaged in oxygen metabolism and contributing significantly to the investigation of plant disease resistance mechanisms [42]. Previous research demonstrated that disease-resistant varieties exhibited significantly higher SOD activity than susceptible varieties following infection by F. trichothecioides [43]. CAT, an essential protective enzyme in plant tissues, plays a crucial role in breaking down the H2O2 that accumulates from plant metabolism, thereby reducing oxidative damage to cells. Apart from being a prominent component of the antioxidant enzyme system, POD plays a crucial role in facilitating the synthesis of lignin, thereby promoting lignification in affected tissues. It also acts as a catalyst for the production of phenols that are toxic to pathogens, effectively inhibiting their proliferation and expansion within the host organism. The antioxidant enzymes in plants consistently maintain a dynamic equilibrium with ROS, thereby regulating cellular levels of free radicals within a normal range. In this study, the activities of CAT and POD showed an upward trend in the early stage of infection, whereas the contents of ROS and free radicals did not change significantly, indicating that the outbreak of ROS activated the antioxidant system response in the plants. With the prolonged duration of infection, the activities of CAT and POD declined, whereas the concentration of H2O2 accumulated and the levels of MDA significantly increased. These observations suggest that during the middle and later stages of F. solani infection, the plants' capacity to manage reactive oxygen diminished, leading to the accumulation of ROS within the plants, which in turn could cause membrane lipid peroxidation and exacerbate cellular damage. This accumulation surpassed the regulatory limits of plant autoimmunity, leading to metabolic disturbances and disruption of normal physiological structure and function. Consequently, this process accelerated the senescence and demise of the plant leaves.
During the course of pathogen infection, plants can augment the activity of defense enzymes to mitigate the accumulation of detrimental substances and bolster their own resistance [44]. Defense enzymes such as PAL, PPO, and CHI [45] are notably instrumental in the synthesis of lignin and phenolic compounds, serving as barriers against pathogen invasion and mitigating toxic substances [46,47]. These enzymes can directly contribute to enhancing disease resistance by generating quinone substances that impede the growth and expansion of fungal hyphae. PAL is a crucial enzyme in the phenylpropanoid metabolic pathway in plants, and its activity serves as a dependable indicator for evaluating the disease resistance of host plants. PPO can reinforce plant disease resistance through its involvement in the synthesis of lignin precursors and the oxidation of phenolic substances in plants. CHI is a pathogenesis-related protein that is capable of degrading the cell wall of pathogens and plays a significant role in plant defense mechanisms. During the initial phases of the disease, the plants exhibited an augmented resistance mechanism through reinforced enzymatic activity of PAL, PPO, and CHI to combat the pathogenic agents. However, as the disease progressed, the toxin accumulation in the plants increased. Consequently, the innate resistance of the plants proved inadequate to counterbalance the hazardous substances imposed by the pathogen, ultimately resulting in withering and demise. SA and JA are significant mediators of plant immunity against pathogens, with SA and JA signaling playing crucial roles in defending against biotrophic and necrotrophic pathogens, respectively [48]. Recent investigations have shed light on the pivotal function of SA in impeding biotrophic infections and unraveled the biosynthesis and signaling pathways linked to the modification of pathogen susceptibility [49]. The results of the present study suggest that pathogen infection may trigger the activation of the SA defense pathway. However, additional research is needed to determine the extent of the involvement of the JA pathway in the defense response against Fusarium wilt in chrysanthemum, and related work is planned for subsequent studies.
The soil environment is a multifaceted ecosystem, wherein the composition, diversity, and function of the soil microbial community are influenced by various factors, such as climate, cultivation techniques, soil nutrient levels, the introduction of invasive pathogens, and agricultural management practices. The soil microbial community plays a crucial role in suppressing soil-borne diseases through mechanisms such as promoting the synthesis of plant hormones, competing with soil-borne pathogens for essential nutrients, engaging in direct competition with plants, or activating immune responses regulated by microorganisms. The rhizosphere is widely recognized as the primary barrier against soil-borne pathogens. Plants have the ability to develop defense mechanisms against soil-borne pathogens through the targeted stimulation of antagonistic microorganisms, resulting in a significant increase in the abundance of these antagonistic microorganisms. In this study, the prolonged cultivation of chrysanthemum over numerous years shifted the microbial community structure towards a state unfavorable for plant growth. A significant decrease was observed in the proportion of beneficial bacteria, specifically Nitrosospira and Pirellula, and in the proportion of antagonistic microorganisms, such as Actinobacteria and Aspergillus. Conversely, a noteworthy increase was observed in the proportion of pathogenic and saprophytic fungi, including species in Myriococcum, Rhizophlyctis, and Ascobolus. Moreover, a significant increase in the occurrence of F. solani, the causative agent of the localized chrysanthemum wilt disease, was observed, surpassing a prevalence of 60%. Consequently, the soil microbial community underwent a gradual modification, adversely affecting plant growth and resulting in the emergence of soil-borne Fusarium wilt among the native chrysanthemum population. However, the specific molecular mechanisms governing the interaction between F. solani and chrysanthemum remain unexplored. Therefore, conducting further investigations into the interplay between pathogen and host is imperative, because this could enhance the understanding of the fundamental molecular mechanisms implicated in pathogen infiltration. Choosing efficient soil-applied chemical fungicides, bio-fungicides, and bio-organic fertilizers, as well as their combined application, could be the best choice to improve rhizosphere microbial properties and effectively control the Fusarium wilt of chrysanthemums [50,51].
Figure 1. Chrysanthemum disease symptoms investigated in the field: (A) Fusarium wilt field. (B) Diseased plants. (C) Statistical analysis of plant height and leaf width of Fusarium wilt and healthy plants. ***: significant at 0.001 level.
Figure 2. Morphological characteristics of Fusarium strains and identification: (A-D) Colony and light microscope images of type A strain at a magnification of 400×. (E-H) Colony and light microscope images of type B strain at a magnification of 400×. (I) Neighbor-joining tree constructed based on ITS sequences of strains F. oxysporum (Fo) and F. solani (Fs).

The characteristic symptoms observed in potted chrysanthemum plants affected by F. solani were as follows: during the initial phase (0-10 days), the lower leaves gradually exhibited curling, indicating that the disease severity reached grade 1. In the intermediate phase (10-15 days), approximately 30% of the lower leaves displayed yellowing and wilting, accompanied by browning of the roots, indicating a disease severity of grade 2. In the later stage of the disease (15-20 days), a significant proportion of leaves (50-75%) exhibited yellowing and withering at the bottom, and individual leaves gradually turned brown.
Figure 4. Invasion of chrysanthemum roots by F. solani under scanning electron microscopy (SEM) at magnifications ranging from ×200 to ×2000: (A,C,E,G,I) Roots of the control group observed at 12 h and 1, 3, 5, and 10 d. (B,D,F,H,J-L) Roots of the pathogen infection group observed at 12 h and 1, 3, 5, and 10 d.
Figure 6. Physiological responses of chrysanthemum to F. solani inoculation: (A) Plant growth and photosynthetic pigment changes at 15 dpi of F. solani. (B) Changes in hydrogen peroxide and MDA contents.
Figure 7. Analysis of soil bacterial microbial community structure: (A) Relative abundance of bacteria at the phylum level. (B) Relative abundance of bacteria at the genus level. Red square: the relative abundance of bacteria increased; green square: the relative abundance of bacteria decreased (in continuous cropping soil compared with healthy soil).
Figure 8. Analysis of soil fungal microbial community structure: (A) Relative abundance of fungi at the phylum level. (B) Relative abundance of fungi at the genus level. (C) Relative abundance of Fusarium at the species level. Red square: the relative abundance of fungi increased; green square: the relative abundance of fungi decreased (in continuous cropping soil compared with healthy soil). | 2023-11-17T14:04:18.727Z | 2023-12-27T00:00:00.000 | {
"year": 2023,
"sha1": "b72a6d130188ef7dc559eba08fae76c2bee5f3dc",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2309-608X/10/1/14/pdf?version=1703722775",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8fd239825b7619a010f1a13759efe6b3820e56a8",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
233709948 | pes2o/s2orc | v3-fos-license | Synthesis of nano-TiO2 assisted by glycols and submitted to hydrothermal or conventional heat treatment with promising photocatalytic activity
TiO2 nanoparticles were successfully synthesized by the sol-gel method employing different glycols (ethylene glycol, diethylene glycol or polyethylene glycol 300) and were heat-treated in a conventional oven or by a hydrothermal route, yielding photocatalysts with distinct particle sizes and crystalline structures. HRTEM analyses showed that the oxides submitted to hydrothermal treatment featured spherical morphology, being formed by partially aggregated particles with sizes varying between 2 and 5 nm. X-ray diffractograms and Raman spectroscopy confirm that anatase was predominant in all synthesized compounds, with the presence of the brookite phase for samples that received hydrothermal treatment or were synthesized in the presence of polyethylene glycol with heat treatment in a conventional oven. The amount of brookite, as well as the cell volume, deformation, network parameters and crystallinity, was estimated by Rietveld refinement. The surface area and porosity of the materials were higher when the synthesis involved hydrothermal treatment. These oxides are mesoporous, with porosity between 14 and 31%. The oxide synthesized in the presence of ethylene glycol with hydrothermal treatment (TiO2G1HT) exhibited the highest photocatalytic activity in terms of mineralization of the azo-dye Ponceau 4R (C.I. 16255) under UV-Vis irradiation. This higher photocatalytic activity can be attributed to the formation of binary oxides composed of anatase and brookite and to its optimized morphological and electronic properties.
In the present study, photocatalysts based on TiO2 were synthesized by the sol-gel method. The influence of the use of different structural molds (ethylene glycol, diethylene glycol or polyethylene glycol 300), as well as the effect of thermal treatments by conventional or hydrothermal routes, was evaluated with respect to photocatalytic activity and structural, optical and morphological properties. The photocatalytic activity was evaluated through the degradation of the azo-dye Ponceau 4R, chosen due to its industrial application and undesirable effects on the environment and human health (Oliveira et al., 2012; European Food 2020). The results presented here aim to provide new insights into the synthesis of TiO2-based photocatalysts with different crystalline phases and the influence of preparation conditions on the photocatalytic properties of these systems.

All chemicals were of analytical or HPLC grade and were used as received. Ultrapure water obtained from an Elix 5 Milli-Q® water purification system was employed in all experiments. TiO2 samples were synthesized by the sol-gel method, using different glycols (ethylene glycol, diethylene glycol or polyethylene glycol 300) (Sigma Aldrich), and heat treatment in a conventional oven or hydrothermal system.

The TiO2Gx photocatalyst was obtained from the mixture, under magnetic stirring, of 10 mL of Ti(IV) isopropoxide (Aldrich, 97%) and 50 mL of glycol (where x = 1 when 886 mmol of ethylene glycol (Vetec, 99.5%) were used, x = 2 for 527 mmol of diethylene glycol (Vetec, 99.5%), and x = 3 when polyethylene glycol 300 (Fluka) was used). After 2 hours of stirring, a mixture containing 10 mL of ultrapure water and 90 mL of acetone (Synth, 99.5%) was added to the suspension and kept under stirring for 2 hours. The white precipitate was separated with the aid of a centrifuge (9000 rpm for 20 minutes), followed by washing several times with ethanol to remove residues of glycol, and then washing three times with distilled water.
For the preparation of the photocatalysts heat-treated in a conventional oven (TiO2GxM), after washing, the powder was dried at 70 °C under reduced pressure and sintered at 400 °C for 2 hours. For the oxides prepared using hydrothermal treatment (TiO2GxHT), after centrifugation and washing, the decanted oxide was submitted to the hydrothermal reactor under a pressure of approximately 13.8 bar at 200 °C for 4 hours. Subsequently, it was dried at 70 °C for 24 hours.

High-resolution transmission electron microscopy images were obtained using a Jeol JEM-2100 (Thermo Scientific) transmission electron microscope. The particle size and spacing between crystalline planes were calculated with the free software ImageJ.

X-ray diffraction (XRD) analyses were performed using a Shimadzu XRD600 powder diffractometer operating at 40 kV and 120 mA, employing Cu Kα (λ = 1.54148 Å) radiation. The diffractograms were collected between 10° ≤ 2θ ≤ 90° at a rate of 0.5° min⁻¹. Crystalline silicon was used as the diffraction standard. The X-ray diffractograms of the oxides were refined by the Rietveld method using the FullProf software; the fitting criterion (Factor S, goodness of fit) was the ratio between the weighted profile factor (Rwp) and the expected factor (Rexp), which should be close to 1. The fit parameters can be found in the Supplemental Information (Table S1).
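As a small illustration of the fit criterion just described: the goodness of fit S is the ratio Rwp/Rexp and should approach 1. The Rwp/Rexp values in the sketch below are hypothetical, and the S < 1.5 acceptance threshold is a common rule of thumb rather than a statement from this work:

```python
def goodness_of_fit(r_wp: float, r_exp: float) -> float:
    """Rietveld goodness of fit S = Rwp / Rexp."""
    return r_wp / r_exp

for r_wp, r_exp in [(8.4, 6.9), (12.1, 6.5)]:   # hypothetical refinements, in %
    s = goodness_of_fit(r_wp, r_exp)
    verdict = "acceptable" if s < 1.5 else "poor fit"
    print(f"Rwp = {r_wp}%, Rexp = {r_exp}% -> S = {s:.2f} ({verdict})")
```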
N2 adsorption-desorption isotherms were obtained using an ASAP 2010 analyzer (Micromeritics). The specific surface areas were analyzed using the Brunauer-Emmett-Teller (BET) model, and the Barrett-Joyner-Halenda (BJH) model was used for the pore volume (Barrett, Joyner & Halenda, 1951).

Raman spectra were acquired at room temperature using a Bruker RFS 100/S spectrometer; samples were excited at 1064 nm with a laser operating at 100 mW. Diffuse reflectance spectra of the synthesized oxides were acquired at room temperature using a UV-1650PC spectrometer (Shimadzu), with potassium bromide used as reference, and the band gap energy was estimated by the Kubelka-Munk function (Patterson, Shelden & Stockton, 1977).

In all photocatalytic assays, 100 mg L⁻¹ of the catalyst was added to a 31 mg L⁻¹ aqueous solution (pH = 6.9) of the dye Ponceau 4R (trisodium (8Z)-7-oxo-8-[(4-sulfonatonaphthalen-1-yl)hydrazinylidene]naphthalene-1,3-disulfonate, CI 16255, Sigma-Aldrich, 75%) under magnetic stirring. The experimental setup was previously described in detail (Oliveira et al., 2012). Information about the radiation source and experimental data is available in (Machado et al., 2008; Santos et al., 2015).

The photocatalytic system was kept at 40 ± 2 °C and under stirring for 30 minutes in the dark to reach adsorption equilibrium. Control measurements were performed in the dark and in the absence of a catalyst to evidence the role of TiO2 in the photochemical reaction. Aliquots were taken at 20-minute intervals, filtered, and analyzed by spectrophotometry, following the discoloration at 507 nm using a Shimadzu model 1650PC spectrophotometer, and by total organic carbon (TOC) measurements using a Shimadzu TOC-VCPH/CPN analyzer.

The XRD data (Fig. 2) confirm that all samples are composed mostly of nanocrystals of anatase, with the (101) plane preferentially exposed. In the case of HT processing at 200 °C for 4 h, the presence of crystalline anatase phases and traces of brookite was observed, confirmed by the presence of peaks at 2θ equal to 25.38° (101) and 30.80° (121), respectively. Under the treatment conditions to which these materials were submitted, the formation of the rutile phase was not observed. The formation of the brookite phase was probably a crucial factor for the inhibition of the transformation of anatase into rutile.
Rietveld analyses of the diffractograms (Fig. S1) confirm the decrease in crystallite size for the oxides obtained after hydrothermal heat treatment (HT), which agrees with the HRTEM images. These quantitative data also confirm the greater presence of the brookite phase in samples submitted to hydrothermal treatment. Besides that, it is observed that the percentage of the brookite phase remains practically constant even with the use of different glycols in the synthesis process. As for the materials prepared with heat treatment in a conventional oven, the use of different glycols leads to greater deformations only for the TiO2G3M sample, where polyethylene glycol was used in the synthesis, causing the formation of 17.47% brookite phase (Tay et al., 2013).
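Crystallite sizes in this work come from Rietveld refinement, but a quick single-peak cross-check can be made with the Scherrer equation, D = K·λ/(β·cos θ). In the sketch below, the Cu Kα wavelength and the anatase (101) position are taken from the text, while the shape factor and the corrected peak width are assumptions:

```python
import math

K = 0.9                   # shape factor, a typical assumed value
lam_nm = 0.154148         # Cu K-alpha wavelength used for the XRD runs, in nm
two_theta_deg = 25.38     # anatase (101) reflection reported above
beta_deg = 1.6            # FWHM after instrumental correction (hypothetical)

theta = math.radians(two_theta_deg / 2.0)
beta = math.radians(beta_deg)
D_nm = K * lam_nm / (beta * math.cos(theta))
print(f"Scherrer estimate: D = {D_nm:.1f} nm")  # ~5 nm for these inputs
```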
The TiO2G3HT sample had a higher portion of brookite when compared to the TiO2G3M sample because, since brookite is characterized by its low symmetry, its formation is more efficient under mild conditions, such as the shorter period and lower preparation temperature of the hydrothermal treatment (Lin et al., 2012).

The formation of mesoporous structures was confirmed by N2 adsorption-desorption isotherms (Fig. 3). The isotherms are of type III for the samples with 100% anatase phase (TiO2G1M and TiO2G2M). The other samples, which contain brookite, show type IV isotherms with a pronounced hysteresis loop of types H3 and H4, according to the IUPAC classification. This suggests that these materials are mesoporous solids formed by agglomerated or aggregated particles (Gregg & Sing, 1982). The presence of brookite causes a decrease in the average pore diameters, suggesting that the presence of structural defects influences the adsorption capacity and porosity of the material. The values of surface area and porosity of these materials are presented in Table 1.

Table 1. Morphologic and electronic parameters of the synthesized oxides. [Table]

The diffuse reflectance spectra, expressed in terms of F(R) vs. photon energy (E), are presented in Fig. 4. The indirect band gap value (Eg) was obtained by extrapolating the linear segment to the X axis (Table 1). However, a simple inspection of the spectra suggests that the band gap values calculated in this way deviate from the actual values, since the radiation absorption is not canceled (E < Eg), except from the point where F(R) → 0. This suggests the existence of permitted states with energies lower than the estimated Eg, that is, Eg(real) < Eg. Thus, considering the lower threshold of the conduction band to occur where F(R) → 0, states with energies less than or equal to the energy associated with this threshold are prohibited. In view of this, Eg(real) was also calculated (Table 1). Based on this information, it appears that all photocatalysts absorb radiation more intensely in the near-UV region. However, these photocatalysts, despite their high band gap energies, have significant photocatalytic activity in the visible region, as suggested by the estimated values of Eg(real). The TiO2G1HT, TiO2G2HT and TiO2G1M photocatalysts show a radiation absorption profile shifted
to the visible region, with E < Eg, being therefore able to take up photons over a large range of wavelengths. To these factors are added the high surface area, crystallinity and mixture of crystalline phases, which end up favoring the photocatalytic potential of these oxides.
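A minimal sketch of the band-gap estimate discussed above, i.e., the Kubelka-Munk transform F(R) = (1 - R)^2/(2R) followed by a Tauc-style linear extrapolation of (F(R)·E)^(1/2) versus E for an indirect gap, is given below. The reflectance curve is synthetic, built from an assumed 3.30 eV edge so the round trip can be checked; it is not the measured data:

```python
import numpy as np

E = np.linspace(2.6, 4.0, 200)                     # photon energy, eV
Eg_true = 3.30                                     # synthetic edge, for testing
alpha = np.clip(E - Eg_true, 0.0, None) ** 2 / E   # indirect-edge absorption (arb.)
F_set = alpha / 0.5                                # F(R) = K/S, scattering S assumed
R = 1.0 + F_set - np.sqrt(F_set**2 + 2.0 * F_set)  # reflectance implied by F(R)

FR = (1.0 - R) ** 2 / (2.0 * R)                    # Kubelka-Munk transform of "data"
y = np.sqrt(FR * E)                                # Tauc ordinate for an indirect gap

mask = y > 0.1 * y.max()                           # rising linear segment only
slope, intercept = np.polyfit(E[mask], y[mask], 1)
print(f"Estimated indirect band gap: {-intercept / slope:.2f} eV")
```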
The electronic properties of particles change significantly as their size is reduced; thus, new properties can be expected in nanoparticles when compared to the bulk (Hodes, 2007). The variation of energy as a function of size promotes quantum confinement and is characterized by an increase in the indirect band gap energy (Eg), as can be seen for TiO2G1HT, which has smaller particle and crystallite sizes, as estimated by the HRTEM and XRD analyses, and an Eg (3.30 eV) greater than that of the extended solid (3.20 eV for TiO2) (Kumar & Devi, 2011).

The catalysts were also evaluated using Raman spectroscopy (Fig. 5). All samples exhibit vibration modes typical of anatase (3Eg + 2B1g + A1g). The A1g symmetry mode was not visualized, probably due to overlap with the band corresponding to the second mode, of B1g symmetry (Iliev et al., 2013; Fang et al., 2015). A slight change in the signals is observed depending on the type of heat treatment used (Fig. 5, inset). The bands of the samples thermally treated by the hydrothermal route are broader than those observed for the oxides calcined in a conventional oven. This broadening can be directly correlated to the concentration of oxygen vacancies on the photocatalysts, as previously shown by Parker and Siegel (Parker & Siegel, 1990). Thus, the Raman analysis indicates that the hydrothermal route induces the formation of oxygen vacancies on the oxide surface, increasing the system disorder.

Figure 5. Raman spectra, at room temperature, for the synthesized TiO2 photocatalysts. Inset: expanded normalized Raman spectra between 100 and 200 cm⁻¹, in the region of the main Eg peak, showing the broadening of the band according to the type of heat treatment.
Photocatalytic activity

The photocatalytic activity of the different synthesized oxides was evaluated in terms of the degradation of the azo-dye Ponceau 4R. The control experiment, in the absence of any photocatalyst, reveals extremely low levels of dye discoloration (4.0%) and mineralization (13%) after 140 minutes of irradiation (Fig. S2). The degradation efficiency presented by the different photocatalysts is summarized in Table 2. [Table]

The oxides thermally treated by the hydrothermal route were more efficient than those given conventional heat treatment in promoting the degradation and mineralization of the dye under study. Calcination in a conventional oven led to an increase in the crystallinity of the materials, as seen in the XRD data, and a decrease in the surface area, which ended up compromising the photocatalytic activity of these oxides.
The photocatalytic performance exhibited by the samples synthesized in the presence of different glycols and thermally treated by the hydrothermal route can be attributed to the coexistence of anatase and brookite, the high surface area, the mesoporosity, and more appropriate particle sizes. Crystalline materials with smaller particle sizes are more likely to exhibit expressive photocatalytic properties (Ohno et al., 2001). Although the TiO2G3M photocatalyst also presents a mixture of the polymorphs anatase and brookite, it did not show significant photocatalytic activity, probably due to its smaller surface area.
The increase in photocatalytic activity of samples that present anatase and brookite can be explained by the synergism between these polymorphs. Although anatase and brookite present very close Eg values (Machado et al., 2012; Patrocinio et al., 2015), theoretical calculations have shown that the energies of the conduction and valence bands of the anatase phase are slightly lower than the corresponding energy levels of brookite (Li et al., 2008), suggesting a certain ease of migration of electrons from brookite to anatase. Thus, the holes are more available for oxidation reactions. In addition, the energy barrier between these polymorphs will tend to hinder recombination among charge carriers. Therefore, with an extended lifespan, holes in the brookite valence band have a greater chance to oxidize organic matter, while electrons "trapped" in anatase may favor reduction reactions, leading to an increase in the photocatalytic activity (Li, Ishigaki & Sun, 2007; Patrocinio et al., 2015).

A complex degradation mechanism is expected in heterogeneous photocatalysis (Hoffmann et al., 1995; Ahmed et al., 2010). The reactions occur initially at the solid-solution interface and involve reactive species generated on the surface of the excited photocatalyst or by direct interaction between the excited photocatalyst and the substrate (Oliveira et al., 2012; Santos et al., 2015). In the degradation under study, the discoloration of the dye is probably related to the homolytic scission of the azo group. Hydroxyl radicals (HO•), formed at the solid-solution interface, may be responsible for this process (Kumar & Devi, 2011). Table 2 presents data on the percentage of discoloration in the reactions mediated by the oxides synthesized in this study. The best performances occurred using oxides submitted to hydrothermal treatment.
The mineralization process follows Langmuir-Hinshelwood kinetics (Hoffmann et al., 1995), being of pseudo-first order in relation to the dye, as shown in Fig. 6. The rate constants are listed in Table 2. In these assays, it was found that only 4.0% of the dye was adsorbed on the photocatalyst (Fig. S2), suggesting that the observed mechanism occurs mainly through the photodegradation of organic matter, certainly by the action of reactive oxygen species (ROS), such as HO• or O2•−, with the predominant action of the HO• radicals, a very strong oxidizing agent (standard reduction potential of HO•/H2O of 2.38 V vs. NHE) (Hoare, 1985). Accordingly, and based on the characterization of the photocatalysts, we can propose a mechanism, shown in equations (1-9), which must occur at the solid-solution interface, where TiO2(A) is the anatase polymorph and TiO2(B) is the brookite polymorph. As a result of the photoexcitation of the catalyst (1), the e−/h+ pairs are generated; recombination processes (2) compete with the trapping of electrons in the anatase polymorph (3) and of holes in the brookite polymorph (4 and 5), generating the reactive species responsible for the degradation of the dye (5, 6 and 7). In the valence and conduction bands, the oxidation (8) and reduction (9) reactions occur, respectively, resulting in degradation products.

The TiO2G1HT oxide presents the best photocatalytic performance (kapp = 5.9 × 10⁻³ min⁻¹; R = 0.9824) because the availability of reactive species becomes proportionally higher as the concentration of the dye decreases, since the concentration of these species is practically constant during the photocatalytic process (França et al., 2016). Therefore, P4R undergoes fragmentation at the beginning of the reaction (Fig. 6, inset), which should favor a faster mineralization.
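A hedged sketch of the pseudo-first-order analysis above: kapp is obtained from ln(C0/C) = kapp·t by a zero-intercept least-squares fit. The concentration series below is synthetic, chosen only to be of the same order as the constant reported here:

```python
import numpy as np

t = np.array([0, 20, 40, 60, 80, 100, 120, 140])                # minutes
C = np.array([31.0, 27.6, 24.6, 21.9, 19.5, 17.4, 15.5, 13.8])  # mg/L, synthetic

y = np.log(C[0] / C)
k_app = np.sum(y * t) / np.sum(t * t)   # least-squares slope through the origin
print(f"k_app = {k_app:.2e} min^-1, half-life = {np.log(2) / k_app:.0f} min")
```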
Analyzing the spectrum presented in the inset of Fig. 6, it can be seen that, at the end of the photocatalytic process, the band centered at 500 nm, referring to an electronic transition with a major π → π* component (Oliveira et al., 2012) involving the naphthalenic structures and the azo group, and associated with the coloring of the dye, decreases significantly. The products formed do not present new or significant absorption bands in the analyzed region, suggesting that the degradation not only induces a quick discoloration of the dye (Table 2) but also causes a significant fragmentation of the dye structure, whose fragments do not absorb significantly in the monitored region of the electromagnetic spectrum.
The coexistence of anatase and brookite in the TiO2 synthesized with different glycols and treated by a hydrothermal route at low temperature minimized the recombination rate of the e−/h+ pairs, thus allowing the holes to be available for oxidation reactions. In addition, the combination of physical and chemical factors, such as high surface areas and porosity, high photon absorption capacity in the UV-visible region, and crystallinity, considerably improved the photocatalytic activity of these oxides.

Conclusions

In this study, we presented the preparation of mesoporous TiO2 nanoparticles using the sol-gel method with different glycols as structural molds. The use of ethylene glycol associated with further hydrothermal heat treatment proved to be the most effective way to obtain nanoparticles with improved photocatalytic activity. The results showed that the materials submitted to hydrothermal heat treatment presented smaller particles and greater porosity, with the formation of approximately spherical nanoparticles with sizes up to 5 nm and of a binary mixture of anatase and brookite phases. The use of different glycols influenced the size of the particles, promoting the formation of smaller particles. The existence of a junction between different phases of the same semiconductor, accompanied by a decrease in the size of the particles, favored the charge transfer processes and contributed to the delay of the recombination processes, significantly improving the photocatalytic activity, as verified by the degradation of the azo-dye Ponceau 4R under UV-Vis light irradiation. This type of photocatalyst, which can harness both UV and visible light, is a promising candidate for applications in photochemistry, sensors and solar cells, which has motivated us to develop oxides and nanocomposites based on TiO2 with a wide spectrum of applications. | 2021-05-05T00:08:21.831Z | 2021-03-24T00:00:00.000 | {
"year": 2021,
"sha1": "8a8da39e6995dfb68c2d4e2803d8eac172080cbf",
"oa_license": "CCBY",
"oa_url": "https://peerj.com/articles/matsci-13.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "ca717de30533065b60b2c2c4af0836aace5d3dc5",
"s2fieldsofstudy": [
"Materials Science",
"Chemistry",
"Environmental Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
12657020 | pes2o/s2orc | v3-fos-license | Characterization of Pressure Transients Generated by Nanosecond Electrical Pulse (nsEP) Exposure
The mechanism(s) responsible for the breakdown (nanoporation) of cell plasma membranes after nanosecond electrical pulse (nsEP) exposure remains poorly understood. Current theories focus exclusively on the electrical field, citing electrostriction, water dipole alignment and/or electrodeformation as the primary mechanisms for pore formation. However, the delivery of a high-voltage nsEP to cells by tungsten electrodes creates a multitude of biophysical phenomena, including electrohydraulic cavitation, electrochemical interactions, thermoelastic expansion, and others. To date, very limited research has investigated non-electric phenomena occurring during nsEP exposures and their potential effect on cell nanoporation. Of primary interest is the production of acoustic shock waves during nsEP exposure, as it is known that acoustic shock waves can cause membrane poration (sonoporation). Based on these observations, our group characterized the acoustic pressure transients generated by nsEP and determined whether such transients played any role in nanoporation. In this paper, we show that nsEP exposures, equivalent to those used in cellular studies, are capable of generating high-frequency (2.5 MHz), high-intensity (>13 kPa) pressure transients. Using confocal microscopy to measure cell uptake of YO-PRO®-1 (an indicator of nanoporation of the plasma membrane) and changing the electrode geometry, we determined that acoustic waves alone are not responsible for poration of the membrane.
Electrostriction is caused by the buildup of charge on the membrane leading to "pinching" of the phospholipids and thus pore formation 3. Electrodeformation is an electrical-field-driven internal mechanical stress that causes the entire cell to deform, leading to a higher probability of pore formation 4. Another competing theory of poration, championed by Vernier, suggests that poration occurs "due to field-induced reorganization of water dipoles at the water-lipid or water-vacuum interfaces"; presumably this reorganization of water molecules creates a more energetically favorable situation for pore formation 5. These theories of poration, although plausible, are not all-inclusive and do not account for other non-electrical factors, such as external mechanical stress caused by interactions with pressure transients.
Pressure transients have been shown to create pores in plasma membranes by imparting a mechanical stress [6][7][8][9][10][11][12][13][14][15][16]. Sonoporation uses ultrasonic waves (essentially pressure transients in the MHz range) to create holes in the biomembranes of cells and vesicles for the purposes of either delivering or releasing compounds, biomolecules, drugs, etc. 8,10,13. These ultrasonic shock waves can cause cavitation microbubbles, leading to poration by one of the following mechanisms: acoustic micro-streaming, bubble oscillations, or inertial cavitation shock waves 13. Inertial cavitation shock waves, if of sufficient amplitude, impart mechanical stress on the plasma membranes of nearby cells, leading to poration.
We hypothesize that pressure transients created by nsEP exposure 17 are directly linked to the phenomenon of nanoporation. We used the probe beam deflection technique (PBDT), an all-optical, non-contact method for detecting pressure transients generated in gaseous and liquid environments, to characterize the pressure transients generated by typical nsEP exposures [18][19][20][21][22]. With PBDT, the propagation of a pressure transient causes a change in the refractive index of the medium through which a probe beam travels, resulting in a deflection of that beam. This deflection is detected by a modified quadrature diode and quantified as the time derivative of a pressure transient. This approach is used in place of submerging a hydrophone in the conductive media, which is traditionally done to detect pressure transients but is not practical given the high voltages consistent with nsEP. Further studies have also shown PBDT to be considerably more sensitive than most hydrophones, which are limited by their narrow cone of acceptance 23.
We characterized the pressure transients based on frequency, amplitude, shape, and speed. We performed a Fast Fourier Transform (FFT) on the collected pressure transient signals to determine the frequency of the pressure transients. We then used an ultrasonic transducer and a calibrated hydrophone to calculate the amount of pressure generated by nsEP exposure. In an effort to identify the source of the pressure transients, we used infrared thermography, Schlieren imaging, and pump-probe laser imaging to capture evidence of physical events occurring at or near the surface interface of the electrodes. Finally, we used confocal microscopy and the fluorescent dye YO-PRO®-1 to determine the effect of the pressure transients on nanoporation. The findings in this paper provide new insights into the nature of the physical mechanisms that occur rapidly after the application of nsEP at the surface of the electrodes and how these events could potentially contribute to the breakdown of plasma membranes.
Results
Detection of Near-field Waves Produced by nsEP Using PBDT. When the electrodes were placed in very close proximity (<1 mm) to the probe beam, termed the near field, substantial deflections of the probe beam were detected upon nsEP exposure. The nsEP exposure was administered with a pulse width of 600 ns and an applied voltage of 1000 V to generate an electrical field of approximately 13.1 kV/cm at 50 μm (a typical cell exposure distance) from the electrodes. The electrical field strength was calculated in a FEM model/simulation for nsEP using the applied voltage measured on an oscilloscope, as described in the Methods section. The largest near-field deflections were observed when the nsEP electrode was closest to the beam; the deflections diminished as the electrodes were moved further away. For the X+ plane (electrodes positioned to the right of and parallel to the probe beam), the largest deflection signal was recorded between 0 and 100 μm from the probe beam (Fig. 1A). For the Y+ plane (electrodes positioned above the probe beam), the largest deflection signal was recorded between 60 and 70 μm from the probe beam (Fig. 1B). Deflections of the probe beam tracked linearly with changes in the electric field (Fig. 1C) and in the pulse duration (Fig. 1D). The greatest deflections were observed at the highest electric field and with the longest pulse duration. These near-field deflections were undetectable below an electric field of 2.7 kV/cm or a pulse width of 30 ns. The time required for these deflections to return to baseline was long (>35 ms), suggesting that they could be thermal transients.
Thermal Profile of nsEP. Due to the nature of the waves detected by PBDT very near the electrodes, infrared thermography was performed in an effort to determine the total increase in thermal energy deposited by a typical nsEP pulse. Pulse durations of 1000, 800, 600, and 400 ns were used at 1000 V (applied), yielding an electric field of 13.1 kV/cm at the imaging plane. Figure 2A,B show a colorized FLIR image of the electrodes 1.25 ms before and after the nsEP pulse. An average thermal profile for each of these pulse durations is plotted in Fig. 2C. The 1000 ns pulse caused an increase of approximately 0.15 °C, whereas the 800, 600 and 400 ns pulse durations caused increases of 0.13, 0.1, and 0.075 °C, respectively. Tukey's multiple comparison test found each pulse-duration data set to be significantly different from the others (P-values: 1000 ns vs. 800 ns, <0.005; 1000 ns vs. 600 ns, <0.000005; 1000 ns vs. 400 ns, <0.000005; 800 ns vs. 600 ns, <0.005; 800 ns vs. 400 ns, <0.000005; 600 ns vs. 400 ns, <0.005). The speed of the camera, at 800 frames/sec, limited our ability to detect thermal increases occurring during or immediately after the pulse, so the initial maximal temperature spike may not have been captured.
Detection of Far-field Waves Produced by nsEP Using PBDT. In the far field (>1 mm away from the probe beam), we identified deflections in the microsecond time domain. The nsEP electrodes were scanned in 1 mm increments in the X+ (Fig. 3A), X− (Fig. S1) and Y+ planes (Fig. S2), and the PBDT signals were captured. The time delay between the application of the nsEP pulse and the resulting deflection corresponded linearly with the distance between the nsEP electrodes and the fixed probe beam. This time delay was due to the travel time of the induced wave, the speed of which was determined by plotting the travel time of the wave against the distance of the electrodes from the probe beam (Fig. 3B). The speed of the phenomenon was found to be 1.511 mm/μs, which is very close to the speed of sound (cs = 1.5023 mm/μs) in normal saline at 23 °C 24. Based on the speed at which these waves travel, we identified them as acoustic pressure transients.
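The speed estimate described above is simply a linear regression of electrode-to-beam distance on the arrival delay of the deflection; the slope is the wave speed. A minimal sketch, using synthetic delays consistent with roughly 1.5 mm/μs (not the recorded data):

```python
import numpy as np

distance_mm = np.array([1, 2, 3, 4, 5, 6, 7, 8])
delay_us = np.array([0.67, 1.33, 1.99, 2.64, 3.31, 3.98, 4.63, 5.30])

speed, offset = np.polyfit(delay_us, distance_mm, 1)   # slope = wave speed
print(f"Wave speed = {speed:.3f} mm/us "
      f"(cf. 1.5023 mm/us for normal saline at 23 C)")
```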
To determine the pressure generated by the nsEP, we used an ultrasonic transducer to generate a positive-control pressure transient. The peak-to-peak voltage changes recorded by the PBDT and by a co-localized calibrated hydrophone were plotted as a function of transducer input voltage (Fig. 3C). The generated pressure was then determined from the calibrated Onda hydrophone and related to the probe beam deflection voltage, which was determined to have a sensitivity of 15 μV/Pa. We then indirectly quantified the amount of pressure produced by a typical nsEP exposure (600 ns, 13.1 kV/cm). Supplementary Fig. S3 shows the calibration setup, and the pressures generated in this experiment can be found in Supplementary Table 1. We calculated the peak pressure at 5 mm from the electrodes to be 13 kPa for a 13.1 kV/cm, 600 ns pulse.
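With the 15 μV/Pa sensitivity obtained from this cross-calibration, converting a recorded PBDT peak voltage to peak pressure is a single division; the example voltage below is hypothetical:

```python
SENSITIVITY_V_PER_PA = 15e-6   # 15 uV/Pa, from the hydrophone cross-calibration

def deflection_to_pressure_pa(peak_volts: float) -> float:
    """Map a PBDT peak deflection voltage to peak pressure in pascals."""
    return peak_volts / SENSITIVITY_V_PER_PA

v_peak = 0.195                 # hypothetical recorded PBDT peak, volts
print(f"{v_peak} V -> {deflection_to_pressure_pa(v_peak) / 1e3:.1f} kPa")
```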
To ensure that the PBDT had sufficient frequency response to capture the waves produced by nsEP, we performed a FFT on the transducer signal captured by PBDT (Supplementary Fig. S4) and compared it to the FFT of the same signal captured by the calibrated hydrophone (Supplementary Fig. S5). The frequency responses of the signals from these two techniques matched quite well and had a cutoff frequency of approximately 20 MHz. To quantify the frequency characteristics of nsEP-induced pressure transients, a FFT was performed on a representative 600 ns, 13.1 kV/cm nsEP and on the resulting PBDT signal. The FFT of the nsEP trace showed a broad frequency range with a peak at 1 MHz (Supplementary Fig. S6). The fundamental ultrasound frequency of the nsEP pressure transient was found to be approximately 2.5 MHz (Fig. 3D). Therefore, the nsEP-induced pressure transients were well within the pass-band of the PBDT system implemented in these experiments.
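A minimal sketch of the FFT step described above, locating the fundamental of a PBDT trace. The waveform is a synthetic damped 2.5 MHz burst, and the 100 MS/s sample rate is an assumption, not the paper's acquisition setting:

```python
import numpy as np

fs = 100e6                                     # sample rate, Hz (assumed)
t = np.arange(0, 20e-6, 1 / fs)                # 20 us record
trace = np.exp(-t / 4e-6) * np.sin(2 * np.pi * 2.5e6 * t)   # synthetic burst

spectrum = np.abs(np.fft.rfft(trace))
freqs = np.fft.rfftfreq(trace.size, d=1 / fs)
print(f"Fundamental = {freqs[np.argmax(spectrum)] / 1e6:.2f} MHz")
```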
The Effect of Altering the Electrical Parameters of the nsEP on the Pressure Transients. Having determined that the deflections in the far field were most likely pressure transients emanating from the nsEP electrodes, we next determined how the nsEP pressure transients depended on the electrical parameters used to produce the nsEP (i.e., the electric field and/or pulse duration). At a fixed distance of 5 mm from the beam in the X+ plane, we recorded the deflections for 600 ns pulses at electric fields ranging from 13.1 to 1.5 kV/cm. Deflections recorded in the Y+ plane for the same pulses can be found in Supplementary Fig. 7. Altering the applied input voltage to the nsEP exposure changed the intensity of the electric field at the electrodes. At 1.5-4.0 kV/cm, no pressure transients were detected, suggesting a threshold for formation (Fig. 4A). At the higher electric fields, 5.3-13.1 kV/cm, deflection of the probe beam was observed with an amplitude dependent on the electric field, indicating that the pressure transients responded linearly to the electrical input (Fig. 4A). These deflections occurred approximately 3.3 μs after the pulse was fired, closely matching the time required for sound to travel 5 mm. The width of these initial deflections was approximately 600 ns. A secondary deflection can be seen trailing the first major set of deflections, possibly a reflection from an internal surface of the experimental tank. Rotating the electrodes 90° to the probe beam had no effect on the PBDT pattern or amplitude (Supplementary Fig. 8A-D).

Using the same probe-beam and electrode orientation as in Fig. 4A, we altered the pulse duration from 10 to 600 ns while holding the electric field at a constant 13.1 kV/cm. The smallest (lowest-amplitude) pressure transient was detected at 10 ns and the largest (highest-amplitude) occurred at a 400 ns pulse width (Fig. 4B). The same measurements in the Y+ plane are shown in Supplementary Fig. 9. Using the calibration constant determined previously, we calculated the pressure produced by each nsEP exposure. The calculated pressures from Fig. 4A are plotted in Fig. 4C and show a linear dependence with respect to the electric field of the nsEP exposure. A Tukey's multiple comparison test was performed, and each data set was found to be significantly different from the others; significance was not noted on the figure for simplification purposes (p-values: 1000 ns vs. 800 ns < 0.005; 1000 ns vs. 600 ns < 0.000005; 1000 ns vs. 400 ns < 0.000005; 800 ns vs. 600 ns < 0.005; 800 ns vs. 400 ns < 0.000005; 600 ns vs. 400 ns < 0.005). Note that the FLIR camera used for this experiment has a frame rate of 800 frames/second; therefore, the initial maximal temperature spike may not have been captured. Figure 4D displays the pressures for the pulse-width experiment. Curiously, the shorter 400 ns pulse width generates a higher pressure transient than the longer 600 ns pulse. This result could be an artifact caused by reduced signal quality of the 600 ns pulse due to electromagnetic interference with the recording apparatus. Despite this result, the linear dependence of the pressure-wave magnitude on the applied voltage used to generate the nsEP provides further evidence that nsEP produce acoustic pressure transients.
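A sketch of the Tukey comparison mentioned above, using statsmodels on hypothetical per-pulse-width pressure values rather than the measured data:

import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical peak pressures (kPa), three repeats per pulse width.
pressures = np.array([9.0, 9.2, 9.1, 13.0, 13.2, 12.9,
                      11.8, 11.9, 12.1, 10.1, 10.3, 10.2])
widths = np.repeat(["1000ns", "400ns", "600ns", "800ns"], 3)
print(pairwise_tukeyhsd(pressures, widths, alpha=0.05))  # pairwise significance table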
Schlieren Imaging of nsEP Generated Pressure Transient. To obtain further confirmation of a pressure transient produced by nsEP, the Schlieren imaging technique was used to capture an image of the pressure transient propagating away from the electrodes. Schlieren imaging has the advantage of being able to capture changes in the refractive index of a medium in two dimensions. In Schlieren imaging, collimated light passes through the area to be imaged before being focused onto an optical stop. Light that does not interact with a refractive-index gradient passes through the sample undeflected and is thus blocked by this optical stop. However, light that interacts with a physical wave in the image area changes direction and bypasses the optical stop, where it is captured by a charge-coupled device camera to create a shadowgraph. A drawing of the Schlieren imaging setup is presented in Fig. 5A. Images captured before and during the pulse (Fig. 5B,C) confirmed that the source of the pressure transients was the electrodes; we therefore sought to visually capture any physical phenomena occurring at the electrodes during and after the pulse. Figure 6 is a collage of images collected beginning at the time of the exposure (0 μs), during the exposure (0.5 μs), and for several frames after. A corona can be seen forming around the edge of the electrode (anode) at 1.5 μs after the initiation of the pulse. This corona existed for approximately 1.5 μs, eventually leading to the formation of microbubbles. These microbubbles appear and cavitate > 10 μs after the exposure. The number and density of the bubbles decrease with time, with fewer bubbles present by 13.5 μs after the end of the nsEP.
Effect of Pressure Transients on Nanoporation.
In the previous experiments, we showed that the electric field intensity directly influences the creation of pressure transients (Figs 1C and 4A). We used increases in YO-PRO®-1 fluorescence immediately after nsEP exposure as an indicator of nanoporation. YO-PRO®-1, a nucleic acid stain, has been shown to enter live cells exposed to nsEP, suggesting it enters the cell via nanopores [25-27]. YO-PRO®-1 fluorescence can be non-linear, especially if the indicator enters the nucleus; however, in our experiments we only recorded changes in YO-PRO®-1 fluorescence occurring < 30 seconds after exposure, thus remaining in the linear range of the stain. We applied a single 600 ns pulse at 12.0, 9.6, 7.2, 4.8, or 2.5 kV/cm and recorded the relative change in fluorescence intensity of YO-PRO®-1 within exposed cells. We found that relative increases in YO-PRO®-1 fluorescence correlated linearly with increases in the electric field (Fig. 7A). A representative CHO-K1 cell, exposed with electrodes positioned 50 μm above, can be seen in Fig. 7B just before the pulse and 25 seconds after the pulse (Fig. 7C). To determine what effect the far-field pressure transients have on nanoporation, we placed the electrodes at varying heights to assess the effect of the pressure transients in the near vs. the far field. Figure 8A shows the typical electrode orientation, positioned 50 μm above the cells. This orientation was used with heights of 0, 50, 100, 150, 200, 250, or 500 μm above CHO-K1 cells stained with YO-PRO®-1.
A single 600 ns pulse at approximately 12.0 kV/cm was applied to cells at each height. The percentage increase in YO-PRO®-1 fluorescence was plotted vs. time after application of the pulse (Fig. 8B). We determined that the greatest level of nanoporation occurred when the electrodes were closest to the cells (near field). At 0 μm from the cells (electrodes touching the glass slide bottom of the dish), there was a 70% increase in YO-PRO®-1 fluorescence. YO-PRO®-1 fluorescence, and presumably nanoporation, dropped as the electrodes were moved away from the cells, indicating that the electric field may be driving nanoporation either directly or indirectly. At 50 μm we observed an increase in YO-PRO®-1 fluorescence of approximately 40%. At 100 μm there was an increase of 36%, which dropped to 17% at 150 μm. Nanoporation, as indicated by YO-PRO®-1, did not occur with the electrodes at a height of 250 μm or more above the cells. It appears that the pressure transients measured in the far field contribute little to the process of nanoporation.
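The percent-increase metric used above can be sketched as follows; the ROI pixel arrays before and after the pulse are hypothetical placeholders.

import numpy as np

def percent_increase(f_before, f_after):
    # Percent change in mean YO-PRO-1 fluorescence within a cell ROI.
    f0 = float(np.mean(f_before))
    return 100.0 * (float(np.mean(f_after)) - f0) / f0

# Hypothetical ROI pixel arrays (arbitrary fluorescence units).
before = np.full((32, 32), 100.0)
after = np.full((32, 32), 170.0)
print(f"{percent_increase(before, after):.0f}% increase")  # 70%, as at 0 um height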
In an effort to decouple the electric field from the acoustic near field, we constructed electrodes with different gaps: 89, 319, and 966 μm (Fig. 9A). To account for differences in electric field, we calculated the equivalent electric fields for each electrode based on an FEM model/simulation (Fig. 9A,B). Voltages of 100, 300, and 1000 V were applied to the 89, 319, and 966 μm electrodes, respectively, generating an electric field of … and a 14% increase in YO-PRO®-1 fluorescence, respectively. Applying the maximum voltage of 1000 V to the 319 μm electrodes yielded an electric field of 6.8 kV/cm. Cells exposed using the 319 μm gap electrodes at 6.8 kV/cm yielded a 64.4% increase in YO-PRO®-1 fluorescence. Cells exposed using the 89 μm gap electrodes at 4.8 kV/cm yielded a 45.0% increase in YO-PRO®-1 fluorescence. The greatest increase in YO-PRO®-1 fluorescence occurred when 1000 V was applied to the 89 μm gap electrodes, which consequently yielded the highest electric field of 12.0 kV/cm. Changes in YO-PRO®-1 fluorescence mirrored the electric-field trend: as the electric field intensity increased, so did the level of nanoporation.
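A sketch of the voltage-matching logic behind the gap experiment: FEM provides a field-per-volt transfer factor for each gap, and the applied voltage is scaled to reach a common target field at the cells. The 89 and 319 μm factors follow from the values quoted above; the 966 μm factor is a hypothetical placeholder.

# FEM-derived transfer factors (kV/cm at the cells per applied volt).
transfer_kvcm_per_v = {89: 12.0/1000, 319: 6.8/1000, 966: 2.4/1000}  # 966 um: assumed

target_kvcm = 4.8  # desired field at the cell plane
for gap_um, k in transfer_kvcm_per_v.items():
    print(f"{gap_um} um gap: apply ~{target_kvcm/k:.0f} V for {target_kvcm} kV/cm")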
Discussion
Previous reports have shown that different cell lines have varying degrees of sensitivity to nsEP exposure. Adherent cells, like HeLa and CHO-K1, were found to be more resilient to the effects of nsEP than suspension cell types such as Jurkat and U937 [28-30]. These differences in viability were speculated to be related to the composition of each cell's plasma membrane. To examine this hypothesis, Thompson et al. used atomic force microscopy to determine the Young's modulus for each of the cell types mentioned above 31,32 . It was determined that more rigid cell types had a higher threshold for damage by nsEP and thus had increased viability compared to less rigid cells. In a follow-on experiment, Thompson et al. treated rigid cells with latrunculin A (a sponge toxin capable of depolymerizing actin), thus making them "softer", and found that these cells became more prone to damage by nsEP 31 . These findings suggest that membrane rigidity could be a contributing factor in the survivability of cells exposed to nsEP.
Further experiments have shown that altering the rigidity of the plasma membrane directly affects cellular viability upon exposure to nsEP. Recently, we showed that the depletion of cholesterol from CHO-K1 cells made them 50% more susceptible to nsEP exposure compared to sham-treated cells. Experiments with the trivalent cation gadolinium have shown that cells treated with this chemical agent and exposed to nsEP have a higher threshold for damage (higher viability) than cells exposed in the absence of gadolinium 33 . Gadolinium, believed to make the plasma membrane more rigid, has been used as an MRI contrast agent and is used in electrophysiology to block sodium leak channels and stretch-activated ion channels (SACs). It is possible that the observed effect of Gd3+ is not entirely due to its ability to increase plasma membrane rigidity, but rather to its ability to block the mechanically sensitive SAC channels. Altogether, these studies show that altering the rigidity of a cell affects its sensitivity to nsEP. However, it is important to note that treating cells with toxic compounds, such as latrunculin and gadolinium, potentially alters the cells' normal response to nsEP, suggesting that generalized cellular stress may also contribute to the observed changes in susceptibility.
High-speed calcium imaging has shown that nsEP cause a rapid increase in intracellular calcium that originates from membrane regions closest to the electrodes 34 . Beier et al. suggested that the rapid increase in intracellular calcium is likely due to several mechanisms, including the formation of nanopores, the poration of intracellular organelles, and/or the activation of specific ion channels 34 . It is possible that calcium enters the cell via mechanically activated channels or through the pore-forming subunits of the piezo proteins found in cell membranes. Semenov et al. proposed that entry of extracellular calcium via nanopores is a more efficient way of increasing intracellular calcium 35 . It is also possible that a rapid increase in intracellular calcium could be caused by mechanical perturbation of the endoplasmic reticulum/plasma membrane stimulating the release of calcium from intracellular stores. This release of calcium could induce a cascade of channels to open, thereby allowing more calcium to flood into the cell. Interestingly, in a very recent publication, researchers using laser-induced cavitation as a high-throughput screening tool for mechanotransduction research identified calcium release from the endoplasmic reticulum as a primary biomarker for cells exposed to a single intense shear-stress wave 36 . These single intense shear-stress waves, termed "μtsunami", were also reported to directly or indirectly stimulate specific G-protein-coupled receptors (GPCRs) on the plasma membrane, leading to the production of IP3 36 . Tolstykh et al. have shown that nsEP exposure activates the intracellular phosphoinositide signaling pathway [37-39], hypothetically through the hydrolysis of phosphatidylinositol 4,5-bisphosphate (PtdIns(4,5)P2, or PIP2), a well-characterized intracellular pathway that originates on the inner surface of the plasma membrane. Hydrolysis of PIP2 ultimately causes intracellular calcium release from the endoplasmic reticulum via inositol trisphosphate (IP3) receptors, activating protein kinase C (PKC). The similarities between the observed bioeffects of a single intense shear-stress wave (mechanical stimulation) and a single nsEP exposure are striking, and it is possible that the major biophysical mechanism behind nsEP action is mechanical stimulation.
In this paper, we present evidence of two different types of waves generated by nsEP exposure that could be responsible for the above-mentioned mechanical stimulation. Waves emanating from the nsEP electrodes were recorded by PBDT as deflections both in the near field and in the far field. The waves differed in these two regions, offering clues as to the nature of the biophysical mechanisms occurring after an nsEP exposure. The near-field deflections are thought to be thermal transients based on their limited spatial range, their approximately four-fold larger amplitude compared to the far-field deflections, and their relatively slow rebound time (> 35 ms). If this interpretation is correct, this result is an important finding because it provides evidence that the pressure transients generated by nsEP may be thermoelastic in nature, suggesting that rapid heating of the local environment by nsEP is responsible for the generation of pressure transients. Infrared thermography of the electrodes revealed a 0.1 °C temperature increase occurring 1.25 ms after a 600 ns exposure. A 0.1 °C increase appears marginal; however, if this increase occurs within the span of a typical 600 ns pulse, then it could be significant: a 0.1 °C rise over 600 ns equates to a thermal gradient of roughly 167,000 °C/second. Understanding the source of the pressure transients is fundamental to elucidating their potential biological effects; thus, these thermal waves warrant further study and characterization.
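The gradient arithmetic above, spelled out as a one-line check:

delta_t = 0.1        # deg C rise seen by IR thermography
duration = 600e-9    # 600 ns pulse, in seconds
print(f"{delta_t/duration:,.0f} deg C/s")  # ~166,667 deg C/s, i.e., ~167,000 deg C/s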
The deflections in the far field are due to pressure transients interacting with the probe beam. The pressure transients traveled at the speed of sound, had a rapid relaxation time, and were not spatially constrained. Characterization of these pressure transients found that they have a peak frequency of 2.5 MHz and produce pressures in the 13 kPa range. Visualization of a pressure transient by Schlieren imaging revealed the Gaussian nature of the wave as it propagated outward from the electrodes. The finding that the pressure transient is Gaussian suggests the wave may be thermoelastic in nature, resulting from rapid heating of the solution around the electrodes during the exposure. With pump-probe laser imaging we observed the formation of a corona around the edge of the electrodes immediately after the nsEP exposure. Collapse of the corona resulted in many microbubbles forming randomly and persisting for 10 μs. This observation, the first of its kind for exposures used to induce nanoporation in cells, indicates that there is a mechanical component in the physical processes initiated by nsEP.
While pressure waves in the far field were clearly observed, they appear to have little impact on nanoporation when acting on cells at distances greater than 250 μm. Our biological experiments imply that nanoporation tracks the intensity of the electric field: the stronger the electric field, the more nanoporation occurs. Data presented in Figs 7-9 corroborate these observations. When the electrodes are positioned 50 μm above the cells, increasing the electric field results in increased nanoporation. Adjusting the height of the electrodes modulates the intensity of the electric field experienced by the cells; as the height of the electrodes increases, the electric field intensity diminishes, as does the effect on nanoporation. These findings suggest that the electric field is either directly or indirectly responsible for nanoporation. We speculated that the microbubbles formed by nsEP exposure (captured in the collage presented in Fig. 6) could be responsible for nanoporation. It is known that the collapse of microbubbles can create jets which, when near plasma membranes, can cause damage that appears similar to nanoporation. However, the microbubbles were only observed forming at or near the anodic electrode. To determine whether the microbubbles played a role in nanoporation, we used electrodes with different gaps and examined cells in the middle of the electric field for nanoporation. Adjusting the input voltage to match the gap of the electrodes ensured the production of electric fields of similar strength. No appreciable difference in nanoporation was observed with the different-gapped electrodes, suggesting once again that the electric field, and not microbubbles, is responsible for nanoporation.
Determining the effect of the pressure transients on nanoporation in the near field is much more difficult. The acoustic near field and the electric field are intimately linked, with the intensity of the electric field most likely determining the strength of the acoustic near field. Not only is the acoustic near field constrained by the electric field intensity, but it is also chaotic, with significant fluctuations in pressure intensity due to constructive and destructive interference of the multiple waves 40 . We speculate that both the near-field and far-field waves are evidence of an uncharacterized event occurring at the electrodes, driven by electric field intensity. This unidentified event could be responsible for some of the observed bioeffects associated with nsEP exposure. One possibility is that this event is the atomization of water, caused by the rapid alignment and breaking or bending of water molecules by the intense but short-duration electric fields created by nsEP. When the electric field intensity is high enough, the bonds holding water molecules together become stretched and may break, resulting in the production of a shock wave. That shock wave would slow down as it propagated outward, eventually coalescing into an acoustic pressure wave, much like the pressure transients we have characterized in this paper. The rapid increase in temperature and pressure at the water/electrode interface, occurring as a result of the atomization, would lead to electrolysis of the water. The electrolysis of water would result in the production of hydrogen and oxygen gas, which would in turn cause the formation of microbubbles very similar to the microbubbles we recorded in Fig. 6. The remaining free hydrogen and oxygen ions would recombine to form reactive oxygen species (ROS). Although we did not measure ROS production in this paper, ROS have previously been detected and described in response to nsEP exposure 41 . More work must be aimed at identifying the cause(s)/source(s) of the pressure transients identified in this paper. Identification of the event that leads to the production of these pressure transients will not only provide new details on how electrical pulses behave in an aqueous environment, but may finally answer the question of how electric fields cause the breakdown of plasma membranes 4 .
Methods
PBDT setup. A 4.5 mW He-Ne laser (Thorlabs, Newton, NJ) with emission at 632.8 nm was employed as the probe beam. The laser was focused to a beam waist of approximately 150 μm for the fast axis of the beam, which was parallel to the nsEP electrodes as indicated in Fig. 10A. The probe beam passed through the center of a glass tank measuring 13.5 cm × 9 cm × 3 cm, containing approximately 350 ml of a physiological buffer comprising 135 mM NaCl, 5 mM KCl, 10 mM HEPES, 10 mM glucose, 2 mM CaCl2, and 2 mM MgCl2 (Fig. 10B). The buffer osmolality was 300 ± 10 mOsm and the pH was 7.4. After passing through the tank, the beam was reflected at a 45° angle into a custom-made quadrature diode detector. This quadrant silicon photodiode (Gamma Scientific, San Diego, CA) was chosen for its large active area (about 10 mm diameter) and fast response time. The nsEP electrodes were positioned in the +Y, +X, −Y, or −X planes (Fig. 10C) by a motorized stage capable of moving 25 mm in the X-plane and 12 mm in the Y-plane. When a wave generated from the electrodes intersected the probe beam, the variation in the refractive index of the medium caused the probe beam to deflect from its original direction, which appeared as an intensity and/or trajectory change in the output of the position detector 17-22 .

Exposures/Data capture. The nsEP exposures were generated by a custom pulsing system previously described in the literature 42 . This custom nsEP pulser can deliver six discrete pulse widths (600, 400, 200, 60, 30, and 10 ns) with applied voltages ranging from 0 to 1000 V. The nsEP electrodes were prepared similarly to previously published methods 17 ; in short, the electrodes were constructed using 127 μm tungsten wire rods (A-M Systems, Sequim, WA). A single rod of the selected wire was threaded through a piece of polyimide tubing (A-M Systems) only slightly larger than the wire (142 μm for the 127 μm wire). Once threaded, two of the insulated rods were threaded together through a borosilicate glass capillary (World Precision Instruments, Sarasota, FL) and fixed in place with superglue (Scotch Brand, 3M, St. Paul, MN). For each electrode, 12 mm of wire extended from the glass capillary, the last 6 mm of which were denuded of the polyimide coating. The gap between electrode rods was approximately 125 μm. Once the superglue was dry, the free ends of the electrode were connected to a type-K connector from OMEGA Engineering Inc. (Stamford, CT). Accurate delivery of each pulse was monitored on a Tektronix TDS-3054b e*Scope™ oscilloscope (Tektronix Inc., Beaverton, OR) using a 100× high-voltage probe. Each trace presented is an average of 180-200 traces collected over 3 min at a pulse rate of 1 Hz.
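A minimal sketch of the trace averaging described above, with synthetic single-shot records standing in for the oscilloscope data:

import numpy as np

def average_traces(traces):
    # traces: (n_shots, n_samples) array; returns the mean waveform.
    return np.mean(np.asarray(traces, dtype=float), axis=0)

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 4*np.pi, 1000))
shots = signal + rng.normal(0, 0.5, size=(200, 1000))  # 200 noisy repeats at 1 Hz
clean = average_traces(shots)                          # noise drops ~sqrt(200)-fold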
Calibration of PBDT. The probe beam deflection technique was calibrated using a calibrated Onda hydrophone (NC-1500, Sunnyvale, CA). A square-wave pulse with varying input voltages was applied to an ultrasound transducer emission source. Both the hydrophone and the ultrasound transducer were submersed in the same tank and buffer solution mentioned above. The transducer aperture was positioned such that the tangent vector to the aperture face pointed to the 45° surface of a right-triangle prism positioned at the bottom of the tank. The aperture of the calibrated hydrophone faced the 45° slope of the prism from the left. The probe beam was focused to a point directly between the 45° prism surface and the hydrophone aperture. The focal point was positioned as close to the hydrophone aperture as physically possible, in this case less than 1 mm. The ultrasonic wave emitted from the transducer traveled down to the prism, reflected off of the prism surface, and traveled parallel to the bottom of the tank towards the Onda hydrophone aperture, passing the probe beam along its trajectory. Signals were recorded from the hydrophone and the probe beam simultaneously on the Tektronix TDS-3054b e*Scope™ oscilloscope.
Schlieren Imaging. The Schlieren imaging technique used a 4.5 mW He-Ne laser (Thorlabs, Newton, NJ) as the light source and a Zyla 5.5 sCMOS high-speed camera (Andor, South Windsor, CT) with a 10 μs exposure time at 100 frames per second to record the resultant changes in the refractive index. Timing was accomplished using a Stanford Research Systems digital delay generator (Sunnyvale, CA). This digital delay generator was used to trigger the nsEP and the subsequent imaging by the camera at different iterations after the pulse. The same glass tank mentioned above, containing the same physiological buffer, was used as the liquid medium for the waves propagating from the nsEP electrodes. The pulse duration was 600 ns at 1000 V using the previously described electrodes, yielding an electric field of 13.1 kV/cm. Captured data were analyzed with ImageJ 43,44 .
Pump-probe Laser Imaging. We constructed a pump-probe laser imaging system to visualize the acoustic and thermal waves at the electrode surfaces, as well as the effects on the electrodes themselves. The system consisted of a 70 ns, 532 nm Nd:YAG pulsed laser as a strobe source synchronized to the nsEP pulse firing to provide "snapshots" of the waves at discrete points during and after the pulse. Timing was critical in order for the propagation of the waves to be observed as they originate from the electrode and travel across the solution. The visual shape of the energy propagation as well as the presence and properties of cavitation effects were determined by acquiring images of both the thermal and acoustic waves.
Determination of Nanoporation.
Chinese hamster ovary (CHO-K1) cells from ATCC (Manassas, VA) were grown according to the supplier's recommendation in F-12K medium supplemented with 10% fetal bovine serum (FBS) and 1% penicillin/streptomycin at 37 °C, 5% CO2, and 95% humidity. Approximately 6.5 × 10³ cells were plated on 35 mm glass-bottom dishes coated with poly-D-lysine (MatTek Corporation, Ashland, MA) and allowed to incubate overnight at 37 °C, 5% CO2, and 95% humidity. Twenty-four hours later, the cells were washed with DPBS and stained with YO-PRO®-1 (Life Technologies, Grand Island, NY), which was added to 3 mL of the physiological buffer described in the above sections. The buffer containing YO-PRO®-1 was added to the cells, which were then incubated at 26 °C for 20 minutes. Cells were exposed on the Zeiss LSM 710 as described in our previous publications 45 . YO-PRO®-1 fluorescence data were analyzed using Fiji (ImageJ).
Modeling of Electric Field. Finite Element Method (FEM) modeling of the electric field, based on the solution to Maxwell's equations, was performed using Comsol Multiphysics®. A 3D geometry was assembled with two cylindrical tungsten electrodes in a cube domain filled with saline solution with a conductivity of 1.35 S/m. Voltages were applied to one electrode, while the other electrode was held at 0 V (ground potential). The applied voltage was arbitrary, as only the relative difference in voltage between the electrodes affects the electric field strength. A heterogeneous mesh was applied to refine the solution in areas of interest. The boundaries of the cube domain were treated as a soft boundary condition, resulting in a nulled field at the cube boundary. A stationary solver was utilized, and the solution electric field was plotted using "slice" visualization. Electric field strengths were calculated for each experimentally applied voltage and electrode distance, to maintain an approximately equivalent electric field at the area of interest for each experimental configuration. | 2018-04-03T00:31:03.519Z | 2015-10-09T00:00:00.000 | {
"year": 2015,
"sha1": "3611912b04376d8d58bd92c1e3ce5491bc82bfa4",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/srep15063.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3611912b04376d8d58bd92c1e3ce5491bc82bfa4",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
265637761 | pes2o/s2orc | v3-fos-license | Oral Immunization with Attenuated Salmonella Choleraesuis Expressing the FedF Antigens Protects Mice against the Shiga-Toxin-Producing Escherichia coli Challenge
Edema disease (ED) is a severe and lethal infectious disease of swine caused by Shiga-toxin-producing Escherichia coli (STEC). An efficient, easy-to-administer, and safe vaccine against ED is urgently required to improve animal welfare and decrease antibiotic consumption. Recombinant attenuated Salmonella vaccines (RASVs) administered orally induce both humoral and mucosal immune responses to the immunizing antigen, and by delivering STEC antigens they hold significant potential for inducing protective immunity against ED. rSC0016 is an enhanced recombinant attenuated vaccine vector derived from Salmonella enterica serotype Choleraesuis; it combines a sopB mutation with a regulated delayed system to strike a well-balanced equilibrium between host safety and immunogenicity. We generated recombinant vaccine strains, namely rSC0016(pS-FedF) and rSC0016(pS-rStx2eA), and assessed their safety and immunogenicity in vivo. The findings demonstrated that mouse models immunized with rSC0016(pS-FedF) and rSC0016(pS-rStx2eA) generated substantial IgG antibody responses to FedF and rStx2eA, while also provoking robust mucosal and cellular immune responses against both antigens. The protective effect of rSC0016(pS-FedF) against Shiga-toxin-producing Escherichia coli surpassed that of rSC0016(pS-rStx2eA), with a survival rate of 83.3%. These findings underscore that FedF is better suited for vaccine delivery via recombinant attenuated Salmonella vaccines (RASVs). Overall, this study provides a promising candidate vaccine against STEC infection.
Introduction
ED is a condition of intestinal toxemia triggered by STEC, often seen in piglets aged 4-12 weeks [1]. Clinical presentations include eyelid swelling, paralysis, abnormal vocalizations, neurological signs, and a significantly increased mortality rate, leading to substantial economic losses in the swine farming industry [2]. In clinical practice, pathogenic strains of Escherichia coli have been undergoing evolutionary changes [3]. Moreover, the emergence of novel resistance genes, fueled by antibiotic misuse, has given rise to widespread multidrug resistance. This complex scenario poses significant impediments to the effective management and containment of Escherichia-coli-associated diseases [4]. Consequently, there persists an urgent demand for the rapid development of a novel vaccine technology platform [5,6]. Meanwhile, Salmonella Choleraesuis is a significant pathogen responsible for paratyphoid fever in piglets aged 2 to 4 months. This bacterium can induce widespread illness in recently weaned piglets, resulting in various clinical symptoms such as sepsis and localized inflammation in other tissues. Its impact on the breeding industry is substantial [7,8]. STEC and Salmonella, both Gram-negative facultative anaerobic bacilli, are prevalent commensal and pathogenic bacteria within the gastrointestinal tracts of warm-blooded animals [9].
The recombinant attenuated Salmonella vaccine (RASV) approach stands out as a compelling platform for delivering antigens [10]. It offers an economical and needle-free approach to transporting foreign antigens, leading to a substantial enhancement in vaccine immunogenicity and cost-effectiveness [11]. To date, RASVs have effectively delivered antigens from various sources, including bacteria, viruses, and parasites, eliciting immune responses [12-14]. Many research efforts have utilized attenuated Salmonella as a vehicle for delivering E. coli antigens, with the goal of preventing and managing pathogenic Escherichia coli; researchers have used attenuated Salmonella as a vector to express the pathogenic Escherichia coli antigens K88, K99, FedA, FedF, FasA, and F41 [15-22]. Salmonella possesses inherent attributes as a carrier, including notable adjuvant properties and the capability to produce various Toll-like receptor agonists such as flagellin, lipopolysaccharides, and lipoproteins. These components serve as potent adjuvants, enhancing the generated immune response and significantly boosting both the Th1-dominant and mucosal immune responses to exogenous antigens [23-25]. RASVs can use a type III secretion system (T3SS) to inject effector proteins into the host cell cytosol, where they are presented by MHC-I molecules, generating efficient CD8+ T-cell responses [12,26-28]. RASVs are recognized for their capacity to vigorously stimulate both the humoral and cellular components of the immune response in vaccinated individuals. They can grow within the host's body and have been extensively employed in the management of salmonellosis. Their capacity to access the host effectively through mass oral administration along the mucosal route results in comprehensive protection against Salmonella [10]. Research has demonstrated that RASVs carrying exogenous antigens can confer dual protection at the same time [29-31].
The selection of antigens has always been a research focus in the development of edema disease vaccines; among the candidates, the F18 fimbriae and Stx2e have attracted the most attention [32,33]. Clinical evidence has substantiated a frequent correlation between F18 fimbriae and piglet diarrhea. The F18 fimbriae fed gene cluster encompasses essential genes including fedA (encoding the major subunit protein), fedB (encoding the molecular chaperone), fedC (encoding the usher protein), fedE (encoding the minor subunit), and fedF (encoding the adhesin) [34]. The gene fedF, responsible for encoding the adhesion subunit, demonstrates noteworthy conservation [35]. Moreover, in vitro experiments have confirmed that mutant strains lacking the fedF gene show a decline in adhesion capability [36]. This has led researchers to predominantly target FedF for vaccine development. As for the edema-disease-associated Stx2e holotoxin, it consists of a toxic A subunit housing N-terminal glycosidase activity and five nontoxic B subunits responsible for cell receptor binding [37-39]. Notably, the A subunit is responsible for toxicity. In consideration of this, when formulating vaccines directed at the A-subunit protein, codon modification is imperative to effectively mitigate and eliminate toxicity [40].
In order to create a potent edema disease vaccine, we utilized a recombinant strain known as rSC0016 [41]. Vaccine candidate strains, namely rSC0016(pS-FedF) and rSC0016(pS-rStx2eA), were engineered to express the FedF and rStx2eA antigens. We assessed the immune responses elicited by rSC0016(pS-FedF) and rSC0016(pS-rStx2eA), along with their protective efficacy against STEC, using a mouse model. Our findings demonstrate that these constructs may present a novel avenue in the pursuit of preventing and controlling edema disease.
Animals and Ethics Statement
Female BALB/c mice were procured from the Comparative Medicine Center at Yangzhou University in Jiangsu, China. All animal experiments adhered rigorously to the animal welfare regulations outlined in the Animal Research Committee Guidelines of Jiangsu Province (License Number: SYXK(SU) 2017-0044) and received approval from the Ethics Committee for Animal Experimentation at Yangzhou University. Throughout the animal experiments, every effort was made to reduce suffering and optimize animal welfare.
Plasmids and Bacterial Strains
The strains and plasmids utilized in this study are presented in Table 1. The STEC strain STEC20, preserved in our laboratory, was used to amplify the gene fragments fedF and rstx2eA. Plasmid pYA3493 functions as an Asd+ vector, and plasmids pS-FedF and pS-rStx2eA, derived from pYA3493, carry the fedF or rstx2eA gene from STEC20, respectively. The strain rSC0016 was prepared in prior laboratory research [41].
Protein Expression, Protein Purification, and Antibody Preparation
The fedF and rstx2eA gene sequences were amplified via PCR and inserted into the expression vector pET28a. For rstx2eA amplification, overlap PCR was employed to substitute the codons at the 167th and 170th amino acid positions with codons encoding Gln and Lys; this modification aimed to reduce the protein's toxicity and enhance its immunogenicity [40]. The primers utilized in this study are detailed in Table 2. The vectors pET28a-FedF and pET28a-rStx2eA were transformed into E. coli BL21(DE3) competent cells to generate purified proteins. BL21 cells harboring pET28a-FedF and pET28a-rStx2eA were grown in LB medium supplemented with kanamycin and incubated at 37 °C until they reached an OD600 of 0.6, marking the logarithmic growth phase. Following this, the bacteria were induced for 4 h with IPTG. The proteins were then purified using Ni-NTA. The concentrations of the purified recombinant proteins were measured using the BCA method, and the proteins were identified through Western blot analysis with anti-His-tag monoclonal primary antibodies (Boster Biological Technology Co., Ltd., Wuhan, China). The proteins were diluted as necessary and mixed with an equal volume of Quick Antibody-Mouse3W adjuvant (Biodragon, Suzhou, China). Six-week-old female BALB/c mice received two intramuscular immunizations in the leg, spaced two weeks apart, with each mouse receiving 20 µg of the immunogen each time. One week after the final immunization, blood samples were collected from both immunized and nonimmunized mice. Following centrifugation at 3000× g for 15 min, the sera were separated, and their antibody titers were evaluated by ELISA.
Indirect ELISA
An ELISA analysis was conducted to determine antibody titers targeting FedF and rStx2eA as previously described [41]. Recombinant FedF and rStx2eA proteins or S. Choleraesuis OMPs (0.5 µg/mL) were immobilized onto microtiter plates in 0.1 M sodium carbonate buffer (pH 9.6) by overnight incubation at 4 °C. Following incubation with a blocking buffer, the wells were incubated for 2 h at 37 °C with polyclonal antibody serum appropriately diluted in PBST (ranging from 1:1000 to 1:128,000) or with serum and vaginal mucosal flushing solution from Salmonella-vaccine-immunized subjects (ranging from 1:100 to 1:12,800). Following that, 100 µL of goat anti-mouse IgG or goat anti-mouse IgA antibody (1:5000) was incubated at 37 °C for 90 min. The color reaction was initiated by the addition of 100 µL of TMB (Solarbio, Beijing, China) and allowed to progress for 15 min. The reaction was then terminated with 50 µL of 2 M H2SO4. Lastly, the optical density (OD) was measured at 450 nm using an automated microplate reader. The outer membrane proteins (OMPs) from the wild-type S. Choleraesuis strain C78-3 were obtained using the established procedure previously described [41]. In brief, bacterial pellets were collected by centrifugation and suspended in a 4 mL buffer comprising 1% Sarkosyl and 20 mM Tris-HCl (pH 8.6). The suspension was incubated on ice for 30 min. Subsequently, OMPs were isolated by centrifugation at 4 °C for 1 h at 132,000× g. The separated OMPs were resuspended in a 4 mL buffer containing 20 mM Tris-HCl (pH 8.6).
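One common endpoint-titer rule, sketched below on hypothetical OD450 readings; the paper does not state its exact cutoff criterion, so the cutoff here is an assumption.

def endpoint_titer(dilutions, od_values, cutoff):
    # Highest dilution factor whose OD450 still meets the cutoff.
    positive = [d for d, od in zip(dilutions, od_values) if od >= cutoff]
    return max(positive) if positive else None

dilutions = [1000, 2000, 4000, 8000, 16000, 32000, 64000, 128000]  # 1:x series
od450 = [1.90, 1.50, 1.10, 0.70, 0.40, 0.25, 0.15, 0.08]           # illustrative
print(endpoint_titer(dilutions, od450, cutoff=0.2))                 # 32000 -> 1:32,000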
Construction of Vaccine Strains and Detection of Protein Expression
The genes fedF and rstx2eA were obtained from pET28a-FedF and pET28a-rStx2eA and integrated into the EcoR I and Hind III restriction enzyme sites of the plasmid pYA3493 backbone, resulting in plasmids named pS-FedF and pS-rStx2eA, respectively. The primer sequences used in this study can be found in Table 2. The vector control plasmid pYA3493, along with the pS-FedF and pS-rStx2eA plasmids, was introduced into the asd-deficient S. Choleraesuis vector rSC0016, resulting in strains designated rSC0016(pYA3493), rSC0016(pS-FedF), and rSC0016(pS-rStx2eA). To confirm the successful expression of the FedF and rStx2eA proteins in these vaccine candidate strains, Western blots were conducted using the anti-FedF and anti-rStx2eA sera prepared earlier.
Bacterial Growth Curves
Cultures of rSC0016(pS-FedF), rSC0016(pS-rStx2eA), and rSC0016(pYA3493) in the mid-exponential growth phase were adjusted to an OD600 of 0.5, diluted 1:100 in fresh LB medium, and incubated at 37 °C. Bacterial growth curves were derived from OD600 measurements taken every two hours over an 8 h period.
Immunization in Mice
The bacterial stocks preserved at −80 °C were revived on LB plates enriched with 0.2% arabinose and mannose. Subsequently, individual colonies were transferred into LB liquid medium supplemented with 0.2% arabinose and mannose and incubated at 37 °C for 16-18 h. For inoculation, a 1:100 dilution was made in LB liquid medium enriched with 0.2% arabinose and mannose, and the culture was shaken on a constant-temperature shaker at 37 °C until the OD600 of the bacterial solution reached approximately 0.9. The bacterial solution was then centrifuged and washed with sterile PBS, and the bacterial pellet was collected. Afterwards, PBS was added to resuspend the pellet, yielding a thoroughly mixed immunization suspension [43].
Immunoprotective experiments were conducted with 6-week-old female BALB/c mice (n = 9). Mice were kept for 1 week after arrival to acclimate them to our animal facility before immunization and were deprived of food and water for 6 h before oral immunization. Two groups received PBS through oral administration, serving as the healthy control and subsequent challenge control groups. Furthermore, two groups were orally administered, by pipette, 20 µL ((1 ± 0.3) × 10⁹ CFU) of the rSC0016(pS-FedF) or rSC0016(pS-rStx2eA) bacterial suspension, while a separate group received the rSC0016(pYA3493) suspension as an empty-vector control. Food and water were returned to the mice 30 min after immunization. Twenty-one days after the initial immunization, each group received a booster immunization. On days 21 and 35 after the initial immunization, serum samples were collected to assess IgG levels. In addition, vaginal mucosal flushing solutions were obtained by rinsing the vagina with sterile PBS to measure secretory IgA levels. The collected blood was kept at 4 °C overnight, followed by centrifugation to extract serum. All collected serum and vaginal rinse samples were stored at −80 °C. Specific antibody titers in both the vaginal mucosal flushing solution and serum were detected using indirect ELISA. IFN-γ and IL-4 levels were detected using the Mouse IFN-γ and IL-4 ELISA kits (Beijing Solarbio Science & Technology Co., Ltd., Beijing, China), following the provided instructions.
Challenge in Mice
A single colony of STEC20 was incubated overnight in 5 mL of LB liquid medium. On the following day, the culture was inoculated at a 1:100 ratio into 50 mL of LB liquid medium and shaken at 37 °C. When the OD600 of the bacterial solution reached 0.8, the bacterial cells were harvested and resuspended in 300 µL of PBS. Successive 10-fold dilutions were performed, and the appropriate dilutions were selected for the challenge. Twenty-five female BALB/c mice were randomly divided into 5 groups of 5 mice each; the leg muscles of each mouse in four of the groups were injected with one of four target dilutions of the bacterial solution, while the remaining 5 mice comprised the blank control group and received the same volume of PBS. The LD50 was calculated from these injections using the Reed-Muench method (Table S1). Three weeks after the second immunization, STEC20 was injected into the leg muscles of the mice for challenge, at a dose equivalent to 3.5 times the LD50. Following the challenge, the mice were observed continuously, and the survival rate was calculated from their mortality status. The experimental design is shown in Figure S1.
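A sketch of the Reed-Muench LD50 calculation, using illustrative mortality counts rather than the data in Table S1:

import numpy as np

def reed_muench_ld50(doses, dead, total):
    # doses in descending order (strongest first); dead/total are per-dose counts.
    # Assumes an animal killed at a dose would also be killed at any stronger dose.
    dead = np.asarray(dead, float)
    alive = np.asarray(total, float) - dead
    cum_dead = np.cumsum(dead[::-1])[::-1]   # carry deaths up to stronger doses
    cum_alive = np.cumsum(alive)             # carry survivors down to weaker doses
    pct = 100.0 * cum_dead / (cum_dead + cum_alive)
    i = np.where(pct >= 50)[0][-1]           # weakest dose with >= 50% mortality
    pd = (pct[i] - 50.0) / (pct[i] - pct[i + 1])
    log_ld50 = np.log10(doses[i]) - pd * (np.log10(doses[i]) - np.log10(doses[i + 1]))
    return 10 ** log_ld50

# Hypothetical counts: 5 mice per dose group.
print(f"LD50 ~ {reed_muench_ld50([1e8, 1e7, 1e6, 1e5], [5, 4, 1, 0], [5, 5, 5, 5]):.2e} CFU")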
Statistical Analysis
Statistical analyses were conducted using GraphPad Prism 8. Data are presented as the mean ± SEM for all assays. Group comparisons were performed using the Mann-Whitney U test. A p-value of less than 0.05 was considered statistically significant for all tests.
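A sketch of the group comparison named above, run with SciPy on hypothetical antibody-titer values:

from scipy.stats import mannwhitneyu

vaccinated = [12800, 6400, 12800, 25600, 6400, 12800]  # illustrative titers
control = [100, 200, 100, 100, 200, 100]
stat, p = mannwhitneyu(vaccinated, control, alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}")  # p < 0.05 -> significant difference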
Expression of Recombinant FedF and rStx2eA Proteins and Production of Polyclonal Antibody Sera
Using the STEC20 strain as a template, we amplified an 840 bp fedF gene fragment and an 891 bp rstx2eA gene fragment by PCR (Figure 1A,B). The fedF and rstx2eA fragments were then inserted into the pET28a vector. Positive plasmids were identified by double-restriction endonuclease digestion (Figure 1C). Afterward, they were introduced into the expression strain E. coli BL21(DE3), resulting in the creation of the BL21(pET28a-rStx2eA) and BL21(pET28a-FedF) strains. Western blots showed that both the BL21(pET28a-rStx2eA) and BL21(pET28a-FedF) lanes exhibited specific bands of the expected size, while the empty control strain did not show any bands (Figure 1D). These results demonstrated the successful expression of the FedF and rStx2eA proteins by BL21(pET28a-FedF) and BL21(pET28a-rStx2eA), respectively. Subsequently, the purified FedF and rStx2eA proteins were used to generate polyclonal antibody sera in mice. Antibody titers were determined by indirect ELISA, revealing serum titers of 1:51,200 for the FedF antigen and 1:25,600 for the rStx2eA antigen.
Construction and Characterization of rSC0016(pS-FedF) and rSC0016(pS-rStx2eA)
The fedF and rstx2eA genes from STEC20 were inserted into pYA3493, resulting in the creation of pS-FedF and pS-rStx2eA (Figure 2A). The pS-FedF and pS-rStx2eA plasmids were verified by double-restriction enzyme digestion. The sizes of the fragments were as follows: 3113 bp for pYA3493, 840 bp for fedF, and 891 bp for rstx2eA (Figure 2B). The pS-FedF and pS-rStx2eA plasmids were introduced into the competent rSC0016 strain to create the vaccine strains rSC0016(pS-rStx2eA) and rSC0016(pS-FedF). Equal volumes of bacterial solutions of the three vaccine strains were subjected to Western blot analysis. The results revealed an approximately 36 kDa protein expressed in rSC0016(pS-FedF) and an approximately 38 kDa protein expressed in rSC0016(pS-rStx2eA); no distinct bands were observed in the lane corresponding to rSC0016(pYA3493) (Figure 2C). These observed protein band sizes aligned with the anticipated sizes of the FedF and rStx2eA proteins, indicating the accurate synthesis of the target antigens by both vaccine candidates. The growth curves indicated that, despite carrying heterologous antigens, there was no notable difference in growth rate among rSC0016(pS-FedF), rSC0016(pS-rStx2eA), and rSC0016(pYA3493) (Figure 2D).
S. Choleraesuis Vaccine Vector Strains rSC0016(pS-FedF) and rSC0016(pS-rStx2eA) Elicited Elevated Serum IgG and Mucosal IgA Responses to FedF and rStx2eA
Indirect ELISA measured IgG levels against the FedF and rStx2eA proteins in serum at 3 and 5 weeks after the initial immunization, along with IgA levels in vaginal rinses. IgG levels against the C78-3 outer membrane proteins (OMPs) were also assessed by indirect ELISA. In comparison to the rSC0016(pYA3493) and blank control groups, the results revealed that, following the first and second immunizations, mice immunized with rSC0016(pS-FedF) and rSC0016(pS-rStx2eA) demonstrated markedly elevated concentrations of serum IgG and mucosal IgA against FedF and rStx2eA. Moreover, all immunized groups exhibited higher antibody levels at 5 weeks post-initial immunization than at 3 weeks (Figure 3A,B). There was no significant difference in antibody levels between the rSC0016(pS-FedF) and rSC0016(pS-rStx2eA) immunized groups. The rSC0016(pYA3493) empty-vector group did not express heterologous antigens; consequently, no corresponding antibodies were produced. Notably, the antibody levels against the OMPs induced by rSC0016(pS-FedF) and rSC0016(pS-rStx2eA) were comparable to those of the empty-vector-immunized group after both immunizations (Figure 3C). This indicates that the two vaccine candidate strains not only induced immune responses to the heterologous antigens but also triggered immune responses against Salmonella. In contrast, the blank control group did not generate any antibodies.
S. Choleraesuis Vaccine Vector Strains rSC0016(pS-FedF) and rSC0016(pS-rStx2eA) Induced Higher Levels of IFN-γ and IL-4 in Mice
Seven days after the second immunization, three mice were randomly selected from each group, and their spleens were obtained and homogenized. Following multiple freeze-thaw cycles, the samples were centrifuged at 12,000× g for 1 min to acquire the supernatant, which was used for subsequent analysis. Cytokine levels in the supernatant were measured using IL-4 and IFN-γ ELISA kits. In comparison to the rSC0016(pYA3493) group, both vaccine formulations induced significantly higher levels of IL-4 and IFN-γ in the immunized groups. The spleen samples from rSC0016(pS-FedF) mice exhibited an average IFN-γ level of 600 pg/mL, five times higher than that of the rSC0016(pYA3493) group. Likewise, the rSC0016(pS-FedF) group exhibited IL-4 levels approximately three times greater than those of the empty-vector group. Furthermore, there was no significant disparity in IL-4 and IFN-γ levels between the spleens of mice immunized with rSC0016(pS-FedF) and those immunized with rSC0016(pS-rStx2eA). The blank control group did not show detectable levels of these cytokines (Figure 4).
rSC0016(pS-FedF) and rSC0016(pS-rStx2eA) Vaccine Strains Protect Mice against STEC Infection
Except for the mice in the PBS group, all mice were challenged by injecting STEC20 into their leg muscles at a dose equivalent to 3.5 times the LD50. Within 36 h of the challenge, mice in both the rSC0016(pS-FedF) immunized group and the blank control group exhibited gradual fatalities. The survival rates for the rSC0016(pS-FedF) and rSC0016(pS-rStx2eA) immunization groups were 83.3% and 33.3%, respectively (Figure 5). The surviving mice displayed mild clinical symptoms, including mental fatigue, ruffled fur, and eyelid congestion, and subsequently resumed normal activity and eating patterns. These results underscore that both the rSC0016(pS-FedF) and rSC0016(pS-rStx2eA) immunization groups conferred a degree of protection against Escherichia coli infection in mice. Notably, the protective efficacy of rSC0016(pS-FedF) was significantly superior to that of rSC0016(pS-rStx2eA).
Discussion
STEC is the causative pathogen of edema disease (ED), a highly lethal infectious disease of piglets that has caused substantial economic losses to the global breeding industry. Given the rise of multidrug-resistant STEC strains in afflicted pigs, vaccine immunization remains a powerful measure for preventing and controlling edema disease [5,44]. Inactivated vaccines are currently widely used in the market, but they require multiple immunizations and large doses, which increases the cost of use. Consequently, there remains a need to develop vaccines that are more efficient and user-friendly while ensuring safety [45].
Our recent research has devised mechanisms for controlled delayed attenuation and controlled delayed antigen synthesis [41,46,47]. During the initial phases of oral immunization, the controlled delayed attenuated Salmonella vaccine strain colonizes deep lymphoid tissues as efficiently as virulent wild-type strains. Subsequently, owing to the lack of mannose and arabinose in the host, the rSC0016 strain displays attenuated characteristics and does not elicit disease symptoms. Research indicates that the controlled delayed antigen synthesis system can govern the production of foreign antigens, empowering vaccines to induce robust levels of antigen-specific antibodies upon colonization of lymphatic tissue [48]. FedF is regarded as a promising protective antigen for vaccine development because it is relatively conserved and associated with bacterial adhesion [35,36]. Active immunity to the Stx2e toxin induces strong immune responses in piglets and sows [49].
Vaccine candidate strains rSC0016(pS-FedF) and rSC0016(pS-rStx2eA), which express the two STEC virulence-related factors FedF and rStx2eA (the latter carrying dual mutations), were prepared. To assess the potential of rSC0016(pS-FedF) and rSC0016(pS-rStx2eA) as vaccine candidates against STEC, we analyzed the characteristics of these strains. The growth patterns of rSC0016(pS-FedF) and rSC0016(pS-rStx2eA) were similar to that of the empty-vector control strain rSC0016(pYA3493). Furthermore, production of the exogenous protective antigens FedF and rStx2eA was observed in rSC0016(pS-FedF) and rSC0016(pS-rStx2eA), respectively. Following immunization of mice, the vaccine strains elicited a robust, targeted immune response, resulting in elevated titers of IgG and IgA. Th1 cells are pivotal in orchestrating cellular immune responses against intracellular parasites [50,51], mainly secreting IFN-γ. IL-4 is secreted by Th2-type cells; its primary function is to stimulate B-cell proliferation, and it plays a significant role in both humoral immune responses and inflammation [52]. In this study, the average IL-4 and IFN-γ levels in the spleens of mice immunized with rSC0016(pS-FedF) and rSC0016(pS-rStx2eA) were greater than those in the control group. Furthermore, the rSC0016(pS-FedF) and rSC0016(pS-rStx2eA) immunization groups provoked comparable levels of IL-4 and IFN-γ, suggesting the establishment of a balanced Th1/Th2 immune response.
As for the protective effect, our results show that the rSC0016(pS-FedF) group achieved an 83.3% post-challenge survival rate. This outcome aligns closely with the survival rate of approximately 80% reported by Ren et al. in their study of a vaccine targeting the pilus adhesion factor FedF [53]. These results further underscore the potential of the pilus subunit FedF as a prime target for edema disease vaccine development. In contrast to the impressive protective efficacy reported for the modified Stx2e whole-toxin subunit vaccine [49], delivery of rStx2eA by the Salmonella vector yielded suboptimal protection in this investigation. This discrepancy could be attributed to an apparent lack of efficacious neutralizing antibodies following immunization with rSC0016(pS-rStx2eA) in mice [54]. Additionally, a limitation of this study is the absence of a comparative analysis between the immune efficacies of the vaccine strains and commercially available edema disease vaccines.
Overall, the rSC0016(pS-FedF) and rSC0016(pS-rStx2eA) vaccines induce cellular, mucosal, and humoral immune responses in mice. rSC0016(pS-FedF) combines the benefits of both rSC0016 and FedF, striking a favorable balance between host safety and immunogenicity, and thereby protects mice against STEC. These findings strongly indicate that the rSC0016(pS-FedF) strain holds significant promise as a candidate for vaccines targeting Shiga-toxin-producing Escherichia coli.
Figure 3. Detection of antibody titers in the immunized mice. (A) FedF-specific and Stx2eA-specific IgG antibody titers in serum determined by ELISA. (B) FedF-specific and Stx2eA-specific IgA antibody titers in vaginal rinses determined by ELISA. (C) OMPs-specific IgG antibody titers in serum determined by ELISA. The results are expressed as the mean ± SD. Degrees of significance are indicated as follows: * p < 0.05; ** p < 0.01; ns p ≥ 0.05.
Figure 4. Levels of secreted IL-4 and IFN-γ were assayed by ELISA. Splenic lymphocytes were used to evaluate cytokine secretion in vitro following restimulation with purified FedF and Stx2eA protein, respectively. The results are expressed as the mean ± SD. Degrees of significance are indicated as follows: *** p < 0.001.
Figure 5. Protective efficacy of the developed vaccines. Survival rates of mice after the Shiga-toxin-producing Escherichia coli challenge were determined.
Figure S1. Schematic diagram of the immunization and challenge experiment.
Figure S2. Original images of Figure 1A-C. Figure S3. Original images of Figure 1D. Figure S4. Original images of Figure 2B. Figure S5. Original images of Figure 2C.
Author Contributions: H.S., Y.F., G.Z., Y.L., Q.L. and S.W. conceived the study. Y.F. and G.Z. collected study samples. Y.F. and G.Z. performed experiments. Y.F. and G.Z. performed data analysis. G.Z. wrote the manuscript. All authors have read and agreed to the published version of the manuscript.
Funding: This study was supported by the National Natural Science Foundation of China (grant numbers 32172802, 31672516, 32002301, 31172300, 30670079); the Jiangsu Province Science and Technology Program Special Fund Project (BZ2022042); the China Postdoctoral Science Foundation (grant number 2019M661953); and the Postgraduate Research & Practice Innovation Program of Jiangsu Province.
Table 1. Bacterial strains and plasmids used in this study.
Table 2. Primers used in this study. | 2023-12-05T16:33:16.141Z | 2023-11-30T00:00:00.000 | {
"year": 2023,
"sha1": "47b38ae1fb26a5273d32ddbb2d01f4212d17f62f",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2218-273X/13/12/1726/pdf?version=1701335930",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f0ba0d9c94caebf50c2d52ca26c067b960dcc616",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
247608508 | pes2o/s2orc | v3-fos-license | Can Lung Ultrasound Be the Ideal Monitoring Tool to Predict the Clinical Outcome of Mechanically Ventilated COVID-19 Patients? An Observational Study
Background: During the COVID-19 pandemic, lung ultrasound (LUS) has been widely used since it can be performed at the patient's bedside, does not produce ionizing radiation, and is sufficiently accurate. The LUS score allows for quantifying lung involvement; however, its clinical prognostic role is still controversial. Methods: A retrospective observational study on 103 COVID-19 patients with respiratory failure who were assessed with an LUS score at intensive care unit (ICU) admission and discharge in a tertiary university COVID-19 referral center. Results: The deceased patients had a higher LUS score at admission than the survivors (25.7 vs. 23.5; p-value = 0.02; cut-off value of 25; Odds Ratio (OR) 1.1; Interquartile Range (IQR) 1.0−1.2). The predictive regression model shows that the value of LUSt0 (OR 1.1; IQR 1.0–1.3), age (OR 1.1; IQR 1.0−1.2), sex (OR 0.7; IQR 0.2−3.6), and days in spontaneous breathing (OR 0.2; IQR 0.1–0.5) predict the risk of death for COVID-19 patients (Area under the Curve (AUC) 0.92). Furthermore, the surviving patients showed a significant reduction in LUS score between admission and discharge (mean difference of 1.75, p-value = 0.03). Conclusion: Upon entry into the ICU, the LUS score may play a prognostic role in COVID-19 patients with ARDS. Furthermore, employing the LUS score as a monitoring tool allows for identifying the patients with a higher probability of survival.
Introduction
The COVID-19 pandemic increased the workload of intensive care units (ICU), with a mean admittance of 16% of SARS-CoV-2-positive hospitalized patients. The principal diagnosis at admission was respiratory insufficiency [1]. The pragmatic reference standard for diagnosing the infection is the nasopharyngeal molecular swab test, while the chest computed tomography (CT) scan is the gold standard for diagnosing COVID-19 pneumonia [2]. Lung ultrasound (LUS) is a well-established diagnostic tool in acute respiratory failure and acute respiratory distress syndrome (ARDS); it is well suited for COVID-19 clinical management, giving results similar or superior to chest CT and superior to traditional chest X-rays [3,4].
LUS is a useful tool: it can be performed at the bedside, is easy-to-learn, is easy-to-use, is radiation-free, gives relevant clinical information, and permits saving precious time [5].
The LUS score allows for examining and rating the pulmonary aeration. It is calculated by dividing the thorax into 12 regions and assigning a number from 0 (normal lung) to 3 (lung consolidation) to each region; the sum of these numbers gives a value that ranges from 0 (completely aerated lung) to 36 (completely consolidated lung) [6].
During the pandemic, specific sonographic patterns were described [7]. However, the application of LUS in COVID-19 patients is still controversial. Persona et al. suggested that the LUS score is not as reliable as in the non-COVID-19 patients [8], whereas Stecher et al. stated that the LUS in ICU COVID-19 patients predicts the clinical course but not the outcome [9].
However, in a study conducted in Israel in a medical ward and intensive care setting, the baseline LUS score predicted clinical deterioration and death [10].
This study aimed to evaluate whether the LUS score at the ICU admittance can predict the clinical outcome in COVID-19 patients. The secondary aim was to evaluate a correlation between LUS score trends and clinical course in the survived patients.
Study Protocol
This study was a retrospective observational study of prospectively and systematically collected data about LUS examination in patients with SARS-CoV-2 admitted to the Department of Anesthesia and Intensive Care of the University Hospital of Udine, Italy. The Institutional Review Board of the University of Udine approved the study with the number ID # 068/2021, 8 September 2021. The patient's consent was obtained through the general consent (GECO) system, and the European General Data Protection Regulation 2016/679 (GDPR) was respected.
Study Population
Inclusion criteria were: patients admitted to the COVID-19 ICU with a positive nasopharyngeal molecular swab test, patients with > 18 years of age.
Exclusion criteria were: history of lung surgery (lung resections or pneumonectomy), severe pulmonary fibrosis and lung cancer or metastatic localization, difficult ultrasonographic window.
Lung Ultrasound Examination
Experienced intensive care physicians performed LUS with an Affiniti 70 G ultrasound machine (Philips, Amsterdam, The Netherlands) with a convex probe (2−5 MHz).
As part of normal clinical practice, we calculated the LUS score on the days of admittance to and discharge from the ICU. For the patients who died, only the admission LUS score was reported.
Before starting the enrollment, we organized a discussion between the operators about the LUS approach. To test the LUS inter-operator variability in the interpretation of LUS signs and patterns, online training was set up with a total of 25 clips, including the whole range of significant LUS COVID-19 signs.
We calculated the LUS score by dividing the thorax into 12 regions, 6 for each hemithorax, through the anterior and posterior axillary lines and a transverse line starting from the xiphoid process.
That results in three superior areas (anterior, lateral, and posterior) and three inferior areas (anterior, lateral, and posterior) for each hemithorax, permitting a global evaluation.
The international evidence-based recommendations for point of care lung ultrasound [11] describe the possible ultrasound patterns and profiles that may be found, attributing a score from 0 to 3: 0 points for a normal or A-pattern (A-lines or <2 B lines and lung sliding present), 1 point for B1-pattern (well-spaced ≥ 3 B lines and lung sliding present), 2 points for B2-pattern (coalescent B-lines, lung sliding present, and light beam), 3 points for C-pattern (lung consolidation and multiple subpleural consolidations).
Adding the different scores, we obtained the LUS score ranging from 0 (normal lung) to 36 (completely consolidated lung).
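To make the scoring rule concrete, a minimal Python sketch follows; the function and the 0-3 region encoding mirror the description above, but the names are ours, not the study's software.

# Minimal sketch of the 12-region LUS score. Pattern codes follow the text:
# 0 = A-pattern, 1 = B1-pattern, 2 = B2-pattern, 3 = C-pattern.
def lus_score(region_patterns):
    """Sum the 0-3 pattern scores of the 12 thoracic regions."""
    if len(region_patterns) != 12:
        raise ValueError("Expected one pattern score per region (12 total)")
    if any(p not in (0, 1, 2, 3) for p in region_patterns):
        raise ValueError("Pattern scores must be 0, 1, 2, or 3")
    return sum(region_patterns)  # 0 (fully aerated) to 36 (consolidated)

# Example: mixed B-patterns with two consolidated posterior-inferior areas
print(lus_score([1, 1, 2, 1, 2, 3, 1, 2, 2, 1, 2, 3]))  # -> 21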
Recorded Data
Anthropometric parameters such as age, gender, weight, height, and body mass index (BMI) were recorded, together with medical history and clinical condition at admission.
We recorded the need for oxygen therapy, non-invasive ventilation (NIV), and intubation with mechanical ventilation, as well as the duration of each therapy.
Study Outcome
The main aim was to verify if the LUS score at admission in the ICU could predict the clinical outcome (survival or death) in critically ill COVID-19 patients. The secondary aim was to evaluate the trend of LUS score at admission and at discharge from the ICU of the survived patients to verify if there was a correlation between the LUS score and the disease course.
Statistical Analysis
The distribution between the two groups of patients (intra-ICU survivors and deceased) was compared by Student's t-test (variables are expressed as mean and standard deviation) after verifying the normality of the distribution by means of the Shapiro-Wilk test. In contrast, Fisher's exact test was used for variables expressed as absolute frequency and relative percentage, and a p-value < 0.05 was considered statistically significant. We also verified the correlation between the measured variables and the outcome by univariate and multivariate logistic regression.
The paired Student's t-test was used to compare the LUS at admission (t0) and the discharge from the ICU (t1).
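A hedged sketch of this analysis pipeline, using SciPy and statsmodels on placeholder data (the study's per-patient values are not reproduced in this text), might look as follows.

import numpy as np
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(0)
lus_survivors = rng.normal(23.5, 4.0, 69)   # illustrative values only
lus_deceased = rng.normal(25.7, 4.0, 34)

# Normality check per group, then unpaired Student's t-test between groups
_, p_norm = stats.shapiro(lus_survivors)
_, p_ttest = stats.ttest_ind(lus_survivors, lus_deceased)

# Fisher's exact test for a categorical variable (toy 2x2 contingency table)
_, p_fisher = stats.fisher_exact([[40, 29], [25, 9]])

# Univariate logistic regression of outcome (1 = deceased) on LUS at admission
x = np.concatenate([lus_survivors, lus_deceased])
y = np.concatenate([np.zeros(69), np.ones(34)])
fit = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
print("OR per LUS point:", np.exp(fit.params[1]))

# Paired t-test for admission vs. discharge LUS among survivors
lus_discharge = lus_survivors - rng.normal(1.75, 3.0, 69)
_, p_paired = stats.ttest_rel(lus_survivors, lus_discharge)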
Results
From 1 December 2020 to 30 April 2021, 104 patients were enrolled in the study. One patient was excluded from the analysis due to incomplete clinical data, and therefore the final sample consists of 103 patients. Of these, 34 (33%) died during hospitalization in the ICU (Figure 1).
By dividing the patients into survivors and the deceased (Table 1), the variable that differed significantly between the two clusters was the LUS score at admission (23.5 vs. 25.7 in survivors and the deceased, respectively; p-value = 0.02). The ROC curve shows an AUC of 61.3% at univariate regression for the LUS score at the admittance (LUSt0). The best cut-off value, according to Youden's J index method, is 25 (OR 1.1; IQR 1.0−1.2; p-value = 0.05; sensitivity 67.6%; specificity 56.5%).
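For illustration, a minimal sketch of how such a cut-off can be derived from the ROC curve with Youden's J index, using scikit-learn on toy data (the values below are not the study's):

import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

y = np.array([0, 0, 1, 0, 1, 1, 0, 1, 0, 0])          # 1 = deceased
lus_t0 = np.array([20, 23, 27, 22, 24, 25, 26, 30, 19, 21])

auc = roc_auc_score(y, lus_t0)
fpr, tpr, thresholds = roc_curve(y, lus_t0)
j = tpr - fpr                           # Youden's J = sensitivity + specificity - 1
cutoff = thresholds[np.argmax(j)]       # threshold maximizing J
print(f"AUC = {auc:.2f}, best cut-off = {cutoff}")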
The stepwise generalized linear regression model shows that the values of LUSt0 (OR 1.1; IQR 1.0−1.3), age (OR 1.1; IQR 1.0−1.2), sex (OR 0.7; IQR 0.2−3.6), and days in spontaneous breathing (OR 0.2; IQR 0.1−0.5) predict the risk of death (AUC 0.92). Comparing the LUS scores at the two examinations among the survivors, the trend shows a statistically significant reduction in LUS (mean of differences equal to 1.746, p = 0.033) (Figure 4). Further analyzing the LUS data among the survivors, we observed that 29 out of 54 patients presented a ∆LUS ([LUSt1 − LUSt0]/LUSt0) between −0.10 and −0.39; seven patients showed a ∆LUS below −0.40; 15 patients showed a ∆LUS between −0.09 and 0.09; and, finally, three patients showed a ∆LUS greater than 0.10.
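A small sketch of this ∆LUS stratification follows; the strata labels and exact boundary handling are our own reading of the text, not the study's code.

# Negative ∆LUS indicates reduced lung involvement between admission and discharge.
def delta_lus(lus_t0, lus_t1):
    return (lus_t1 - lus_t0) / lus_t0

def stratum(d):
    if d <= -0.40:
        return "marked responder"
    if d <= -0.10:
        return "moderate responder"
    if d < 0.10:
        return "stable"
    return "sonographic worsening"

print(stratum(delta_lus(24, 12)))  # -0.5 -> "marked responder"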
Discussion
The main finding of our study was that COVID-19 patients with a lower LUS score at ICU admission had a higher probability of survival than patients admitted with a higher score.
Analyzing the LUS score trends among the survivors, it is possible to identify at least four subpopulations: (a) those whose clinical condition improved independently of the LUS evolution; (b) those who presented a moderate improvement in ultrasound imaging; (c) those who responded very clearly, with a significant reduction in pulmonary involvement; and (d) those who, while improving clinically, did not show an evident improvement on ultrasound and presented an apparent worsening in LUS. The first two subpopulations are the most represented ones (Figure 1).
Whether this result is due to different disease clusters or to the early onset of therapy cannot be established from our study design and deserves further targeted studies.
Compared to the literature, the role of the LUS score, and specifically its prognostic role in COVID-19 pneumonia, has been investigated in many studies without univocal results. During the COVID-19 pandemic, the use of LUS spread considerably. A recent survey of about 700 Italian intensivists showed that the proportion of physicians using LUS daily rose from about 10% in the pre-COVID-19 era to 28% during the COVID-19 period, while the percentage of intensivists using the LUS score daily grew from less than 2% to 9%. The majority of practitioners stated that LUS influenced their clinical decisions (68%) and patient monitoring (73%) [12].
However, the role of the LUS score is still controversial, and conflicting results are present in the literature. In a study that enrolled 28 patients, Persona et al. did not find any significant difference in the LUS score at admission and discharge between survivors and non-survivors, suggesting that the LUS score is not as reliable as in non-COVID-19 ARDS patients [8]. On the contrary, Lichter et al. found that a higher LUS score predicts mortality and the need for mechanical ventilation [10]. Dargent et al., in a small sample of ten patients with COVID-19 ARDS, showed that the course of the disease could be described by the modifications of the LUS score [13]. Li et al. showed similar results in 280 patients: the LUS score is a useful tool for monitoring patients with COVID-19 ARDS [14].
In a study from a Brazilian group, de Alencar et al. found, in 180 patients, a correlation between the LUS score at admission and death, mechanical ventilation, and intubation. That study considered a broader population spectrum admitted to the emergency department, with only 74 ICU patients [15]. In a recent review, a higher baseline LUS score was related to a higher risk of unfavorable outcomes (ICU admission, mechanical ventilation, and death) [16].
Our results agree with those of Lichter, Dargent, Li, and de Alencar, and show that the LUS score can be used as a prognostic tool in COVID-19 patients admitted to the ICU. Notably, a LUSt0 of 25 is the cut-off value that predicts a higher risk of death. Age and sex are also related to a higher mortality risk. Interestingly, more days in spontaneous breathing decrease the mortality risk.
Our result has a double clinical significance: first, it endorses the use of lung ultrasound as a tool to screen COVID-19 patients with respiratory failure, identifying those most at risk who therefore require immediate intensive care. Second, LUS monitoring of the clinical course proves to be a useful tool for establishing the effectiveness of ongoing therapies.
However, in describing these results, we are aware of the limits of LUS. Although several authors have described typical patterns of COVID-19 pneumonia [17], to date it has not been shown that the ultrasound picture of COVID-19 patients differs substantially from similar forms of non-COVID-19 interstitial disease [18]. The specificity of LUS in a general patient population is not high [19]. However, the patients entering the ICU are selected patients evaluated by different physicians and imaging techniques, a path that could increase the specificity of LUS [20]. On the other hand, while enjoying higher accuracy, even the chest CT scan shows poor specificity for SARS-CoV-2 interstitial pneumonia [21].
Furthermore, although beyond the scope of this study, we should recognize the role that LUS plays beyond the diagnosis or monitoring of COVID-19 patients. LUS is also useful as a guiding tool during invasive procedures: it is particularly suitable for bedside procedures, such as the drainage of the pleural effusions that complicate the course of COVID-19 patients in a non-negligible percentage of cases [2,22]. This is particularly relevant considering the logistical difficulties of moving a highly infectious patient to the Radiology Department [23,24].
This makes the use of LUS particularly attractive in COVID-19 patients: ultimately, LUS is also an aid for procedures or diagnosis, and allows for prognostic stratification as we suggested with our study.
Limitations
Our study is retrospective and therefore subject to potential selection bias. Furthermore, only a small percentage of the patients treated and assessed by ultrasound were registered in the study. While we can confirm the absence of explicit enrollment bias, we cannot exclude random errors related to the enrollment modality. The study population is not extensive, and larger studies are needed to confirm these results.
Conclusions
Upon entry into the ICU, the lung ultrasound score may play a prognostic role in COVID-19 patients with ARDS. Furthermore, monitoring with the LUS score allows for identifying the patients with a higher probability of survival. A multicenter study is urgently needed to confirm our data. | 2022-03-23T15:28:50.536Z | 2022-03-01T00:00:00.000 | {
"year": 2022,
"sha1": "050b7a8837171a1a845079f9258a3ef22b9b7add",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2227-9032/10/3/568/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f85ce36d549134b735fd3e3f7e027fb96c4fc1fd",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
233864598 | pes2o/s2orc | v3-fos-license | Reliability Testing for Natural Language Processing Systems
Questions of fairness, robustness, and transparency are paramount to address before deploying NLP systems. Central to these concerns is the question of reliability: Can NLP systems reliably treat different demographics fairly and function correctly in diverse and noisy environments? To address this, we argue for the need for reliability testing and contextualize it among existing work on improving accountability. We show how adversarial attacks can be reframed for this goal, via a framework for developing reliability tests. We argue that reliability testing — with an emphasis on interdisciplinary collaboration — will enable rigorous and targeted testing, and aid in the enactment and enforcement of industry standards.
Introduction
Rigorous testing is critical to ensuring a program works as intended (functionality) when used under real-world conditions (reliability). Hence, it is troubling that while natural language technologies are becoming increasingly pervasive in our everyday lives, there is little assurance that these NLP systems will not fail catastrophically or amplify discrimination against minority demographics when exposed to input from outside the training distribution. Recent examples include GPT-3 (Brown et al., 2020) agreeing with suggested suicide (Rousseau et al., 2020), the mistranslation of an innocuous social media post resulting in a minority's arrest (Hern, 2017), and biased grading algorithms that can negatively impact a minority student's future (Feathers, 2019). Additionally, a lack of rigorous testing, coupled with machine learning's (ML) implicit assumption of identical training and testing distributions, may inadvertently result in systems that discriminate against minorities, who are often underrepresented in the training data. This can take the form of misrepresentation of, or poorer performance for, people with disabilities or specific gender, ethnic, age, or linguistic groups (Hovy and Spruit, 2016; Crawford, 2017). Amongst claims of NLP systems achieving human parity in challenging tasks such as question answering (Yu et al., 2018), machine translation (Hassan et al., 2018), and commonsense inference (Devlin et al., 2019), research has demonstrated these systems' fragility to natural and adversarial noise (Goodfellow et al., 2015; Belinkov and Bisk, 2018) and out-of-distribution data (Fisch et al., 2019). It is also still common practice to equate "testing" with "measuring held-out accuracy", even as datasets are revealed to be harmfully biased (Wagner et al., 2015; Geva et al., 2019; Sap et al., 2019).
Figure 1: How DOCTOR can integrate with existing system development workflows. Tests (left) and system development (right) take place in parallel, separate teams. Reliability tests can thus be constructed independent of the system development team, either by an internal "red team" or by independent auditors.
Many potential harms can be mitigated by detecting them early and preventing the offending model from being put into production. Hence, in addition to being mindful of the biases in the NLP pipeline (Bender and Friedman, 2018; Mitchell et al., 2019; Waseem et al., 2021) and holding creators accountable via audits (Raji et al., 2020; Brundage et al., 2020), we argue for the need to evaluate an NLP system's reliability in diverse operating conditions. Initial research on evaluating out-of-distribution generalization involved manually-designed challenge sets (Jia and Liang, 2017; Nie et al., 2020; Gardner et al., 2020), counterfactuals (Kaushik et al., 2019; Khashabi et al., 2020), biased sampling (Søgaard et al., 2021), or toolkits for testing if a system has specific capabilities (Ribeiro et al., 2020) or robustness to distribution shifts (Goel et al., 2021). However, most of these approaches inevitably overestimate a given system's worst-case performance since they do not mimic the NLP system's adversarial distribution, i.e., its distribution of adversarial cases or failure profile.
A promising technique for evaluating worst-case performance is the adversarial attack. However, although some adversarial attacks explicitly focus on specific linguistic levels of analysis (Belinkov and Bisk, 2018;Tan et al., 2020;Eger and Benz, 2020), many often simply rely on word embeddings or language models for perturbation proposal (see §4). While the latter may be useful to evaluate a system's robustness to malicious actors, they are less useful for dimension-specific testing (e.g., reliability when encountering grammatical variation). This is because they often perturb the input across multiple dimensions at once, which may make the resulting adversaries unnatural.
Hence, in this paper targeted at NLP researchers, practitioners, and policymakers, we make the case for reliability testing and reformulate adversarial attacks as dimension-specific, worst-case tests that can be used to approximate real-world variation. We contribute a reliability testing framework -DOCTOR -that translates safety and fairness concerns around NLP systems into quantitative tests. We demonstrate how testing dimensions for DOC-TOR can be drafted for a specific use case. Finally, we discuss the policy implications, challenges, and directions for future research on reliability testing.
Terminology Definitions
Let's define key terms to be used in our discussion.
NLP system. The entire text processing pipeline built to solve a specific task, taking raw text as input and producing predictions in the form of labels (classification) or text (generation). We exclude raw language models from the discussion since it is unclear how performance, and hence worst-case performance, should be evaluated. We do include NLP systems that use language models internally (e.g., BERT-based classifiers (Devlin et al., 2019)).
Reliability. Defined by IEEE (2017) as the "degree to which a system, product or component performs specified functions under specified conditions for a specified period of time". We prefer this term over robustness to challenge the NLP community's common framing of inputs from outside the training distribution as "noisy". The notion of reliability requires us to explicitly consider the specific, diverse environments (i.e., communities) a system will operate in. This is crucial to reducing NLP's negative impact on the underrepresented.
Dimension. An axis along which variation can occur in the real world, similar to Plank (2016)'s variety space. A taxonomy of possible dimensions can be found in Table 1 (Appendix).
Adversarial attack. A method of perturbing the input to degrade a target model's accuracy (Goodfellow et al., 2015). In computer vision, this is achieved by adding adversarial noise to the image, optimized to be maximally damaging to the model. §4 describes how this is done in the NLP context.
Stakeholder. A person who is (in)directly impacted by the NLP system's predictions.
Actor. Someone who has influence over a) the design of an NLP system and its reliability testing regime; b) whether the system is deployed; and c) who it can interact with. Within the context of our discussion, actors are likely to be regulators, experts, and stakeholder advocates.
Expert. An actor who has specialized knowledge, such as ethicists, linguists, domain experts, social scientists, or NLP practitioners.
The Case for Reliability Testing in NLP
The accelerating interest in building NLP-based products that impact many lives has led to urgent questions of fairness, safety, and accountability (Hovy and Spruit, 2016;Bender et al., 2021), prompting research into algorithmic bias (Bolukbasi et al., 2016;Blodgett et al., 2020), explainability (Ribeiro et al., 2016;Danilevsky et al., 2020), robustness (Jia and Liang, 2017), etc. Research is also emerging on best practices for productizing ML: from detailed dataset documentation (Bender and Friedman, 2018;Gebru et al., 2018), model documentation for highlighting important but often unreported details such as its training data, intended use, and caveats (Mitchell et al., 2019), and documentation best practices (Partnership on AI, 2019), to institutional mechanisms such as auditing (Raji et al., 2020) to enforce accountability and red-teaming (Brundage et al., 2020) to address developer blind spots, not to mention studies on the impact of organizational structures on responsible AI initiatives (Rakova et al., 2020).
Calls for increased accountability and transparency are gaining traction among governments (116th U.S. Congress, NIST, 2019;European Commission, 2020;Smith, 2020;California State Legislature, 2020;FDA, 2021) and customers increasingly cite ethical concerns as a reason for not engaging AI service providers (EIU, 2020).
While there has been significant discussion around best practices for dataset and model creation, work to ensure NLP systems are evaluated in a manner representative of their operational conditions has only just begun. Initial work in constructing representative tests focuses on enabling development teams to easily evaluate their models' linguistic capabilities (Ribeiro et al., 2020) and accuracy on subpopulations and distribution shifts (Goel et al., 2021). However, there is a clear need for a paradigm that allows experts and stakeholder advocates to collaboratively develop tests that are representative of the practical and ethical concerns of an NLP system's target demographic. We argue that reliability testing, by reframing the concept of adversarial attacks, has the potential to fill this gap.
What is reliability testing?
Despite the recent advances in neural architectures resulting in breakthrough performance on benchmark datasets, research into adversarial examples and out-of-distribution generalization has found ML systems to be particularly vulnerable to slight perturbations in the input (Goodfellow et al., 2015) and natural distribution shifts (Fisch et al., 2019). While these perturbations are often chosen to maximize model failure, they highlight serious reliability issues for putting ML models into production since they show that these models could fail catastrophically in naturally noisy, diverse, real-world environments (Saria and Subbaswamy, 2019). Additionally, bias can seep into the system at multiple stages of the NLP lifecycle (Shah et al., 2020), resulting in discrimination against minority groups (O'Neil, 2016). The good news, however, is that rigorous testing can help to highlight potential issues before the systems are deployed.
The need for rigorous testing in NLP is reflected in ACL 2020 giving the Best Paper Award to CheckList (Ribeiro et al., 2020), which applied the idea of behavior testing from software engineering to testing NLP systems. While invaluable as a first step towards the development of comprehensive testing methodology, the current implementation of CheckList may still overestimate the reliability of NLP systems since the individual test examples are largely manually constructed. Importantly, with the complexity and scale of current models, humans cannot accurately determine a model's adversarial distribution (i.e., the examples that cause model failure). Consequently, the test examples they construct are unlikely to be the worst-case examples for the model. Automated assistance is needed. Therefore, we propose to perform reliability testing, which can be thought of as one component of behavior testing. We categorize reliability tests as average-case tests or worst-case tests. As their names suggest, average-case and worst-case tests estimate the expected and lower-bound performance, respectively, when the NLP system is exposed to the phenomena modeled by the tests. Average-case tests are conceptually similar to contemporaneous work on counterfactual generation (e.g., PolyJuice), while worst-case tests are most similar to adversarial attacks (§4).
Our approach parallels boundary value testing in software engineering: in boundary value testing, tests evaluate a program's ability to handle edge cases using test examples drawn from the extremes of the ranges the program is expected to handle. Similarly, reliability testing aims to quantify the system's reliability under diverse and potentially extreme conditions. This allows teams to perform better quality control of their NLP systems and introduce more nuance into discussions of why and when models fail (§5). Finally, we note that reliability testing and standards are established practices in engineering industries (e.g., aerospace (Nelson, 2003; Wilkinson et al., 2016)), and we advocate for NL engineering to be at parity with these fields.
Evaluating worst-case performance in a label-scarce world
A proposed approach for testing robustness to natural and adverse distribution shifts is to construct test sets using data from different domains or writing styles (Miller et al., 2020; Hendrycks et al., 2020), or to use a human-vs-model method of constructing challenge sets (Nie et al., 2020; Zhang et al., 2019b). While they are the gold standard, such datasets are expensive to construct, making it infeasible to manually create worst-case test examples for each NLP system being evaluated. Consequently, these challenge sets necessarily overestimate each system's worst-case performance when the inference distribution differs from the training one. Additionally, due to their crowdsourced nature, these challenge sets inevitably introduce distribution shifts across multiple dimensions at once, and even their own biases (Geva et al., 2019), unless explicitly controlled for. Building individual challenge sets for each dimension would be prohibitively expensive due to combinatorial explosion, even before having to account for concept drift (Widmer and Kubat, 1996). This coupling complicates efforts to design a nuanced and comprehensive testing regime. Hence, simulating variation in a controlled manner via reliability tests can be a complementary method of evaluating a system's out-of-distribution generalization ability.
Adversarial Attacks as Reliability Tests
We first give a brief introduction to adversarial attacks in NLP before showing how they can be used for reliability testing. We refer the reader to Zhang et al. (2020b) for a comprehensive survey.
Early work did not place any constraints on the attacks and merely used the degradation to a target model's accuracy as the measure of success. However, this often resulted in the semantics and expected prediction changing, leading to an overestimation of the attack's success. Recent attacks aim to preserve the original input's semantics. A popular approach has been to substitute words with their synonyms, using word embeddings or a language model as a measure of semantic similarity (Alzantot et al., 2018; Michel et al., 2019; Ren et al., 2019; Zhang et al., 2019a; Li et al., 2019; Garg and Ramakrishnan, 2020; Li et al., 2020a).

Algorithm 1 General Reliability Test
Require: labeled test set X, model M, scoring function S, dimension d, perturbation distribution D modeling d
1: X′ ← ∅; r ← 0
2: for each (x, y′) ∈ X do
3:   C ← candidate perturbations of x sampled from D
4:   switch TestType do
5:     case AverageCaseTest
6:       s ← MEAN(S(y′, M(C)))
7:       X′ ← X′ ∪ C
8:     case WorstCaseTest
9:       x′, s ← arg min_{x_c ∈ C} S(y′, M(x_c))
10:      X′ ← X′ ∪ {x′}
11:   r ← r + s
12: end for
13: r ← r / |X|
14: return X′, r
Focusing on maximally degrading model accuracy overlooks the key feature of adversarial attacks: the ability to find the worst-case example for a model from an arbitrary distribution. Many recent attacks perturb the input across multiple dimensions at once, which may make the result unnatural. By constraining our sample perturbations to a distribution modeling a specific dimension of interest, the performance on the generated adversaries is a valid lower bound on performance for that dimension. Said another way, adversarial attacks can be reframed as interpretable reliability tests if we constrain them to meaningful distributions. This is the key element of our approach as detailed in Alg. 1. We specify either an average-case (Lines 5-7) or worst-case test (Lines 8-10), but conditioned on the data distribution D that models a particular dimension of interest d. The resultant reliability score gauges real-world performance, and the worst-case variant returns the adversarial examples that cause worst-case performance. When invariance to input variation is expected, y′ is equivalent to the source label y. Note that by ignoring the average-case test logic and removing d, we recover the general adversarial attack algorithm.
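A minimal Python rendering of Alg. 1 may make the control flow concrete. The model, scoring function, and perturbation sampler below are toy stand-ins, and all names are ours rather than the paper's reference implementation.

from statistics import mean

# `perturb` operationalizes dimension d as a distribution over perturbed
# inputs; `score` is the task metric S; `model` is the black-box system M.
def reliability_test(dataset, model, score, perturb, worst_case=True):
    tested, total = [], 0.0
    for x, y_expected in dataset:
        candidates = perturb(x)            # sample C from D, conditioned on d
        scored = [(xc, score(y_expected, model(xc))) for xc in candidates]
        if worst_case:
            x_adv, s = min(scored, key=lambda p: p[1])  # argmin over C
            tested.append(x_adv)
        else:
            s = mean(s for _, s in scored)
            tested.extend(candidates)
        total += s
    return tested, total / len(dataset)    # perturbed examples X', score r

# Toy usage: invariance is expected, so y' equals the source label y.
dataset = [("good movie", 1), ("bad movie", 0)]
model = lambda x: int("good" in x)
score = lambda y, pred: float(y == pred)           # accuracy as S
perturb = lambda x: [x, x.upper(), x.replace("o", "0")]
print(reliability_test(dataset, model, score, perturb))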
However, the key difference between an adversarial robustness mindset and a testing one is the latter's emphasis on identifying ways in which natural phenomena or ethical concerns can be operationalized as reliability tests. This change in perspective opens up new avenues for interdisciplinary research that will allow researchers and practitioners to have more nuanced discussions about model reliability and can be used to design comprehensive reliability testing regimes. We describe such a framework for interdisciplinary collaboration next.
A Framework for Reliability Testing
We introduce and then describe our general framework, DOCTOR, for testing the reliability of NLP systems. DOCTOR comprises six steps:
1. Define reliability requirements
2. Operationalize dimensions as distributions
3. Construct tests
4. Test system and report results
5. Observe deployed system's behavior
6. Refine reliability requirements and tests
Defining reliability requirements. Before any tests are constructed, experts and stakeholder advocates should work together to understand the demographics and values of the communities the NLP system will interact with (Friedman and Hendry, 2019) and the system's impact on their lives. The latter is also known as algorithmic risk assessment (Ada Lovelace Institute and DataKind UK, 2021). There are three critical questions to address: 1) Along what dimensions should the model be tested? 2) What metrics should be used to measure system performance? 3) What are acceptable performance thresholds for each dimension?
Question 1 can be further broken down into: a) general linguistic phenomena, such as alternative spellings or code-mixing; b) task-specific quirks, e.g., an essay grading system should not use text length to predict score; c) sensitive attributes, such as gender, ethnicity, sexual orientation, age, or disability status. This presents an opportunity for interdisciplinary expert collaboration: Linguists are best equipped to contribute to discussions around (a), domain experts to (b), and ethicists and social scientists to (c). However, we recognize that such collaboration may not be feasible for every NLP system being tested. It is more realistic to expect ethicists to be involved when applying DOCTOR at the company and industry levels, and ethics-trained NLP practitioners to answer these questions within the development team. We provide a taxonomy of potential dimensions in Table 1 (Appendix).
Since it is likely infeasible to test every possible dimension, stakeholder advocates should be involved to ensure their values and interests are accurately represented and prioritized (Hagerty and Rubinov, 2019), while experts should ensure the dimensions identified can be feasibly tested. A similar approach to that of community juries may be taken. We recommend using this question to evaluate the feasibility of operationalizing potential dimensions: "What is the system's performance when exposed to variation along dimension d?". For example, rather than simply "gender", a better-defined dimension would be "gender pronouns". With this understanding, experts and policymakers can then create a set of reliability requirements, comprising the testing dimensions, performance metric(s), and passing thresholds.
Next, we recommend using the same metrics for held-out, average-case, and worst-case performance for easy comparison. These often vary from task to task and are still a subject of active research (Novikova et al., 2017;Reiter, 2018;Kryscinski et al., 2019), hence the question of the right metric to use is beyond the scope of this paper. Finally, ethicists, in consultation with the other aforementioned experts and stakeholders, will determine acceptable thresholds for worst-case performance.
The system under test must perform above said thresholds when exposed to variation along those dimensions in order to pass. For worst-case performance, we recommend reporting thresholds as relative differences (δ) between the average-case and worst-case performance; a sketch of one way such requirements might be encoded follows the list below. These questions may help in applying this step and deciding if specific NLP solutions should even exist (Leins et al., 2020):
• Who will interact with the NLP system, in what context, and using which language varieties?
• What are the distinguishing features of these varieties compared to those used for training?
• What is the (short-and long-term) impact on the community's most underrepresented members if the system performs more poorly for them?
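As one illustration, reliability requirements of this kind could be encoded as simple data objects; the schema below is our own sketch, not a format prescribed by DOCTOR.

from dataclasses import dataclass

@dataclass
class ReliabilityRequirement:
    dimension: str        # e.g., "gender pronouns", "English spelling variation"
    metric: str           # same metric for held-out / average / worst case
    min_average: float    # acceptable average-case threshold
    max_delta: float      # max allowed relative drop, average -> worst case

    def passed(self, average_score, worst_score):
        delta = (average_score - worst_score) / max(average_score, 1e-9)
        return average_score >= self.min_average and delta <= self.max_delta

req = ReliabilityRequirement("gender pronouns", "accuracy", 0.90, 0.05)
print(req.passed(average_score=0.93, worst_score=0.90))  # True: drop ~3.2%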
We note that our framework is general enough to be applied at various levels of organization: within the development team, within the company (compliance team, internal auditor), and within the industry (self-regulation or independent regulator). However, we expect the exact set of dimensions, metrics and acceptable thresholds defined in Step 1 to vary depending on the reliability concerns of the actors at each level. For example, independent regulators will be most concerned with establishing minimum safety and fairness standards that all NLP systems used in their industries must meet, while compliance teams may wish to have stricter and more comprehensive standards for brand reasons. Developers can use DOCTOR to meet the other two levels of requirements and understand their system's behaviour better with targeted testing.
Operationalizing dimensions. While the abstractness of dimensions allows people who are not NLP practitioners to participate in drafting the set of reliability requirements, there is no way to test NLP systems using fuzzy concepts. Therefore, every dimension the system is to be tested along must be operationalizable as a distribution from which perturbed examples can be sampled in order for NLP practitioners to realize them as tests.
Since average-case tests attempt to estimate a system's expected performance in its deployed environment, the availability of datasets that reflect real-world distributions is paramount to ensure that the tests themselves are unbiased. This is less of an issue for worst-case tests; the tests only needs to know which perturbations that are possible, but not how frequently they occur in the real world. Figuring out key dimensions for different classes of NLP tasks and exploring ways of operationalizing them as reliability tests are also promising directions for future research. Such research would help NLP practitioners and policymakers define reliability requirements that can be feasibly implemented.
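As a hedged example of operationalizing one dimension, the sketch below enumerates the candidate set for an "alternative English spellings" dimension from a toy variant lexicon; a real test would draw on community-informed linguistic resources rather than this hand-written table.

import itertools

# Toy variant lexicon; keys and variants are illustrative only.
VARIANTS = {"colour": ["color"], "analyse": ["analyze"], "theatre": ["theater"]}

def spelling_dimension(sentence):
    """Enumerate candidates differing only in spelling convention."""
    options = [[tok] + VARIANTS.get(tok, []) for tok in sentence.split()]
    return [" ".join(c) for c in itertools.product(*options)]

print(spelling_dimension("the theatre will analyse colour"))
# -> 8 candidates, varying only along the spelling dimension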
Constructing tests. Next, average- and worst-case tests are constructed (Alg. 1). Average-case tests can be data-driven and could take the form of manually curated datasets or model-based perturbation generation (e.g., PolyJuice (Wu et al., 2021)), while worst-case tests can be rule-based (e.g., Morpheus (Tan et al., 2020)) or model-based (e.g., BERT-Attack (Li et al., 2020a)). We recommend constructing tests that do not require access to the NLP model's parameters (black-box assumption); this not only yields more system-agnostic tests, but also allows for (some) tests to be created independently from the system development team. If the black-box assumption proves limiting, the community can establish a standard set of items an NLP system should export for testing purposes, e.g., network gradients if the system uses a neural model. Regardless of assumption, keeping the regulators' test implementations separate and hidden from the system developers is critical for stakeholders and regulators to trust the results. This separation also reduces overfitting to the test suite.
Testing systems. A possible model for test ownership is to have independently implemented tests at the three levels of organization described above (team, company, industry). At the development team level, reliability tests can be used to diagnose weaknesses with the goal of improving the NLP system for a specific use case and set of target users. Compared to unconstrained adversarial examples, contrasting worst-case examples that have been constrained along specific dimensions with non-worst-case examples will likely yield greater intuition into the model's inner workings. Studying how modifications (to the architecture, training data and process) affect the system's reliability on each dimension will also give engineers insight into the factors affecting system reliability. These tests should be executed and updated regularly during development, according to software engineering best practices such as Agile (Beck et al., 2001).
Red teams are company-internal teams tasked with finding security vulnerabilities in their developed software or systems. Brundage et al. (2020) propose to apply the concept of red teaming to surface flaws in an AI system's safety and security. In companies that maintain multiple NLP systems, we propose employing similar, specialized teams composed of NLP experts to build and maintain reliability tests that ensure their NLP systems adhere to company-level reliability standards. These tests will likely be less task-/domain-specific than those developed by engineering teams due to their wider scope, while the reliability standards may be created and maintained by compliance teams or the red teams themselves. Making these standards available for public scrutiny and ensuring their products meet them will enable companies to build trust with their users. To ensure all NLP systems meet the company's reliability standards, these reliability tests should be executed as a part of regular internal audits (Raji et al., 2020), investigative audits after incidents, and before major releases (especially if it is the system's first release or if it received a major update). They may also be regularly executed on randomly chosen production systems and trigger an alert upon failure.
At the independent regulator level, reliability tests would likely be carried out during product certification (e.g., ANSI/ISO certification) and external audits. These industry-level reliability standards and tests may be developed in a similar manner to the company-level ones. However, we expect them to be more general and less comprehensive than the latter, analogous to minimum safety standards such as IEC 60335-1 (IEC, 2020). Naturally, high risk applications and NLP systems used in regulated industries should comply with more stringent requirements (European Commission, 2021).
Our proposed framework is also highly compatible with the use of model cards (Mitchell et al., 2019) for auditing and transparent reporting (Raji et al., 2020). In addition to performance on task-related metrics, model cards surface information and assumptions about a machine learning system and training process that may not be readily available otherwise. When a system has passed all tests and is ready to be deployed, its average- and worst-case performance on all tested dimensions can be included as an extra section on the accompanying model card. In addition, the perturbed examples generated during testing and their labels (x′, y′) can be stored for audit purposes or examined to ensure that the tests are performing as expected.
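For instance, the reliability section of a model card might take a shape like the following; the field names are our suggestion, not part of the model card specification.

# Illustrative shape of a "reliability" section appended to a model card.
reliability_section = {
    "tested_dimensions": [
        {"dimension": "gender pronouns", "metric": "accuracy",
         "average_case": 0.93, "worst_case": 0.90, "threshold_delta": 0.05,
         "passed": True},
        {"dimension": "English spelling variation", "metric": "accuracy",
         "average_case": 0.91, "worst_case": 0.83, "threshold_delta": 0.05,
         "passed": False},
    ],
    "test_suite_version": "2021-05",        # hypothetical audit identifier
    "adversarial_examples_archived": True,  # (x', y') stored for audits
}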
Observing and Refining requirements. It is crucial to regularly monitor the systems' impact post-launch and add, update, or re-prioritize dimensions and thresholds accordingly. Monitoring large-scale deployments can be done via community juries, in which stakeholders who will be likely impacted (or their advocates) give feedback on their pain points and raise concerns about potential negative effects. Smaller teams without the resources to organize community juries can set up avenues (e.g., online forms) for affected stakeholders to give feedback, raise concerns, and seek remediation.
From Concerns to Dimensions
We now illustrate how reliability concerns can be converted into concrete testing dimensions (Step 1) by considering the scenario of applying automated text scoring to short answers and essays from students in the multilingual population of Singapore.
We study a second scenario in Appendix A. Automated Text Scoring (ATS) systems are increasingly used to grade tests and essays (Markoff, 2013;Feathers, 2019). While they can provide instant feedback and help teachers and test agencies cope with large loads, studies have shown that they often exhibit demographic and language biases, such as scoring African-and Indian-American males lower on the GRE Argument task compared to human graders (Bridgeman et al., 2012;Ramineni and Williamson, 2018). Since the results of some tests will affect the futures of the test takers (Salaky, 2018), the scoring algorithms used must be sufficiently reliable. Hence, let us imagine that Singapore's education ministry has decided to create a standard set of reliability requirements that all ATS systems used in education must adhere to.
Linguistic landscape. A mix of language varieties are used in Singapore: a prestige English variety, a colloquial English variety, three other official languages (Chinese, Malay, and Tamil), and a large number of other languages. English is the lingua franca, with fluency in the prestige variety correlating with socioeconomic status (Vaish and Tan, 2008). A significant portion of the population does not speak English at home. Subjects other than languages are taught in English.
Stakeholder impact. The key stakeholders affected by ATS systems would be students in schools and universities. The consequences of lower scores could be life-altering for the student who is unable to enroll in the major of their choice. At the population level, biases in an ATS system trained on normally sampled data would unfairly discriminate against already underrepresented groups. Additionally, biases against disfluent or ungrammatical text when they are not the tested attributes would result in discrimination against students with a lower socioeconomic status or for whom English is a second language.
Finally, NLP systems have also been known to be overly sensitive to alternative spellings (Belinkov and Bisk, 2018). When used to score subject tests, this could result in the ATS system unfairly penalizing dyslexic students (Coleman et al., 2009). Since education is often credited with enabling social mobility, unfair grading may perpetuate systemic discrimination and increase social inequality.
Dimension. We can generally categorize written tests into those that test for content correctness (e.g., essay questions in a history test), and those that test for language skills (e.g., proper use of grammar). While there are tests that simultaneously assess both aspects, modern ATS systems often grade them separately (Ke and Ng, 2019). We treat each aspect as a separate test here.
When grading students on content correctness, we would expect the ATS system to ignore linguistic variation and sensitive attributes as long as they do not affect the answer's validity. Hence, we would expect variation in these dimensions to have no effect on scores: answer length, language/vocabulary simplicity, alternative spellings/misspellings of non-keywords, grammatical variation, syntactic variation (especially those resembling transfer from a first language), and proxies for sensitive attributes.
On the other hand, the system should be able to differentiate proper answers from those aimed at gaming the test (Chin, 2020;Ding et al., 2020).
When grading students on language skills, however, we would expect ATS systems to be sensitive only to the relevant skill. For example, when assessing grammar use, we would expect the system to be sensitive to grammatical errors (from the perspective of the language variety the student is expected to use), but not to the other dimensions mentioned above (e.g., misspellings).
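To make the content-correctness expectations above concrete, the sketch below frames one dimension (misspellings of non-keywords) as a black-box invariance test. Everything here is a hypothetical stand-in: `score_answer`, the keyword set, the perturbation function, and the tolerance are illustrative, not part of any real ATS system or of DOCTOR itself.

```python
import random

def misspell_non_keyword(text, keywords, rng=random):
    """Transpose two adjacent characters in one randomly chosen non-keyword token."""
    tokens = text.split()
    candidates = [i for i, tok in enumerate(tokens)
                  if tok.lower() not in keywords and len(tok) > 3]
    if not candidates:
        return text
    i = rng.choice(candidates)
    tok = tokens[i]
    j = rng.randrange(len(tok) - 1)
    tokens[i] = tok[:j] + tok[j + 1] + tok[j] + tok[j + 2:]
    return " ".join(tokens)

def invariance_failures(score_answer, answers, keywords,
                        n_perturbations=20, tolerance=0.5):
    """Collect perturbations that shift the score by more than the tolerance."""
    failures = []
    for answer in answers:
        base = score_answer(answer)
        for _ in range(n_perturbations):
            perturbed = misspell_non_keyword(answer, keywords)
            if abs(score_answer(perturbed) - base) > tolerance:
                failures.append((answer, perturbed))
    return failures
```

A reliability requirement could then demand, for example, that `invariance_failures` is empty over a representative answer set at an agreed tolerance.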
Actors. Relevant experts include teachers of the subjects where the ATS systems will be deployed, linguists, and computer scientists. The stakeholders (students) may be represented by student unions (at the university level) or focus groups comprising a representative sample of the student population.
Implications for Policy
There is a mounting effort to increase accountability and transparency around the development and use of NLP systems to prevent them from amplifying societal biases. DOCTOR is highly complementary to the model card approach increasingly adopted to surface often-hidden details about NLP models (see blog.einstein.ai/model-cards-for-ai-model-transparency): developers simply need to list the tested dimensions, metrics, and score on each dimension in the model card. Crucially, reliability tests can be used to highlight fairness issues in NLP systems by including sensitive attributes for the target population, but it is paramount that these requirements reflect local concerns rather than any prescriptivist perspective (Sambasivan et al., 2021).
At the same time, the ability to conduct quantitative, targeted reliability testing along specifiable dimensions paves the way for reliability standards to be established, with varying levels of stringency and rigor for different use cases and industries. We envision minimum safety and fairness standards being established for applications that are non-sensitive, not safety-critical, and used in unregulated industries, analogous to standards for household appliances. Naturally, applications at greater risk (Li et al., 2020b) of causing harm upon failure should be held to stricter standards. Policymakers are starting to propose and implement regulations to enforce transparency and accountability in the use of AI systems. For example, the European Union's General Data Protection Regulation grants data subjects the right to obtain "meaningful information about the logic involved" in automated decision systems (EU, 2016). The EU is developing AI-specific regulation (European Commission, 2020): e.g., requiring developers of high-risk AI systems to report their "capabilities and limitations, ... [and] the conditions under which they can be expected to function as intended". In the U.S., a proposed bill in the state of Washington will require public agencies to report "any potential impacts of the automated decision system on civil rights and liberties and potential disparate impacts on marginalized communities" before using automated decision systems (Washington State Legislature, 2021).
One may note that the language in the proposed regulations is intentionally vague. There are many ways to measure bias and fairness, depending on the type of model, context of use, and goal of the system. Today, companies developing AI systems employ the definitions they believe most reasonable (or perhaps easiest to implement), but regulation will need to be more specific for there to be meaningful compliance. DOCTOR's requirement to explicitly define specific dimensions instead of a vague notion of reliability will help policymakers in this regard, and can inform the ongoing development of national (NIST, 2019) and international standards (see ethicsstandards.org/p7000).
While external algorithm audits are becoming popular, testing remains a challenge since companies wishing to protect their intellectual property may be resistant to sharing their code (Johnson, 2021), and implementing custom tests for each system is unscalable. Our approach to reliability testing offers a potential solution to this conundrum by treating NLP systems as black boxes. If reliability tests become a legal requirement, regulatory authorities will be able to mandate independently conducted reliability tests for transparency. Such standards, combined with certification programs (e.g., IEEE's Ethics Certification Program for Autonomous and Intelligent Systems; see standards.ieee.org/industry-connections/ecpais.html), will further incentivize the development of responsible NLP, as the companies purchasing NLP systems will insist on certified systems to protect them from both legal and brand risk. To avoid confusion, we expect certification to occur for individual NLP systems (e.g., an end-to-end question answering system for customer enquiries), rather than for general-purpose language models that will be further trained to perform some specific NLP task. While concrete standards and certification programs that can serve this purpose do not yet exist, we believe that they eventually will, and we hope our paper will inform their development. This multi-pronged approach can help to mitigate NLP's potential harms while increasing public trust in language technology.
Challenges and Future Directions
While DOCTOR is a useful starting point for implementing reliability testing for NLP systems, we observe key challenges to its widespread adoption. The first is identifying and prioritizing the dimensions that can attest to a system's reliability and fairness. The former is relatively straightforward and can be achieved via collaboration with experts (e.g., as part of the U.S. NIST's future AI standards (NIST, 2019)). The latter, however, is a question of values and power (Noble, 2018; Mohamed et al., 2020; Leins et al., 2020), and should be addressed via a code of ethics and by ensuring that all stakeholders are adequately represented at the decision table.
Second, our proposed method of reliability testing may suffer from issues similar to those plaguing automatic evaluation metrics for natural language generation (Novikova et al., 2017; Reiter, 2018; Kryscinski et al., 2019): due to the tests' synthetic nature, they may not fully capture the nuances of reality. For example, if a test's objective were to test an NLP system's reliability when interacting with African American English (AAE) speakers, would it be possible to guarantee (in practice) that all generated examples fall within the distribution of AAE texts? Potential research directions would be to design adversary generation techniques that can offer such guarantees or to incorporate human feedback (Nguyen et al., 2017; Kreutzer et al., 2018; Stiennon et al., 2020).
Conclusion
Once language technologies leave the lab and start impacting real lives, concerns around safety, fairness, and accountability cease to be thought experiments. While it is clear that NLP can have a positive impact on our lives, from typing autocompletion to revitalizing endangered languages (Zhang et al., 2020a), it also has the potential to perpetuate harmful stereotypes (Bolukbasi et al., 2016; Sap et al., 2019), perform disproportionately poorly for underrepresented groups (Hern, 2017; Bridgeman et al., 2012), and even erase already marginalized communities (Bender et al., 2021).
Trust in our tools stems from an assurance that stakeholders will remain unharmed, even in the worst-case scenario. In many mature industries, this takes the form of reliability standards. However, for standards to be enacted and enforced, we must first operationalize "reliability". Hence, we argued for the need for reliability testing (especially worst-case testing) in NLP by contextualizing it among existing work on promoting accountability and improving generalization beyond the training distribution. Next, we showed how adversarial attacks can be reframed as worst-case tests. Finally, we proposed a possible paradigm, DOCTOR, for how reliability concerns can be realized as quantitative tests, and discussed how this framework can be used at different levels of organization or industry.
Broader Impact
Much like how we expect to not be exposed to harmful electric shocks when using electrical appliances, we should expect some minimum levels of safety and fairness for the NLP systems we interact with in our everyday lives. As mentioned in §1, §3, and §7, standards and regulations for AI systems are in the process of being developed for this purpose, especially for applications deemed "high-risk", e.g., healthcare (European Commission, 2020). Reliability testing, and our proposed framework, is one way to approach the problem of enacting enforceable standards and regulations.
However, the flip side of heavily regulating every single application of NLP is that it may slow down innovation. Therefore, it is important that the level of regulation for a particular application is proportionate to its potential for harm (Daten Ethik Kommission, 2019). Our framework can be adapted to different levels of risk by scaling down the implementation of some steps (e.g., the method and depth in which stakeholder consultation happens or the comprehensiveness of the set of testing dimensions) for low-risk applications.
Finally, it is important to ensure that any tests, standards, or regulations developed adequately represent the needs of the most vulnerable stakeholders, instead of being constructed in a prescriptivist manner (Hagerty and Rubinov, 2019). Hence, DOCTOR places a strong emphasis on involving stakeholder advocates and analyzing the impact of an application of NLP on the target community.
"year": 2021,
"sha1": "0f71a4fa9736ae916e6aef53045f6be4c901b0ff",
"oa_license": "CCBY",
"oa_url": "https://aclanthology.org/2021.acl-long.321.pdf",
"oa_status": "HYBRID",
"pdf_src": "ACL",
"pdf_hash": "51f9b6b7a180777549e3223033b8dd52faff97dd",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
No evidence of spatial representation of age, but "own-age bias" like face processing found in chimpanzees
Previous studies have revealed that non-human primates can differentiate the age category of faces. However, knowledge about age recognition in non-human primates is very limited, and whether non-human primates can process facial age information in a similar way to humans is unknown. As humans have an association between time and space (e.g., a person in an earlier life stage to the left and a person in a later life stage to the right), we investigated whether chimpanzees spatially represent conspecifics' adult and infant faces. Chimpanzees were tested using an identical matching-to-sample task with conspecific adult and infant face stimuli. Two comparison images were presented vertically (Experiment 1) or horizontally (Experiment 2). We analyzed whether the response time was influenced by the position and age category of the target stimuli, but there was no evidence of correspondence between space and adult/infant faces. Thus, evidence of the spatial representation of the age category was not found. However, we did find that the response time was consistently faster when the chimpanzees discriminated between adult faces than when they discriminated between infant faces in both experiments. This result is in line with a series of human face studies that suggest the existence of an "own-age bias." As far as we know, this is the first report of asymmetric face processing efficiency between infant and adult faces in non-human primates. Supplementary Information: The online version contains supplementary material available at 10.1007/s10071-021-01564-7.
Introduction
Faces convey a lot of information to humans, such as age, identity, gender, and emotional states (Bruce and Young 2012; Rhodes et al. 2011). Non-human primates can also extract various information from faces (Adachi and Tomonaga 2017; Leopold and Rhodes 2010; Parr 2011), including identity (Itakura 1992; Parr et al. 2000), species (Wilson and Tomonaga 2018), sex (de Waal and Pokorny 2008; Koba et al. 2009), social rank (Dahl and Adachi 2013), emotional states (Kanazawa 1996; Parr 2003), attentional states (Tomonaga and Imura 2010), and attractiveness (Waitt et al. 2003). However, facial age perception has not been studied in non-human primates until quite recently, even though it is a well-studied topic in human face recognition (e.g., Burt and Perrett 1995; for a review, see Rhodes 2007). Recognizing a conspecific's approximate age, that is, its age category, is important for social primates, as it enables them to behave appropriately around other individuals by adjusting their behavior based on age (Berry and McArthur 1986). An infant individual should be treated differently from an adult individual by conspecifics, for example, in that infants are vulnerable and cannot survive without care from adults. Non-human primates may use various cues such as body size, body movement, vocalization, and odors, but facial cues can also provide reliable information on age.
Some studies have investigated how non-human primates respond to the face stimuli of adult and infant individuals. For example, Koda et al. (2013) examined whether Japanese macaques (Macaca fuscata) exhibit an attentional bias for infant faces, which has been reported in humans (Lucion et al. 2017), but they obtained no evidence to support this. Other studies found that non-human primates can differentiate between faces of different age categories (i.e., adult or infant) (Kawaguchi et al. 2019b, 2020). In these studies, chimpanzees (Pan troglodytes) (Kawaguchi et al. 2020) and capuchin monkeys (Sapajus apella) (Kawaguchi et al. 2019b) were trained to discriminate between the adult and infant faces of conspecifics or humans using a symbolic matching-to-sample task. Both the chimpanzees and capuchin monkeys easily learned to do this, and this ability generalized to the discrimination of novel stimuli. These studies demonstrated that sensitivity to age-related facial features is shared by non-human primates and discussed what kind of facial cues the participants seemingly used for such categorizations. However, compared to the accumulation of human research, there is still a limited understanding of the perception of facial age in non-human primates. Although previous studies have found that non-human primates are able to visually differentiate adult faces and infant faces, it is still unknown whether non-human primates extract an age category concept from faces. They may have categorized adult and infant faces just by combining low-level features without recognizing age. Therefore, we examined whether chimpanzees recognize infants and adults in a certain relationship (i.e., time), as humans do, by testing the spatial mapping of face age in chimpanzees.
As illustrated by the idiom "from the cradle to the grave," humans recognize that infants and adults exist linearly in a time sequence. In other words, age has a direction, and we understand that an infant will not be an infant forever and that an older person was not old when they were born. Moreover, in most cases, when people illustrate human life stages, the infant is depicted on the left, the "middle" age is placed in the middle, and the older person is presented on the right in a horizontal line. This is because we have a mental timeline, and we associate space and time in a certain direction (e.g., earlier is left; later is right) (Fuhrman and Boroditsky 2010; Santiago et al. 2007; Torralbo et al. 2006; Weger and Pratt 2008). For example, Fuhrman and Boroditsky (2010) presented pairs of pictures one after another, and the participants were required to answer whether the second picture showed an "earlier" or "later" event than the first picture by pressing keys. The stimuli included short (e.g., filling a cup of coffee) and long (e.g., people of different age classes) time periods. English speakers were faster to make "earlier" judgments when the corresponding key was positioned on the left, while Hebrew speakers showed the opposite pattern. Thus, the direction of mental timelines is influenced by cultural factors, such as writing direction. Furthermore, a larger congruency effect was observed when the stimuli depicted a long time interval. Spatial representation of time is observed horizontally and vertically in some cultures (e.g., Boroditsky 2001). Moreover, the correspondence between the abstract domain and the spatial domain is observed not only for time, but also for other abstract domains, including numbers (for a review, see Fias and Fischer 2005), social rank (e.g., Schubert 2005), and auditory pitch (e.g., Rusconi et al. 2006). Each abstract domain is mapped horizontally, vertically, or both. One example of a vertical spatial representation is social status: it has been demonstrated that "high-ranked" individuals are represented in spatially higher positions than "low-ranked" individuals by human adults (Schubert 2005).
The correspondence between the abstract domain and space is also observed in non-human animals. For example, there is some evidence of the spatial mapping of numbers in various animals, including chicks (Rugani et al. 2015, 2017), rhesus macaques (Drucker and Brannon 2014), and chimpanzees (Adachi 2014), although the direction of spatial mapping may vary within and across species (Johnson-Ulrich and Vonk 2018). Furthermore, Dahl and Adachi (2013) conducted a matching-to-sample task in which chimpanzees were required to discriminate between the face identities of familiar group members presented in a vertical arrangement, and found that chimpanzees have a spatial mapping of the dominance hierarchy similar to humans. They reported that when the rank of the represented individual and the position in the display were congruent (e.g., a high-ranked individual was positioned higher), the response time was faster than when they were incongruent. These comparative studies suggest that spatial representations have evolutionary roots and emerged before language evolution, while they are also flexible, so that their direction can be changed by culture (e.g., Shaki and Fischer 2008). One explanation of such phenomena is that space and other magnitudes may be associated in animal brains when they are represented (Rugani and de Hevia 2017).
Given this evidence in non-human animals, especially the finding of a spatial representation of the social domain (Dahl and Adachi 2013), it is untested but possible that non-human primates have a particular spatial representation of age, as reported in humans (Fuhrman and Boroditsky 2010). Thus, the main aim of this study was to investigate whether chimpanzees spatially represent conspecifics' adult and infant faces, in order to understand whether they recognize infants and adults in time (or at least in some other abstract domain that has a direction). Our prediction was that if chimpanzees infer from a face a conceptual age category that can be recognized in a time sequence, they would respond faster when the spatial arrangement of the face stimuli is congruent with their time representation, if any. Previous studies have demonstrated that spatial and time judgments interact in rhesus macaques (Mendez et al. 2011; Merritt et al. 2010). However, as far as we know, no study has investigated the space-based representation of time in non-human primates.
Although testing the spatial mapping of face age was the main purpose of this study, we also investigated whether the chimpanzees' performance in discriminating adult faces and infant faces is asymmetric, because face processing is largely modulated by the amount of experience. Enhanced experience with specific face categories in early and late life selectively tunes perceptual systems for face processing toward that category. For example, older infants (9 months) and adults can discriminate among conspecific faces, but not monkey faces, while younger infants (6 months) can discriminate both (Pascalis et al. 2002). Such perceptual tuning based on very early experience in life is called perceptual narrowing and is observed for other face categories, such as own- versus other-race faces in humans ("own-race bias," e.g., Kelly et al. 2007). In addition to such early perceptual tuning, later exposure or expertise throughout life also modulates face processing. For example, Koreans living among Caucasians from childhood identify Caucasian faces better than Asian faces (Sangrigoli and Pallier 2005). Enhanced face processing through extensive exposure in later life also occurs with faces of specific age categories, as an "own-age bias" (Wright and Stroud 2002). This bias is a phenomenon in which human adults have superior processing for adult faces compared with processing for children's faces, and vice versa. It is considered that such a bias, like other biases in face processing, results from more frequent exposure to individuals from the same age group than to others in daily social life (Rhodes and Anastasi 2012). For example, preschool teachers can recognize children's faces and adults' faces equally well (Kuefner et al. 2008).
The enhanced face processing produced by both early and late exposure to specific face categories has also been reported in non-human primates. Dahl and his colleague investigated captive chimpanzees' face discrimination ability for both conspecific and human faces (Dahl and Adachi 2013). They found that young chimpanzees with less exposure to humans have an advantage in discriminating chimpanzee faces rather than human faces, while adult chimpanzees with lifelong exposure to humans have an advantage with human faces over conspecific faces. However, it remains unknown whether the amount of experience with a specific age category also affects face processing efficiency in non-human primates. Therefore, we compared the performance of adult chimpanzees when they discriminated between adult faces and infant faces to explore whether they also exhibit this age-related asymmetric processing efficiency.
To investigate these two aspects, namely spatial mapping and the effect of the amount of exposure related to age, we used a matching-to-sample task in which chimpanzees were required to match the faces of either adult or infant individuals. We applied and modified the procedure of the previous study that reported the vertical representation of dominance in chimpanzees (Dahl and Adachi 2013). In the matching-to-sample task, two comparison images were presented in vertical (Experiment 1) or horizontal (Experiment 2) arrays. We examined whether the chimpanzees' performance differed depending on the correspondence between the position and the age category of the stimuli. To examine the spatial correspondence effect, the two comparison images were from different age categories (i.e., one adult and one infant) in one condition, and from the same age category in the other condition. We also compared their discrimination performance for adult faces and for infant faces to examine whether they show age-related asymmetric processing efficiency based on different amounts of experience.
Participants
Six chimpanzees (Pan troglodytes verus) living at the Primate Research Institute, Kyoto University, participated in the experiments. All of them were adults (17-41 years old), and one was male (see Table 1 for more individual information). They live in a social group made up of 11 adult individuals, and all of them had prior experience of interacting with infants. The chimpanzees live in an enriched environment with an outdoor compound (700 m²) and an indoor enclosure. They also have access to a semi-outdoor residence (Matsuzawa 2006). They are neither food- nor water-deprived, and they live in social groups. They receive food several times each day, and they always have access to water.
The participants were called for the experiments daily, and their participation was voluntary. During the experiment, they were unrestrained, and they could stop the task whenever they wanted to. All of them had abundant experience of matching-to-sample tasks, including in Dahl and Adachi's previous study. All procedures adhered to institutional guidelines (the Primate Research Institute's 2010 version of "The Guidelines for the Care and Use of Laboratory Primates"). The experimental design was approved by the Animal Welfare and Animal Care Committee of the Primate Research Institute (2018-115) and the Animal Research Committee of Kyoto University.
Stimuli
We used six adult and six infant chimpanzee face images as the stimuli. Most of the photographic images were either taken by the author or provided by colleagues, while a few were obtained from public sources. The depicted individuals were unfamiliar to the participants, and they showed neutral expressions. Half of the adult chimpanzees were males, while the sex of some of the infant chimpanzees was unknown. Unfortunately, the exact ages of some of the infants in the images taken from public sources were also unknown. However, we selected pictures of infants who appeared to be younger than two years old when the pictures were taken. Using Adobe Photoshop Elements 15 (Adobe Inc., San Jose, CA, USA), all of the images were cropped into a square with 250 × 250 pixels (6.6 cm × 6.6 cm), their luminance was matched, and they were presented in color.
Procedure
The participants were required to perform an identical zero-delay matching-to-sample task (Fig. 1). Each trial began when the participant touched the self-start key that appeared at the bottom of the monitor after a 2-s inter-trial interval. The self-start key appeared twice in different positions at the bottom of the monitor, with the second one always being presented in the center of the bottom of the monitor. When they touched the start keys, a sample image appeared in the center of the monitor for 750 ms. Two comparison images then appeared, one of which was identical to the sample stimulus. The participants were required to choose the same image. When they chose the correct answer, a piece of apple was delivered via the universal feeder as a reward. In Experiment 1, the two comparison images were presented in a vertical array, while in Experiment 2, they were presented in a horizontal array. In both experiments, the two comparison images were from the same age category (i.e., both were adults/both were infants) in the same condition, and they were from different age categories (i.e., one was an adult and the other was an infant) in the different condition. In each experiment, there were 66 combinations of the comparison images, as there were 12 stimuli in total. For each combination, there were two comparison arrays (top or bottom in Experiment 1/left or right in Experiment 2) and two sample stimuli (either of the comparison images). Hence, the total of 264 trials was divided into six sessions. In one session, 20 trials were presented in the same condition, and 24 trials were presented in the different condition. The order of the conditions and stimuli was pseudo-randomized.
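As a check on this bookkeeping, a minimal sketch that enumerates the full trial set is shown below; the stimulus labels are placeholders in which the first character stands in for the age category.

```python
from itertools import combinations

stimuli = [f"A{i}" for i in range(1, 7)] + [f"I{i}" for i in range(1, 7)]

trials = []
for pair in combinations(stimuli, 2):             # C(12, 2) = 66 pairs
    condition = "same" if pair[0][0] == pair[1][0] else "different"
    for arrangement in (pair, pair[::-1]):        # two comparison arrays
        for sample in pair:                       # two sample stimuli
            trials.append((sample, arrangement, condition))

assert len(trials) == 264                                  # 6 sessions x 44 trials
assert sum(t[2] == "same" for t in trials) == 6 * 20       # 20 per session
assert sum(t[2] == "different" for t in trials) == 6 * 24  # 24 per session
```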
Behavioral data analysis
[Fig. 1: An example of one trial in Experiment 1 (vertical array). The self-start key was presented at the bottom. When the participant touched it, a sample stimulus was presented in the center of the monitor for 750 ms. When the sample disappeared, two comparison images were presented, and the participant was required to touch the same stimulus. In the same condition, the two comparison images were from the same age category, while in the different condition, they were from different age categories.]

In both experiments, the number of correct responses and the response times to choose the correct answers were analyzed. The accuracy was calculated and arcsine-transformed for each condition, and we conducted a 2 × 2 × 2 ANOVA with the position (top or bottom/left or right), age of the stimuli (adult or infant), and condition (same or different) as the independent variables. For the response time, only the response times of the correct trials were analyzed. We excluded response times longer than the average value plus three standard deviations (SDs), as the chimpanzees were sometimes distracted by unexpected noise from outside or by something else during the experiment and took longer to respond. The response time was analyzed using a 2 × 2 × 2 ANOVA with the same independent variables as in the analysis of the accuracy. All statistics were computed in R 4.1.0 (R Core Team 2018).
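A minimal sketch of this pipeline is shown below, assuming one row per trial in a long-format table; the synthetic records only stand in for the real data, and arcsin(√p) is used as the (common) form of the arcsine transform. The 2 × 2 × 2 repeated-measures design maps onto statsmodels' AnovaRM.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
cells = pd.MultiIndex.from_product(
    [range(6), ["top", "bottom"], ["adult", "infant"], ["same", "different"]],
    names=["subject", "position", "age", "condition"]).to_frame(index=False)
df = cells.loc[cells.index.repeat(11)].reset_index(drop=True)  # 11 trials per cell
df["correct"] = rng.binomial(1, 0.96, len(df))                 # placeholder outcomes
df["rt"] = rng.gamma(4.0, 250.0, len(df))                      # placeholder RTs (ms)

# Accuracy: per-cell proportion correct, arcsine-square-root transformed.
acc = (df.groupby(["subject", "position", "age", "condition"], as_index=False)
         ["correct"].mean())
acc["acc_asin"] = np.arcsin(np.sqrt(acc["correct"]))
res_acc = AnovaRM(acc, "acc_asin", "subject",
                  within=["position", "age", "condition"]).fit()
print(res_acc.anova_table)

# Response time: correct trials only, excluding RTs above mean + 3 SD.
rt = df[df["correct"] == 1]
rt = rt[rt["rt"] <= rt["rt"].mean() + 3 * rt["rt"].std()]
res_rt = AnovaRM(rt, "rt", "subject", within=["position", "age", "condition"],
                 aggregate_func="mean").fit()
print(res_rt.anova_table)
```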
Image analysis
If a difference in discrimination performance between adult and infant faces were found, the asymmetry might be caused by the physical variation among the infant faces simply being smaller than that among the adult faces. To compare the physical variation in the face stimuli within each age category, we conducted an image similarity analysis of the stimuli and compared it between the age categories. The similarity between each pair of exemplars (adult faces [n = 6] and infant faces [n = 6]) was evaluated for all combinations within the same age category. We used the structural similarity index ("SSIM," Wang et al. 2004), which is widely used to measure the similarity of two images by comparing local patterns of pixel intensity. The analysis was conducted using Python (Python Software Foundation, Wilmington, DE, USA) and OpenCV (Intel Corp., Santa Clara, CA, USA). All stimuli were converted to grayscale, and the SSIM was calculated for all possible combinations. The SSIM can range from −1 to +1, and if two images are identical, the score is 1. To calculate the physical distance between each pair of stimuli, this SSIM score was subtracted from 1. The calculated differential score between every stimulus combination within each age category was compared using the Mann-Whitney U test.
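A hedged sketch of this analysis: the authors used OpenCV, but the same 1 − SSIM differential score can be computed with scikit-image and compared with SciPy's Mann-Whitney U test, as below. The file names are placeholders.

```python
from itertools import combinations
from scipy.stats import mannwhitneyu
from skimage.color import rgb2gray
from skimage.io import imread
from skimage.metrics import structural_similarity

def differential_scores(paths):
    """1 - SSIM for every pair of (grayscaled) images within one age category."""
    imgs = [rgb2gray(imread(p)) for p in paths]
    return [1 - structural_similarity(a, b, data_range=1.0)
            for a, b in combinations(imgs, 2)]  # C(6, 2) = 15 pairs per category

adult = differential_scores([f"adult_{i}.png" for i in range(1, 7)])
infant = differential_scores([f"infant_{i}.png" for i in range(1, 7)])
u, p = mannwhitneyu(adult, infant)
print(f"U = {u:.1f}, p = {p:.2f}")  # the paper reports U = 106.5, p = 0.82
```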
Experiment 1 (vertical array)
The accuracy was almost perfect in the different condition (average accuracy ± SD: 99.8 ± 0.5%) but slightly reduced in the same condition (93.5 ± 4.3%; Fig. 2). We analyzed the arcsine-transformed accuracy with a repeated-measures ANOVA and found a significant main effect of condition (F(1, 5) = 20.08, p = 0.007, ηp² = 0.80), a marginally significant main effect of age (F(1, 5) = 4.90, p = 0.08, ηp² = 0.50), and a marginally significant interaction between condition and age (F(1, 5) = 4.90, p = 0.08, ηp² = 0.50). The other main effect and interactions were not significant (all ps > 0.38). The post hoc analysis (adjusted using Shaffer's procedure) indicated that the accuracy was greater in the different condition than in the same condition both when the stimuli were adults (F(1, 5) = 11.91, p = 0.02, ηp² = 0.70) and when they were infants (F(1, 5) = 14.71, p = 0.01, ηp² = 0.75). The accuracy for adult stimuli was slightly greater than for infant stimuli in the same condition (F(1, 5) = 4.90, p = 0.08, ηp² = 0.50), but performance was perfect for both stimulus types in the different condition.
A repeated-measures ANOVA of the response time revealed significant main effects of position (F(1, 5) = 7.25, p = 0.04, ηp² = 0.59; Fig. 2, see also Supplementary Information), condition (F(1, 5) = 28.44, p = 0.003, ηp² = 0.85), and age (F(1, 5) = 6.78, p = 0.05, ηp² = 0.58), and a marginally significant interaction between condition and age (F(1, 5) = 5.84, p = 0.06, ηp² = 0.54). The other interactions were not significant (all ps > 0.16). The post hoc analysis indicated that the response time in the same condition was greater than in the different condition when the stimuli were infants (F(1, 5) = 71.96, p < 0.001, ηp² = 0.94), but this tendency was not robust when the stimuli were adults (F(1, 5) = 4.05, p = 0.10, ηp² = 0.45). The response time was greater for infant stimuli than for adult stimuli in the same condition (F(1, 5) = 9.64, p = 0.03, ηp² = 0.66), but not in the different condition (F(1, 5) = 0.16, p = 0.71, ηp² = 0.03). These results indicated the following. First, the response time when the target was presented at the top of the monitor was consistently longer than when it was presented at the bottom (i.e., the effect of position). This probably occurred because touching the top part of the monitor was simply physically more demanding owing to the touch panel's structure. Second, differentiating between faces from the same age category was more difficult than differentiating between faces from different age categories (i.e., the effect of condition). This suggests that faces from different age categories were perceptually more distinct from each other than faces within the same age category. Third, the chimpanzees took more time when the target was an infant than when it was an adult, especially when they needed to discriminate between two different infant faces (i.e., the interaction between age and condition). On the other hand, the results did not show a congruency effect between the target's age and position (i.e., the interaction between age and position). Hence, there was no evidence of correspondence between vertical space and adult/infant faces. Although the sample size was quite small, visual inspection of demographic factors (i.e., sex and birth experience) did not reveal any systematic individual differences (see also Table 1 for participant information).

[Fig. 2: The average accuracy and the response time in Experiment 1 (vertical array).]
Experiment 2 (horizontal array)
The accuracy was again almost perfect in the different condition (99.1 ± 0.8%), but slightly reduced in the same condition (96.0 ± 3.8%; Fig. 3). We analyzed the arcsine-transformed accuracy with a repeated-measures ANOVA and found that no main effect or interaction was significant (all ps > 0.12).
A repeated-measures ANOVA of the response time revealed a significant main effect of condition (F(1, 5) = 27.47, p = 0.003, ηp² = 0.85), but not of position (F(1, 5) = 0.35, p = 0.58, ηp² = 0.07) or age (F(1, 5) = 1.74, p = 0.24, ηp² = 0.26; Fig. 3, see also Supplementary Information). The interaction between condition and age was significant (F(1, 5) = 12.15, p = 0.02, ηp² = 0.71), but the other interactions were not (all ps > 0.17). The post hoc analysis indicated that the response time was greater in the same condition than in the different condition both when the stimuli were adults (F(1, 5) = 11.43, p = 0.02, ηp² = 0.70) and when they were infants (F(1, 5) = 22.48, p < 0.01, ηp² = 0.82). The response time was greater for infant stimuli than for adult stimuli in the same condition (F(1, 5) = 9.38, p = 0.03, ηp² = 0.65), but not in the different condition (F(1, 5) = 2.45, p = 0.18, ηp² = 0.33). As before, these results suggest that differentiating between faces from the same age category was more demanding than differentiating between faces from different categories (i.e., the effect of condition). Additionally, it took the chimpanzees more time to discriminate between two different infant faces than in the other conditions (i.e., the interaction between age and condition). We did not find any effect of the position of the target, including the interaction between position and age. Therefore, there was no evidence of correspondence between horizontal space and adult/infant faces. When we examine the results individually, the response time tended to be slightly shorter in the adult-right and/or infant-left conditions for many participants (see also Supplementary Information). It is noted that the two individuals who showed the opposite pattern (Ai and Chloe) were females who had birth experience, although it is difficult to draw conclusions from this due to our small sample size.

Figure 4 illustrates the differential score between each pair of stimuli within each age category, calculated based on the SSIM. If this value is 0, the two images are exactly the same, while greater values indicate a larger difference between the stimuli. This differential score was compared using the Mann-Whitney U test. The results demonstrated that there was no difference between the average similarity of the adult stimuli and that of the infant stimuli within the same age category (U = 106.5, p = 0.82). The findings indicate that the physical variation in the stimuli within each age category was not significantly different between the adult and infant faces in terms of low-level features. It is therefore unlikely that the reason for the chimpanzees' asymmetric performance when differentiating between adult and infant faces is that the infant stimuli were more similar to each other than the adult stimuli.

[Fig. 3: The accuracy and the average response time in Experiment 2 (horizontal array).]
Discussion
The present study explored face processing related to age recognition from the perspective of a spatial mapping of face age in chimpanzees. The analysis of the performance and the response time indicated no effect of the position corresponding to the age category of the stimuli. That is, the results do not support the existence of a spatial representation of facial age in either a vertical (Experiment 1) or horizontal (Experiment 2) array in chimpanzees. The non-significant result for the correspondence between space and facial age admits several possibilities. First, there is a possibility that the variability of the results within the relatively small sample (n = 6) may have masked a subtle effect, if any. This is because in the horizontal array (Experiment 2), there was a weak tendency for the response time to be slightly shorter in the adult-right and/or infant-left conditions. Thus, a weak horizontal spatial mapping might exist in chimpanzees, but such a modest spatial association might not be robust to artifacts (e.g., an individual's position bias).
A second possibility is that chimpanzees may not recognize faces as "adult" or "infant" as we do; in other words, they may not extract conceptual age categories from faces. Previous studies have demonstrated that non-human primates also recognize a face image as representing a face, by reporting neural activities that are selective for faces (e.g., Tsao et al. 2003, 2008). Moreover, the present study indicates that face discrimination performance differed between the same and different conditions. This indicates that, for the chimpanzees, faces across different categories were perceptually more different than faces within the same category. A previous study also demonstrates that chimpanzees can differentiate adult faces and infant faces (Kawaguchi et al. 2020). This evidence indicates that the chimpanzees extracted shared visual features within each category. Therefore, the chimpanzees should have at least recognized that the stimuli we used represented faces, which can be dissociated into two categories. However, that categorization may not have been based on age, but on something else, such as low-level features including color differences.
The other possibility is that even though the chimpanzees extracted a conceptual age category from the face images, they may not have associated it with space for some reason. As what we know about time recognition in non-human primates is quite limited and our study was exploratory, it is difficult to conclude whether chimpanzees do not recognize the infant-adult relation in a time sequence, or whether they recognize it as related to time but do not associate time with space. Previous research has suggested that some time-related recognition in humans is shared with non-human primates. For example, mental time travel, in which past events are reconstructed and the future is imagined, is partially shared with non-human primates (for a review, see Suddendorf and Corballis 2010). However, how similar their time recognition is to humans', or whether they have a concept of time, is still unclear. This is because previous studies have focused specifically on aspects related to decision-making based on episodic memory or future planning instead of testing a time concept itself. Therefore, how non-human primates comprehend time, especially over longer spans, such as recognizing another individual across decades from infancy to adulthood, should be examined further.

[Fig. 4: The differential score within each age category. The score was calculated based on the structural similarity index, and a greater mean value indicates a larger difference between the stimuli. The statistical analysis found no significant difference between the adult and infant stimuli.]
Another finding of the present study is that our chimpanzees had faster response times when discriminating between adult faces than when discriminating between infant faces. These results did not occur because of a difference in physical similarity among the adult faces versus the infant faces, as the image analysis demonstrated that the two were comparable. Human own-age bias is usually considered to reflect "more extensive, recent experiences with one's own-age group relative to other-age groups" (Rhodes and Anastasi 2012, p. 146). Similarly, the chimpanzees' asymmetric efficiency in face processing probably arose because they were attuned to processing adult faces based on their daily face experiences. Our chimpanzees had experience of interacting with infants in the past, but they had not seen infants for a while. However, they were living socially and interacting with other adult group members in their daily life. This asymmetry in the amount of experience with adult and infant conspecifics has likely led to the current results. This is probably not specific to our chimpanzees but is likely more general. Given that adult chimpanzees generally have more interactions with adults than with individuals belonging to other age categories, they likely have a superior face processing ability for adult than for infant individuals.
These results are understandable in line with previous human studies that suggest the existence of the own-age bias. In our chimpanzees, extensive exposure to adult conspecific faces in their daily life has likely shaped their perceptual system toward expertise for adult faces. Nevertheless, infantile face coloration in chimpanzees may also be partly responsible for the impaired discrimination performance for infant faces. Chimpanzee infant faces are different from adult faces, both in shape and in color (Kawaguchi et al. 2020). Previous studies found that chimpanzees specifically pay attention to the conspicuous infant face coloration, which is much paler than that of adults (Kawaguchi et al. 2019a, 2020). Therefore, it is possible that the chimpanzees' attention was attracted by the unfamiliar face color (i.e., infantile face color), and their fluent face processing was subsequently impaired. It is worth testing which particular facial feature causes impaired face processing for infant faces in chimpanzees.
The present study has some limitations. First, it is challenging to interpret the null result for the spatial mapping of face age from the present study alone. As mentioned earlier, several possibilities remain. We can tell from the results that positive evidence that chimpanzees extract an age concept from faces was not found, yet we cannot fully rule out that possibility. However, given that recognition of the age concept in non-human primates has seldom been studied, the result can be a stepping stone for future comparative cognitive studies of age or time recognition, including mental timelines. On the other hand, we found that chimpanzees show asymmetric performance when discriminating between adult faces and infant faces, which is seemingly similar to the human own-age bias. Nevertheless, we cannot conclude that the efficient face processing for adult faces in our chimpanzees is the same phenomenon as the own-age bias in humans. This is because it is unclear whether chimpanzees of other age classes, such as juveniles, also have efficient face processing selectively for their cohort's faces. To understand whether this bias in chimpanzees is identical to the own-age bias in humans, a future study needs to examine this issue using chimpanzees from a wider age range, both as participants and as stimuli.
In conclusion, the present study explored two dimensions of facial age recognition in chimpanzees: spatial mapping and the effect of different amounts of experience. The current data did not support the existence of a spatial mapping of the age categories in chimpanzees. However, we found evidence of superior processing of adult faces compared to infant faces in adult chimpanzees. As far as we know, this is the first report of an asymmetric face processing efficiency between infant and adult faces in non-human primates. This finding reveals a new aspect of chimpanzees' face recognition related to age, which is seemingly similar to that of humans.
"year": 2021,
"sha1": "0662a82e0d290b9603aefe42ec14268e200002b1",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10071-021-01564-7.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "b8e4ba0b2622933e577e0d8c0590c7b770f12651",
"s2fieldsofstudy": [
"Biology",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Predicting Three-way Interactions of Proteins from Expression Profiles Based on Correlation Coefficient
In this study, we propose a new method to predict three-way interactions among proteins based on the correlation coefficients of protein expression profiles. Although three-way interactions have not been studied well, this kind of interaction is important for understanding the system of life. Previous studies reported three-way interactions based on switching mechanisms, in which a property or an expression level of a protein switches the mechanism of interaction between two other proteins. In this paper, we propose a new method to predict three-way interactions based on a model in which A and B work together to affect the expression level of C. We present an algorithm to predict the combinations of three proteins that have a three-way interaction, and evaluate it using our real proteome data.
Introduction
Interactions among proteins have been regarded as a key issue in understanding the systems of living creatures, because these systems consist of a vast assortment of proteins, and bodies are maintained by the complex interactions among these proteins. Although there is considerable knowledge about the interactions among proteins, it is still not enough to construct a global image of biological activities.
Many studies have been conducted to investigate pairwise interactions between two genes or two proteins. In the case of genes, correlation coefficients of the expression levels in microarray expression profiles are often used for this purpose. As for pairwise protein-protein interactions (PPIs), many methods have been proposed because a variety of data are available to predict direct interactions of proteins. The most direct approach to tackle PPIs is to identify evidence of PPIs through in vitro or in vivo experiments, such as the yeast two-hybrid [1] or tandem affinity purification methods [2]. Pairwise interactions can also be predicted using public databases. Several studies use sequence data, such as the method based on conservation of gene neighborhoods [3], the Rosetta Stone method [4], [5], and the sequence-based co-evolution method [6]. Many advanced methods have been proposed [7], [8], [9] that utilize public data such as 3D structures, domains, motifs, pathways, and phylogenetic profiles. These methods and their results are available on the Web [10], [11], [12], [13], [14].
To infer more complex interactions, there are studies that identify interaction networks from expression data, such as the Boolean network model [15], [16] and the Bayesian network model [17]. Note that in many cases, these models treat gene interaction networks, but they can equally be applied to protein networks. These studies infer a network representing causal relationships among proteins, including interactions among more than two proteins. However, the inferred networks include both two-way and three-or-more-way interactions, so the combinatorial effects that emerge only when the related proteins gather cannot be retrieved separately. Note that this property also appears in multiple linear regression analysis, which is one of the basic statistical analyses for retrieving the relation among three or more variables. Another drawback of the Boolean and Bayesian network models is that, to infer reliable interaction networks, these methods require large samples of expression data.
Only a few studies have been conducted so far on three-way interactions. Zhang et al. studied the interaction among a triplet of genes by comparing the correlation coefficients of genes A and B in two cases: when another gene C is expressed and when it is not [18]. Kayano et al. used expression profiles and genotype data to detect switching of the correlation sign, i.e., between positive and negative correlations, according to the genotype [19]. Their three-way interactions are pure three-way interactions separated from the two-way interaction effects, but they are quite limited because they detect an interaction of two genes that is switched by another binary-state property.
In this paper, we present another method to infer three-way interactions among proteins from expression profiles. Our method is based on the PPI model in which a pair of proteins A and B work together to affect the expression level of C, and the amount of the effect is proportional to the number of A-B pairs that work together with C. The remainder of this paper is organized as follows. In Section 2, we describe the protein interaction model used in our method and present the basic idea to retrieve the combinatorial effect among the three proteins. In Section 3, we describe the statistical operations to estimate the size of the combinatorial effect. In Section 4, we evaluate our method by applying it to real protein expression profiles, and finally, in Section 5 we conclude our study.
Estimating Total Interaction Effect among Three Proteins
Expression Profile
An expression profile is a data set that consists of the expression levels e_ij of proteins i ∈ I included in biological samples j ∈ J, where I is the set of proteins and J is the set of samples. Because we also refer to proteins as A, B, C, and so on, the expression level of protein A in sample j is denoted by e_Aj. Expression profiles are frequently used in biological analysis since several high-throughput experiments to obtain expression profiles became popular. For proteins, experiments such as 2D-electrophoresis, protein chips, and mass spectrometry-based methods are available. Typically, the number of proteins included in a profile ranges from several hundred to thousands. Note that in this study, we apply our method to protein expression profiles because our interaction model supposes a relationship among proteins. However, it is possible to apply our proposed method to gene expression profiles. For genes, the microarray technique is the most popular method of obtaining expression profiles, where thousands to tens of thousands of genes are treated simultaneously. The number of samples is usually several tens, and at most hundreds.
For example, Fig. 1 illustrates the process of a 2D-electrophoresis-based experiment [20], from which we obtained the expression profiles used in the evaluation part of this paper. First, we obtain a 2D electrophoresis image from each target sample through biological experimental processes. Second, we identify areas where proteins are separated using image processing software, and we compute the expression level of each spot. Third, we match the spots of the same protein across the images. Finally, we normalize the values of the expression levels using a normalization method as a preprocessing step for the data mining that follows. As a result, we obtain a set of protein expression values for each protein in each sample, called expression profiles, as shown in Fig. 2.
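As a data structure, an expression profile is simply a |I| × |J| matrix; a minimal sketch with placeholder values is shown below (the dimensions and the log-normal values are illustrative only, not real measurements).

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
proteins = [f"protein_{i}" for i in range(1, 501)]  # |I|: hundreds to thousands
samples = [f"sample_{j}" for j in range(1, 41)]     # |J|: typically several tens
profile = pd.DataFrame(
    rng.lognormal(mean=0.0, sigma=0.5, size=(len(proteins), len(samples))),
    index=proteins, columns=samples)

e_A = profile.loc["protein_1"].to_numpy()  # e_Aj: expression of A across samples
```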
Basic Strategy to Predict Interactions
The PPI model that we propose in this study is shown in Fig. 3. Three proteins, A, B, and C, are committed in this model. Proteins A and B each directly or indirectly interact with C, but if both A and B are expressed together, they have a significantly larger effect on C. In this study, we call the two-way effects the sole effects, i.e., the effect of protein A on C and of B on C. The three-way effect on C that emerges only when the two proteins A and B are expressed together is called the combinatorial effect. Then, we call the composition of the two sole effects and the combinatorial effect the total effect. From our expression dataset, we aim to retrieve the combinatorial effect of A and B, which would not be seen if A and B existed independently. To measure this combinatorial effect, we first estimate the total effect of A and B on C, and then we subtract the two sole effects of A and B from the total effect.
Our algorithm to estimate the combinatorial effect level is based on the correlation coefficient. The outline of our algorithm is shown in Fig. 4. For a triplet of proteins A, B, and C, the following four steps are used. 1) First, we compute the two sole effect levels. The sole effect level between two proteins is simply computed as the correlation coefficient between them. We denote the sole effect level from A to C by α, and that from B to C by β; that is, α = cor(A, C) and β = cor(B, C), where the function cor denotes the correlation coefficient. 2) Second, we estimate the total effect level t using the algorithm described in Section 2.3. 3) Third, we perform a statistical simulation to compute the total effect level under the two assumptions that the two sole effect levels are α and β, respectively, and that no combinatorial effect exists among the triplet. Note that as α and β, we use the sole effect levels obtained from the target triplet A, B, and C in the real data. Through a sufficient number of repetitions, the simulation generates the distribution S. The details of the simulation are explained in Section 3.1. Because S represents the distribution of total effect levels under no combinatorial effect, the location of t on S shows how rare the computed total effect level t is, and it directly indicates the combinatorial effect level. 4) Fourth, we measure the probability of the value t occurring with respect to S, as the statistical z value. The z value is defined as z = (t − μ)/σ, where μ is the average and σ is the standard deviation of S. This z value is the estimated strength of the combinatorial effect of the target triplet A, B, and C; if z is high, then the combinatorial effect level among them is also high.
To complete our algorithm, in Section 2.3, we show the algorithm to estimate the total effect level t. In Section 3.1 we provide the detailed algorithm of the statistical simulation.
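Putting the four steps together, a minimal end-to-end sketch is shown below. The helper names `estimate_total_effect` and `simulate_null_total_effects` are illustrative; they are sketched in the sections that follow and are not the original implementation.

```python
import numpy as np

def combinatorial_effect_z(e_A, e_B, e_C, n_sim=1000):
    # Step 1: sole effect levels as plain correlation coefficients.
    alpha = np.corrcoef(e_A, e_C)[0, 1]
    beta = np.corrcoef(e_B, e_C)[0, 1]
    # Step 2: total effect level t via the ratio-scan algorithm (Section 2.3).
    t = estimate_total_effect(e_A, e_B, e_C)
    # Step 3: distribution S of total effect levels under no combinatorial effect.
    S = simulate_null_total_effects(alpha, beta, len(np.asarray(e_C)), n_sim)
    # Step 4: z value locating t on S; a high z suggests a combinatorial effect.
    return (t - S.mean()) / S.std()
```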
Estimating the Total Effect Level t
We estimate the total effect level t by means of the correlation coefficient. According to our protein interaction model, the number of A-B working units indicates the total effect level. If we could assume that the same amount (in expression level) of A and B forms a working unit, we would only need to consider min(e_Aj, e_Bj) as the number of working units in sample j. However, this is not correct, because the expression level per molecule differs among proteins. Thus, we should find the optimum ratio between the expression levels of A and B, i.e., the point at which they have the largest effect on C. Figure 5 illustrates this problem. In Fig. 5(a), the number of working units is not correctly estimated because the optimum ratio of A and B is not achieved. As a result, min(e_Aj, e_Bj) and C do not show a high correlation. However, if the ratio of A and B is optimal, as shown in Fig. 5(b), the number of working units will fit C, resulting in a high correlation coefficient.
If there is no interaction among A, B, and C, then the correlation coefficient between the working units and C will not be high. To compute the optimum ratio of A and B, we examine every possible ratio of A and B and choose the best one, i.e., the ratio that provides the highest correlation coefficient. We find all possible ratios of A and B as follows. Let k_min be the minimum ratio, i.e., k_min = min_{j∈J}(e_B^j/e_A^j); the correlation coefficient takes the same value for any k ≤ k_min. Similarly, let k_max be the maximum ratio, i.e., k_max = max_{j∈J}(e_B^j/e_A^j); then the correlation coefficient always takes the same value if k ≥ k_max. This indicates that we should use values of k between k_min and k_max. To examine the possible correlation coefficients between M_{kA,B} and C, we try every value of k = e_B^j/e_A^j (j ∈ J) and take the maximum correlation coefficient. If |J| is too large, we can uniformly skip several values of k to reduce the computational load.
In summary, for a protein expression data set including |I| proteins and |J| samples, we compute the correlation coefficients between M_{k_j A,B} and C for every distinct k_j = e_B^j/e_A^j (j ∈ J), and take the maximum value. This maximum is the total effect level among A, B, and C, denoted by t.
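The ratio scan can be written down compactly. The sketch below assumes, as the text implies, that the working-unit proxy at ratio k is M_{kA,B} = min(k·e_A, e_B) computed per sample; the variable names are illustrative, and the uniform skipping for large |J| is shown as an option.

```python
import numpy as np

def total_effect(eA, eB, eC, max_ratios=None):
    """Total effect level t: best correlation between M_{kA,B} and C over
    all distinct candidate ratios k = eB_j / eA_j (j in J)."""
    ratios = np.unique(eB / eA)
    if max_ratios is not None and len(ratios) > max_ratios:
        ratios = ratios[:: len(ratios) // max_ratios]  # uniform skipping
    best = -1.0
    for k in ratios:
        m = np.minimum(k * eA, eB)       # working units at scale k
        best = max(best, np.corrcoef(m, eC)[0, 1])
    return best
```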
Computing the Distribution S: Total Interaction Level without Combinatorial Effects
In this section, we present the algorithm and the statistical model used to compute the distribution S for a particular combination of proteins A, B, and C, where S is the statistical distribution of the total effect levels under the assumption that there is no combinatorial effect among A, B, and C. In other words, we assume only the two sole effects α and β over A, B, and C, and do not consider any other effect among them.
Note that in our simulation, we use the normal distribution for A, B, and C as the most general distribution. Furthermore, as shown in Section 4, a considerable number of proteins follow the normal distribution in the protein expression profiles used in our evaluation.
To meet the above constraints, we first generate artificial distributions of A, B, and C by drawing expression values as random variables following the normal distribution with a common average and standard deviation; that is, μ_A = μ_B = μ_C and σ_A = σ_B = σ_C, where μ_A and σ_A denote the average and the standard deviation of protein A, respectively. We discuss the validity of this condition in Section 3.2. In addition, because of the constraint of the sole effects, the distributions should satisfy cor(A, C) = α and cor(B, C) = β. To make the correlation coefficient of A-C equal to α, we repeatedly exchange two expression values of A (i.e., we exchange the expression values of two samples), keeping an exchange only when it brings the correlation coefficient of A-C closer to α. The same step is repeated for B until the correlation coefficient of B-C reaches β.
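This exchange procedure is a simple stochastic hill climb. A minimal sketch (ours, not the original code) that keeps a swap only when it moves the correlation toward the target could look as follows.

```python
import numpy as np

rng = np.random.default_rng(0)

def tune_correlation(x, c, target, n_iter=20000):
    """Repeatedly swap two expression values of x, keeping a swap only if
    it brings cor(x, c) closer to the target sole effect level."""
    x = x.copy()
    err = abs(np.corrcoef(x, c)[0, 1] - target)
    for _ in range(n_iter):
        i, j = rng.integers(len(x), size=2)
        x[i], x[j] = x[j], x[i]
        new_err = abs(np.corrcoef(x, c)[0, 1] - target)
        if new_err < err:
            err = new_err
        else:
            x[i], x[j] = x[j], x[i]      # revert a swap that moved away
    return x
```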
In this manner, we obtain random normal distributions of A, B, and C for which cor(A, C) = α and cor(B, C) = β. By applying the algorithm presented in Section 2.3 to these artificial distributions, we obtain the total effect level under the assumption of no combinatorial effect. With a sufficient number of repetitions of this process, i.e., distribution generation for A, B, and C followed by total effect level computation, we finally obtain the distribution S, which represents the probability of the total effect levels under the assumption of no combinatorial effect. Computing the distribution S for every combination of proteins, however, requires considerable computational run time. To reduce it, we prepare, in advance of the computation of total effect levels, a table of the average and the standard deviation of the distribution S for each value of α and β. In this study we computed the table at intervals of 0.05 in α and β, as shown in Fig. 7. From the table, we use the values of α and β nearest to those of the given triplet as the approximation.
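Putting the pieces together, the null distribution S and the precomputed (mean, sd) table over a 0.05 grid in α and β might be generated as sketched below. This reuses the tune_correlation and total_effect sketches above; the sample count matches the data set described later, but the repetition count here is purely illustrative (the paper's table used far more trials).

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_S(alpha, beta, n_samples=195, n_rep=200):
    """Distribution of total effect levels assuming only the two sole
    effects alpha and beta, with no combinatorial effect."""
    out = np.empty(n_rep)
    for r in range(n_rep):
        a = rng.normal(10.0, 1.0, n_samples)   # common mean and sd for A, B, C
        b = rng.normal(10.0, 1.0, n_samples)
        c = rng.normal(10.0, 1.0, n_samples)
        a = tune_correlation(a, c, alpha)
        b = tune_correlation(b, c, beta)
        out[r] = total_effect(a, b, c)
    return out

# Lookup table on a 0.05 grid; per-triplet z values then use the nearest cell.
grid = np.round(np.arange(0.0, 1.0, 0.05), 2)
table = {}
for a in grid:
    for b in grid:
        s = simulate_S(a, b)
        table[(a, b)] = (s.mean(), s.std())
```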
Discussion of the Distribution S
Now we describe how the distribution S varies when the averages and the standard deviations of A, B, and C vary, and we conclude that using a common average and standard deviation in computing the distribution S is appropriate.
We first note that μ C and σ C have no effect on the distribution S , because the correlation coefficient is the same even when we add or multiply a constant to all the expression values of C. Therefore, we concentrate on the averages and the standard deviations of A and B.
Without loss of generality, we can fix μ_B and σ_B and vary μ_A and σ_A. Figure 8 shows the distribution S where σ_A and σ_B are fixed at 1, μ_B is fixed at 10, and μ_A varies between 10 and 30. This result was obtained through the computation described in Section 3.1, where α and β are both 0.4 and the number of trials is 10,000,000. The result clearly shows that the correlation coefficient between M_{kA,B} and C decreases as the difference between μ_A and μ_B increases, taking its highest value when μ_A = μ_B.
Regarding the variation of σ_A: in our method, we select the best correlation coefficient between M_{kA,B} and C among several possible ratios k. This implies that if σ_A and σ_B differ, such as in the case where σ_B = pσ_A, then the case of μ_B = pμ_A has the same total effect level as the case where μ_A = μ_B and σ_A = σ_B. (Note that μ_A and σ_A are both multiplied by p when all the expression levels are multiplied by p.) Hence the case of μ_A = μ_B and σ_A = σ_B takes the maximum value of the total effect levels.
The above discussion shows that the precomputed distribution table, shown in Fig. 7, gives the largest estimated values of S for each α and β. Therefore, in our method, the combinatorial effect cannot be overestimated, i.e., it is always estimated at less than or equal to the true value.
Note that in the simulation we can use any value of μ_A = μ_B = μ_C and σ_A = σ_B = σ_C, because the obtained z values are independent of these values as long as the equalities hold. The distribution S does not necessarily follow a normal distribution, although its curve is similar to a normal curve. This does not violate the validity of our algorithm, because the z value is generally usable for a single-peak, mountain-shaped distribution even if it is not exactly normal.
Experiment
We evaluate the proposed method by applying it to real protein expression profiles obtained by a 2D electrophoresis-based experiment [20]. We implemented the proposed method in C++. The input data set includes 195 samples and 879 proteins, and the data were processed by global normalization [21] in advance.
Because our method assumes the normal distribution for the expression levels of A, B, and C, we first confirm whether the expression data follow the normal distribution. For each protein, we omit, as outliers, values that depart from the average by more than 2.5 times the standard deviation. We then apply the Jarque-Bera test [22] to judge whether the expression of each protein follows the normal distribution. The result shows that 454 of the 879 proteins follow a normal distribution at the 5% significance level. In the following evaluation, we use these 454 proteins.
To maintain the reliability of the results, we performed several manipulations on the expression data. First, we omitted outlier expression levels, ignoring values farther than 2.5 standard deviations from the average. Second, we omitted a combination of A, B, and C if the number of non-null expression values was less than 80% of all the expression values of A, B, and C. Finally, for the scale k that gives the best correlation coefficient between M_{kA,B} and C, if more than 70% of the values in M_{kA,B} were chosen from either kA or B alone, we excluded the combination of A, B, and C.
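A hedged sketch of this preprocessing, using scipy's Jarque-Bera implementation; the 2.5-sd trimming and the 5% level follow the text, while the array handling is our own.

```python
import numpy as np
from scipy.stats import jarque_bera

def keep_protein(expr, sd_cut=2.5, level=0.05):
    """Trim values beyond sd_cut standard deviations of the mean, then keep
    the protein only if Jarque-Bera does not reject normality at `level`."""
    e = np.asarray(expr, dtype=float)
    e = e[np.isfinite(e)]                       # drop null expression values
    e = e[np.abs(e - e.mean()) <= sd_cut * e.std()]
    stat, p = jarque_bera(e)
    return p >= level
```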
Results
The histogram of the retrieved combinations with z values of more than 7 is shown in Fig. 9; Fig. 10 is an expanded view of Fig. 9. Figure 11 shows the histogram for the case in which we assume that there is no combinatorial effect, calculated by accumulating normal-distribution trials for the same number of combinations as in the input data. Comparing these histograms, the real data have significantly larger z values than the no-combinatorial-effect case. These results suggest that the real input data include a significant number of combinations that have the combinatorial effect. Figure 12 shows the scatter plots of the best-score combination as a typical example. The vertical axis represents the expression level of C, and the horizontal axis represents the expression levels of A, B, and M_{kA,B}. In this case, the correlation coefficient of A-C is 0.0449609, that of B-C is 0.0233452, and that of M_{kA,B}-C is 0.450916. The correlation coefficients of A-C and B-C are quite low, but the value is significantly higher for M_{kA,B}-C, which results in a very high z value of 12.35. We confirmed that, similar to this example, the majority of high-z combinations do not have large correlation coefficients of M_{kA,B}-C.
Effect of Outliers
It is well known that outliers significantly affect correlation coefficients. Because our method is based on correlation coefficients, its results are also significantly affected by outliers. In this section, we show the effect of outliers and the necessity of normality filtering as a preprocessing step for our algorithm.
We first show a typical example of the outlier effect. Figure 13(c) shows a significant outlier that reduces β (= cor(B, C)). With this incorrect value of β, the combinatorial effect among the triplet is estimated to be significantly larger than the true value. This type of false positive can be excluded by, for example, removing outlier samples that deviate from the average by more than 2.5σ, as we did in the evaluation. Furthermore, we found another type of false positive, illustrated in Fig. 14. In the A-C distribution of Fig. 14(b), many samples are assembled on the left side and several outlier-like samples are sparsely plotted on the right side. Because these values are not reflected in M_{kA,B}, the combinatorial effect is not appropriately estimated; that is, this type of false positive occurs due to such abnormal distributions. In fact, the number of this type of false positive is large; the ratio of false-positive distributions is shown in Fig. 15, which presents the histogram of z values obtained without applying the normality-test filter. Here, we judged whether each high-z combination was a false positive. Although the judgment was done subjectively, it is apparent that a significant portion of the high-z combinations are false positives that should be excluded. For this purpose, our algorithm applies filtering with the normality test. (As described in Section 4.1, we selected 454 of the 879 proteins using the Jarque-Bera test.) As a result, the filtered results shown in Figs. 9 and 10 include very few false-positive distributions. From these results, we conclude that the normality-test filter has a significant effect in excluding false-positive distributions.
We further note that limiting the range of k is also a way to reduce outlier effects, although its effect is much smaller than that of the normality test.
Validation
Fig. 16. An interaction network built with a combination of proteins (A, B, and C) retrieved by our proposed method, on the basis of four available public PPI repositories. Carbonic anhydrase 2 (CA2), as protein A, and vimentin (VIM), as protein B, each have an independent, direct interaction with dynein light chain 8 (DYNLL1), as protein X. In addition, DYNLL1 indirectly interacts with heat shock 60 kDa protein 1 (HSPD1), as protein C, via heat shock cognate 71 kDa protein (HSPA8) and heat shock protein 70 (HSPA1A). The experimental confirmation methods of the PPIs were mass spectrometry with (a, b, c, and e) or without (d) co-immunoprecipitation.

As a result of applying our proposed method to a real protein expression dataset obtained by a 2D-electrophoresis proteomic analysis and cutting off z values above 7.0, we obtained 107 combinations of three known proteins that were estimated to show combinatorial effects. To interpret the retrieved combinatorial effects among the three proteins, we validated the interaction network of these proteins using four available public PPI repositories: the Biomolecular Object Network Databank (BOND) [11], the IntAct database [12], the Molecular INTeractions (MINT) database [13], and the Human Protein Reference Database (HPRD) [14]. First, we assumed that proteins A and B directly associate with each other and that the A-B complex directly or indirectly interacts with protein C. In this case, none of the databases contained data sets directly detecting an interaction between proteins A and B, so we could not find features of the combinatorial effects under this hypothesis. Next, we hypothesized that each of the proteins A and B directly interacts with a fourth protein X, that A and B have no direct interaction with each other, and that protein X directly or indirectly interacts with protein C. In this situation, we discovered one candidate combination with dynein light chain 8 (DYNLL1) as protein X, in which carbonic anhydrase 2 (CA2) as protein A, vimentin (VIM) as protein B, and heat shock 60 kDa protein 1 (HSPD1) as protein C were included, as shown in Table 1. Figure 16 shows an interaction network built with these proteins, retrieved by our proposed method along with the PPI repositories. The predicted interaction network comprises a total of five proteins, in which the PPIs were identified using the yeast two-hybrid system and mass spectrometry with co-immunoprecipitation. Published literature suggests that the identified interaction networks may be involved in the apoptotic pathway [24], [25], [26]. Thus, this result suggests that the combinatorial effects retrieved by applying our proposed method to a real protein expression data set can predict a network topology. Furthermore, our proposed method may help deduce interaction networks of proteins that cannot be predicted from the observed biology alone.
Discussion
In this study, we demonstrated an example of predicting an interaction network by applying our proposed method to a proteomic dataset. One of the difficulties in evaluating our method is that we cannot know whether the other combinations of three known proteins are false positives, because the interactions recorded in all public databases represent only part of the primary literature. For the same reason, it is currently difficult to expect known, typical three-way interactions to be retrieved with our method. Nevertheless, further investigation with more data sources is expected to confirm the accuracy of our proposed method.
Conclusion
In this paper, we proposed a method to retrieve three-way interactions among three proteins by using correlation coefficients. Our method estimates the combinatorial effect level by subtracting the two sole effects A-C and B-C from the total effect. Because our method uses correlation coefficients, we can predict three-way interactions using a smaller number of samples compared with Bayesian or Boolean networks.
We applied the proposed method to a real protein-expression data set [20]. From the result, we inferred that several hundred combinations have a three-way interaction. Note that it is currently difficult to precisely confirm the accuracy of our result, because various types of indirect interactions are possible among proteins and only some of the interactions are currently reported in the literature. However, by identifying a combination of three proteins having a combinatorial interaction, we showed the validity of the proposed method in helping to explore protein interactions.
Note that cells contain various types of protein interaction networks: binary interactions, pathways, complexes, and network topologies [23]. Analysis of protein interaction networks can uncover unforeseen biological functions of known proteins. Therefore, predicting PPIs with computational models is important for understanding the cellular roles of proteins.
In the future, to increase the accuracy and the validity of the proposed method, we plan to identify more combinations of proteins in which three-way interactions are identified.
"year": 2012,
"sha1": "b2b985ee8238c34321ca66cde05e72a2ad896c4a",
"oa_license": null,
"oa_url": "https://www.jstage.jst.go.jp/article/ipsjtbio/5/0/5_34/_pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "852ae72a9a4c3445a538b2e3fd3ef742c26067a7",
"s2fieldsofstudy": [
"Computer Science",
"Biology"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
Energy spectrum of two-dimensional excitons in a non-uniform dielectric medium
We demonstrate that, in monolayers (MLs) of semiconducting transition metal dichalcogenides, the $s$-type Rydberg series of excitonic states follows a simple energy ladder: $\epsilon_n=-Ry^*/(n+\delta)^2$, $n$=1,2,\ldots, in which $Ry^*$ is the Rydberg energy scaled by the dielectric constant of the medium surrounding the ML and by the reduced effective electron-hole mass, whereas the ML polarizability is only accounted for by $\delta$. This is justified by the analysis of experimental data on excitonic resonances, as extracted from magneto-optical measurements of a high-quality WSe$_2$ ML encapsulated in hexagonal boron nitride (hBN), and well reproduced with an analytically solvable Schr\"odinger equation when setting the electron-hole potential in the form of a modified Kratzer potential. Applying our convention to other MLs (MoSe$_2$, WS$_2$, and MoS$_2$) encapsulated in hBN, we estimate an apparent magnitude of $\delta$ for each of the studied structures. Intriguingly, $\delta$ is found to be close to zero for WSe$_2$ as well as for MoS$_2$ monolayers, which implies that the energy ladder of excitonic states in these two-dimensional structures resembles that of the Rydberg states of a three-dimensional hydrogen atom.
The Coulomb interaction in a non-uniform dielectric medium 1,2 is one of the central points in investigations of large classes of nanoscale materials, such as, for example, graphene 3,4 and other atomically thin crystals including their heterostructures 5, as well as colloidal nanoplatelets 6 and two-dimensional perovskites 7,8. In recent years this problem has been particularly widely discussed in reference to the vast body of investigations of excitons in monolayers (MLs) of semiconducting transition metal dichalcogenides (S-TMDs) [9][10][11][12][13]. Surprisingly, at first sight, the Rydberg series of s-type excitonic states in these archetypes of two-dimensional (2D) semiconductors does not follow the model system of a 2D hydrogen atom [14][15][16], with its characteristic energy sequence, ∼ 1/(n − 1/2)², of states with a principal quantum number n. The main reason for this is the dielectric inhomogeneity of the 2D S-TMD structures, i.e., MLs surrounded by (deposited on or encapsulated in) alien dielectrics. At large electron-hole (e-h) distances, the Coulomb interaction scales with the dielectric response of the surrounding medium, whereas it is significantly weakened at short e-h distances by the usually stronger dielectric screening in the 2D plane. A common approach to account for the excitonic spectra of S-TMD MLs refers to the numerical solutions of the Schrödinger equation in which the e-h attraction is approximated by the Rytova-Keldysh (RK) potential 1,2. The RK approach has been used to explain a number of excitonic features in S-TMD MLs 17. However, it is only solvable numerically. A more phenomenological and intuitive approach, presented below, may offer an alternative solution to this problem.
In this Letter, we demonstrate that the energy spectrum ǫ_n (n = 1, 2, . . .) of the Rydberg series of s-type excitonic states in S-TMD MLs may follow a simple energy ladder: ǫ_n = −Ry*/(n + δ)². From magneto-optical investigations of a WSe₂ ML encapsulated in hexagonal boron nitride (hBN), we accurately establish that Ry* = 140.5 meV and δ = −0.083 in this particular S-TMD system. The ǫ_n spectrum, with δ ∼ 0, turns out to closely reflect the characteristic spectrum of a three-dimensional (3D) hydrogen atom. The ǫ_n = −Ry*/(n + δ)² ansatz is well reproduced with an analytical theoretical approach in which the e-h potential is assumed to have the form of a modified Kratzer potential 18. Ry* is identified with the effective (3D) Rydberg energy, Ry·µ/(ε²m₀), scaled by the dielectric constant ε of the surrounding hBN medium and the reduced e-h mass µ = (m_e m_h)/(m_e + m_h), with Ry = 13.6 eV; m_e and m_h are, correspondingly, the electron and hole effective masses, and m₀ is the free electron mass. The dispersion of the Ry* and δ parameters among the different studied samples (WSe₂, MoSe₂, MoS₂, and WS₂ MLs encapsulated in hBN) is discussed, and the reduced e-h masses in these ML structures are estimated.
To accurately determine the characteristic ladder of s-type excitonic resonances in the experiment, we took advantage of magneto-optical spectroscopy, a method particularly suitable for this purpose 19,20. The active part of the structure used for these experiments was a WSe₂ ML embedded between hBN layers. More details on the samples' preparation and on the experimental techniques can be found in the Supplemental Materials (SM). We measured the circular-polarization-resolved magneto-photoluminescence (PL) at low temperature (4.2 K) and in magnetic fields up to 14 T, applied in the direction perpendicular to the monolayer plane. Here we focus on the magneto-PL spectra of our WSe₂ ML, observed in the spectral range from ∼1.7 to ∼1.9 eV. As shown in Fig. 1(a) and (b), these spectra are composed of up to five PL peaks, which are clearly resolved in the range of high magnetic fields. Following a number of previous investigations 17,21-24 on similar structures, the observed PL peaks are identified with the series of excitonic resonances forming the 1s, 2s, . . . , 5s Rydberg series of the so-called A exciton 11,12. Each ns PL peak, n = 1, 2, . . . , 5, demonstrates the valley Zeeman effect. This is illustrated in Fig. 1(c), in which the energies of the σ+ and σ− polarized PL peaks are plotted as a function of the magnetic field. In accordance with previous reports, we extract g = −4.1 for the valley g-factor of the 1s resonance, but observe a significantly stronger valley Zeeman effect for all excited states (g ∼ −4.8). The latter observation is intriguing and should be investigated in more detail, which is, however, beyond the scope of the present paper.
The magnetic field evolution of the mean energies of the σ+ and σ− PL peaks is illustrated in Figs. 1(d) and (e). These energies, E_ns, are plotted as a function of the magnetic field B in Fig. 1(d), and as a function of B² in Fig. 1(e), which illustrates the characteristic but distinct behavior of the ns states in the so-called low- and high-field regimes 14,20. The high-field limit for a given ns resonance appears when l_B ≪ r_ns (l_B = √(ℏ/eB) being the magnetic length), or conversely when the binding energy of the ns state satisfies E_b^ns ≪ ℏω*_c/2. Here r_ns and E_b^ns denote, correspondingly, the mean lateral extension and the binding energy E_b^ns = E_g − E_ns of a given ns state at B = 0, ω*_c = eB/µ is the effective cyclotron frequency, and the other symbols have their conventional meaning. In the high-field limit, the energies of the ns resonances approach a linear dependence upon B, with a slope given by (n − 1/2)ℏe/µ. In the low-field limit (l_B ≫ r_ns, E_b^ns ≫ ℏω*_c/2), the ns resonances display diamagnetic shifts: E_ns(B) = E_ns(B = 0) + σB², where σ = e²r²_ns/8µ is the diamagnetic coefficient. The 1s and 2s resonances follow the low-field regime in the entire range of the magnetic fields investigated; see Fig. 1(e). The high-field regime is approached for the 5s resonance, with an approximately linear dependence of E_5s on B in the range above ∼8 T. This linear dependence, marked with a solid line in Fig. 1(d), displays a slope of 2.1 meV/T, which, compared to the expected (9/2)ℏe/µ, provides an estimate of 0.25 m₀ for the reduced mass in the WSe₂ ML. However, one may also argue that, working with magnetic fields up to 14 T only, the high-field limit is still barely developed even for the 5s state. In this context, our estimate of the reduced effective mass should be seen as an upper bound, and in the following we assume µ = 0.2 m₀ for the WSe₂ ML, following the results of experiments performed in fields up to 60 T 17.
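The reduced-mass estimate from the 5s slope is a one-line calculation; the sketch below (ours, not from the original work) simply reproduces the arithmetic quoted above.

```python
import scipy.constants as sc

slope = 2.1e-3 * sc.e                 # 2.1 meV/T converted to J/T
# High-field limit: E_5s grows as (n - 1/2) * hbar * e * B / mu with n = 5,
# so the slope equals (9/2) * hbar * e / mu.
mu = 4.5 * sc.hbar * sc.e / slope
print(mu / sc.m_e)                    # ~0.25 free-electron masses
```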
In the following we focus on the energy sequence E_ns of the 1s, 2s, . . . , 5s excitonic resonances as they appear in the absence of a magnetic field. As shown in Fig. 1(e), the apparent E_ns values are accurately determined by linear extrapolation of the E_ns versus B² dependences to B = 0. Next, we put forward the hypothesis that the energy sequence E_ns obeys the following rule:

E_ns = E_g − Ry*/(n + δ)²,    (1)

where, at this point, E_g, Ry*, and δ should be regarded as unknown adjustable parameters. To test the above formula against the experimental data, we note that Eq. 1 implies that (for example) the ratio (E_3s − E_1s)/(E_2s − E_1s) depends only on δ; reading this ratio from the experiment, we extract δ = −0.083. With this value we find (see Fig. 2) that our experimental E_ns series perfectly matches Eq. 1, and, at the same time, we determine the two other parameters, E_g = 1.873 eV and Ry* = 140.5 meV (or, conversely, an exciton binding energy E_b = E_g − E_1s = Ry*/(1 − 0.083)² = 167 meV). The above E_g and E_b values are in very good agreement with those already reported in the literature 17. Relevant for our further analysis is the observation that the derived value of Ry* coincides well with the effective (3D) Rydberg energy Ry* = 13.6 eV·µ/(ε²m₀) = 134.3 meV, scaled by the dielectric constant of the surrounding hBN material, ε = ε_hBN = 4.5 25, and the reduced effective mass µ = 0.2 m₀ 17 of the WSe₂ ML. Intriguingly, the extracted δ parameter is close to zero, which implies that the ǫ_n = E_ns − E_g Rydberg series found in a 2D system resembles that of a 3D hydrogen atom (ǫ_n ∼ −1/n²).
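The three-parameter fit reduces to a one-dimensional root search because the ratio (E_3s − E_1s)/(E_2s − E_1s) depends on δ alone. A minimal sketch follows; since the raw E_ns values are not tabulated here, the level energies are generated from the fitted parameters, and the code merely demonstrates that the ratio method recovers them.

```python
import numpy as np
from scipy.optimize import brentq

Eg, Ry, delta = 1.873, 0.1405, -0.083            # eV, eV, dimensionless (fitted)
E = {n: Eg - Ry / (n + delta) ** 2 for n in (1, 2, 3)}

ratio = (E[3] - E[1]) / (E[2] - E[1])            # depends only on delta
f = lambda d: ((1 + d) ** -2 - (3 + d) ** -2) / \
              ((1 + d) ** -2 - (2 + d) ** -2) - ratio
d_fit = brentq(f, -0.4, 0.4)                     # solve for delta
Ry_fit = (E[2] - E[1]) / ((1 + d_fit) ** -2 - (2 + d_fit) ** -2)
Eg_fit = E[1] + Ry_fit / (1 + d_fit) ** 2
print(d_fit, Ry_fit, Eg_fit)                     # recovers -0.083, 0.1405, 1.873
```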
On the theoretical side, the problem of the excitonic spectrum in S-TMD MLs is commonly solved by invoking the Rytova-Keldysh potential 1,2 U_RK(r) (see the purple curve in Fig. 3) to account for the specific character of the e-h attraction in these systems. At large e-h distances r, U_RK(r) coincides with the usual Coulomb potential, U_RK(r) ∼ −e²/εr (blue curve in Fig. 3), which scales with the dielectric constant ε of the material surrounding the monolayer. On the other hand, U_RK(r) ∼ log(rε/r₀) when r is small, which accounts for the effective dielectric screening length r₀ = 2πχ_2D of the system, where χ_2D is the 2D polarizability of the S-TMD ML.
Whereas previous efforts have largely focused on numerical studies of this problem, we show that our model provides an analytical solution in good agreement with the experimental results discussed above. We propose to replace U_RK(r) with an approximate potential U_app(r), taken in the form of a piecewise function: the sub-function U_cor(r) defines U_app(r) at small distances r (the core domain), while the external potential U_ext(r) corresponds to U_app(r) in the region outside of the core.
We choose the external potential in the form of the modified Kratzer potential 18,

U_ext(r) = −(e²/r₀)(r*₀/r − g²r*₀²/r²),    (2)

where r*₀ = r₀/ε is the reduced screening length and g is a tunable parameter. For the case of g² = 0.21, U_ext(r) fits U_RK(r) in the region r > r_min = 0.46 r*₀ with a relative deviation of less than 5%. For the WSe₂ ML encapsulated in hBN, the minimal distance r_min = 4.6 Å is comparable with the lattice constant a = 3.28 Å 26 of WSe₂ (see Fig. 3 for a comparison).
The Schrödinger equation with the Kratzer potential (2) is exactly solvable, providing the excitonic spectrum of the s-type states (see SM for details):

ǫ_n = −Ry*/(n + gκ − 1/2)²,    (3)

in which κ² = 2r*₀/a*_B and a*_B = ℏ²ε/µe² is the effective Bohr radius. The effective Rydberg constant Ry* = e²/2εa*_B sets the energy scale of the system, while δ = gκ − 1/2 defines the relative positions of the energy levels in the spectrum. Since gκ ∝ √(µr₀)/ε, the parameter δ is system dependent and its value can be tuned, in particular, by modifying the dielectric constant ε. We note that, for a given material, Ry* ∝ 1/ε² and δ + 1/2 ∝ 1/ε. Such scaling laws, as well as the energy sequence (3), can also be derived from numerical simulations with the U_RK(r) potential at relatively large ε (see SM for details). Note that Eq. 3 is the analog of our experimentally found relation given by Eq. 1.
In the following, we introduce U_cor(r), which replaces the Kratzer potential at small distances r, comparable in our particular case with the lattice constant of WSe₂. We choose the constant attractive potential U_cor(r) = V₀. Below we demonstrate that U_cor(r) does not change the ∝ (n + δ)⁻² behaviour of the spectrum and modifies only the δ parameter.

FIG. 4. Low-temperature PL spectra of S-TMD MLs at T = 5 K. The pink vertical arrows denote the estimated band-gap energies E_g. The chosen spectral regions are scaled for clarity. Typically for S-TMD monolayers, the most pronounced emission feature seen in our spectra is due to the 1s excitonic resonance, accompanied by low-energy peaks commonly assigned to different excitonic complexes 23,27-38.
We consider the Kratzer and constant potentials as the external and core ones, respectively. We choose the parameter g² = 0.21 and take the Kratzer potential to be valid up to its minimum at ξ₀ = 2g², where ξ = r/r*₀. The parameter V₀ of the core potential is chosen as the average value of U_RK(ξ) over the domain ξ ∈ [0, ξ₀]; the explicit expression involves the step function θ(x) and the constant v₀ = 1.71134 (see SM). We restrict our consideration to the s-type excitonic states and derive a spectrum of the same ǫ_n = −Ry*/(n + δ)² form (a detailed description is given in the SM). Both resulting values, Ry* = 134 meV and δ = −0.099, match their experimentally obtained counterparts (derived with the aid of Eq. 1), 140.5 meV and −0.083, respectively.
The model proposed above accounts well for the experimental results obtained for the WSe₂ monolayer, and it is obviously interesting to test it on other S-TMD materials as well. Unfortunately, the observation of a rich Rydberg spectrum of excitonic states in S-TMD MLs seems, so far, to be uniquely reserved for WSe₂ MLs. Nevertheless, for all the other S-TMD MLs studied, i.e., MoS₂, WS₂, and MoSe₂ MLs encapsulated in hBN, we do experimentally observe the 2s in addition to the 1s excitonic resonance (PL and reflectance contrast spectra); see Fig. 4 and Fig. S5 in the SM. The energy positions, E_1s and E_2s, of the 1s and 2s resonances (of the A exciton) are directly read from the data shown in Fig. 4. Of interest is the energy difference (E_2s − E_1s) = ∆E^exp_2s−1s listed in Table I for all four MLs investigated. As shown in Fig. 4, the PL peaks associated with the excited excitonic states are followed by noticeable PL tails extending to higher energies. We believe that these PL tails penetrate above the band-gap energies, which are, however, not spectacularly marked in the spectra. We note, however, that in the case of our exemplary WSe₂ ML, the PL intensity at the band-gap energy (accurately estimated from the magneto-PL data and marked with a pink arrow in Fig. 4) amounts to 5% of the intensity of the 2s exciton PL peak. Applying the same convention to all the spectra presented in Fig. 4, we estimate the band gaps in the three other MLs, as illustrated with pink arrows in this figure. Most critical is the estimation of the band gap in the MoSe₂ ML, which requires a deconvolution of the PL spectra due to an additional signal associated with the B-exciton resonance (see SM for details). With the band gaps estimated and the energies of the 1s excitonic resonances read directly from the spectra (see Fig. 4), we extract the exciton binding energies E^exp_b = (E_g − E_1s) and list their values in Table I. Having estimated the ∆E^exp_2s−1s and E^exp_b parameters, and following our prediction that E_ns = E_g − Ry*/(n + δ)², where Ry* = Ry·µ/(ε²_hBN m₀), we derive the δ^exp and µ^exp parameters for all the MLs studied; see Table I. We find very good agreement between our estimates and the results of DFT calculations 26 for the reduced masses in WS₂ and MoS₂ MLs, while we note an apparent discrepancy for the WSe₂ and MoSe₂ MLs.
Concluding, the presented experimental and theoretical study lets us propose that the ns Rydberg series of excitonic states in S-TMD monolayers encapsulated in hBN follows a simple energy ladder: ǫ_n = −Ry*/(n + δ)², with Ry* = Ry·µ/(ε²m₀), where Ry is the Rydberg energy, µ denotes the reduced e-h mass, and ε is the dielectric constant of the surrounding material. The dielectric polarizability χ_2D of a monolayer is encoded only in δ, which, in the first approximation, is given by δ = gκ − 1/2 with g² = 0.21, where κ² = 2µr₀e²/(ℏ²ε²) and r₀ = 2πχ_2D is the characteristic 2D screening length. Strikingly, δ is found to be close to zero for the WSe₂ (and MoS₂) ML, whose ǫ_n spectrum resembles that of a 3D hydrogen atom. The proposed model may be applicable to other Coulomb-bound states (e.g., donor and/or acceptor states) and to other systems, such as colloidal platelets 6 or 2D perovskites 7. Finally, we note that our ǫ_n = −Ry*/(n + δ)² solution coincides with that expected for a hypothetical hydrogen atom in a fractional dimension N (N = 2δ + 3), which was indeed speculated 16 to mimic the spectrum of Coulomb-bound states in such systems.

This supplemental material provides: S1, a description of the preparation of the studied samples and of the experimental setups; S2, the excitonic spectrum and eigenfunctions in the Kratzer potential; S3, a numerical analysis of the excitonic spectrum in the Rytova-Keldysh potential; S4, a derivation of the excitonic spectrum in a WSe₂ monolayer encapsulated in hBN; S5, the dependence of the excitonic diamagnetic coefficients in a WSe₂ monolayer; S6, low-temperature reflectance contrast spectra of the investigated monolayers; S7, an estimation of the band-gap energy in the MoSe₂ monolayer.
S1. Samples and experimental setups
The active parts of our samples consist of a monolayer (ML) of semiconducting transition metal dichalcogenides (S-TMD), i.e. WSe 2 , MoS 2 , WS 2 , and MoSe 2 , which has been encapsulated in hexagonal boron nitride (hBN) and deposited on a bare Si substrate. They were fabricated by two-stage polydimethylsiloxane (PDMS)-based 39 mechanical exfoliation of S-TMD and hBN bulk crystals.
Micro-magneto-PL measurements were performed in the Faraday configuration using an optical-fiber-based insert placed in a superconducting magnet coil producing magnetic fields up to 14 T. The sample was mounted on top of an x-y-z piezo-stage kept in gaseous helium at T = 4.2 K. The excitation light was coupled to an optical fiber with a core of 5 µm diameter and focused on the sample by an aspheric lens (spot diameter around 1 µm). The signal was collected by the same lens, injected into a second optical fiber of 50 µm diameter, and analyzed by a 0.5-m-long monochromator equipped with a CCD camera. A combination of a quarter-wave plate and a polarizer was used to analyse the circular polarization of the signals. The measurements were performed with a fixed circular polarization, whereas reversing the direction of the magnetic field yields the information corresponding to the other polarization component, owing to time-reversal symmetry.
Investigations at zero magnetic field were carried out with the aid of a continuous flow cryostat mounted on x − y motorized positioners. The sample was placed on a cold finger of the cryostat. The excitation light was focused by means of a 50x long-working distance objective with a 0.5 numerical aperture producing a spot of about 1 µm. The signal was collected via the same microscope objective, sent through a 0.5 m monochromator, and then detected by a CCD camera.
S2. Excitonic spectrum and eigenfunctions in the Kratzer potential
We solve the two-dimensional (2D) Schrödinger equation with the Kratzer potential 18 for the wave function ψ(r) = ψ(r, ϕ),

[−(ℏ²/2µ)∇² + U_ext(r)] ψ(r, ϕ) = ǫ ψ(r, ϕ),    (6)

in which U_ext(r) = −(e²/r₀)(r*₀/r − g²r*₀²/r²) is the modified Kratzer potential. Here r is the in-plane electron-hole distance, µ denotes the reduced electron-hole mass, ε represents the dielectric constant of the material surrounding the monolayer, r*₀ = r₀/ε is the reduced screening length, and g is a tunable parameter. Taking ψ_m(r, ϕ) = e^{imϕ} φ_m(r)/√(2π) and introducing the new variable ξ = r/r*₀, we obtain the radial equation

φ''_m + (1/ξ)φ'_m + [−k² + κ²/ξ − (m² + g²κ²)/ξ²] φ_m = 0,    (7)

with k² = −2µǫr*₀²/ℏ² > 0 and κ² = 2µr₀e²/(ℏ²ε²) > 0. The solution to this eigenvalue problem is

φ_{n,m}(r) = β_{n,m} √[(n − |m| − 1)!/((2n + 2δ_m − 1) Γ(n + |m| + 2δ_m))] (β_{n,m}r)^M e^{−β_{n,m}r/2} L^{2M}_{n−|m|−1}(β_{n,m}r),    (8)

with M = √(m² + g²κ²), δ_m = M − |m|, and β_{n,m} = 2µe²/[ℏ²ε(n + δ_m − 1/2)], respectively. Here n = 1, 2, . . . is the principal quantum number, m = 0, ±1, ±2, . . . is the angular momentum quantum number, and L^α_n(x) is the generalized Laguerre polynomial. The energy spectrum of such a system is

ǫ_{n,m} = −Ry*/(n + δ_m − 1/2)².    (9)

For g = 0, our result coincides with the 2D hydrogen model 40. In the case of the s-type states (n = 1, 2, . . . and m = 0), the excitonic spectrum simplifies to

ǫ_n = −Ry*/(n + gκ − 1/2)².    (10)

We mention the following consequences of this model: (i) the energy scale (the prefactor in ǫ_n) does not depend on the screening length r₀; it coincides with the Rydberg constant Ry* for an exciton with reduced mass µ in an environment with dielectric constant ε, i.e., Ry* = µe⁴/(2ℏ²ε²); (ii) the information about the relative positions of the energy levels of the system is encoded in the denominators in Eqs. 9 and 10; (iii) the Kratzer potential lifts the Coulomb degeneracy of the s- (m = 0) and p-type (m = ±1) states, as can be noticed from Eq. (9); (iv) since Ry* ∝ 1/ε² and δ + 1/2 ∝ 1/ε, the energy ladder of the excitons can be progressively tuned by changing the dielectric constant ε of the surrounding medium. Surprisingly, the results of the numerical simulations performed with the Rytova-Keldysh potential 1,2, discussed in the next section, demonstrate similar behavior; this fact can be interpreted as an indirect confirmation that the Kratzer potential is a good approximation for the considered model. Using the wave functions obtained in Eq. (8), we calculate the mean value of r², which is useful for the analysis of the diamagnetic shift of the excitons (Eq. (11)); for the s-type states, characterized by m = 0, it takes a simpler form (Eq. (12)). Moreover, for the special case gκ = 1/2, in which Eq. 10 reproduces the three-dimensional (3D) hydrogen ladder, the mean value of r² (Eq. (13)) coincides with the mean value of r² for the 3D hydrogen atom 41, with a*₀ = ℏ²ε/(µe²) the effective Bohr radius.
The eigenfunctions (8) of the Schrödinger Hamiltonian with the Kratzer potential tend to zero at r → 0. This is a consequence of the repulsive part of the potential at short distances. Therefore, such solutions cannot be a good approximation for the exciton wave functions at small distances, since the Rytova-Keldysh potential is attractive there. In order to improve the current result, one needs to modify the Kratzer potential at small distances.
S3. Numerical analysis of the excitonic spectrum in the Rytova-Keldysh potential
We solve numerically the eigenvalue problem for the 2D Schrödinger Hamiltonian with the Rytova-Keldysh potential 1,2 and analyse the scaling laws of the spectrum both as a function of the principal quantum number n and as a function of the dielectric constant ε of the surrounding medium. We start from the radial equation for the wave function φ(r) of the s-type states, characterized by zero angular momentum, with

U_RK(r) = −(πe²/2εr*₀)[H₀(r/r*₀) − Y₀(r/r*₀)],

where H₀(x) and Y₀(x) are the zeroth-order Struve and Neumann functions. Introducing the new variables ξ = rε/r₀ = r/r*₀ and ǫ = (µe⁴/2ℏ²ε²)W = W·Ry*, we rewrite the equation in dimensionless form with b = ℏ²ε²/(µe²r₀) = a*_B/r*₀, the ratio of the natural length scales in the system. We then derive the spectrum of this differential equation as a function of ε and n.
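For readers who prefer an open numerical route to the Mathematica computation, the same s-state spectrum can be obtained with a symmetrized finite-difference diagonalization. The sketch below is ours (the grid size and box length are illustrative, and the box slightly compresses the highest states); it uses the WSe₂ parameters quoted in this section.

```python
import numpy as np
from scipy.special import struve, y0
from scipy.linalg import eigh_tridiagonal

mu, r0, eps = 0.2, 45.0, 4.5                 # m0 units, Angstrom, hBN
aB = 0.529177 * eps / mu                     # effective Bohr radius (Angstrom)
b = aB / (r0 / eps)                          # b = a*_B / r0*
Ry = 13.6057e3 * mu / eps ** 2               # Ry* in meV (~134 meV)

N, h = 12000, 0.005                          # xi grid on (0, 60] in r0* units
xi = h * np.arange(1, N + 1)
V = -np.pi * b * (struve(0, xi) - y0(xi))    # U_RK(xi) in Ry* units

# Symmetrized discretization of -b^2 (1/xi) d/dxi (xi d/dxi) + V for m = 0.
diag = 2.0 * b ** 2 / h ** 2 + V
off = -b ** 2 * (xi[:-1] + h / 2) / (h ** 2 * np.sqrt(xi[:-1] * xi[1:]))
W, _ = eigh_tridiagonal(diag, off, select='i', select_range=(0, 4))
print(W * Ry)    # 1s..5s energies in meV; the 1s should land near -167 meV
```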
According to our hypothesis described in the main text, the excitonic spectrum should be described by

(−W_n)^{−1/2} = αn + β,

where α ≃ 1, while β (and hence δ = β/α) depends strongly on ε. We therefore examine the linear behaviour of (−W_n)^{−1/2} with n.
The results of the numerical simulations (performed in Mathematica) for the case of WSe₂ (µ = 0.2 m₀ 17, r₀ = 45 Å 42) are presented in Figs. S1 and S2. Indeed, the linear growth of (−W_n)^{−1/2} as a function of n for different values of ε can be appreciated in Fig. S1, and Fig. S2 qualitatively confirms that α ≃ 1. The precision of this result (the relative deviation of the curve (−W_n)^{−1/2} − n from its average value) becomes higher for larger values of ε.

S4. Derivation of the excitonic spectrum in a WSe₂ monolayer encapsulated in hBN

We consider the solution of the 2D Schrödinger equation in the piecewise potential U_app(ξ) defined in the main text, i.e., the constant core potential U_cor = V₀ for ξ < ξ₀ and the Kratzer potential outside. We restrict our consideration to the s-type states, characterized by zero angular momentum. The regular radial solution of the Schrödinger equation in the core region ξ < ξ₀ is

φ₁(ξ) = J₀(κ√(E + v₀) ξ),

where J₀(x) is the zeroth-order Bessel function of the first kind, E = ǫ/U₀ is the energy in units of U₀ = e²/r₀, and v₀ = 1.71134. The solution in the region ξ > ξ₀ has the form

φ₂(ξ) = (2kξ)^{gκ} e^{−kξ} Ψ(gκ + 1/2 − κ²/2k, 2gκ + 1; 2kξ),

with g² = 0.21, k² = −2µǫr*₀²/ℏ² > 0, and Ψ(a, c; z) the Tricomi function, which is regular at ξ → ∞ and solves the degenerate hypergeometric equation 43. Introducing the normalized logarithmic derivatives of both solutions, f_n(E) = κ^{−1}[d ln φ_n(ξ)/dξ]_{ξ=ξ₀}, n = 1, 2, the bound-state energies follow from the matching condition f₁(E) = f₂(E) at ξ = ξ₀, which we solve graphically. The excitonic spectrum obtained within the method described above can be presented in the same form as for the Kratzer potential (compare with Eq. 5 in the main text), ǫ_n = −Ry*/(n + δ)², with Ry* = 134 meV and δ = −0.099 for the WSe₂ parameters. This result demonstrates good agreement with the excitonic spectrum reported in Ref. 17, with relative errors of 8% for n = 1, 3.5% for n = 2, and less than 2% for the higher excited states. In order to check the precision of the graphical method, we applied it to the case in which the core potential is described by the Kratzer potential itself, with the same parameter g² = 0.21; in this case the graphical solution reproduces the excitonic spectrum of Eq. 10. The corresponding logarithmic derivative involves the confluent hypergeometric function of the first kind, ₁F₁(a, c; z), and corresponds to the blue curve in Fig. S3.
One can note that the calculation with the potential U_app(ξ) does not approximate the 1s-exciton state perfectly. From a technical point of view, this discrepancy can be a consequence of the modification of the Rytova-Keldysh potential at small distances. In order to check this hypothesis, we estimate the ground-state energy of the exciton using another method, namely the Ritz variational procedure. We take the variational wave function in the form ψ₀(r) = β exp(−βr/2)/√(2π) and evaluate the average of the Hamiltonian. The kinetic energy can be calculated directly, ⟨T⟩ = ℏ²β²/(8µ). To determine the potential energy, a few steps need to be performed: first, we present the Keldysh potential in an integral form 44; then, we substitute it into the expression for the average potential energy. Consequently, after evaluation of the integrals and adding the kinetic-energy part, we arrive at

ǫ(a) = (e²/r₀)[(ℏ²ε²/8µr₀e²) a² + a f(a)],

where a = βr*₀ is a dimensionless parameter and f(a) is the dimensionless function resulting from the integration over the Keldysh potential. The minimum of ǫ(a) can be found straightforwardly (with the help of Mathematica, for example). For the case of µ = 0.2 m₀ 17, we obtain a ground-state energy of the exciton ǫ₀ = −157 meV, which is in good agreement with the results of the numerical simulations reported in Ref. 17, obtained for the same values of the parameters. Moreover, the variational exciton ground-state energy, calculated for different values of ε, deviates from the numerical results discussed in Ref. 17 by less than 2%.
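The variational estimate can likewise be reproduced numerically without the closed-form f(a), by integrating the Rytova-Keldysh expectation value directly. This is a sketch under the same assumptions (our code, using scipy's Struve and Neumann functions), not the original Mathematica computation.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar
from scipy.special import struve, y0

mu, r0, eps = 0.2, 45.0, 4.5
b = (0.529177 * eps / mu) / (r0 / eps)       # a*_B / r0*
Ry = 13.6057e3 * mu / eps ** 2               # Ry* in meV

def energy(a):
    """<H> in Ry* units for psi = beta exp(-beta r/2)/sqrt(2 pi), a = beta r0*."""
    kinetic = b ** 2 * a ** 2 / 4.0          # hbar^2 beta^2 / (8 mu) in Ry* units
    integrand = lambda x: np.exp(-a * x) * x * (-np.pi * b) * (struve(0, x) - y0(x))
    potential = a ** 2 * quad(integrand, 0.0, 400.0 / a, limit=400)[0]
    return kinetic + potential

res = minimize_scalar(energy, bounds=(0.1, 10.0), method='bounded')
print(res.fun * Ry)                          # about -157 meV for mu = 0.2 m0
```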
Surprisingly, the numerical simulations for a WSe₂ monolayer encapsulated in hBN with an effective mass µ = 0.21 m₀ are in better agreement with the experimentally obtained excitonic spectrum, ǫ_n = −140.5 meV/(n − 0.083)² (see the main text), than those for µ = 0.20 m₀, and lead to an energy ladder of excitons of the same form. The excitonic binding energy E_b = 162 meV calculated with the aid of the Ritz variational method also matches nicely the experimental one, E_b = 167 meV, obtained in the main text.
S5. Dependence of excitonic diamagnetic coefficients in WSe2 monolayer
To test our assumption that the excitonic spectrum in the WSe₂ ML encapsulated in hBN resembles that of a 3D hydrogen atom (∼ −1/n²), we investigate the n-dependence of the obtained diamagnetic coefficients σ of the excitonic ns states in this system. We found theoretically (see Eq. 13) that the mean value of r² calculated with the aid of our Kratzer-potential approach approximates the one for a 3D hydrogen atom, in which the r² parameter of the excitonic states scales as n⁴. With the aid of Eq. 13 and σ = e²⟨r²⟩/8µ, we calculate the theoretical σ values of the excitons for a WSe₂ ML encapsulated in hBN with the parameters µ = 0.2 m₀ 17 and ε = 4.5 25. The theoretical values are compared with the experimental diamagnetic coefficients in Fig. S4. The theoretical dependence fits the experimental data very well up to the 4s state. This additionally confirms that the excitonic spectrum of the WSe₂ monolayer encapsulated in hBN corresponds to that of a 3D hydrogen atom. The apparent discrepancy between theory and experiment for the 5s state results, in our opinion, from the small range of the low-field limit, which affects the σ value determined for this state.
S6. Low temperature reflectance contrast spectra of S-TMD monolayers
The low-temperature RC spectra of the WSe₂, MoS₂, WS₂, and MoSe₂ MLs encapsulated in hBN are presented in Fig. S5. We define the RC spectrum as RC(E) = [R(E) − R₀(E)]/[R(E) + R₀(E)] × 100%, with R(E) and R₀(E), respectively, the reflectance of the dielectric stack composed of a monolayer encapsulated in hBN supported by a bare Si substrate, and that of the two hBN layers alone on top of the Si substrate. Note that the presented spectra correspond to the PL spectra shown in Fig. 4 of the main text. First, the spectra display two pronounced resonances, labelled 1s_{A,B}, which arise from the ground states of the so-called A and B excitons [45][46][47][48]. In addition to them, less pronounced features, labelled 2s_{A,B} and 3s_A, appear at about 200 meV higher energy as compared to the 1s_{A,B} ones. The assignment of the 2s_A and 3s_A features to the first and the second excited state of the A exciton is straightforward and corresponds to many other investigations of S-TMD MLs encapsulated in hBN 17,33,49-51. The origin of the 2s_B feature is less clear, as it has not been reported so far. Due to the similar energy separation between 1s_B and 2s_B as compared with that between 1s_A and 2s_A, we tentatively ascribe the 2s_B feature to the first excited state of the B exciton, which, however, requires further investigation. As discussed in the main text, the estimation of the band-gap energy is essential for our analysis of the excitonic ladder in S-TMD MLs; it can easily be carried out for the WSe₂, MoS₂, and WS₂ MLs, but the issue is more complex for the MoSe₂ one. This is because the PL
"year": 2019,
"sha1": "09523d82fbb2b8f8f1be1cafc102a548e4b24bb8",
"oa_license": "publisher-specific, author manuscript",
"oa_url": "https://link.aps.org/accepted/10.1103/PhysRevLett.123.136801",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "c4c11709dd30239152f7ab7059d55d81b4acac1a",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Medicine",
"Materials Science",
"Physics"
]
} |
Type I Chiari Malformation Without Concomitant Bony Instability: Assessment of Different Surgical Procedures and Outcomes in 73 Patients
Objective Posterior fossa decompression is the treatment of choice in type 1 Chiari malformation (CM-1) without bony instability. Although surgical fixation has been recommended by a few authors recently, comparative studies evaluating these treatment strategies with objective outcome tools are lacking. Methods Seventy-three patients with pure CM-1 (posterior fossa bony decompression [PFBD], n = 21; posterior fossa bony and dural decompression [PFBDD], n = 40; and posterior fixation [PF], n = 12) underwent a postoperative outcome assessment using the Chicago Chiari Outcome Score (CCOS). Logistic regression analysis identified predictors of an unfavorable outcome. Results Minimally symptomatic patients generally underwent a PFBD, while most of the clinically severe patients underwent a PFBDD (p = 0.049). The mean CCOS score at discharge was highest in the PF group (12.0 ± 1.41) and lowest in the PFBDD group (10.98 ± 1.73; p = 0.087). Minimal preoperative clinical disease severity (adjusted odds ratio [AOR], 4.58; 95% confidence interval [CI], 1.29-16.31) and PFBDD (AOR, 7.56; 95% CI, 1.70-33.68) represented risks for an unfavorable short-term postoperative outcome. Though long-term outcomes (CCOS) did not differ among the 3 groups (p = 0.615), the PFBD group showed the best long-term improvement (mean follow-up CCOS, 13.71 ± 0.95); the PFBDD group improved to a comparable degree despite a poorer short-term outcome, while the PF group had the lowest scores. Late deteriorations (n = 3, 4.1%) occurred in the PFBDD group. Conclusion Minimal preoperative symptoms and PFBDD predict a poor short-term postoperative outcome. PFBD appears to be a durable procedure, while the PFBDD group is marred by complications and late deteriorations. PF does not provide any better results than posterior fossa decompression alone in the long run.
INTRODUCTION
Type 1 Chiari malformation (CM-1) is characterized by a caudal displacement of the cerebellar tonsils due to an overcrowded posterior fossa. 1,2 This often leads to a true or a functional blockade of cerebrospinal fluid (CSF) circulation across the foramen magnum (FM), producing a plethora of symptoms. Nearly two-thirds of such patients develop syringomyelia.
Patient Population and Clinical Assessment
We retrospectively studied 89 consecutive patients with CM-1 operated on at our institute between January 2013 and June 2019. The inclusion criteria were: (1) patients without radiological evidence of AAD or BI on preoperative imaging; (2) patients operated on for the first time at our institute. The criteria used for defining AAD and BI, and for excluding these cases, were as follows: AAD was defined as an increased atlantodental interval (> 4 mm in children and > 2.5 mm in adults), and BI was defined as location of the tip of the odontoid above the McRae line and the Wackenheim clival canal line, and > 3 mm above the Chamberlain line or > 4.5 mm above the McGregor line on the sagittal plane (i.e., the extension of the odontoid tip satisfied all the known radiological criteria). The latter was often associated with AAD and has been described as group A BI by Goel. 22 Our study included a few patients with CM-1 and basilar impression, the so-called type B BI. This bony anomaly is a common companion of CM-1 and contributes to the small posterior fossa in these patients. Here, the odontoid process maintained its relationship with the anterior arch of the atlas and the clivus, thus remaining below the McRae and Wackenheim lines, but the tip of the odontoid extended beyond the limit on the Chamberlain and McGregor lines. Figs. 1 and 2 show 2 representative cases from our series illustrating the bony anatomy associated with CM-1. Of the 89 patients satisfying our inclusion criteria, 16 were excluded due to inadequate follow-up information. Therefore, 73 patients (mean age, 26 years; range, 9-60 years; male:female = 48:25) were finally analyzed.
Clinically, we divided the patients into 2 categories, namely 'minimally symptomatic CM-1' and 'clinically severe CM-1. ' The former category comprised patients with headache or neck pain with or without mild paresthesia that was not bothersome (n = 26, 35.6%). On the other hand, the presence of severe paresthesia, motor symptoms (myelopathy), atrophy of the hands with or without dissociative anesthesia were considered as clinically severe disease (n = 47, 64.4%).
Neuroimaging Evaluation
All patients underwent magnetic resonance imaging (MRI) of the cervical spine including the cervicomedullary junction. The MRI images included one scout image of the entire spine and cranium, apart from the detailed sections of the cervical spine and the posterior fossa to detect an associated syrinx or a spinal curvature anomaly, if any. The radiological findings noted were: extent of tonsillar descent (below the McRae line, expressed in millimeters as well as with respect to the posterior elements of the atlas or the axis), presence of syringomyelia (vertical extent), the pB-C2 distance (perpendicular distance of the tip of the odontoid from the line joining the tip of the clivus to the posteroinferior point of the C2 vertebral body), the distance between the posterior part of the odontoid and the opisthion, the craniocervical angle, and hydrocephalus.
Fig. 1. A set of images of a patient with Chiari malformation type 1 and normal bony anatomy. (A) Sagittal magnetic resonance image of the cervical spine showing a cervical syrinx and tonsillar displacement below the foramen magnum, reaching just above the posterior arch of the atlas. (B, C) On computed tomography evaluation, there are no abnormal bony fusions, and the odontoid tip does not extend more than 3 mm beyond the Chamberlain line (yellow), lying below the McRae (red) and Wackenheim (green) lines.
Fig. 2. A set of images of another patient with Chiari malformation type 1 and abnormal bony anatomy. (A) Sagittal magnetic resonance image of the cervical spine showing a cervical syrinx, tonsillar displacement below the foramen magnum, and ventral encroachment of the medulla by the retroverted odontoid. (B-D) On computed tomography evaluation, there is assimilation of the atlas, C2-3 fusion, and platybasia with a retroverted odontoid. The odontoid tip extends more than 3 mm beyond the Chamberlain line (yellow) but lies below the McRae (red) and Wackenheim (green) lines, suggesting basilar impression or type B basilar invagination. In panel B, the opisthion has been taken as the point where the 2 cortices of the occipital squama join, in view of the assimilation of the atlas.
Similar to the clinical categorization, we divided the radiologically perceived severity of the disease into: radiologically mild disease (no syrinx, or a limited syrinx [cervical/cervicodorsal syrinx not extending to the lower thoracic level, i.e., below the T4 vertebral level] with tonsillar descent not beyond the lower border of C1) and radiologically severe disease (either an extensive syrinx or tonsillar descent up to C2 or below, or both). An extensive syrinx was categorized as: a holocord syrinx, a distal (below T4) dorsal syrinx, a lumbar syrinx, or a cervicodorsal syrinx extending below the T4 vertebra. We modified our previously published classification (4 types) of tonsillar descent into 2 types. 16 Additionally, all patients underwent dynamic plain skiagrams or computed tomography of the cervical spine to assess the bony anatomy.
Surgical Treatment
Our patients underwent 2 sets of procedures: PFD only (n = 61, 83.6%) or PF (n = 12, 16.4%). Bony PFD (n = 21, 28.8%) was accompanied by division of the dural band in all cases. Augmentation duraplasty was generally added to the bony decompression (PFBDD, n = 40, 54.8%) when a syrinx accompanied the CM-1 (a strategy validated by Lin et al.). 10 However, a few patients underwent PFD even with a localized syrinx, and some underwent PFBDD even in the absence of a syrinx. For duraplasty, the materials used were artificial dura (n = 6) and locally available fascia in the rest (n = 34). Very few patients underwent additional intradural procedures such as arachnoid lysis and tonsillar shrinkage (n = 12). Surgical fixation as a primary treatment was purely the surgeon's choice, based on recent publications. 12,23-25 In 9 of these patients, a bony decompression (removal of the posterior rim of the FM) was also added.
Outcome Assessment
At discharge, as well as at the last available follow-up, both clinical and radiological assessments were carried out. Clinical evaluation included a gestalt questionnaire and the CCOS. 19 The outcome was dichotomized, as per the cutoff suggested by Hekman et al., into favorable (scores 11-16) and unfavorable (scores 4-10). 26,27 This score was used at the time of discharge as well as at the last follow-up visit/interview.
As per this scoring system, all but 1 patient had a follow-up CCOS score of 11 or more. Therefore, we used a new metric to estimate the long-term outcomes: the difference between the follow-up score and the discharge score for every patient. The median value of this score difference (2 in this study) was used to dichotomize the patients into marked improvement (difference ≥ 2) or minimal/no improvement (difference < 2).
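For illustration, this dichotomization can be expressed in a few lines of Python; the scores and column names below are synthetic, not study data.

```python
# Minimal sketch of the score-difference metric; scores are synthetic and
# the column names (ccos_discharge, ccos_followup) are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "ccos_discharge": [10, 12, 11, 13, 9, 12],
    "ccos_followup":  [13, 12, 14, 16, 12, 15],
})

df["score_change"] = df["ccos_followup"] - df["ccos_discharge"]
cutoff = 2  # median score change reported in this study

df["long_term_outcome"] = df["score_change"].map(
    lambda d: "marked improvement" if d >= cutoff else "minimal/no improvement"
)
print(df)
```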
Only those who did not report any satisfactory improvement, or who had clinical worsening following surgery, underwent a follow-up MRI. The parameters noted were: reduction in the extent of the syrinx, patency of the neo-cisterna magna, changes in the pB-C2 distance (on MRI), and iatrogenic C1/2 joint dislocation (anteroposterior, superoinferior or rotational) on postoperative computed tomography of the CVJ. 19,28
Statistical Analysis
Normality of data was examined; normally distributed data are presented as mean ± standard deviation, and non-normal data as median (interquartile range). For continuous data, comparisons between 2 groups were made using the independent t-test or the Mann-Whitney U-test, and comparisons of means among the 3 groups using one-way analysis of variance or the Kruskal-Wallis test. Categorical data are presented as frequencies and were compared using the χ2 test or Fisher exact test. Predictors of outcome were assessed with binary logistic regression, with both univariate and multivariate analyses performed. IBM SPSS Statistics ver. 22.0 (IBM Co., Armonk, NY, USA) was used for data analysis.

Table 1 summarizes the clinical and radiological findings of our study. Males (p = 0.007) and younger adults (18-40 years; p ≤ 0.001) were significantly more affected. The majority of patients presented with symptoms of more than 1 year's duration (n = 55, 75.3%; mean, 30.2 months; range, 1-146 months). The most common symptom was sensory paresthesia with a variable degree of sensory loss (n = 52, 71.2%), followed by pain at the nape of the neck (n = 50, 68.5%), of whom 12 patients also complained of headache. Interestingly, only 8 patients (11%) had urinary symptoms. Fifty-one patients (69.9%) had motor weakness, either as hand grip weakness or as limb weakness. Forty-seven patients (64.4%) had signs of myelopathy such as hypertonia, weakness and hyperreflexia. Frank atrophy of the thenar/hypothenar muscles was seen in 9 patients (12.3%), although far more patients complained of weakness. Ten patients (13.7%) had lower cranial nerve signs, while a similar percentage had subtle cerebellar signs. There was a significant difference in clinical disease severity among the patients in the different treatment arms (p = 0.049): the group who underwent PFBDD had significantly more severe disease, while patients undergoing PFBD were mainly minimally symptomatic (Table 2). Average displacement of the tonsils was 10.44 ± 5.3 mm (range, 6-30 mm). While 44 patients (60.3%) had tonsillar descent up to the C1 arch, 29 (39.7%) had tonsillar descent below the C1 arch. Of the 73 patients, 65 (89%) had a syrinx. Thirty-three patients (45.2%) had a cervicodorsal syrinx, the most common type in our study. Using our criteria, 37 patients (50.7%) had an extensive syrinx and 28 (38.4%) a limited syrinx.
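The group comparisons that produced the p-values above can be sketched as follows; this is an illustration on randomly generated data (the study itself used SPSS), and all variable names are hypothetical.

```python
# Sketch of the two- and three-group comparisons described under
# Statistical Analysis; the data are synthetic, not study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pfbd = rng.normal(12.5, 1.5, 21)    # hypothetical scores, one array per group
pfbdd = rng.normal(11.8, 1.8, 40)
pf = rng.normal(12.1, 1.6, 12)

# Two groups: independent t-test (if ~normal) or Mann-Whitney U-test
print(stats.ttest_ind(pfbd, pfbdd, equal_var=False))
print(stats.mannwhitneyu(pfbd, pfbdd))

# Three groups: one-way ANOVA or Kruskal-Wallis test
print(stats.f_oneway(pfbd, pfbdd, pf))
print(stats.kruskal(pfbd, pfbdd, pf))

# Categorical data: chi-square test on a contingency table
# (rows: surgical groups; columns: favorable vs. unfavorable outcome)
table = np.array([[15, 6], [25, 15], [8, 4]])
chi2, p, dof, expected = stats.chi2_contingency(table)
print(chi2, p, dof)
```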
Clinicoradiological Findings
Twenty-three of our patients (31.5%) had an associated bony anomaly, of which occipitalization of the atlas was seen in 18 patients (24.7%) and C2-3 fusion in 12 patients (16.4%). Nine patients (13.2%) had basilar impression (type B-BI) with platybasia. The distribution of these bony anomalies did not differ significantly among the 3 groups (p = 0.73); their incidence was 23.8% (n = 5 of 21) in the PFD group, 35% (n = 14 of 40) in the PFBDD group and 33.3% (n = 4 of 12) in the PF group. The individual incidences of the 3 bony anomalies, i.e., assimilation of the atlas, C2-3 fusion and basilar impression with platybasia, also did not vary significantly among the 3 groups (Table 2). The mean pB-C2 distance in our study was 8.4 ± 2.8 mm (range, 4.5-16.3 mm). Radiological severity, as well as bony anomalies, did not differ significantly among the 3 groups (Table 2).
Short-term Postsurgical Outcome
There was no mortality following the primary surgery, but 10 patients (13.7%) had postoperative complications. Motor worsening was the commonest complication after surgery (n = 4, 5.5%). Two of these patients had a pre-existing minor motor weakness. With respect to the bony anatomy, 2 of them had assimilation of the atlas with C2-3 fusion and basilar impression with platybasia. None of these patients had a demonstrable pre- or postoperative instability. Two of them had undergone a PFBDD, and one patient each had undergone PFBD and PF. Three patients gradually regained their preoperative power on their own, while a salvage PF was performed in one patient (patient No. 1, Table 3). Four patients (5.5%) developed transient CSF leak-related complications, of whom 1 developed a surgical site infection. Worsened lower cranial nerve dysfunction was noted postoperatively in 2 patients (2.7%). While the deficit was transient in one, the other patient required a tracheostomy for airway protection. This patient had undergone a PFBDD. She gradually recovered and could be discharged with a CCOS score of 10, which improved to a score of 12 at the last follow-up (5 years). Although the number of complications did not differ significantly among the 3 groups (p = 0.26), 80% (n = 8) of the complications occurred in the PFBDD group, as depicted in Table 2.

Predictors of an unfavorable outcome at discharge were assessed using binary logistic regression analysis (Table 4). In univariate analysis, type of surgery (p = 0.039), male sex (p = 0.007) and complications (p = 0.018) were statistically significant, whereas clinical severity (p = 0.114), age group (p = 0.384) and radiological severity (p = 0.871) were not. Variables with p < 0.2 (modified cutoff value) were entered into the multivariate analysis, in which only 2 variables, i.e., clinical severity and type of surgery, were significant (p < 0.05) independent factors for a poor outcome. OR, odds ratio; CI, confidence interval; AOR, adjusted odds ratio; PFBD, posterior fossa bony decompression; PFBDD, posterior fossa bony and dural decompression; PF, posterior fixation. *p < 0.05, significant differences. †Univariate/‡multivariate binary logistic regression analysis used.
On the gestalt questionnaire, 33 patients (45.2%) reported an improvement, mainly in headache and paresthesias, while 38 (52%) were unchanged and 2 patients reported worsening (the patient needing a PF and the patient with lower cranial nerve paresis needing a tracheostomy). However, on the CCOS scale, 48 patients (65.7%) were improved, while 25 patients (34.2%) had unfavorable scores. Therefore, it appeared that nearly one-fifth of the patients (n = 15, 20.5%) who reported an unchanged symptomatology (on the gestalt questionnaire) at discharge actually had an improved score on the CCOS scale. Thus, the patients' perception (subjective) and the CCOS scores (objective) did not correlate. We performed a univariate and multivariate analysis of the factors affecting the outcome scores at discharge and found that minimal clinical disease severity (AOR, 4.58; 95% CI, 1.29-16.31) and PFBDD (AOR, 7.56; 95% CI, 1.70-33.68) were the main contributors to an unfavorable outcome (Table 4). As far as the surgical groups were concerned, the PFBDD group showed the worst mean CCOS score at discharge (p = 0.08), primarily stemming from the complications arm of the scoring system.
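To illustrate how such odds ratios and adjusted odds ratios with 95% CIs are obtained, the sketch below fits univariate and multivariate logistic models with statsmodels on synthetic data; the data frame and all column names are hypothetical, and the output will not reproduce the study's estimates.

```python
# Hedged sketch of the uni-/multivariate binary logistic regression used
# for outcome prediction; all data here are randomly generated.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 73
df = pd.DataFrame({
    "unfavorable": rng.integers(0, 2, n),       # outcome at discharge
    "male": rng.integers(0, 2, n),
    "pfbdd": rng.integers(0, 2, n),
    "complication": rng.integers(0, 2, n),
    "minimal_severity": rng.integers(0, 2, n),
})

def logit_or(data, outcome, predictors):
    """Fit a logistic model; return ORs with 95% CIs and p-values."""
    X = sm.add_constant(data[predictors])
    fit = sm.Logit(data[outcome], X).fit(disp=0)
    out = np.exp(fit.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})
    out["OR"] = np.exp(fit.params)
    out["p"] = fit.pvalues
    return out

# Univariate screen; predictors with p < 0.2 enter the multivariate model
for pred in ["male", "pfbdd", "complication", "minimal_severity"]:
    print(logit_or(df, "unfavorable", [pred]))

print(logit_or(df, "unfavorable",
               ["male", "pfbdd", "complication", "minimal_severity"]))
```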
Long-term Postsurgical Outcome
The mean duration of follow-up in our series was 45.1 months (range, 6-92 months). Purely in terms of the stratified CCOS scoring (a score of 11 or more indicating a favorable outcome), 19 all except 1 patient (who died after a second surgery 3 months later) had a favorable outcome (n = 72, 98.6%; mean CCOS score, 13.3; range, 11-16) (Table 1). Almost all of the patients (98.4%) reported relief of headache and suboccipital pain, of whom 54.4% could completely discontinue their medications. Functionality improved in 59 patients (89.4%), of whom 15.1% (n = 10) were completely functional and 74.2% (n = 49) could function at > 50% of their preoperative state. With respect to the non-pain symptoms, 86.4% (n = 57) showed improvement, with medications totally withdrawn in 24.2% of patients (n = 16), while 13.6% (n = 9) showed no improvement.
On the gestalt questionnaire, 43 patients (58.9%) had an improvement, 27 patients (37%) reported no change while 3 patients deteriorated (4.1%), including one death. Therefore, the comparison of gestalt questionnaire and CCOS scale revealed a pattern similar to the short-term outcome (gestalt method underestimated improvements, 58.9% vs. 98.6%). Many of the patients with so-called unchanged symptoms did, therefore, improve objectively in the long term on CCOS scale. At the same time, we observed that the proportion of delayed deterioration was slightly underestimated by the CCOS score using the score cutoff of 11 (n = 1 [1.4%] vs. n = 3 [4.1%]).
The mean change in CCOS score was 2.15 (median, 2), ranging from -1 to 5. Table 2 shows the scores in the various surgical subgroups. However, the proportion of patients with a positive score change of 2 or more was highest (p = 0.06) in the PFBDD group (77.5%) and lowest in the PF group (50%). Interestingly, the patient-perceived level of long-term satisfaction following surgery correlated well with a positive gain of at least 2 points in the CCOS scoring system (n = 49 [67.1%] satisfied vs. n = 24 [32.9%] dissatisfied, p < 0.001). Three patients (4.1%) developed a delayed clinical deterioration during follow-up (details shown in Table 3). All the late deteriorations followed PFBDD. Only one of the 3 patients had an assimilated atlas with C2-3 fusion and asymmetric lateral elements; this patient also had a high pB-C2 distance. The average time to clinical deterioration was 4 years. All of them underwent a redo surgery with distraction of the craniocervical junction and surgical fixation.
DISCUSSION
CM-1 is characterized by a narrow posterior fossa leading to caudal displacement of the tonsils (> 5 mm) and disruption of CSF circulation at the level of the FM. 1-4 Syringomyelia often coexists, and symptoms can be myriad. CM-1 may or may not be associated with a bony instability such as AAD or BI. [1][2][3][4][5][17][18][29][30][31] When CM occurs without a bony instability, the condition can be termed a "pure CM," and traditionally, PFD has remained the treatment of choice. 9,[26][27][28][29][30][31][32][33][34][35][36][37] We found that nearly 31% of the patients had some bony anomaly, such as basilar impression, platybasia, assimilation of the atlas or C2-3 congenital fusion. However, the incidence of these anomalies did not vary significantly among the 3 treatment groups. Goel 22 analysed the bony anomalies accompanying pure CM and noted that basilar impression or type B BI was present in some of these cases and that this anomaly was one of the factors responsible for the reduction in posterior fossa volume. This anomaly, however, does not compress the neural structures and does not represent a bony instability. Eleven of the 40 patients (27.5%) had a bony anomaly in the series reported by Salunke et al. 18
Surgical Strategies and Their Efficacies
Surgical decompression remains the mainstay of treatment in symptomatic CM patients. The role of decompressive surgery in asymptomatic or oligosymptomatic CM-1, however, remains controversial. In a recent systematic review of asymptomatic adult CM, Langridge et al. 23 found that these patients remain stable despite the presence of syringomyelia and advocated surgical decompression only in overtly and chronically symptomatic patients. Pomeraniec, 31 in another study, found that the overwhelming majority of patients (92.9%) remained clinically stable on conservative treatment; nearly 40% of these nonoperated patients even reported an improvement in symptoms. However, their group concluded that symptoms like sleep apnea/dysphagia or the presence of a syrinx must be viewed as surgical indications, given the significant improvements found with timely surgery. In another large series, Strahle 7 found that the natural history of conservatively managed asymptomatic CM was mainly benign and stable; however, among those whose status changed, improvement occurred less commonly than disease progression. Therefore, there is definitely a role for nonoperative treatment in these patients. That said, in symptomatic patients, in the presence of a large syringomyelia, or in settings where patients may not agree to regular follow-up visits, surgical decompression remains a valid choice. Herein lies the importance of choosing a surgical approach with minimum complications.
The majority of our patients underwent a PFD (n = 61, 83.6%). In a meta-analysis, Förander et al. 9 showed that bony decompression alone did not differ significantly from dural decompression with respect to important outcome parameters such as the rates of significant postoperative clinical improvement and resolution of syringomyelia. Their study, however, showed that dural decompression could lead to a significant decrease in reoperation rates related to clinical nonimprovement (2% vs. 8%), but at the cost of higher CSF-related complications (7% vs. 0%). 9 Lin et al. 10 reported that PFBDD provided a better clinical result in the presence of syringomyelia. In a recent series on pediatric CM-1, Massimi et al. 38 found a better result with bony decompression alone. An expansile dura allowing expansion of the space after bony decompression, along with a higher risk of CSF leak due to a poorly developed musculature, are the primary reasons for preferring only a bony decompression in this age group. Massimi et al., 38 in a meta-analysis, noted that PFBD was sufficient in children without syringomyelia, while PFBDD was the procedure of choice in adults or in large syringomyelia irrespective of age. Our patient cohort had a mix of pediatric and adult patients, but 89% of the patients had a syrinx. PFBDD in our experience had a very high share of the complications (80%) and was significantly associated with an unfavorable short-term outcome. The higher rate of complications with PFBDD is well known, and it evidently negated the advantages of neuraxial decompression in our series. Although the patients undergoing PFBDD were clinically more severe, this cannot entirely explain the poor short-term outcomes, as the PF group also had similar patients, albeit with a better outcome. Therefore, the associated complications represent the Achilles' heel of PFBDD. We also noted a delayed deterioration in 3 patients undergoing PFBDD, in contradistinction to the study of Förander et al. 9 Despite our observations, PFBDD remains the first choice of treatment in adult CM-1, particularly in the presence of syringomyelia and in resurgery cases. [36][37][38] Considering the risks of CSF leak, which can offset the results of a good surgical decompression, some authors have utilized intraoperative ultrasonography in deciding the need for duraplasty, adding more objectivity to the decision making. 39,40 PFD was, surprisingly, associated with a comparatively better outcome in this series. The age groups did not differ between the treatment groups. Various authors have found PFBD to be a treatment of choice in children but not in adults. This procedure was done mainly in minimally symptomatic patients (p = 0.049), and in the short term, the majority of these patients did not report any change in their symptoms (71%). However, a lack of complications characterized the PFD group in our series. Förander et al. 9 had noted a higher reoperation rate with PFBD; in our experience, a resurgery was needed in only one patient in this category. In addition, these patients had the best follow-up scores among the 3 groups.
We also treated some of our patients with a PF, despite the lack of obvious CVJ instability (n = 12). As Table 2 demonstrates, there was no difference among the 3 surgical groups with respect to bony anomalies or radiological disease severity. PF in our series was chosen mostly in patients with clinically severe disease, similar to PFBDD. We saw that this group of patients had the best mean CCOS scores in the postoperative period, primarily owing to a reduced complication rate. It is interesting to note that 9 of these patients also underwent removal of the posterior margin of the FM before fixation and distraction; therefore, a combination of mechanisms might have resulted in the clinical improvement in this group. Moreover, we believe our experience with surgical fixation for bony CVJ anomalies and the volume of cases performed in our centre ensured that complications of fixation were very low. That said, we need to remember that PF has certain inherent issues, such as increased cost of treatment, longer hospital stay, possible vascular injury, restricted neck movement and suboccipital hypesthesia. A strategy of uniform C1/2 fixation has been advocated in all CMs. Salunke et al. 17,18 recently examined this strategy of uniform surgical fixation for pure CM-1 and noted that 30% of patients did not improve satisfactorily. The authors concluded that distraction of the odontoid process led to a vertical expansion of the CSF space, leading to symptomatic improvement. Our study has shown that despite a good immediate outcome, PF remains a sparingly used surgical technique for pure CM-1 (16% of the cases), and the long-term benefits are not sustained (the mean long-term CCOS score was lowest in the PF group).
Issues With the Outcome Assessment Tools
Traditionally, the outcomes after Chiari decompression have been reported in a gestalt improved/unchanged/worse pattern. In 2012, Aliaga et al. 19 proposed a novel scoring system, called the Chicago Chiari Outcome Score (CCOS), to provide an objective way of assessing postoperative outcome. This system has been externally validated as a better outcome assessment tool, despite not being a traditional preoperative vs. postoperative comparison tool. Its proponents maintain that the CCOS captures the preoperative clinical patient status more accurately and hence allows a better outcome assessment than the traditional system. We found a disparity between patients' perception and CCOS scores in our study. 41 This could be because the CCOS has 4 different components, and despite improvements in some of them, a reduction in score in a particular component felt to be important by the patient might have led the patient to report an unchanged or worsened postoperative status. Aliaga et al. 19 noted good reliability between gestalt outcome and CCOS score, but they also noted some outliers where the gestalt outcomes and CCOS scores did not correspond. This was evident in the short-term outcome for the PFBDD group: the mean short-term CCOS in this group was the worst despite the fact that 47% of patients had improved on the gestalt system, more than in the PFD group (Table 2). This can be explained by the fact that the PFBDD group had a significantly higher share of clinically severe patients; therefore, despite the patient improving subjectively, the scoring did not add up to 11 or more. As far as the long-term outcome was concerned, 98.6% of the patients had a favorable outcome on the CCOS scale, whereas the improvement on the gestalt scale was 58.9% (n = 43), indicating that patient perception often differs from the actual changes in the score. Interestingly, a positive change in the score as the duration of follow-up increased correlated well with the patients' satisfaction (p ≤ 0.001). This leads us to believe that the change in score at follow-up, rather than the follow-up score per se, may be a better metric for long-term postoperative outcome assessment. The PFBDD group, despite a significantly higher number of unfavorable outcomes, showed the maximum mean increase in follow-up scores. While this finding may be attributed to the low short-term scores, the long-term outcomes of PFBDD are well proven. Another interesting finding of our study was that the CCOS score at follow-up revealed a poor outcome in only 1 patient, whereas the gestalt outcome revealed 3 such patients. Therefore, the score can overestimate improvement as well; this happens in patients with a high CCOS score at discharge who, despite a clinical worsening, still manage to have a score of at least 11. The score difference therefore appears to be a better metric for assessing long-term outcomes, both favorable and unfavorable, after surgery in CM-1.
Outcome Predictors
Apart from PFBDD, another significant predictor of an unfavorable outcome in our series was the preoperative clinical disease severity. Patients with minimal symptoms, or symptoms that are not advanced and do not call for urgent surgery, have always posed a dilemma as far as decompression surgery is concerned. 19 The decision for surgery in such patients is always relative, with a view to halting disease progression. Our findings suggest that this group of patients does not perceive the benefits of surgery immediately and may rather report a worsening. Thus, a proper discussion about the risks and benefits of surgery with the patient and family members is necessary in these patients. The exact opposite is also true: like those with minimal symptoms, a patient with advanced disease also fails to perceive the benefits of surgery. The importance of preoperative clinical disease severity as a cause of an apparent lack of improvement was pointed out by Aliaga et al. 19 We observed that as the score difference increased with follow-up, patient-perceived satisfaction also improved significantly. This indicates that minor improvements are picked up only by a validated scoring system and are often not reported by the patients. Therefore, the way the outcome is documented is very important, and there is a need to uniformly use validated outcome scales like the CCOS rather than relying entirely on the patient's response to a gestalt questionnaire. 26
Early and Delayed Postoperative Motor Deterioration and Perspectives
We had 4 patients in our series who had a persistent postoperative motor deterioration requiring a second surgical intervention (Table 3). While one such deterioration was detected immediately after surgery, the other 3 occurred after at least 3 years. Immediate postoperative motor deterioration was observed in a total of 4 patients in this series (5.4%), 3 of whom improved spontaneously. Two of these patients had a pre-existing minor motor weakness, and a couple of them had an associated bony anomaly. None of these patients had a demonstrable pre- or postoperative instability. Two of them had undergone a PFBDD, and one patient each had undergone PFBD and PF. Therefore, a pre-existing weakness, acute clival-cervical angulation associated with type B BI and potential intraoperative cord manipulation (PFBDD in 2 and PF in 1) could be responsible for an immediate postoperative motor deterioration, although we cannot claim so definitively. Exercising due precaution to avoid excessive manipulation of the area, especially in those who already have a motor impairment, and utilizing intraoperative monitoring techniques could help prevent this relatively rare but important complication. 42 We encountered 3 late deteriorations in our study (4.1%). In a series of CM-1 patients uniformly treated with PF, Salunke et al. 17,18 noted a delayed deterioration in 3 patients (7.5%). Aliaga et al. 19 noted that clinical deterioration in these patients tends to become apparent only after a year from surgery. All the late motor deteriorations in our series followed PFBDD and occurred after an initial improvement. Imaging before the second surgery did not reveal any iatrogenic instability in these patients but did reveal a persistent or new-onset syringomyelia in all 3 patients with late deterioration. Therefore, an inadequate PFD with a persistent CSF circulation block and subsequent disease progression was the likely cause of the late deteriorations; a late deterioration may merely reflect an inadequate PFD. Other possible causes include a latent bony instability ignored at the initial evaluation; sometimes a significant ventral brainstem compression (pB-C2 distance > 9 mm) can also be the cause. 15 Therefore, appropriate case selection for PFD, attention to adequate decompression at the first surgery and close postoperative follow-up remain possible preventive measures. Although these patients were treated with a PF in our series, the improvement after surgery was only marginal (gain in CCOS score of only 1), while 1 patient died after surgery. Our decision for PF was made on a presumptive basis, with the idea of doing the "maximum", i.e., a re-exploration of the PFD and distraction while performing PF, in an effort to provide adequate decompression. In the absence of a demonstrable iatrogenic instability (on preoperative radiology and intraoperative inspection of the C1/2 joints) in any of these "late deterioration" cases, the salvage PF perhaps benefited patients by providing a vertical decompression of the FM from the associated distraction. PF as a primary treatment of pure CM-1 may be overkill. 26,42
Limitations of the Study
This study is limited by its retrospective design and the lack of a detailed evaluation of commonly used radiological metrics. Moreover, the number of patients undergoing PF was low, and the 3 groups did not have similar numbers of patients; the comparisons were therefore not very robust, rendering many observations statistically insignificant. Our study was also limited by a lack of follow-up information at different intervals, which could have given better insight into the long-term outcome dynamics. However, our study attempts to characterize the outcomes of various contemporary surgical strategies employed for pure CM-1 using a validated outcome assessment tool, and it introduces certain clinicoradiological and outcome measurement metrics that could be useful in future research. A randomized trial comparing the various techniques would be the best tool to evaluate the various treatment strategies in CM-1.
CONCLUSION
PFD is the most commonly performed surgery and still remains the gold standard in "pure CM-1." Current surgical strategies are generally successful in providing a favorable long-term outcome. The CCOS is a composite score and, despite an occasional lack of correlation with the gestalt system, should be used to bring objectivity and uniformity to Chiari research. Minimally symptomatic patients and PFBDD predict a poor short-term postoperative outcome. PFBD appears to be a durable procedure, while the PFBDD group is marred by complications and late deteriorations. PF does not provide any better results than PFD alone in the long run. Late deteriorations after PFD are rare (4.1%) and represent continued disease progression from an inadequate primary PFD.
"year": 2021,
"sha1": "2fb399e7d100d24acd5f3b06fb786cdfe4132df4",
"oa_license": "CCBYNC",
"oa_url": "https://www.e-neurospine.org/upload/pdf/ns-2040438-219.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e26d6a57e90390825fc09d74b3a97197323757b8",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Patients after joint arthroplasty tend to be less physically active; however, studies measuring objective physical activity (PA) and sedentary behavior (SB) in these patients provide conflicting results. The aim of this meta-analysis was to assess objectively measured PA, SB and performance at periods up to and greater than 12 months after lower limb arthroplasty. Two electronic databases (PubMed and Medline) were searched to identify prospective and cross-sectional studies from 1 January 2000 to 31 December 2020. Studies including objectively measured SB, PA or specific performance tests in patients with knee or hip arthroplasty were included in the analyses both pre- and post-operatively. The risk of bias was assessed using the Scottish Intercollegiate Guidelines Network (SIGN). After identification and exclusion, 35 studies were included. The data were analyzed using the inverse variance method with the random effects model and expressed as standardized mean differences and corresponding 95% confidence intervals. In total, we assessed 1943 subjects with a mean age of 64.9 (±5.85). Less than 3 months post-operation, studies showed no differences in PA, SB and performance. At 3 months post-operation, there was a significant increase in the 6 min walk test (6MWT) (SMD 0.65; CI: 0.48, 0.82). After 6 months, increases in moderate to vigorous physical activity (MVPA) (SMD 0.33; CI: 0.20, 0.46) and the number of steps (SMD 0.45; CI: 0.34, 0.54), a large decrease in the timed-up-and-go test (SMD −0.61; CI: −0.94, −0.28) and an increase in the 6MWT (SMD 0.62; CI: 0.26, 0.98) were observed. Finally, a large increase in MVPA (SMD 0.70; CI: 0.53, 0.87) and a moderate increase in step count (SMD 0.52; CI: 0.36, 0.69) were observed after 12 months. The comparison between patients and healthy individuals pre-operatively showed a very large difference in the number of steps (SMD −1.02; CI: −1.42, −0.62), but not at 12 months (SMD −0.75; CI: −1.89, 0.38). Three to six months after knee or hip arthroplasty, functional performance already exceeded pre-operative levels, yet PA levels from this time period remained the same. Although PA and functional performance seemed to fully recover and exceed pre-operative levels at six to nine months, SB did not change. Moreover, PA remained lower compared to healthy individuals even longer than twelve months post-operation. Novel rehabilitation protocols and studies should focus on the effects of long-term behavioral changes (increasing PA and reducing SB) as soon as functional performance is restored.
Introduction
Osteoarthritis (OA) is the most prevalent degenerative disease of the musculoskeletal system [1]. In many cases, especially in older adults, it causes joint pain and limits functional ability. When the joints of the lower limbs are affected, weight-bearing activities, such as walking or kneeling, can be severely impaired [2]. Because severe OA of the lower limb joints drastically affects quality of life, hip and knee arthroplasty are considered viable treatment options when conservative treatment does not lead to relief of symptoms. The increasing prevalence of OA associated with ageing of the population is reflected in a proportionally higher number of total joint arthroplasty procedures. Consequently, total hip and knee arthroplasties are among the most common elective surgical procedures worldwide [3,4]. The commonly expected outcomes of surgery are pain relief, increased mobility and function and higher quality of life, which are associated with increased physical activity (PA) levels and sport participation [5][6][7].
Sedentary behavior (SB) is one of the major risk factors for developing chronic non-communicable diseases, which together are estimated to cause 71 percent of all deaths worldwide [8]. Therefore, reducing SB is useful for improving longevity, long-term health and well-being. It is even better to replace SB with PA, which is highly effective in preventing and treating disease [9,10]. According to the latest guidelines, 150-300 min of moderate to vigorous intensity or 75-150 min of vigorous intensity PA per week is considered sufficient to maintain health and well-being [11]. Because of its physiological benefits, PA is a part of rehabilitation for a variety of conditions, including osteoarthritis and joint arthroplasty. The dose-response relationship between levels of PA and health in middle-aged and older adults is strong [12]. The positive effects of physical activity on health are well established in practice and have a strong theoretical background. Finally, the cost-effectiveness of physical activity from the perspective of an individual's health and social and economic well-being is undisputed [13,14].
Over the last two decades, the development of high-technology measurement tools has enabled the quantification of PA and SB levels. Compared to questionnaires, new-generation accelerometers provide a more reliable and objective insight into an individual's physical activity profile [15]. Many systematic reviews and meta-analyses have examined objectively measured and self-reported PA levels or sport participation in patients after lower limb arthroplasty [7,[16][17][18][19]. Notably, physical function has been shown to recover to approximately 80% of that of controls [20]. To our knowledge, only one systematic review considered objectively measured SB after TKA, and it provided heterogeneous results; its authors suggested that knowing post-operative SB trajectories in detail would help in tailoring targeted interventions [16]. Although total joint arthroplasty increases functional capacity and relieves OA-related pain, it remains unclear whether and to what extent patients change their objectively measured PA, SB and functional performance post-operation [17,21].
For this purpose, the primary aim of this paper was to review studies that prospectively investigated objectively measured PA, SB and functional performance for up to 12 months after lower limb arthroplasty. The secondary aim was to compare OA patients with healthy peers pre- and post-operatively. We hypothesized that (a) patient groups would exhibit lower PA, higher SB and lower functional performance compared to healthy peers and (b) these outcomes would improve throughout the first 12 months following the operation. While some systematic reviews on this topic already exist, the number of studies reporting objectively measured PA and SB has increased in recent years. The numerous published studies enabled us to provide objectively measured PA and SB change trajectories. By including functional performance changes, a coherent interpretation of all three determinants together could be of high value for future clinical research and rehabilitation interventions in patients after lower limb arthroplasty, in accordance with the Global Physical Activity Action Plan [22]. The review was performed according to the PRISMA 2020 statement for reporting systematic reviews [23].
Search Strategy
The study protocol was conducted in accordance with the Cochrane Collaboration guidelines [24]. Two electronic databases, PubMed and Medline, were searched from 1 January 2000 to 31 December 2020. The literature search was conducted using the following keywords and terms: ("physical*" OR "active*" OR "sedentary" OR "sitting") AND ("endoprosthesis" OR "replacement" OR "arthroplasty" OR "arthrosis" OR "arthritis" OR "osteoarthrosis" OR "osteoarthritis") AND ("knee" OR "hip" OR "ankle"). One author searched the databases and checked the consistency of search hits. In addition, the reference lists of all included articles were searched for relevant studies not covered by the constructed search strategy.
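Purely as an illustration (not the authors' retrieval workflow), a comparable PubMed query could be run with Biopython's Entrez client; the e-mail address is a placeholder, and the query string is one plausible reading of the boolean combination above.

```python
# Illustrative PubMed search with Biopython's Entrez client.
from Bio import Entrez

Entrez.email = "reviewer@example.org"  # placeholder; NCBI requires an address

query = (
    '("physical*" OR "active*" OR "sedentary" OR "sitting") AND '
    '("endoprosthesis" OR "replacement" OR "arthroplasty" OR "arthrosis" OR '
    '"arthritis" OR "osteoarthrosis" OR "osteoarthritis") AND '
    '("knee" OR "hip" OR "ankle") AND '
    '("2000/01/01"[PDAT] : "2020/12/31"[PDAT])'
)

handle = Entrez.esearch(db="pubmed", term=query, retmax=100)
record = Entrez.read(handle)
handle.close()
print(record["Count"], record["IdList"][:5])  # total hits, first PMIDs
```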
Eligibility Criteria
Studies were included if they met the following inclusion criteria: (a) the study design was longitudinal or cross-sectional and included pre-operative and at least 3 months of post-operative data, (b) the study reported on objectively measured PA or SB or measured functional performance by using the 6 Minute Walk Test (6MWT) or Time Up and Go (TUG) and (c) included adult participants with a diagnosis of knee or hip OA who underwent primary or recurrent unilateral or bilateral total or complementary joint arthroplasty. For PA, only objective methods (uni-, bi-or triaxial accelerometers or pedometers) were considered. For SB, only accelerometers were accepted as an objective method of assessment. Reviews were not included in the systematic review, but the reference list of reviews found was examined, and all studies that met the inclusion criteria were included. Studies were excluded from the review if they (a) measured PA or SB using subjective questionnaires or performance level with other tests or (b) the results were obtained only for a single timeframe (i.e., only 12 months post-operation).
Study Selection and Quality Assessment
The abstracts and titles of the articles were screened for eligibility by two reviewers. The full texts of potentially eligible articles and articles of uncertain eligibility were then obtained and assessed by both reviewers. Articles judged eligible by both reviewers after the full-text review were included; any ambiguities or conflicts were resolved through discussion. To detect bias and assess methodological quality, the Scottish Intercollegiate Guidelines Network (SIGN) methodology for cohort studies was used (Table S1). One author reviewed the quality of the studies and graded them as high (++; majority of criteria met), acceptable (+; most criteria met) or low (0; most criteria not met) [25].
Data Items
For data extraction, a prepared data sheet was used by one reviewer and checked for accuracy and consistency by a second reviewer. For all included articles, the first author, year of publication, study design, type and location of surgical procedure (knee arthroplasty (KA) or hip arthroplasty (HA) and type of procedure), participant characteristics (age, body mass index and gender distribution), type of accelerometers and outcomes were extracted and summarized in tables in sequential order. For outcomes PA, SB and functional performance, data were collected in chronological order at one pre-operative and four post-operative time periods. Means, standard deviations and sample sizes were extracted from included studies. For articles in which results were reported in medians, interquartile ranges and confidence intervals, means and standard deviations were calculated using the methods of Wan [26] and Moher [27].
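For studies reporting medians with interquartile ranges, the large-sample form of Wan's conversion can be sketched as follows; the exact method in [26] adds a sample-size-dependent correction to the denominator, which is omitted here, and the example values are hypothetical.

```python
# Large-sample approximation of Wan et al.'s median/IQR-to-mean/SD conversion.
def wan_mean_sd(q1: float, median: float, q3: float) -> tuple[float, float]:
    mean = (q1 + median + q3) / 3.0   # Wan et al., median + IQR scenario
    sd = (q3 - q1) / 1.35             # IQR / (2 * z_0.75), z_0.75 ~= 0.6745
    return mean, sd

# Example with hypothetical sedentary minutes/day: Q1=540, median=620, Q3=700
print(wan_mean_sd(540.0, 620.0, 700.0))
```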
Data Analysis
Meta-analyses were performed using Review Manager software (version 5.3, Cochrane Collaboration, London, UK). The inverse variance method with the random effects model was used. Differences (between patients and controls or between time points) were expressed as standardized mean differences (SMDs) with corresponding 95% confidence intervals (CIs). When possible, mean differences were also calculated in units of measurement. Statistical heterogeneity between studies was assessed by calculating the I² statistic. According to the Cochrane guidelines, I² values of 0% to 40% may not be important, whereas 30% to 60% represent moderate heterogeneity, 50% to 90% substantial heterogeneity and 75% to 100% considerable heterogeneity. Sensitivity analysis was performed by eliminating studies one by one and checking whether the statistical significance of the pooled effect was affected.
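A minimal sketch of the inverse-variance random-effects pooling (with the DerSimonian-Laird estimate of between-study variance) and the I² statistic is given below; the per-study SMDs and standard errors are illustrative, not the review's data, and Review Manager may differ in implementation details.

```python
# Minimal DerSimonian-Laird random-effects pooling with the I^2 statistic.
import numpy as np

def pool_random_effects(smd, se):
    smd, se = np.asarray(smd, float), np.asarray(se, float)
    w = 1.0 / se**2                            # inverse-variance (fixed) weights
    theta_fixed = np.sum(w * smd) / np.sum(w)
    q = np.sum(w * (smd - theta_fixed) ** 2)   # Cochran's Q
    k = len(smd)
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)         # between-study variance
    w_star = 1.0 / (se**2 + tau2)              # random-effects weights
    pooled = np.sum(w_star * smd) / np.sum(w_star)
    se_pooled = np.sqrt(1.0 / np.sum(w_star))
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0
    ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
    return pooled, ci, i2

print(pool_random_effects([0.70, 0.52, 0.45], [0.09, 0.08, 0.05]))
```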
Results
The initial search yielded 6059 results in the databases. Thirty-four studies met the inclusion criteria and were included in the review (Table 1). The number of steps was reported for 919 subjects (689 female and 230 male) with a mean age of 63.7 (±7.5) and a BMI of 29.1 (3.7); MVPA for 358 subjects (266 female and 92 male) with a mean age of 67.9 (4.9) and a BMI of 27.1 (2.7); SB for 230 subjects with a mean age of 66.5 (3.6) and a BMI of 29.7 (2.9); the 6MWT for subjects with a mean age of 63.7 (3.2) and a BMI of 30 (2.4); and the TUG for 192 subjects with a mean age of 65.1 (2.9) and a BMI of 28.6 (2.9). The flowchart of the selection process according to the PRISMA statement [27] is shown in Figure 1. Twenty-seven studies were classified as acceptable (+), seven studies as high quality (++) and only one study as low quality (0) (Table S1).
Comparison of PA between Patients and Healthy Individuals Pre- and Post-Operation

In the first set of analyses (Table 2), we compared patients with matched control groups pre-operation. There was a very large difference in the number of steps (Figure 2; SMD = −1.02; p < 0.001), with the patients taking, on average, 2892.2 fewer steps per day than the control group. In the second set of analyses, we compared patients with matched control groups 12 or more months post-operation. The amount of moderate to vigorous physical activity (MVPA) (SMD = −0.97), as well as the number of steps (SMD = −0.75), tended to be lower in patients. However, despite large effect sizes, the differences were not statistically significant (p = 0.180-0.190), likely due to the large heterogeneity between studies (I² = 94-97%).
Comparison of Functional Performance in Patients Pre- and Post-Operation

In the first 3 months post-operation, the patients tended to improve their 6MWT (+59.9 m; SMD = 0.64) compared to the pre-operative level, but not enough to confirm this statistically (p = 0.15) (Figure 6). At this time, the results of the TUG test still indicated a slight, statistically non-significant impairment in function (+1.58 s; SMD = 0.27; p = 0.770). Three months post-operation, the results of the 6MWT (+90.2 m; SMD = 0.87) were largely and statistically significantly (p = 0.008) improved compared to pre-operation and remained improved between 6 and 9 months (+71.84 m; SMD = 0.62; p < 0.001). The TUG test was also largely and significantly (−1.91 s; SMD = −0.61; p < 0.001) improved between 6 and 9 months compared to pre-operative levels (Figure 7). Table 3 summarizes the obtained evidence across the different time intervals.

Table 3. Comparison of the patients in PA, SB and functional performance pre-operation and at < 3 months, 3-6 months, 6-9 months and > 12 months post-operation.
Sensitivity Analysis
According to the sensitivity analyses for all outcomes at all time points, no single study was identified whose removal would change the statistical significance of the pooled effect size.
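The leave-one-out procedure can be sketched by re-pooling after dropping each study in turn, reusing the pool_random_effects() function from the sketch in the Data Analysis section; the inputs are again illustrative, not study data.

```python
# Leave-one-out sensitivity analysis on illustrative SMDs/standard errors.
smds = [0.70, 0.52, 0.45, 0.33]
ses = [0.09, 0.08, 0.05, 0.07]

for i in range(len(smds)):
    rest_smd = smds[:i] + smds[i + 1:]
    rest_se = ses[:i] + ses[i + 1:]
    pooled, (lo, hi), i2 = pool_random_effects(rest_smd, rest_se)
    significant = lo > 0 or hi < 0  # 95% CI excludes zero
    print(f"drop study {i}: SMD={pooled:.2f} [{lo:.2f}, {hi:.2f}], "
          f"I2={i2:.0f}%, significant={significant}")
```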
Discussion
The primary aim of this systematic review was to determine whether total arthroplasty is associated with changes in objectively measured PA, SB and functional performance at different post-operative follow-up periods, in order to examine the progression of recovery after lower limb arthroplasty.
Comparison of PA between Patients and Healthy Subjects Pre- and 12 Months Post-Operation
Patients were less physically active than healthy peers both pre-operation and 12 months post-operation (Figure 2). Specifically, a lower number of steps was observed in patients pre-operation. Studies by Fujita et al. [39] and Matsunaga-Myoji et al. [51] suggest that the level of light PA is also lower in OA patients awaiting surgery than in healthy individuals. In addition, Moellenbeck et al. [55] and Matsunaga-Myoji et al. [51] found that healthy subjects spend more time in MVPA than OA patients. The observed trend could be a logical consequence of OA symptoms preventing weight-bearing PA and increasing SB. The discrepancy in MVPA and performance between controls and patients appears to be still evident 12 months or more after total joint arthroplasty [46,62]. Although OA pain has been shown to decrease post-operation, patients do not reach the performance and activity levels of healthy individuals even at 12 months or more post-operation.
Comparison of PA, SB and Performance in Patients Pre- and Post-Operation
Prospective observations of patients are more numerous in the literature than comparisons with healthy controls. In this section, pre-operative PA and SB levels as well as functional performance at different time intervals were compared to post-operative levels.
Post-Operative Changes in PA
In the first three months post-operation, the number of steps remained the same as observed pre-operation; when the study by Güler et al. [40] is excluded from the analysis, significantly more steps were taken pre-operation (Figure 2). Nevertheless, it may not be reliable to draw conclusions about the general level of PA from the number of steps alone, as step counts capture only the frequency of physical activity; intensity and duration need to be recorded to assess the level of PA more accurately. In a prospective study, Frimpong et al. [37] observed that within the first three months post-operation, patients spent 27 min per day less in moderate-intensity PA than pre-operation, while maintaining their level of SB. Similarly, Höll et al. [43] observed about 300 fewer steps per day of light-intensity PA shortly after surgery and noted a slight increase in MVPA. When making comparisons between studies, we must consider that the measurements were taken at different post-operative times: 2 and 10 weeks. Therefore, studies that measured PA earlier likely captured greater impairment, as stronger acute inflammatory processes limit overall activity and function. Intensive physiotherapy during this period must also be taken into account, as it may affect the patient's overall PA level. It is very likely that post-operative limitations such as pain and protection of the affected limb affect PA within the first three months post-operation [63,64].
Four studies were included in the analysis of the number of steps and three studies in the analysis of MVPA between 3 and 6 months post-operation (Figures 1 and 2). Very little difference was observed in the number of steps (+333 steps/day), while no difference was observed in MVPA. Only two studies [56,58] observed SB, and in both, patients reached but did not exceed pre-operative levels. Light-intensity physical activity remained the same, as noted in the studies by Thewlis et al. [58] and Oka et al. [56] (from 329 to 316 min/day and from 239 to 232 min/day, respectively). At 6 to 9 months post-operation, MVPA and the number of steps were above pre-operative levels. These observations are supported by previous findings by Mills et al. [17], who found only a minimal increase in MVPA 6 months post-operation. The slight discrepancy in MVPA between studies could be due to differences in the interpretation of variables describing PA and differences in methodology and the number of studies included in the analyses. The main determinants in the study by Mills et al. [17] were time spent in locomotion and time spent active, while we used MVPA and SB as PA determinants.
The results of the prospective studies with a follow-up period of 12 months or longer showed high homogeneity and a trend toward an increased number of steps (Figure 4). Three studies in which MVPA was observed provided a large effect size, indicating a long-term post-operative increase in PA. The results of our study showed that patients fully regained and exceeded their pre-operative PA level after 6 and 12 months. Similar results were previously reported in two systematic reviews by Arnold et al. [65] and Mills et al. [17], where improvements in PA over time, especially after 12 or more months, were evident.
Post-Operative Changes in SB
Three studies compared SB at less than 3 months [37,58] and from 3 to 6 months [56,58] post-operation. The time trajectory of SB is clearly seen in the study by Thewlis et al. [58], in which SB increased by 120 min/day from pre-operation to 2 weeks post-operation and decreased back to pre-operative levels after 12 weeks (620 min/day to 624 min/day). A similar trend was observed by Oka et al. [56], while Frimpong et al. [37] reported that SB had already recovered by 6 weeks post-operation.
Most of the SB data were available from 6 to 9 months post-operation. On average, SB values remained constant and did not significantly decrease 6 months post-operation, similar to the study by Frimpong et al. [38], in which only a small decrease in sitting time was observed at 6 months (~5%). In general, SB is not significantly reduced in the period from 6 to 9 months, although in the study by Moellenbeck et al. [54], sedentary bouts longer than 60 min were significantly reduced in older adult patients. As long-term objectively measured SB data are lacking and studies report divergent SB variables, future research should primarily focus on methodological quality and long-term outcomes.
Post-Operative Changes in Functional Performance
In the first three months post-operation, the 6MWT improved slightly and the TUG worsened slightly, but neither change was statistically significant. It is logical that patients regain functional performance related to general locomotion, such as walking at a steady pace, sooner after arthroplasty than performance in more complex tasks requiring explosive strength and agility. The slower restoration of the TUG could be due to the fact that surgery-related swelling, pain and residual lower-limb strength deficits prevent performance at peak functional capacity [66].
At 3 to 6 months, the 6MWT was significantly better than pre-operation, suggesting that surgery-related pain had subsided and functional capacity had been restored by arthroplasty [66]. It has been reported that the subjective perception of functional ability in this period was lower than pre-operation [67], so there may be some discrepancy between actual functional ability and its subjective perception due to post-operative anxiety and uncertainty. Our results showed that six months post-operation, patients exceeded their pre-operative functional ability. Restored functional capacity allows patients to participate in sport: Witjes et al. [68] reported that the majority of previously active patients returned to high-impact sports 26 weeks after lower limb arthroplasty.
Coherent Interpretation of PA, SB and Functional Performance Changes
Overall, objectively measured PA appears to increase over time, but patients are unable to reach pre-operative levels of PA until about 6 months post-operation, when PA values match and then exceed pre-operative levels. Functional performance, on the other hand, tends to increase earlier (3 months post-operation) and continues to develop up to 12 months post-operation. Although functional performance is restored at 3 months, MVPA remains at pre-operative levels. Moreover, SB remains unchanged in the period between 6 and 9 months. Since the main goals of rehabilitation after joint arthroplasty are to maximize functional performance, optimize lifestyle and promote patient independence to improve overall health [69], increasing PA and reducing SB should be incorporated and encouraged as soon as possible. The results of this study show that even if functional performance is increased, patients remain as sedentary as they were before arthroplasty. Only PA recovered, albeit with a delay, and significantly exceeded the pre-operative level at 12 months post-operation. Although arthroplasty significantly restored patients' function, patients remained less physically active than their healthy peers 12 months post-operation.
According to Bull et al. [11], adults and older adults should spend at least 150-300 min per week in MVPA to achieve substantial health benefits from physical activity. Only in the studies by Moellenbeck et al. [55], Oka et al. [56] and Hylkema et al. [70] did patients achieve these recommendations 6 and 12 months post-operation. On average, functional performance and PA increased after arthroplasty, but not enough to reach the general health guidelines; more importantly, SB remained unchanged. Therefore, novel rehabilitation protocols could address the problem of achieving sufficient PA and reducing SB in the long term. Peter and colleagues [71] have already proposed expanding the current recommendations for the rehabilitation process after lower limb arthroplasty. A behavioral approach could be applied in line with recent guidelines, which aim to limit SB and frequently interrupt it with PA to make it less harmful to overall health [11]. In addition, supervision and a motivational approach to rehabilitation after KA and HA have provided promising results [72][73][74], so a rehabilitation protocol incorporating technology to support a contemporary behavioral therapy approach seems a reasonable option for patients undergoing lower limb arthroplasty.
Because the study has several potential limitations, the results must be interpreted with caution. First, high heterogeneity was found in the analyses for some outcomes. This could be due to the fact that we comprehensively analyzed HA and KA, so high heterogeneity between the two subgroups could bias the results. Second, the data came from studies that used different activity monitors (bi-, uni-or triaxial accelerometers) with different reliability, validity and outcomes. Third, some authors have pointed out that measuring PA with objective data alone underestimates the realistic level of PA, so a combination of subjective and objective measurements is more appropriate when interpreting PA of patients after total joint arthroplasty [17]. Finally, we used the number of steps as a PA determinant, although it is not as well associated with general health as PA and SB [75]. We chose to do so because it has been most commonly used as a measure of PA and may help, at least in part, to explain the trajectories of PA in combination with MVPA and SB. To provide stronger evidence for practical implications, future studies examining PA and SB in patients after lower limb arthroplasty should use similar methodology and reliable instruments that provide objective data. The limitation of the current literature is that the studies either (a) compared patients and controls in single time points or (b) only tracked patients prospectively. Therefore, the progress of the patients with respect to the control groups is difficult to discern.
Conclusions
The results suggest that objectively measured PA and functional performance increase while SB remains unchanged after lower limb arthroplasty. However, an optimal lifestyle is not achieved. Functional performance soon after surgery exceeds pre-operative levels and increases over time. In the period before and more than 12 months post-operation, patients tend to have lower functional capacity and are less physically active than their healthy comparison groups. Since the main goal of rehabilitation after lower limb arthroplasty is to improve patients' functional ability and general health, novel long-term rehabilitation approaches should be adopted to influence patients' lifestyle in a way that maintains or even improves patients' overall health in the long term.
Funding:
We want to acknowledge the support of the European Regional Development Fund and the Physiko- and Rheumatherapie Institute through the Centre of Active Ageing project in the Interreg Slovakia-Austria cross-border cooperation program (partners: Faculty for Physical Education and Sports, Comenius University in Bratislava; Institute for Physical Medicine and Rehabilitation, Physiko- and Rheumatherapie GmbH). Authors M.S., N.S. and Z.K. acknowledge the European Commission for funding the InnoRenew CoE project (Grant Agreement 739574) under the Horizon2020 Widespread-Teaming program. The funders had no role in study conceptualization, data acquisition, data analysis or manuscript preparation. The funders provided parts of the authors' salaries and will fund the submission fee if the paper is accepted.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
Conflicts of Interest:
The authors declare no conflict of interest. | 2021-12-17T16:16:24.405Z | 2021-12-01T00:00:00.000 | {
"year": 2021,
"sha1": "73bda1d373284edef986966e32e5e537732166ef",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2077-0383/10/24/5885/pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "6db21c62f524a709c5d3d48a5183fce2b124570c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
226226613 | pes2o/s2orc | v3-fos-license | Social networks, confirmation bias and shock elections
In recent years online social networks have become increasingly prominent in political campaigns and, concurrently, several countries have experienced shock election outcomes. This paper proposes a model that links these two phenomena. In our set-up, the process of learning from others on a network is influenced by confirmation bias, i.e. the tendency to ignore contrary evidence and interpret it as consistent with one's own belief. When agents pay enough attention to themselves, confirmation bias leads to slower learning in any symmetric network, and it increases polarization in society. We identify a subset of agents that become more/less influential with confirmation bias. The socially optimal network structure depends critically on the information available to the social planner. When she cannot observe agents' beliefs, the optimal network is symmetric, vertex-transitive and has no self-loops. We explore the implications of these results for electoral outcomes and media markets. Confirmation bias increases the likelihood of shock elections, and it pushes fringe media to take a more extreme ideology.
The strongest bias in American politics is not a liberal bias or a conservative bias; it is a confirmation bias, or the urge to believe only things that confirm what you already believe to be true. Not only do we tend to seek out and remember information that reaffirms what we already believe, but there is also a "backfire effect" which sees people doubling down on their beliefs after being presented with evidence that contradicts them.
"Your facts or mine?" by E. Roller, NYT, Oct 25th, 2016.
Social networks are increasingly becoming the primary channel for people to acquire information and form opinions. In an experiment involving 60m Facebook users prior to the 2010 US elections, Bond et al. [2012] showed they could generate 340,000 additional votes using a social message that informed a user about friends that had voted, compared to an informational message without social network information. Unlike traditional media or gatherings in the local church, club or pub, online social networks make it very easy for a user to "unfollow" someone who does not share their opinion.
In this way, they exacerbate the role of confirmation bias -people's tendency to ignore information contrary to their view and reinterpret it as agreeing with their own (Pariser [2011]). A natural question is whether the growing importance of online social networks in opinion formation, and the corresponding heightened role of confirmation bias, affects the democratic process and whether it is a driver of shock election results such as Trump's win or Brexit.
The aim of this paper is to examine how confirmation bias affects the process of learning from others, and its consequences for elections and the media. We are interested in learning at the societal level, so individuals are embedded in a large network. They are endowed with an initial belief at time 0 and learn according to the well-known DeGroot [1974] behavioral rule: they update their beliefs by taking weighted averages of their neighbours' beliefs. Experimental evidence shows that this DeGroot learning rule is a good predictor of how people learn from others, and it is particularly suitable for modeling learning in a large network, where it is unrealistic to assume individuals update in a Bayesian fashion and process information conditional on the network structure. 1 In the spirit of Rabin and Schrag [1999], confirmation bias in the model implies that when an individual learns that someone has beliefs too different from their own, they ignore them thereafter and give more weight to their own belief instead. Specifically, after the assignment of initial beliefs each individual cuts connections with others who have beliefs further away from their own than a threshold. The individual reassigns the weight of these severed connections to themself, and they never reinstate these links after they have been cut. Mathematically, understanding the effect of confirmation bias in the model reduces to a comparison of the learning processes on the original network and the new sparser network.
The first result is that, when agents pay enough attention to themselves, confirmation bias slows down learning in any symmetric network. In the proof we apply the notion of Dirichlet energy to relate the confirmation bias parameter to the whole spectrum of eigenvalues of the network matrix, which governs the rate of convergence to a consensus. Agents need to pay enough attention to themselves so that their beliefs do not oscillate from period to period. Mathematically, this restriction ensures that the eigenvalues are positive, which guarantees that ordering the eigenvalues by size is the same as ordering them by magnitude. Using counterexamples we show that the result is not just a consequence of the network being sparser, but it critically relies on the key feature of confirmation bias that the individual puts more weight on their own belief: if the weight of the severed links is partly reassigned to other surviving links then learning may be faster or slower. While this result requires the network to be symmetric, we show using simulations that it largely holds in asymmetric networks too.
Confirmation bias leads to a redistribution of influence. The intuitive result that individuals who cut links increase their influence does not always hold. We can, however, show that there are individuals that we dub influencers (listeners) whose influence increases (decreases) in the presence of confirmation bias. A further consequence of confirmation bias is that society becomes more polarized at each point in time.
A natural objective of a social planner is to maximize the chance that society converges to the truth. Confirmation bias works against this goal by redistributing influence across individuals and, potentially, breaking the network into separate components -preventing the aggregation of initial signals. Assuming that a social planner does not observe the distribution of initial signals or the level of confirmation bias, we characterize the set of networks that maximize the probability that a society converges to the truth. Given a fixed budget of links to allocate, optimal networks are symmetric, have no self-links, and their unweighted equivalent is vertex-transitive -a subset of regular networks such that every node is structurally equivalent to every other in the network.
In the second part of the paper, we examine the consequences of social learning affected by confirmation bias for elections and media markets. In the first application, we embed our social learning framework into a two-candidate voting model in which sincere voting is a weakly dominant strategy so whenever there is an election individuals vote for the candidate closer to their current belief. We restrict the distribution of beliefs to focus on the interesting case when a society would vote for the same candidate before learning takes place and at the end of the learning process. We define a society as having shock elections if the other candidate wins at any point in time during the learning process. Using a mean-field assumption, we prove that a society never has shock elections without confirmation bias, but it can if confirmation bias is high enough to remove some connections. Simulations show that this result holds even without the mean-field assumption.
Finally, we embed our social learning framework in a Hotelling-style model of a media market. Media players choose their editorial line, or ideology, and only care about maximizing their audience. Individuals follow one and only one media organization due to their limited attention budgets. We focus on "fringe" media organizations that adopt an extreme editorial line, and prove that the editorial line of the fringe media organization becomes more extreme as the strength of confirmation bias increases.
Literature review. This paper sits at the intersection of literatures in behavioral economics, social learning and political economy. We review each in turn, highlighting papers that are particularly relevant to this work.
Psychology. The study of confirmation bias has a long history in psychology; a comprehensive review by Nickerson [1998] shows its relevance to a large range of issues, including judicial outcomes (Kuhn et al. [1994]), policymaking (Tuchman [2011]), and medical decisions (Elstein and Bordage [1979]). 2 A difficulty posed by the vastness of this literature is, quoting Nickerson, that "confirmation bias has been used in the psychological literature to refer to a variety of phenomena" (p. 175). Nickerson [1998]'s working definition is "unwitting selectivity in the acquisition and use of evidence" (p. 175) and he puts special emphasis on the non-deliberate nature of the bias which emerges as a heuristic to quickly process information. The core idea present throughout the psychology literature is that people are biased against information which conflicts with their own beliefs.
Behavioral economics. In the economics literature, Rabin and Schrag [1999] formulate a model of how confirmation bias affects individual decision-making. In their set-up, there are two states of the world. Each time a new signal arrives, the agent performs Bayesian updating, with the twist that when a signal runs counter to the agent's current hypothesis there is a probability q that the agent misinterprets it as actually confirming her hypothesis; i.e. q is the strength of confirmation bias. The main result of their paper is that confirmation bias leads to overconfidence. Epstein [2006] presents a model of non-Bayesian updating, where the agent is 'tempted' to change their belief after receiving a signal. 3 This model is able to nest a version of confirmation bias by choosing an appropriate specification of how the agent is tempted to update their belief.
The main contribution of our paper to behavioral economics is to analyze the effects of confirmation bias in processing information in a context with multiple agents who learn from each other through their social connections. This is arguably becoming more relevant nowadays, as individuals increasingly learn by sharing information on social media rather than individually processing information from a media source. A crucial step is to model confirmation bias as reducing the range of opinions an individual is willing to listen to. This translates to ignoring information in a similar way to the single agent in Rabin and Schrag [1999]'s framework. In our basic model, an agent always ignores information too far away from their view rather than only some of the time, as in Rabin and Schrag [1999]. Relaxing this assumption does not affect the main results. 4
2 Nickerson [1998]'s opening paragraph states: "If one were to attempt to identify a single problematic aspect of human reasoning that deserves attention above all others, the confirmation bias would have to be among the candidates for consideration. Many have written about this bias, and it appears to be sufficiently strong and pervasive that one is led to wonder whether the bias, by itself, might account for a significant fraction of the disputes, altercations, and misunderstandings that occur among individuals, groups, and nations." The careful reader will note that the pervasiveness of confirmation bias may extend to the quote itself.
3 It is similar to Gul and Pesendorfer [2001]'s model except that it is beliefs, rather than utilities, that change.
Social learning. Research on social learning in economics began with the seminal papers by Banerjee [1992] and Bikhchandani et al. [1992] in which rational players take actions in succession and each mover can see the actions of their predecessors. A sizeable branch of subsequent work has enriched this basic framework by embedding agents in a network and relaxing the assumption of sequential moves. In this more complex set-up, however, tractability is a challenge and assuming full Bayesian rationality tends to limit the results to showing convergence to consensus in the long run (see Golub and Sadler [2016] for a comprehensive review). Moreover, the sophistication required by Bayesian reasoning in this set-up is unrealistic in large societies, and Corazzini et al. [2012] show experimentally that even in small groups it is a poor predictor of how individuals learn.
An alternative approach is to assume agents are non-Bayesian and use a behavioral rule to learn from others. DeGroot [1974] proposed the simple rule that agents update their beliefs by taking a weighted average of their neighbours' opinions, and he shows that the process reaches a consensus under mild regularity conditions. This set-up gained traction in economics with DeMarzo et al. [2003], who obtain novel results on convergence speed and relate each agent's contribution to the consensus to their respective network position. More recently, Golub and Jackson [2010] give new results on network structures that lead to society correctly aggregating information.
The primary contribution of our paper to the social learning literature is to study how confirmation bias affects the outcomes of the learning process, including convergence to a consensus, speed of learning and the influence of agents. To our knowledge, this is the first paper to examine how a psychological bias affects DeGroot-type learning. Aside from its intrinsic interest and applications, it also provides a check on whether existing DeGroot-type learning results are robust to a bias that is ubiquitous in reality. The objective is similar to Golub and Jackson [2012]: they examine how homophily, a ubiquitous feature of social networks, affects the speed of social learning and consensus. 5
Voting. In recent years political experts have been surprised by several electoral outcomes, including the election of Donald Trump in the United States, and the outcome of the U.K.'s Brexit referendum. It may not be entirely coincidental that these events occurred alongside a shift in news consumption from traditional media outlets to online social networks (Gottfried and Shearer [2016]). For instance, an extensive study of 10.1 million U.S. Facebook users by Bakshy et al. [2015] shows that people tend to predominantly share news with friends that is in line with the recipient's ideology, and this filtering by friends is more powerful than Facebook's algorithmic selection on the news feed to limit exposure to distant viewpoints.
Alongside this is a growing literature on 'fake news' and its potential impact on elections. Allcott and Gentzkow [2017] provide an overview of some recent work, and also sketch a model of fake news. They suggest that if agents have a preference for confirmatory news reporting, then news reporting can become distorted, possibly reducing the ability of democracies to choose high-quality candidates.
4 See Appendix A.1 for further discussion.
5 Other papers using the DeGroot framework include Acemoglu et al. [2010], Gallo [2014] and Jadbabaie et al. [2012].
This paper shows how confirmation bias' impact on the way we learn from others can lead to surprising election outcomes. In particular, confirmation bias prevents us from directly learning from other people whose information conflicts with our own views. This means that our immediate friends are unrepresentative of society as a whole. Therefore, we can be swayed one way in the medium term, even though the weight of information on aggregate points the other way. A consequence of this in the short/medium term is that a society may vote for a candidate that would not be supported by the majority once long-run information aggregation has occurred. Confirmation bias implies that a society can choose policies that, in the long term, it would not want.
Model
This section presents the main elements of the model: the network and initial signals, the learning process, and the way we model confirmation bias.
Endowments. Consider a finite set of agents $N = \{1, 2, ..., n\}$ who communicate through a directed, weighted network $T \in \mathcal{T}$, where $\mathcal{T}$ is the set of all networks with $n$ nodes. The entry $T_{ij} \in [0, 1]$ denotes how much weight agent $i$ places on the views of agent $j$, and $\sum_{j \in N} T_{ij} = 1$ for all $i$; so $T$ is row-stochastic. The self-link $T_{ii}$ is the weight an agent places on her own view. A directed path of length $l$ between $i$ and $j$ is a sequence of links $T_{ik_1}, T_{k_1 k_2}, \ldots, T_{k_{l-1} j}$ such that no two nodes on the path are the same. We assume that $T$ is strongly connected -there is a directed path from any agent to any other agent -and aperiodic -the lengths of its directed cycles have greatest common divisor one.
We say that agent $i$ listens to $j$ if $T_{ij} > 0$, and $i$ is listened to by $j$ if $T_{ji} > 0$. Denote by $N^{out}_i(T) = \{j \in N \mid T_{ij} > 0\}$ the out-neighbourhood of $i$ and by $d^{out}_i(T) = |N^{out}_i(T)|$ her out-degree; the in-neighbourhood $N^{in}_i(T)$ and in-degree $d^{in}_i(T)$ are defined analogously. Each agent is endowed with a signal $\theta_i \in [0, 1]$ about the underlying state of the world, and we make the standard assumption in the literature that agent $i$'s initial belief $x_{i0}$ at time $t = 0$ is equal to the initial signal received by $i$. 6 Notice that the initial belief $x_{i0}$ of agent $i$ is independent of $i$'s position in network $T$.
Learning. In each time period an agent updates her belief by taking a weighted average of her current belief and the beliefs of agents she listens to. Mathematically, agents' beliefs at time $t$ are equal to $x_t = T^* x_{t-1}$. Iterating, we have that $x_t = (T^*)^t x_0$, so we can derive agents' beliefs at time $t$ from the initial signals and the network.
6 We leave the distribution of signals unspecified because it does not matter for the results in the paper.
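To fix ideas, the update rule is straightforward to express in code. The sketch below is ours, not the authors'; it assumes the network is stored as a row-stochastic NumPy matrix.

```python
import numpy as np

def degroot(T, x0, t):
    """Beliefs after t rounds of DeGroot updating: x_t = T^t x_0."""
    x = np.asarray(x0, dtype=float)
    for _ in range(t):
        x = T @ x  # each agent averages the beliefs of those she listens to
    return x

# Three agents on a line; T is row-stochastic, strongly connected, aperiodic.
T = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
print(degroot(T, [0.0, 0.5, 1.0], 50))  # approaches a consensus
```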
Confirmation bias. In the first step of the learning process, agents truthfully share their signals and therefore an agent learns the initial belief of their neighbours. We assume that when an agent with confirmation bias learns that the difference between a neighbour's belief and her own belief exceeds a threshold, she ignores that neighbour and transfers the weight she would have put on that neighbour's belief to her own belief. Moreover, she never listens again to information from that neighbour for the rest of the learning process.
Definition 1. A society on a network $T$ in which agents have confirmation bias $q$ communicates according to a network $T^*$ such that, for all $j \neq i$,
$$T^*_{ij} = \begin{cases} T_{ij} & \text{if } |x_{i0} - x_{j0}| \leq 1 - q, \\ 0 & \text{otherwise,} \end{cases} \qquad T^*_{ii} = T_{ii} + \sum_{j \neq i \,:\, |x_{i0} - x_{j0}| > 1 - q} T_{ij}.$$
Mathematically, a society in which agents have confirmation bias $q$ communicates through a network $T^*$ that has had links cut compared to $T$, and where the weight of the links that have been cut is redistributed to self-links. Notice that the threshold is defined as $1 - q$ so that a higher $q$ corresponds to increasing confirmation bias. Understanding the impact of confirmation bias in the model means comparing the learning processes on $T$ and $T^*$. Unless stated otherwise, we assume that $T$ and $T^*$ are both strongly connected. There are some implicit assumptions in modeling confirmation bias in this way which are worth discussing and motivating upfront; a minimal implementation in code follows the list below.
1. Agents completely cut links. This is a realistic assumption for online social networks where users have to make a binary decision on whether to follow or unfollow someone. The results are, however, robust to relaxing this assumption. In Appendix A.1 we show that the main result concerning the speed of learning holds if agents affected by confirmation bias weaken rather than cut links, and the weakening can be either by a common factor or proportionally to the difference in beliefs.
2. Agents redistribute weights of cut links to themselves. This is an important feature of confirmation bias from a large body of experimental evidence (Nickerson [1998]) as well as consistent with the "backfire effect" mentioned in the initial quote. If agents were to redistribute the weight to, say, other neighbours then this would not be confirmation bias and, as we will show, it would lead to different results.
3. Agents never reinstate links they have cut. This is consistent with the standard assumption of myopic learning in the DeGroot model, in which the weights an agent gives to different neighbours are fixed at the beginning and never change over time. It also helps in terms of tractability because time-varying weights would significantly increase the complexity of the Markov process.
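As a concrete illustration of Definition 1, here is a minimal NumPy sketch of the construction of $T^*$ (the vectorised implementation and the function name are ours):

```python
import numpy as np

def bias_network(T, x0, q):
    """Definition 1: cut the link from i to j when |x_i0 - x_j0| > 1 - q and
    reassign the severed weight to i's self-link (the backfire effect)."""
    x0 = np.asarray(x0, dtype=float)
    far = np.abs(x0[:, None] - x0[None, :]) > 1 - q
    np.fill_diagonal(far, False)           # the self-link is never cut
    cut = (T * far).sum(axis=1)            # total weight each agent severs
    T_star = np.where(far, 0.0, T)
    T_star[np.diag_indices_from(T_star)] += cut
    return T_star
```

Each row of T_star still sums to one, since the severed weight is rerouted to the self-link rather than lost.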
How confirmation bias affects learning
This section examines how confirmation bias affects social learning. Section 2.1 considers speed of learning, section 2.2 considers agents' influence, and section 2.3 considers polarization of beliefs.
Finally, section 2.4 characterizes the optimal networks to minimize the adverse consequences of confirmation bias on the learning process.
Speed of Learning
The assumptions that T and T * are strongly connected and T is aperiodic ensure that in the long run the society converges to a consensus where all agents agree with one another. Reaching a consensus may, however, take a long time and the purpose of this section is to characterize how this convergence time varies with confirmation bias.
In the Markov chain literature there are different definitions of convergence or mixing time.
In general, mixing time depends on the spectrum of eigenvalues, which is often well-approximated by the second largest eigenvalue (Montenegro et al. [2006]). In this paper we adopt the following definition of convergence time that takes into account the full spectrum of eigenvalues of the matrix T .
Definition 2. The average convergence time $\tau$ is the average over agents of the number of periods needed for an agent to get within a distance $\varepsilon$ of her invariant distribution -the proportion of total attention she indirectly pays to all agents in the long run.
Notice that the initial assignment of beliefs does not enter explicitly in the definition. In economics terms, what we care about is the speed of convergence of beliefs to a consensus, and the initial assignment matters for this -if an agent's initial belief is extreme then it will take more time for her to converge to the consensus. The evolution of beliefs, however, closely tracks, on average, the evolution of our measure, and we validate this in a simulation study in section 5. Moreover, Appendix A.1 shows that our results are robust to adopting the definition of convergence in Golub and Jackson [2010] based on the second eigenvalue and the worst-case scenario in the assignment of initial beliefs.
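One natural way to operationalise this measure empirically, consistent with the prose above though not necessarily the paper's exact formula, is to record for each agent the first period at which her row of $T^t$ is within $\varepsilon$ of the invariant distribution, and then average over agents:

```python
import numpy as np

def invariant_distribution(T):
    """Left-hand unit eigenvector of T for eigenvalue 1."""
    vals, vecs = np.linalg.eig(T.T)
    s = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return s / s.sum()

def avg_convergence_time(T, eps=1e-3, t_max=10_000):
    """Average over agents of the first t with ||row_i(T^t) - s||_1 < eps."""
    s = invariant_distribution(T)
    P = np.eye(len(T))
    hit = np.full(len(T), np.nan)
    for t in range(1, t_max + 1):
        P = P @ T
        close = np.abs(P - s).sum(axis=1) < eps
        hit[np.isnan(hit) & close] = t
        if not np.isnan(hit).any():
            return hit.mean()
    return np.nan  # did not converge within t_max
```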
Armed with this definition, we can prove that confirmation bias always weakly increases convergence time in any symmetric network provided that agents listen mostly to themselves.
Theorem 1. When $T_{ii} \geq \frac{1}{2}$ for all $i$, then for any symmetric network $T$, the average convergence time $\tau$ is (weakly) monotonically increasing in the amount of confirmation bias $q$.
The proof consists of two steps. First, we show that the Dirichlet energy of $T^*$ is lower than that of $T$. Second, we rely on results from the Markov chain literature to relate the Dirichlet energy to the whole spectrum of eigenvalues of $T$ and $T^*$. The result holds for the large class of symmetric networks, which captures many types of real social networks, such as trust, friendship and family networks. It does, however, exclude some other types of social networks, such as Twitter. The limitation to symmetric networks is necessary for tractability because it implies that the removal of links does not change the influence each agent has on the final consensus outcome. In other words, it simplifies the analysis by disentangling the effect of confirmation bias on convergence time from its effect on the distribution of influence, which may also affect convergence time and is the focus of the next section. The simulations in section 5 show that the result in Theorem 1 largely holds in asymmetric networks as well.
From a technical standpoint, the restriction that $T_{ii} \geq \frac{1}{2}$ for all $i$ ensures the Markov chain has no negative eigenvalues -this is a standard approach in the mathematics literature. 7 It ensures we avoid situations where the Markov chain is "nearly" periodic. Intuitively, we can think of the periods in the model as corresponding to relatively short periods of chronological time, so it would be unreasonable for agents to have large swings in their belief from one period to the next.
A tempting interpretation of Theorem 1 is that convergence time is longer with confirmation bias simply because T * is a sparser network than T . This is not correct. Recall that there are two features that define what it means to have confirmation bias. The first one is that someone with confirmation bias ignores information from others whose beliefs are too different -this is the removal of links that makes the network sparser. The second one is what the initial quote dubs the backfire effect -the weight of these links is fully redirected to the self-link. The counterexample in Figure 1 shows that both features of confirmation bias are essential for the result in Theorem 1 to hold.
In particular, consider the following extended model that nests confirmation bias as a special case. As in the current set-up, $i$ removes a link with $j$ if $|x_{i0} - x_{j0}| > 1 - q$. The extension is that $i$ reroutes a fraction $\varphi \in [0, 1]$ of the severed link to herself, and spreads out a fraction $1 - \varphi$ across her remaining links in proportion to the strength of each link. Clearly, $\varphi = 1$ is the special case in which the rerouting of links is determined by confirmation bias. Consider network $T$ and the allocation of initial beliefs in Figure 1a. If $q \in [0, 0.3]$ and $\varphi = 1$, so rerouting of links is determined by confirmation bias, then the resulting network is $T^*$ in Figure 1b. As expected from Theorem 1, the convergence time in $T^*$ is 135 periods, which is longer than the 33 periods required in $T$ when there is no confirmation bias. If, instead, $\varphi = 0$ then the resulting network $T^*_0$ is displayed in Figure 1c and convergence time is 12 periods. Notice that $T^*_0$ is sparser than the initial network $T$, but convergence in $T^*_0$ is faster than in $T$. It turns out that convergence is faster for most values of $\varphi$; Figure 1d displays network $T^*_{0.65}$ for the critical value of $\varphi = 0.65$ such that the convergence time is the same as in $T$ -for any $\varphi < 0.65$ the convergence time is faster than in $T$, even though all these networks are sparser than $T$. 8
[Figure 1. (a) Network T: convergence takes 33 periods.]
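The $\varphi$-extension is easy to express in code. The sketch below is ours, and how the "remaining links" share is split (here: proportionally over surviving out-links, excluding the self-link) is our reading where the text is silent; setting phi = 1 recovers confirmation bias.

```python
import numpy as np

def cut_and_reroute(T, x0, q, phi):
    """Cut the i -> j link when |x_i0 - x_j0| > 1 - q; reroute a fraction phi
    of the lost weight to the self-link and the rest proportionally across
    the surviving links to other agents. phi = 1 is confirmation bias."""
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    Tn = np.array(T, dtype=float)
    for i in range(n):
        cut = [j for j in range(n)
               if j != i and Tn[i, j] > 0 and abs(x0[i] - x0[j]) > 1 - q]
        lost = Tn[i, cut].sum()
        Tn[i, cut] = 0.0
        Tn[i, i] += phi * lost
        others = Tn[i].copy()
        others[i] = 0.0
        if others.sum() > 0:
            Tn[i] += (1 - phi) * lost * others / others.sum()
        else:
            Tn[i, i] += (1 - phi) * lost  # nothing left to listen to
    return Tn
```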
Influence and influencers
A single Bayesian agent who aggregates all the information in a society would weight each initial signal equally in their posterior. However, when the society learns through DeGroot social learning, the weight that an agent's initial signal has in the final consensus depends on that agent's network position. Confirmation bias alters the network structure, so it may affect an agent's influence on the consensus. Agents who cut links put more weight on their initial signal and less on others' beliefs.
Intuitively, we might expect that this means they have greater influence on the final consensus.
This section examines how confirmation bias changes and redistributes influence, and shows that this intuition does not follow through.
An appealing feature of the DeGroot framework is that an agent's influence is equal to their eigenvector centrality, and this notion is captured by the following standard definition.
Definition 3. The influence, $s_i$, of agent $i$ is the $i$-th entry in the left-hand unit eigenvector associated with the first eigenvalue $\lambda_1 \equiv 1$: $s_i = \sum_{j \in N} T_{ji} s_j$.
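Computationally, $s$ is the left-hand eigenvector of the network matrix (equivalently, the stationary distribution of the associated Markov chain), and the long-run consensus belief equals $s \cdot x_0$. A short sketch, ours:

```python
import numpy as np

def influence(T):
    """Left-hand unit eigenvector of T for lambda_1 = 1, normalised to sum to 1."""
    vals, vecs = np.linalg.eig(T.T)  # columns are left eigenvectors of T
    s = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return s / s.sum()

# The consensus is the influence-weighted average of initial signals,
# so lim_t x_t has every entry equal to influence(T) @ x0.
```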
Our first result identifies a class of networks in which confirmation bias does not affect agents' influence on the final consensus.
Remark 1. If T is symmetric and T * is strongly connected, then all agents have equal influence and confirmation bias does not alter any agent's influence.
In a symmetric network all agents have the same influence. 9 Given that the level of confirmation bias q is the same for all agents, if i cuts the T ij link then j will also cut the T ji link. The resulting network T * is, as a consequence, also symmetric, and therefore all agents will continue to have the same influence as in T .
When the network is not symmetric, agents' influence varies with their position in the network.
Because confirmation bias removes some links from the network, it will also redistribute the influence across agents. The following remark shows that when there is just a single agent who cuts links due to confirmation bias, their influence unambiguously rises.
Remark 2. If exactly one agent i cuts one or more links, then their influence, s i , strictly increases.
An appealing conjecture is that this extends to a situation where many agents cut links. Unfortunately, Figure 2 demonstrates that this is not the case. When the level of confirmation bias is in the range $q \in [0.3, 0.7)$, and initial beliefs are $(0.2, 0.5, 0.75, 0.9)$, the listening structure $T$ on the left changes to $T^*$ on the right because A removes links with C and D, and D removes the link with A. Among these agents who cut links, A's influence rises in $T^*$, but D's influence decreases in $T^*$ despite the fact she has removed a link. The reason for this decrease is that in $T^*$ the most influential agent A does not listen to D any more, and therefore D does not directly affect the beliefs of the most influential agent. Conversely, B does not cut any links but her influence rises because she is the only agent in $T^*$ that the influential agent A still listens to. The message from the counterexample is that understanding the change in an agent $i$'s influence involves keeping track of the change in influence of the agents who listen to $i$, in addition to whether $i$ cuts any links. Definition 4 captures this notion by identifying an influencer as someone who (1) cuts links with other agents, but (2) no other agent cuts a link with her, and (3) continues listening only to agents who satisfy (1) and (2).
Definition 4. An agent $i$ is an influencer if $d_{i,out}(T^*) < d_{i,out}(T)$ and $d_{i,in}(T^*) = d_{i,in}(T)$; and, for every $j \in N^*_{out}(i)$, we have that $d_{j,out}(T^*) < d_{j,out}(T)$ and $d_{j,in}(T^*) = d_{j,in}(T)$.
The opposite of an influencer is a listener -an agent who (1) does not cut any links with other agents, but (2) others cut links with her, and (3) continues to be listened to only by other agents who satisfy (1) and (2) after confirmation bias has changed the network.
Definition 5. An agent $i$ is a listener if $d_{i,out}(T^*) = d_{i,out}(T)$ and $d_{i,in}(T^*) < d_{i,in}(T)$; and, for every $j \in N^*_{in}(i)$, we have that $d_{j,in}(T^*) < d_{j,in}(T)$ and $d_{j,out}(T^*) = d_{j,out}(T)$.
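Definitions 4 and 5 are purely combinatorial and can be checked mechanically. In the sketch below (ours), self-links are ignored when counting degrees, since confirmation bias never cuts them:

```python
import numpy as np

def degrees(T):
    A = (T > 0).astype(int)
    np.fill_diagonal(A, 0)               # ignore self-links
    return A.sum(axis=1), A.sum(axis=0)  # out-degree, in-degree

def influencers_and_listeners(T, T_star):
    out0, in0 = degrees(T)
    out1, in1 = degrees(T_star)
    infl_like = (out1 < out0) & (in1 == in0)  # cuts links, is not cut
    list_like = (out1 == out0) & (in1 < in0)  # cut by others, cuts none
    A = T_star > 0
    np.fill_diagonal(A, False)
    n = len(T)
    influencers = [i for i in range(n) if infl_like[i]
                   and all(infl_like[j] for j in np.flatnonzero(A[i]))]
    listeners = [i for i in range(n) if list_like[i]
                 and all(list_like[j] for j in np.flatnonzero(A[:, i]))]
    return influencers, listeners
```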
A final restriction needed to understand shifts of influence at the individual level is to focus on a society that is "wise", as defined by Golub and Jackson [2010]. Informally, a network is wise when each agent has a negligible influence on the outcome the society converges to. The following definition states this formally.
Definition 6. A large network $T$ is wise if and only if $s_i \sim O(\frac{1}{n})$ for all $i$, and therefore $s_i \approx 0$ as $n$ gets large.
Armed with these definitions, we can characterize how confirmation bias changes the influence of influencers and listeners in a wise society.
Proposition 1. Suppose $T$ and $T^*$ are large, wise networks. Then the influence of influencers rises due to confirmation bias and the influence of listeners declines due to confirmation bias.
This result shows that confirmation bias increases the influence of a particular subgroup of agents -the influencers -who are embedded in a neighbourhood of other agents like them. Confirmation bias has the opposite effect on listeners. Given the restrictions imposed by definitions 4 and 5, a large network would typically have few (if any) influencers/listeners. An implication is that we are able to characterize changes in influence at the individual level only for a small subset of agents.
Technically, the wisdom assumption allows us to ignore the effect on an agent's influence of changes in the neighbours' neighbours influence, which helps with tractability.
Polarization
A commonly held view is that our society is becoming increasingly polarized. Intuitively, confirmation bias may be a force that pushes society toward greater polarization by preventing communication between individuals with different views, and therefore delaying the process of finding common ground. Section 2.1 confirms this intuition in the long-run -a society without confirmation bias converges earlier than one with confirmation bias, and therefore is, by definition, less polarized in the gap between the two convergence times. This section shows that, subject to a mean-field assumption, confirmation bias causes society to be more polarized at each point in time.
There are numerous metrics in economics and other social sciences that capture the notion of polarization (e.g. Esteban and Ray [1994]). We adopt what is perhaps the most basic metric in a set-up with a continuum of beliefs.
Definition 7. The polarization of a society at time $t$ is equal to $\mathrm{var}(x_t)$.
While the asymptotic behavior of Markov systems is well-studied, it is notoriously challenging to make statements about their intermediate state. For tractability reasons, we assume that the distribution of initial signals is uniform, and also make the following mean-field assumption to reduce the level of noise in the system.
Assumption 1 (Mean-field). Each agent $i$'s neighborhood is representative of the society as a whole, so the average of $x_{j0}$ over $j \in N(i) \setminus \{i\}$ is approximately $\mu$ for all $i \in N$, and $x_j \perp\!\!\!\perp T_{ij}$ for all $i$.
There are two justifications for using a mean-field approach. Theoretically speaking, it allows us to isolate the randomness introduced by confirmation bias from the intrinsic randomness of the system. The short and medium term evolution of beliefs will depend on the random initial allocation of signals. For instance, a handful of agents may stick to an extreme position for a while because they happen to form a close-knit community with similar beliefs to begin with.
This particular instance of the evolution of the system will, therefore, have a high initial level of polarization that is unaffected by confirmation bias, but may make it more difficult to identify the effect of the bias on polarization. By assuming that each agent's neighborhood is representative of society as a whole, we block this channel -allowing us to explore whether confirmation bias on its own increases polarization.
In practice, the mean-field assumption is a reasonable approximation in a large society if we ignore the presence of homophily -the tendency to associate with like-minded people. In a large society, individuals have several friends and therefore the size of one's neighborhood ensures it is approximately a representation of the wider society. The simulations in section 5 show that the following result on polarization holds even if we drop the mean-field assumption.
Proposition 2. Assume that $x_0 \sim U[0, 1]$ and the mean-field assumption holds. Then at each point in time polarization is (weakly) monotonically increasing in the strength of confirmation bias $q$.
There are two steps to the proof. First, we show that, under the mean-field assumption, we can characterize the belief for each agent at time t as a weighted average between the initial belief and the average belief µ in society. Second, we prove that the variance of these beliefs is increasing in the strength of the self-loops, and, therefore, in the amount of confirmation bias.
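A quick simulation, ours, illustrates the mechanism behind Proposition 2 without the mean-field machinery: draw uniform signals, build $T^*$ as in Definition 1, and track $\mathrm{var}(x_t)$. The complete, equal-weight network used here is only for illustration.

```python
import numpy as np

def bias_network(T, x0, q):  # as in the Definition 1 sketch above
    far = np.abs(x0[:, None] - x0[None, :]) > 1 - q
    np.fill_diagonal(far, False)
    T_star = np.where(far, 0.0, T)
    T_star[np.diag_indices_from(T_star)] += (T * far).sum(axis=1)
    return T_star

def polarization_path(T, x0, t_max=15):
    """var(x_t) for t = 0, ..., t_max - 1."""
    x, path = x0.copy(), []
    for _ in range(t_max):
        path.append(x.var())
        x = T @ x
    return np.array(path)

rng = np.random.default_rng(0)
n = 300
T = np.full((n, n), 1.0 / n)  # everyone listens to everyone equally
x0 = rng.uniform(0, 1, n)
for q in (0.0, 0.5, 0.7):     # polarization decays more slowly as q rises
    print(q, polarization_path(bias_network(T, x0, q), x0)[:5].round(4))
```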
Optimal networks
A benevolent social planner would want to maximize the chance that society converges to the truth.
Confirmation bias works against the objective of the social planner in two ways. We examined the first one in section 2.2 -it redistributes the influence across agents so that some initial signals are weighted more than others. The second one is that confirmation bias may break the network into different components, which leads to information loss as the content of some initial signals is not aggregated. Throughout the paper we have ignored information loss by assuming that T * is strongly connected, but in this section we relax this assumption to investigate the social planner's decision.
A benchmark case is an omniscient social planner who can observe the initial allocation of signals and the level of confirmation bias, and can then engineer the network. Appendix A.3 shows that in this case, if convergence to a consensus is possible, the planner can always guarantee society converges to the truth by constructing an "octopus" network -an agent at the center who only listens to herself, and everyone else listening to the center directly or indirectly, depending on how far their signal is from that of the center. 10 Convergence to the truth requires an octopus, rather than a simple star, network to prevent confirmation bias from breaking the network into separate components.
In a more realistic and interesting set-up, the social planner does not know the distribution of initial signals or the level of confirmation bias. We still assume, however, that she can specify the network structure T subject to a budget of B ≡ nd links, where d > 0. The social planner can, therefore, specify the optimal network, and the following definition formalizes what optimality means.
Definition 8. A network $T$ is optimal given confirmation bias, $q$, and the budget of links, $B$, if, among all networks with at most $B$ links, it maximizes the probability that society converges to the truth, $P\left(\lim_{t \to \infty} x_t = \frac{1}{n} \sum_i x_{i0}\right)$.
An optimal network maximizes the probability that society converges to the truth. The truth is the average of the initial signals, $x_{i0}$, and is exactly the value a single Bayesian agent would reach after aggregating all of the initial signals. To reach the truth, society needs to incorporate all signals, and weight them all equally. Maximizing the probability of reaching the truth therefore requires minimizing the chance that one or more agents' initial information is lost because the network breaks into multiple components. Notice that this condition does not depend on the size of the breakaway component(s) -the objective is to minimize any breakaway and not its size.
Weighting all of the signals equally can then be achieved by network symmetry.
Before stating the main result of this section, we need one assumption, link independence, to help with tractability. This assumption means we can ignore correlations among links that are removed due to confirmation bias. In particular, it assumes that the probability a link from $i$ to $j$ is removed does not depend on whether/how many other agents $k$ listening to $j$ have cut links. Clearly this is an approximation, because it is more likely that links to an agent with an extreme belief will be removed, but this type of correlation is challenging to handle in a network context and putting it aside allows us to prove the following statement.
Proposition 3. Assume link independence and a budget of B ≡ nd links. Then T is optimal if it is symmetric, has no self-links, and its unweighted equivalent is vertex-transitive with degree d.
As Remark 1 showed, symmetric networks ensure that every initial signal receives equal weight in the learning process. The absence of self-links ensures we allocate all links in the budget to increase the robustness of the network to breakaway components. Vertex transitivity means that the network is completely homogeneous so every agent is identical in the structure of their interactions.
Therefore, by exhausting the available budget of links, this minimizes the probability of a single and/or set of agents breaking into a separate component. Notice that the statement does not constrain the distribution of link weights so the set of networks that are optimal is quite large.
Once we ignore link weights, however, all these networks are a member of the small class of vertex transitive networks, which is a subset of regular networks.
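For concreteness, one member of the optimal class under the proposition's assumptions is an equal-weight ring lattice: symmetric, no self-links, and vertex-transitive once weights are ignored. The construction below is our illustration, not the paper's:

```python
import numpy as np

def ring_lattice(n, d):
    """Each agent listens to her d nearest neighbours on a ring (d even),
    with weight 1/d each and no self-link; the unweighted version is
    vertex-transitive."""
    assert d % 2 == 0 and 0 < d < n
    T = np.zeros((n, n))
    for i in range(n):
        for k in range(1, d // 2 + 1):
            T[i, (i + k) % n] = 1.0 / d
            T[i, (i - k) % n] = 1.0 / d
    return T

T = ring_lattice(12, 4)
assert np.allclose(T.sum(axis=1), 1) and np.allclose(T, T.T)
```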
Definition 8 focuses exclusively on the final outcome of getting to the truth, but one may argue that a social planner would also care about achieving this quickly. This would entail characterizing the distribution of link weights that maximize convergence speed in the class of optimal networks.
To the best of our knowledge, this is an unsolved problem in the graph theory literature. 11 There are, however, algorithms to obtain an approximate characterization. Appendix A.3 discusses two well-known ones -the Maximum Degree Heuristic and the Metropolis-Hastings algorithm. Both suggest that, given the constraints imposed by Proposition 3, unweighted networks are likely to converge quickly. In other words, a social planner that also cares about the speed of convergence would engineer a network that is symmetric, unweighted and vertex transitive.
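For reference, the Metropolis-Hastings construction assigns each edge of an undirected graph the weight $\min(1/d_i, 1/d_j)$ and keeps any leftover mass as a self-loop; on a $d$-regular graph this yields equal weights $1/d$ and no self-loops, consistent with the remark that unweighted networks tend to converge quickly. A sketch, ours:

```python
import numpy as np

def metropolis_hastings(A):
    """Symmetric chain on an undirected 0/1 graph A (no self-loops, no
    isolated nodes): P_ij = min(1/d_i, 1/d_j) on edges, remainder on the
    self-loop. The stationary distribution is uniform."""
    d = A.sum(axis=1).astype(float)
    assert (d > 0).all(), "no isolated nodes"
    P = np.where(A > 0, np.minimum(1.0 / d[:, None], 1.0 / d[None, :]), 0.0)
    np.fill_diagonal(P, 1.0 - P.sum(axis=1))
    return P
```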
Shock elections
The previous section has shown that confirmation bias affects the process of learning from others by making it slower, more polarized and by redistributing individuals' influence. In the past decade, the advent of social media has arguably increased the visibility and weight that information we learn from others has on our views, and this has become increasingly relevant in the context of elections (see, e.g., Kohut et al. [2008], Braha and De Aguiar [2017] and Weeks et al. [2017]). In order to examine how the impact of confirmation bias on learning affects electoral outcomes, we embed the learning framework in section 1 into a basic voting model. The objective is to understand whether "shock" elections are more likely in a world where learning is affected by confirmation bias.
We assume there are two candidates $Y = \{0, 1\}$ whose belief, or ideology, is fixed at $x_{Y=0} = 0$ and $x_{Y=1} = 1$. There are $n$ voters and each voter has an initial belief $x_{i0} = x_i \in \{x_{EL}, x_{CL}, x_S, x_{CR}, x_{ER}\} \subset [0, 1]$. Denote by $f_i$ the fraction of voters assigned initial belief $x_i$, with $0 < f_i < 1$ and $\sum_{i=EL}^{ER} f_i = 1$. For expository purposes, we can think of 0 as the "Left" candidate, and 1 as the "Right" candidate. Initially, the spectrum of voters' preferences spans "Extreme Left" (EL), "Center Left" (CL), "Swing voters" (S), "Center Right" (CR), and "Extreme Right" (ER).
Voters communicate through a network $T$ according to the model in section 1. Each voter $i$'s utility function $U_{it} = u(|x_{it} - x_y|)$ is strictly decreasing in the distance between their own belief (at the time voting takes place) and the belief of the winning candidate. The winner of an election at time $t$ is determined by simple majority -the candidate with the most votes wins and a tie is resolved by a coin toss. This set-up implies that sincere voting is a weakly dominant strategy by application of the standard Median Voter Theorem result. 12 Thus, a voter $i$ facing an election at time $t$ casts a vote $v_{i,t}$ for the Right candidate if $x_{it} > 0.5$, for the Left candidate if $x_{it} < 0.5$, and decided by a coin toss if $x_{it} = 0.5$.
Without loss of generality, throughout this section we assume that $f_{EL} + f_{CL} > f_{CR} + f_{ER}$ and $\sum_{i=EL}^{ER} x_{i0} f_i < 0.5$. The first assumption guarantees that if an election were to happen at time $t = 0$ before any learning takes place then the Left would win it. The second assumption states that a society which correctly aggregates all initial information would in the end vote for the Left as well. In order to focus our attention on the interesting case in which the learning process converges to the truth, we further assume that the society is "wise" as defined in Definition 6.
These assumptions restrict our attention to a society that votes for the same candidate -the Left one without loss of generality -before learning takes place and after learning has occurred.
The outcome we are interested in is whether the presence of confirmation bias makes the shock electoral outcome of a win for the Right candidate more likely.
Definition 9. In a society that exhibits wisdom, a shock election occurs if there exists a time 0 < t < ∞ such that the Right wins the election that occurs at t.
The following proposition shows that confirmation bias makes a shock electoral outcome possible even in a society that exhibits wisdom.
Proposition 4. In a large society where the mean-field and wisdom assumptions hold, a shock election can occur with confirmation bias, but it never occurs without confirmation bias.
As discussed in section 2.3, the purpose of the mean-field assumption is to isolate the randomness introduced by confirmation bias from the intrinsic stochasticity of the system. Without the meanfield assumption, a shock election may occur even without confirmation bias simply because beliefs have a random fluctuation toward the Right at some point in the learning process before converging to vote for the Left. The proof in Appendix B.5 shows that this does not happen with the meanfield assumption, and this provides a clear benchmark to investigate whether confirmation bias adds additional noise that may cause a shock election. Furthermore, the mean-field assumption is a good approximation in large networks and the simulations in section 5 show that the result in Proposition 4 is robust to relaxing this assumption.
We prove that a shock election can occur with confirmation bias with the counterexample in Figures 3 and 4. Consider a society with 5 voters connected by network $T$ represented in Figure 3, and suppose initial beliefs are given by the vector $x_0$ shown there; it is then straightforward to compute the learning process. Now suppose that the level of confirmation bias is $q = 0.78$. Figure 4 represents the resulting network $T^*$ after the removal of links. If we compute the learning process then, by the wisdom assumption, it converges to the same outcome as without confirmation bias, but now a shock election occurs at time $t = 1$, when the Right would win. What the counterexample shows is that a shock election can occur in a society that would otherwise make the correct decision if there was no learning and/or no confirmation bias. In particular, the effect of confirmation bias is that it can push a majority to support the Right candidate in the short term even if in the long term the society correctly aggregates the initial signals and votes for the Left candidate. In this counterexample, this happens because of two effects: (i) Swing voters stop listening to the Extreme Left/Right voters due to confirmation bias, and (ii) they put more weight on the views of Center Right compared to Center Left voters. This means that in the short term the majority of Swing voters vote for the Right candidate, who obtains majority support. In the long term, learning continues, leading to the correct aggregation of the initial information thanks to the wisdom assumption, but this means that there is a time window when a shock election can occur which would not happen in a world without confirmation bias.
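The exact numbers behind Figures 3 and 4 are only in the original figures, but the mechanics are easy to replicate. The checker below (ours) runs the learning process on a post-bias network and records every period at which sincere majority voting would elect the Right; feeding it $T^*$ from the bias_network sketch above reproduces the experiment's logic.

```python
import numpy as np

def election_winner(x, rng):
    """Sincere voting: 1 (Right) if belief > 1/2, 0 (Left) if < 1/2;
    coin tosses resolve indifferent voters and tied elections."""
    votes = np.where(x > 0.5, 1,
                     np.where(x < 0.5, 0, rng.integers(0, 2, size=len(x))))
    right = votes.sum()
    if right == len(x) - right:
        return int(rng.integers(0, 2))
    return int(right > len(x) - right)

def shock_times(T_star, x0, t_max=50, seed=0):
    """Periods 0 < t <= t_max at which the Right would win."""
    rng = np.random.default_rng(seed)
    x, times = np.asarray(x0, dtype=float).copy(), []
    for t in range(1, t_max + 1):
        x = T_star @ x
        if election_winner(x, rng) == 1:
            times.append(t)
    return times
```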
Fringe media organizations
Changes in technology and the emergence of social media have made it easier to find news and information tailored exactly to your preferences [Sunstein, 2018, Chapter 3]. This has been accompanied by a polarization of the landscape, with outlets characterized by an extreme ideology achieving a large audience (Prior [2013] provides some support for this view). The purpose of this section is to examine the role of confirmation bias in the growth of media organizations with extreme ideologies. To do this, we extend the model in section 1 by introducing media organizations which choose their ideology and care about maximizing their audience, and by having agents form a link with a media organization before learning takes place. There are $M > 5$ media organizations, 13 and each organization $m$ chooses an ideology $\mu_m \in [0, 1]$ to maximize its audience. 14 Each agent receives a signal drawn from the uniform distribution on $[0, 1]$, which is agent $i$'s initial belief $x_i$. 15 The novel element here is that each agent also forms a link of weight $\varepsilon$ with one and only one media organization at zero cost. Both the assumptions that the link weight is $\varepsilon$ and that it is costless are for simplicity, given that the focus of this section is on the choice of ideology by the media organizations. 16 We assume each agent $i$'s payoff is equal to $\delta |x_{i0} - \mu_m| < 0$, i.e. agents choose to listen to the media organization whose belief is closest to their own. If there is confirmation bias of strength $q$ then an agent $i$ forms a link with a media organization $m$ only if $|x_{i0} - \mu_m| \leq 1 - q$.
Let $\Psi^q_M$ be the set of equilibria with $M$ firms and confirmation bias of strength $q$. 17 An equilibrium $\psi = \{\mu_1, ..., \mu_M\} \in \Psi^q_M$ is a set of ideologies for the media organizations. The focus of our analysis is the media organization with the most extreme ideology $\mu_{fr}(q, M) = \min_{\mu_m \in \psi}\{\mu_m\}$, which we dub the "fringe" media organization. Notice that, by the symmetry of our set-up, it suffices to investigate the most extreme Left ideology because for any set of ideologies that constitutes an equilibrium, its mirror image is also an equilibrium.
13 The M > 5 assumption allows us to focus on the interesting and relevant case of a competitive media market. Appendix A.4 solves the model for the M ≤ 5 case.
14 This is equivalent to Hotelling [1929]'s model with fixed prices where the [0, 1] line represents the unidimensional ideology space. Eaton and Lipsey [1975] presents results for this model in the absence of confirmation bias. Anand et al. [2007] examine media bias under the same assumption that media organizations care only about profits. In contrast to this model, they assume that agents care about some objective truth, as well as ideology.
15 Notice that here we assume signals are drawn from the uniform distribution for analytical tractability.
16 Picking a weight of $\varepsilon$ allows us to sidestep the issue of how each agent redistributes the weights of their outgoing links to accommodate this new connection. The assumption that each agent forms one and only one link is for analytical tractability and can be justified behaviorally by limited attention (see, e.g., Gabaix [2014], and Masatlioglu et al. [2012] and references therein).
17 An equilibrium always exists by standard existence results. For M > 5 there are infinitely many equilibria both with and without confirmation bias.
Our first result shows that the ideology of the fringe media organization becomes more extreme as the market becomes more competitive.
Proposition 5. The ideology of the fringe media organization becomes (weakly) more extreme as the number of media organizations M increases.
The proof is an application of a result from Eaton and Lipsey [1975, p31], and is therefore omitted. The intuition is that media organizations try to spread out in ideology space to maximize the number of agents who listen to them, and therefore an increase in the number of outlets in the market pushes the most extreme organization closer to the boundary. This is in agreement with the initial observation that the proliferation of media outlets has led to a polarization of the media landscape. The following statement shows how the presence and strength of confirmation bias affects this polarization.
Proposition 6. The ideology of the fringe media organization becomes (weakly) more extreme as the strength of confirmation bias q increases.
The presence of confirmation bias reduces the incentive of a fringe media organization to moderate its ideology. Absent confirmation bias, the most extreme organization would want to moderate its ideology in order to attract more moderate listeners as this would not cost them any extreme listeners. What limits this moderation is the presence of other more moderate organizations that might want to 'leapfrog' the fringe media if it becomes too moderate. In the presence of confirmation bias, however, the fringe media organization does not want to moderate its ideology as much, for fear of losing some of its most extreme listeners.
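This mechanism can be illustrated numerically. The sketch below (ours) is not an equilibrium computation: it holds the nearest rival's ideology fixed at an arbitrary 0.4 and grid-searches the fringe outlet's audience-maximising ideology, showing how a larger $q$ pulls the best reply toward the extreme. With two outlets, an agent whose closest outlet is out of reach has no outlet in reach, so the audience is exactly the set of agents who are both closest to the fringe and within $1 - q$ of it.

```python
import numpy as np

def fringe_audience(mu_fringe, mu_rival, q, grid=100_000):
    """Audience of a Left fringe outlet at mu_fringe when the nearest rival
    sits at mu_rival: an agent at x listens to the closest outlet, but only
    links at all if |x - mu| <= 1 - q."""
    x = np.linspace(0, 1, grid)
    closest = np.abs(x - mu_fringe) < np.abs(x - mu_rival)
    within = np.abs(x - mu_fringe) <= 1 - q
    return (closest & within).mean()

# Grid-search the fringe outlet's best reply for different bias strengths.
for q in (0.0, 0.8, 0.9):
    mus = np.linspace(0.0, 0.39, 40)
    best = mus[np.argmax([fringe_audience(m, 0.4, q) for m in mus])]
    print(f"q = {q:.1f}: audience-maximising fringe ideology ~ {best:.2f}")
```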
Simulations
At different points in the paper we rely on restricting the class of networks and/or a mean-field assumption to obtain analytically tractable results. For example, Theorem 1 only holds for symmetric networks where agents listen mostly to themselves. The proofs of Proposition 2 (polarization) and Proposition 4 (shock elections) require the mean-field assumption. In this section we run an extensive set of simulations to show that our results largely hold even after relaxing the mean-field assumption, and apply to the general class of directed, weighted networks.
The first step of the simulations is to build a network with realistic structural features. We construct a modified version of the algorithm in Jackson and Rogers [2007], which creates networks that match the main structural characteristics of actual social networks. We start at step $k = 0$ from an initial cluster $M$ of $m = 40$ nodes such that every node is connected to every other, with the direction of the link randomly determined with 50% probability; i.e. if $T_{ij} = 1$ then $T_{ji} = 0$ and if $T_{ij} = 0$ then $T_{ji} = 1$ for all $i, j \in M$. In step $k = 1$ a new node is introduced and randomly "meets" $m_r = 20$ other nodes. In each random meeting there is a $p_r = 0.8$ probability of forming a link, with the direction of the link randomly determined with 50% probability. After these connections through random meetings have been formed, the new node meets $m_n = 20$ neighbors of her new connections. There is a $p_n = 0.8$ probability that she forms a link with each of these connections, and the direction of the link is randomly determined with 50% probability. In step $k = 2$ a new node is introduced and the process repeats itself. The network formation process stops at step $K = 960$ when we obtain a directed, weighted network formed by 1,000 agents. 18 An agent listens to all of their neighbors equally, but the number of neighbors differs across agents. Therefore, link weights depend on the agent who is listening, and so the network is weighted.
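For replication purposes, here is a compact sketch of the growth process just described. It is our reconstruction; details the text leaves open (how the neighbourhood pool is sampled, and giving a self-link to the rare agent with no out-links) are our assumptions.

```python
import numpy as np

def grow_network(n=1000, m=40, m_r=20, p_r=0.8, m_n=20, p_n=0.8, seed=0):
    """Growth process in the spirit of Jackson and Rogers [2007]: new nodes
    meet m_r random incumbents and up to m_n neighbours of those they linked
    with; each meeting forms a link with probability p_r (resp. p_n), and
    link direction is a fair coin flip."""
    rng = np.random.default_rng(seed)
    A = np.zeros((n, n), dtype=bool)

    def maybe_link(a, b, p):
        if rng.random() < p:
            if rng.random() < 0.5:
                A[a, b] = True
            else:
                A[b, a] = True
            return True
        return False

    for i in range(m):                 # initial cluster: every pair linked
        for j in range(i + 1, m):
            maybe_link(i, j, 1.0)
    for k in range(m, n):
        met = rng.choice(k, size=min(m_r, k), replace=False)
        linked = [j for j in met if maybe_link(k, j, p_r)]
        if linked:                     # neighbours of the new connections
            nbr = A[linked].any(axis=0) | A[:, linked].any(axis=1)
            pool = np.setdiff1d(np.flatnonzero(nbr), np.r_[met, [k]])
            for j in rng.permutation(pool)[:m_n]:
                maybe_link(k, j, p_n)
    d = A.sum(axis=1)
    T = A / np.maximum(d, 1)[:, None]  # listen to all out-neighbours equally
    iso = np.flatnonzero(d == 0)
    T[iso, iso] = 1.0                  # agents with no out-links listen to themselves
    return T
```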
In each set of simulations, we generate 1,000 different networks in this manner. For each of the networks, we execute the assignment of beliefs 100 times. This gives 100 instances to compare the outcomes of the learning process with and without confirmation bias for each of the 1,000 network structures.
In the first set of simulations, we test the generality and robustness of the results in section 2. After the network $T$ is formed, each subject is assigned an initial belief randomly drawn from a uniform distribution $U[0, 1]$. Once initial beliefs are assigned, we randomly draw one value of confirmation bias $q$ from the uniform distribution $U[0.05, 0.15]$. Starting from $T$, we remove any link $T_{ij}$ where $|x_{i0} - x_{j0}| > 1 - q$ to form the network $T^*$. The simulation runs the learning process in our set-up on both $T$ and $T^*$ and we compare outcomes between the two networks.
The histogram in the left panel of Figure 5 shows the frequency distribution of convergence time in $T$ (light grey bars) and $T^*$ (dark grey bars). From simple visual inspection, it is clear that the presence of confirmation bias shifts the distribution to the right, in line with Theorem 1. This is despite the fact that Theorem 1 assumes that every link is symmetric and that $T_{ii} \geq \frac{1}{2}$ for all $i$, while (by construction) none of the links in the simulated networks are symmetric and $T_{ii} = 0$ for all $i$ -the result holds despite choosing the most challenging tests of the assumptions made in the theory. 19 The mean convergence time without confirmation bias is 6.6 periods (with $\{\min, \max\} = [6, 8]$), while it is 7.7 with confirmation bias ($\{\min, \max\} = [6, 26]$).
The right panel of Figure 5 shows the evolution over time of the level of polarization, averaged over the learning processes with (dashed line) and without (solid line) confirmation bias. Before any learning takes place, the levels of polarization are identical, as the initial allocations of signals are the same. As soon as the learning process begins, however, the average level of polarization with confirmation bias is higher, and it remains so until everyone converges to the same belief. Given that the time to convergence is longer with confirmation bias, there is a period when the society has a positive level of polarization with confirmation bias and zero polarization in the absence of the bias. This shows that the result in Proposition 2 is robust to relaxing the mean-field assumption. In the second set of simulations, we test the robustness of the results on voting in Section 3 to relaxing the mean-field assumption. Following the set-up in Section 3, we discretize the distribution of initial beliefs onto $\{0, 0.25, 0.5, 0.75, 1\}$ (the beliefs do not need to be equally spaced on the line, but we choose this to aid exposition). Denote by $f_i$ the fraction of agents assigned initial belief $x_i$. We fix $f_S = 0.2$, so that 20% of society is made up of swing voters. The distribution of initial beliefs of the rest of society is randomly determined, and we keep taking random draws of $f_{CR}$ until the weighted average of initial beliefs is less than 0.5. This set-up ensures we focus on the interesting case where society would vote for the Left both before learning takes place and after learning has finished.
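The rejection-sampling step can be sketched as below. The labels EL, CL, S, CR, ER and the Dirichlet draw for the non-swing fractions are our assumptions: the paper only states that the rest of the distribution is random and that draws are repeated until the weighted average falls below 0.5.

```python
import numpy as np

def draw_initial_fractions(rng):
    """Return fractions (f_EL, f_CL, f_S, f_CR, f_ER) over the belief grid
    {0, 0.25, 0.5, 0.75, 1} with f_S fixed at 0.2, redrawing until the
    weighted average initial belief is below 0.5."""
    beliefs = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
    while True:
        rest = rng.dirichlet(np.ones(4)) * 0.8     # non-swing mass
        f = np.array([rest[0], rest[1], 0.2, rest[2], rest[3]])
        if f @ beliefs < 0.5:
            return f
```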
Assuming there is an election at each point in time, Figure 6 shows the fraction of shock elections won by the Right, both with (light grey bars) and without (dark grey bars) confirmation bias.
The results clearly show that the statement of Proposition 4 holds even after relaxing the mean-field and wisdom assumptions. At its peak, at time $t = 1$, 22.3% of elections with confirmation bias end in the shock outcome of a win by the Right. In contrast, the maximum fraction of shock outcomes without confirmation bias is a paltry 0.03%, again at $t = 1$. The simulations show that the tiny fraction of shock elections without confirmation bias happen shortly after learning starts, while with confirmation bias shock elections can occur in both the short and the medium term, well after the time at which a society without confirmation bias would have converged. This is closely related to the fact that convergence is relatively fast without confirmation bias, but can take a long time with it. In particular, the latest shock election without confirmation bias in the 100,000 iterations occurred at $t = 5$, while with confirmation bias 6.4% of elections at $t = 10$ result in a shock outcome, and there are still a few instances of shock outcomes when $t > 100$.
Conclusion
The advent of online social networks has dramatically expanded the number of people who shape our views. Another facet of this change is that it has become very easy to stop listening to someone; it suffices to "unfollow" them. A large number of studies in psychology show that confirmation bias is a powerful filter in how we process information, and, inevitably, the relevance of this filter increases when it becomes so easy to ignore discordant voices. This paper has incorporated confirmation bias into a model of social learning on a network to investigate its impact on the learning process and the political arena.
Confirmation bias has an unambiguously negative impact. An increase in confirmation bias slows down learning and increases polarization in society. The type of network architectures that minimize these negative effects make the position of every individual interchangeable, and they are, unfortunately, the polar opposite of the networks we find in the real world. In the political context, the presence of confirmation bias makes it possible for the worse candidate (given the available information) to win in a shock election result. Moreover, confirmation bias makes the ideology of fringe media organizations more extreme.
This paper showcases the potential of combining insights from behavioral economics and networks. Aside from a few exceptions, there is a dearth of papers combining these two methodologies, despite their common objective of incorporating realistic features of, respectively, the psychology of decision-making and social interactions into the standard economic framework. Our hope is that this contribution will encourage further studies at the intersection of these two literatures.
A Appendix: Extensions
This appendix contains generalizations and extensions to the main model.
A.1 Generalized model
The model in Section 1 assumes that the strength of confirmation bias is identical for all agents, and that confirmation bias causes agents to cut links completely. Additionally, it assumes that agents only cut links before the learning process begins. This section shows that the main result, Theorem 1, is robust to relaxing all three of these assumptions.
Relaxing these assumptions yields an updated definition of confirmation bias. First, the strength of confirmation bias, $q_i$, now depends on the agent $i$, allowing arbitrary heterogeneity in the strength of confirmation bias. Second, confirmation bias no longer reduces link weights to zero, but rather reduces them by a link-specific factor $(1 - \alpha_{ij})$; there are no restrictions on the distribution of the $\alpha_{ij}$'s (except, of course, that they are bounded between 0 and 1). Finally, the network can change every period, rather than only once, so there is now a sequence of networks $T^*_t$.
The assumptions from Section 1 that agents never reinstate links, and that they redistribute the weight from cut/weakened links to themselves remain unchanged.
Definition A.1. A society on a network $T$ in which agents have (agent-specific) confirmation bias $q_i \in (0, 1]$ communicates according to a sequence of networks $T^*_t$ in which, each period, the weight on any link to an agent whose belief differs by more than $1 - q_i$ is reduced by the factor $(1 - \alpha_{ij})$, where $q_i \in [0, 1]$, $\alpha_{ij} \in [0, 1]$, and $T^*_{ij,-1} \equiv T_{ij}$ for all $i$ and $j$.
Using this generalized definition, it is no longer immediately obvious what it means for confirmation bias to increase. We say that confirmation bias has increased if $q'_i \geq q_i$ and $\alpha'_{ij} \geq \alpha_{ij}$ for all $i$ and $j$, with at least one strict inequality.
The proof of Theorem 1 shows that the average convergence time, $\tau$, increases when link weight is moved from between-agent links to agents' self-links (mathematically, moving weight from the off-diagonal to the diagonal elements of the matrix $T$) and there are no other changes to the network. It does not rely on any other structure in the changes to the network.
Theorem 1 is therefore effectively proved as a special case, and the result holds under the generalization here without any change to the proof. The intuition is also unchanged: society converges to a consensus more slowly when agents are less willing to pay attention to those with different beliefs, and instead pay more attention to themselves.
In contrast, the influence, polarization and optimal-network results do not extend to this generalized setting. (The exception is Remark 2, because it considers a case where only one agent is affected by confirmation bias, so the heterogeneity in $q$ and in $\alpha$ has no effect.) This is because these results rely on the symmetric application of confirmation bias: if $i$ cuts a link with $j$, then $j$ must also cut a link with $i$ (if a link is present in both directions). Both the heterogeneity in the strength of confirmation bias, $q$, and the heterogeneity in the down-weighting of links violate this requirement.
A.2 Speed of Learning
Theorem 1 uses a definition of convergence time, the average convergence time, that does not explicitly involve the initial assignment of beliefs. However, the result is robust to using an alternative measure, consensus time, which considers the time taken to converge to a consensus from the worst-case initial assignment of beliefs.
Definition A.2. [Golub and Jackson, 2010] The consensus time of the network $T$ is the minimum time at which all beliefs lie within $\varepsilon$ of each other under the worst-case assignment of initial signals, where $\|\cdot\|_{s(i)}$ is a weighted version of the $\ell_2$ norm.
The consensus time is the minimum time $t$ at which all agents have beliefs within $\varepsilon$ of each other under the worst-case distribution of initial signals. The differences are weighted by the influence, $s_i$, that each agent has on the consensus. We can characterize the effect of confirmation bias on consensus time for symmetric networks.
Theorem A.1. If $T_{ii} \geq \frac{1}{2}$ for all $i$ and $T$ is symmetric, then the consensus time of $T^*$ is weakly monotonically increasing in the amount of confirmation bias $q$.
Before proving this result, it is helpful to state an existing result that links the consensus time to the second largest eigenvalue modulus (SLEM) of T . We use a result from Golub and Jackson [2008], although others exist.
Lemma A.1. [Golub and Jackson, 2008] Assume $T$ is connected, let $\lambda_2$ be its second largest eigenvalue, and let $s$ be the vector of influences with $\min_i s_i = \underline{s}$. If $\lambda_2 \neq 0$, then for any $0 < \varepsilon \leq 1$ the consensus time is bounded, above and below, by expressions that are increasing in $|\lambda_2|$.

Proof of Theorem A.1. The proof of Theorem 1 shows that all eigenvalues of the network are weakly positive and weakly increasing in the strength of confirmation bias (in particular, see the final paragraph of that proof). Therefore the second largest eigenvalue must also be the SLEM, and a weak increase in $\lambda_2$ implies a weak increase in the SLEM.
Lemma A.1 provides tight bounds on consensus time in terms of $|\lambda_2|$ (the SLEM), and therefore shows that an increase in $|\lambda_2|$ weakly increases consensus time.
Notice that this argument uses the proof of Theorem 1 directly, and so relies on the same structure of changes to the network. Therefore, it also holds under the generalized model in Definition A.1.
Markov chain convergence is governed by the spectrum of eigenvalues, and is often well approximated by the second largest eigenvalue modulus. In Theorem 1 we use the full spectrum of eigenvalues, but with a metric that does not include the initial signals. This alternative result uses only the second eigenvalue as an approximation, but does include the initial signals; however, it uses only the worst-case set of signals. This has two drawbacks. First, we would like to know the convergence time for a given allocation of initial signals, $x_0$, rather than only for the worst-case allocation. Second, the allocation that constitutes the worst-case scenario for $T$ may not also be the worst-case scenario for $T^*$. Therefore, this result does not guarantee that, for a given $x_0$, consensus is reached more quickly under $T$ than under $T^*$.
A.3 Optimal networks
This section first presents the benchmark case where the social planner is able to observe both the initial allocation of signals, $x_0$, and the level of confirmation bias, $q$. It then introduces the Maximum-Degree Heuristic and the Metropolis-Hastings Heuristic, two well-known heuristics for finding fast-converging Markov chains, and shows that they point to unweighted networks as likely candidates for achieving fast convergence.
Optimal network with full information. We focus on the case where convergence to a consensus is possible; that is, where there exists $T \in \mathcal{T}$ such that $T^*$ is strongly connected. This rules out the existence of a group $A \subset N$ where no agent $i \in A$ is willing to listen to any $j \in A^c$.
Since the social planner observes $x_0$ and $q$, she can control the network $T^*$ directly, as she knows exactly which links would, if created, be cut due to confirmation bias. Here, characterizing the optimal network is straightforward. It is a version of a 'long-armed' star network, which we call an octopus network, and it guarantees convergence to the truth in at most $\frac{1}{q}(x_{\max,0} - x_{\min,0})$ periods. To construct an octopus network, first identify an agent whose initial opinion is equal to the truth. Denote this agent $a$, and set $T_{aa} = 1$. If no such agent exists, then identify a pair of agents, $a$ and $b$, where (1) $|x_{a0} - x_{b0}| < 1 - q$ (they are willing to listen to one another), and (2) $\gamma x_{a0} + (1 - \gamma)x_{b0}$ equals the truth for some $\gamma \in (0, 1)$ (some linear combination of their initial signals is equal to the truth). Then choose $T_{aa}, T_{ab}, T_{ba}, T_{bb}$ such that $x_{a1}, x_{b1}$ exactly equal the truth and $T_{aa} + T_{ab} = T_{ba} + T_{bb} = 1$.
Second, identify all agents $i$ such that $0 \leq |x_{i0} - x_{a0}| < (1 - q)$, and set $T_{ia} = 1$; denote this set of agents $A_1$. Next, identify all agents $j$ such that $(1 - q) \leq |x_{j0} - x_{a0}| < 2(1 - q)$; denote this set $A_2$. Choose $T_{ji}$ such that $\sum_{i \in A_1} T_{ji} = 1$ and $T_{ji} = 0$ for $i \notin A_1$. Repeat this step until every agent belongs to some set $A_k$. Convergence to the truth is then guaranteed within $\frac{1}{q}(x_{\max,0} - x_{\min,0})$ periods. Note that this process does not lead to a unique network: any network in the class of octopus networks constructed by this process is optimal.
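A sketch of this construction for the single truthful agent case follows; the names are ours, the truthful agent is assumed to be agent 0, and the guard against an infeasible input is our addition (the text assumes feasibility).

```python
import numpy as np

def octopus(x0, q):
    """Octopus construction when agent 0 is assumed to hold the true initial
    opinion (the two-agent case from the text is analogous)."""
    n = len(x0)
    T = np.zeros((n, n))
    a = 0
    T[a, a] = 1.0
    assigned, prev, ring = {a}, [a], 1
    while len(assigned) < n:
        members = [i for i in range(n) if i not in assigned
                   and abs(x0[i] - x0[a]) < ring * (1 - q)]
        if not members:        # infeasible input; the text assumes this away
            break
        for i in members:
            # Listen only to previous-ring agents close enough for the link
            # to survive confirmation bias (non-empty under the paper's
            # maintained assumptions).
            close = [j for j in prev if abs(x0[i] - x0[j]) < 1 - q]
            for j in close:
                T[i, j] = 1.0 / len(close)
        assigned |= set(members)
        prev, ring = members, ring + 1
    return T
```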
Heuristics for link weights. Allocating link weights to minimize the time taken to reach a consensus is a well-studied problem in the mathematics literature. Boyd et al. [2004] present a series of methods to solve this computationally for an individual network, but in the absence of general results we instead use heuristics to suggest link weights for optimal networks.
The Maximum Degree Heuristic assigns equal weight to all between-agent links in the network, choosing the maximum feasible such weight. All remaining weight is placed on the self-link. This means that an agent with the highest degree has no self-link.
Definition A.3 (Maximum Degree Heuristic). The maximum-degree transition probability matrix $T^{md}$ is given by
$$T^{md}_{ij} = \begin{cases} \frac{1}{d_{\max}} & \text{if } (i,j) \in E,\ i \neq j, \\ 1 - \frac{d_i}{d_{\max}} & \text{if } i = j, \\ 0 & \text{otherwise,} \end{cases}$$
where $E$ is the set of links and $d_{\max} = \max_i \{d_i\}$.
The Metropolis-Hastings Heuristic is somewhat more nuanced. For any pair of agents $i$ and $j$, assume without loss of generality that $i$ has a weakly higher degree. Then a link between $i$ and $j$ is assigned a weight equal to the reciprocal of $i$'s degree. This applies to all between-agent links. Self-links are then set so that $\sum_j T_{ij} = 1$ for all $i$.
Definition A.4 (Metropolis-Hastings Heuristic). For a symmetric Markov chain, the Metropolis-Hastings transition probability matrix $T^{mh}$ is given by
$$T^{mh}_{ij} = \begin{cases} \min\left\{\frac{1}{d_i}, \frac{1}{d_j}\right\} & \text{if } (i,j) \in E,\ i \neq j, \\ 1 - \sum_{k \neq i} T^{mh}_{ik} & \text{if } i = j, \\ 0 & \text{otherwise,} \end{cases}$$
where $E$ is the set of links.
Both of these definitions are taken from Boyd et al. [2004]. Due to the restrictions imposed by Proposition 3, we only consider regular networks, implying that $d_i = d_j = d_{\max}$ for all $i$. Therefore, under both the Maximum Degree and Metropolis-Hastings heuristics, all links in the network are assigned the same weight and there are no self-links.
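Both heuristics are easy to compute from a symmetric 0/1 adjacency matrix; the sketch below follows the verbal definitions above, with `A` assumed to have a zero diagonal. On a regular network, $d_i = d_{\max}$ for all $i$, so both functions return the same matrix with equal link weights and zero self-links, matching the observation above.

```python
import numpy as np

def max_degree_chain(A):
    """Maximum Degree heuristic: every link gets weight 1/d_max and the
    remaining weight goes onto the self-link."""
    d = A.sum(axis=1)
    T = A / d.max()
    np.fill_diagonal(T, 1 - d / d.max())
    return T

def metropolis_hastings_chain(A):
    """Metropolis-Hastings heuristic: link ij gets min(1/d_i, 1/d_j), i.e.
    the reciprocal of the larger degree; self-links take the residual."""
    d = A.sum(axis=1)
    T = A / np.maximum(d[:, None], d[None, :])
    np.fill_diagonal(T, 0.0)
    T += np.diag(1.0 - T.sum(axis=1))
    return T
```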
A.4 Media
In this section we find the exact ideology of the fringe media organization for each value of M ≤ 5, and for all values of q ∈ [0, 1].
M = 2: $\mu_{fr}(q \leq 0.5, M = 2) = 0.5$ and $\mu_{fr}(q > 0.5, M = 2) = 1 - q$. When $q \leq 0.5$, all agents are still willing to listen to an ideology of $\mu = 0.5$, so the duopoly equilibrium is unaffected by confirmation bias in this range (see Eaton and Lipsey [1975] for a proof of the equilibrium). When $q > 0.5$, the fringe media organization chooses an ideology just extreme enough to attract the most extreme agent as a listener, but no more extreme: a more extreme ideology would not be optimal, as the organization could choose a more centrist ideology, lose no listeners on the fringe, and gain some listeners towards the center.

M = 3: when $q \leq 0.75$, no equilibrium exists. The peripheral organizations always want to move inwards (towards the center), which increases their audience, but this causes the organization in the middle to want to change its ideology discontinuously to become a peripheral organization.
This cycle never ends, preventing equilibrium existence. When $q > 0.75$, however, the peripheral organizations no longer want to move inwards past $\mu = 1 - q$ (or $\mu = q$), as they would lose very extreme agents from their audience by doing so. Given this, the organization in the middle no longer wants to switch its ideology to become a peripheral organization. We can see from this that Propositions 5 and 6 hold for two or more media organizations (aside from the existence issue when $M = 3$), but that neither holds for $M = 1$. At least some competition is required to sustain our results.
A.5 Simulations
In Section 5 we conducted two sets of simulations. In the first set, initial signals were drawn from a uniform distribution $U[0, 1]$ and confirmation bias was relatively low (between 0.05 and 0.15); this set was used to test the robustness of Theorem 1 and Proposition 2. The second set of simulations, which used a discrete distribution of signals and a significantly higher value of confirmation bias, was used to test the robustness of Proposition 4. This section uses the second set of simulations to further test the robustness of Theorem 1 and Proposition 2, showing that they hold even more starkly under a higher level of confirmation bias. Note that it is not possible to use the first set of simulations to further test Proposition 4, as Proposition 4 requires restrictions on the distribution of initial signals that are not present in the first set. Changes in the consensus value (due to confirmation bias) appear to be symmetrically distributed about zero and to follow an approximate bell curve. Further, when confirmation bias is stronger, the changes are much more widely dispersed, but still centered around zero (see Fig. 9).
Analytic results regarding the consensus value are intractable except in the special case of symmetric or wise networks, because the consensus value depends on the entire influence vector and on the full distribution of initial signals. Relatively little is known about how elements of the influence vector change in response to changes in the network.
B Proofs
We begin by setting out some notation. $\lambda_k$ is the $k$th eigenvalue of the Markov chain $T$, where eigenvalues are ordered such that $\lambda_1 > \lambda_2 > \ldots > \lambda_n$. Similarly, $\lambda^*_k$ is the $k$th eigenvalue of the Markov chain $T^*$. $v_k$ is the right-hand eigenvector of $T$ that corresponds to the eigenvalue $\lambda_k$. $E(f, T)$ is the Dirichlet energy functional of $T$. $s$ is the left-hand eigenvector that corresponds to the eigenvalue $\lambda_1 \equiv 1$, with $i$th element $s_i$; it is also called the influence vector, and $s_i$ the influence of agent $i$. $f$ is an arbitrary vector of length $n$.

B.1 Speed of Learning

Definition B.1. [Souzi, 2019, Definition 4.1] The Dirichlet energy functional of a network $T$ is $E(f, T) = \frac{1}{2}\sum_{i,j} s_i T_{ij}(f_i - f_j)^2$.

Theorem B.1. [Souzi, 2019, Theorem 4.6] $1 - \lambda_j = \min_f E(f, T)$, where the minimum is taken over unit-norm vectors $f$ orthogonal to the eigenvectors $v_1, \ldots, v_{j-1}$.

To avoid confusion, we use $v_k(i)$ to denote the $i$th element of the eigenvector $v_k$ (which is itself the $k$th eigenvector of the matrix), and $T^t(i, \cdot)$ to denote the $i$th row of the matrix $T^t$.
Disambiguation: in this proof, we use subscripts to denote two different things. Subscripts on $T$, $s$, and $f$ (i.e. $T_{ij}$, $s_i$, $f_i$) denote the $i$th (or $ij$th) element of the vector/matrix. Subscripts on $v$ and $\lambda$ (i.e. $v_k$, $\lambda_k$) denote the $k$th ordered eigenvector/eigenvalue. So $v_k$ refers to a whole eigenvector (an $n \times 1$ vector), not a single element of it.
Proof of Theorem 1. Using Definition B.1 to compare the Dirichlet energies of $T$ and $T^*$ term by term (confirmation bias only moves weight from off-diagonal terms, which contribute $(f_i - f_j)^2 \geq 0$, to diagonal terms, which contribute zero), we obtain $E(f, T^*) \leq E(f, T)$ for all $f$ and any symmetric $T$ and associated $T^*$.
Since $E(f, T^*) \leq E(f, T)$ for all $f$, we have $\min_f \{E(f, T^*)\} \leq \min_f \{E(f, T)\}$ subject to any constraints on $f$. Therefore, it follows from Theorem B.1 that $1 - \lambda^*_j \leq 1 - \lambda_j$ for all $j$. This establishes that all eigenvalues are weakly monotonically increasing in $q$.
Therefore $\frac{1}{n}\sum_i \|T^t(i, \cdot) - s\|_2^2 = \frac{1}{n}\sum_{k=2}^n \lambda_k^{2t}$, so we can write our convergence metric as $\tau = \min\{t : \frac{1}{n}\sum_{k=2}^n \lambda_k^{2t} < \epsilon\}$. It is clearly the case that $\frac{1}{n}\sum_{k=2}^n \lambda_k^{2t}$ is decreasing in $t$ and increasing in $\lambda_k$ for all $k \in \{2, \ldots, n\}$. We know from above that $q' > q \implies \lambda'_k \geq \lambda_k$ for all $k \in \{2, \ldots, n\}$. Therefore $q' > q \implies \frac{1}{n}\sum_{k=2}^n (\lambda'_k)^{2t} \geq \frac{1}{n}\sum_{k=2}^n \lambda_k^{2t}$. This uses the fact that all eigenvalues are weakly positive. To see this, note that if $T_{ii} \geq \frac{1}{2}$ for all $i$, then there exists another Markov chain $\tilde{T}$ such that $T = \frac{1}{2}(I + \tilde{T})$, where $I$ is the identity matrix. All eigenvalues of $I$ equal 1, and since $\tilde{T}$ is a Markov chain, all of its eigenvalues are weakly greater than $-1$.
So the minimum time taken to achieve $\frac{1}{n}\sum_{k=2}^n \lambda_k^{2t} < \epsilon$ is weakly monotonically increasing in $q$, for any fixed $\epsilon > 0$.
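The convergence metric is straightforward to evaluate numerically from the spectrum; a sketch for symmetric $T$ follows (for the directed simulated networks one would instead use the moduli of complex eigenvalues).

```python
import numpy as np

def average_convergence_time(T, eps=1e-3, max_t=10_000):
    """tau = min{ t : (1/n) * sum_{k>=2} lambda_k^(2t) < eps }; assumes T is
    symmetric, so that its eigenvalues are real."""
    lam = np.sort(np.linalg.eigvalsh(T))[::-1]   # lam[0] = 1
    n = len(lam)
    for t in range(1, max_t + 1):
        if (lam[1:] ** (2 * t)).sum() / n < eps:
            return t
    return max_t
```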
Notice that this proof relies only on some off-diagonal elements of the matrix decreasing and some corresponding diagonal elements increasing (with no other changes). Therefore, the proof clearly holds under the generalized model (set out in Definition A.1).
B.2 Influence
Proof of Remark 1. First, notice that if an $n \times n$ Markov chain $T$ is symmetric, then it is doubly stochastic: all rows and all columns sum to 1. Therefore, $\frac{1}{n} \cdot \mathbf{1} \cdot T = \frac{1}{n} \cdot \mathbf{1}$, where $\mathbf{1}$ is a $1 \times n$ vector of ones. Since $\lambda_1 \equiv 1$, $\frac{1}{n} \cdot \mathbf{1}$ is the first left-hand eigenvector of $T$, which by definition is the influence vector.
Second, since $|x_{i0} - x_{j0}| = |x_{j0} - x_{i0}|$, in a symmetric network $T$ agent $i$ cuts a link with $j$ if and only if $j$ cuts a link with $i$; this is due to the homogeneity of $q$. Therefore, if $T$ is symmetric, then $T^*$ is also symmetric.
Taking these two observations together implies that $\frac{1}{n} \cdot \mathbf{1}$ is also the first left-hand eigenvector of $T^*$. Clearly, influences remain unchanged.
Proof of Remark 2. [Schweitzer, 1968, Section 3, equation 8] provides an equation for how the influence of an agent changes following a perturbation in a single row: $s^*_i = s_i\left(1 + \frac{U_{ii}}{1 - U_{ii}}\right)$, where $U = (T^* - T)Z$, and $Z = (I - T + \mathbf{1}s)^{-1}$ is the fundamental matrix of the Markov chain $T$.
When only agent $i$ cuts links, $T^*$ differs from $T$ only in the $i$th row, so we can apply this formula. Further, when $T^*$ differs from $T$ only by entries in the $i$th row, then $U_{ii} = \sum_j (T^*_{ij} - T_{ij})Z_{ji}$. [Conlisk, 1985, Section 2, equation 8] shows that $Z_{ii} > Z_{ji}$ for all $j \neq i$. Since the perturbations in row $i$ sum to zero and agent $i$ moves weight from other agents onto herself, this implies $U_{ii} = \sum_{j \neq i}(T_{ij} - T^*_{ij})(Z_{ii} - Z_{ji}) > 0$. Therefore $U_{ii} > 0$, and $s^*_i > s_i$. Alternatively, we can see the same result by repeated application of the elementary perturbation result from [Conlisk, 1985, Section 4].
Proof of Proposition 1. We prove the result for an influencer, $i$. Denote the set of influencers $\Theta$, and the set of agents that influencers listen to $\bar{\Theta}$.
Taking the definition of influence for agent $i$, $(1 - T_{ii})s_i = \sum_{j \in N(i)} T_{ji}s_j$, and for agent $j$, $(1 - T_{jj})s_j = \sum_{k \in N(j)\setminus\{i\}} T_{kj}s_k + T_{ij}s_i$, and substituting the second into the first yields an expression for $s_i$; by identical logic we can obtain an equivalent equation for $s^*_i$. By definition, (a) $i \in \Theta$ implies $T_{ji} = T^*_{ji}$ for all $j \neq i \in N(i)$, and (b) $j \in \bar{\Theta}$ implies $T_{kj} = T^*_{kj}$ for all $k \neq j, i \in N(j)\setminus\{i\}$. Now let (c) $T^*_{ii} = T_{ii} + \Delta_i$, $\Delta_i > 0$, and (d) $T^*_{jj} = T_{jj} + \Delta_j$, $\Delta_j > 0$. Substituting (a)-(d) into the equation for $s^*_i$, subtracting the equation for $s_i$, and rearranging the right-hand side, we assume that third-round effects are small: $T_{ji}T_{kj}(s^*_k - s_k) \approx 0$. By the definition of an influencer, if $i$ cuts a link with $j$ ($T^*_{ij} < T_{ij}$), then $j$ cannot listen to $i$ ($T_{ji} = 0$); otherwise, $j$ would cut her link with $i$ (due to the homogeneity of confirmation bias), which is not permitted by the definition of an influencer. Therefore, $T_{ij}T_{ji} = T^*_{ij}T_{ji}$: in any instance where $T_{ij} \neq T^*_{ij}$ we have $T_{ji} = 0$, so that component of the summation is zero, and it is appropriate to equate $T^*_{ij}$ with $T_{ij}$ in this summation, as any instances where they differ are irrelevant. Using this result, we can rearrange the left-hand side. Moreover, there must exist at least one $j \in N(i)$ where $T_{ji} = 0$, as the definition of an influencer requires that $i$ cuts links with some $j$ but no $j$ cuts links with $i$; this in turn requires that there is some $j$ who does not listen to $i$ but is listened to by $i$. Equating the left- and right-hand sides and rearranging, all right-hand-side terms are weakly greater than zero, which delivers the result. The proof for a listener follows the same logic as that for an influencer and does not provide any additional insight; it is available from the authors upon request.
B.3 Polarization
Lemma B.2. Under Assumption 1, the learning rule for an agent $i$ can be expressed as $x_{it} = (1 - T^t_{ii})\mu + T^t_{ii}x_{i0}$.

Proof of Lemma B.2. Proof by induction. First, we show the claim for $t = 1$. The definition of covariance lets us expand $\sum_j T_{ij}x_{j0}$ in terms of $\text{cov}(x_{j0}, T_{ij})$. By Assumption 1 we have $\bar{x}_{j0} \approx \mu$ and $\text{cov}(x_{j0}, T_{ij}) \approx 0$.
By definition, $T_{ij} = \frac{1}{d_i}(1 - T_{ii})$. Therefore, the expansion above rearranges to $\sum_{j \in N(i)\setminus\{i\}} T_{ij}x_{j0} \approx \mu(1 - T_{ii})$. The learning rule in Section 1 can be decomposed as $x_{i1} = \sum_{j \in N(i)\setminus\{i\}} T_{ij}x_{j0} + T_{ii}x_{i0}$. Substituting the result from immediately above into the learning rule yields $x_{i1} \approx \mu(1 - T_{ii}) + T_{ii}x_{i0}$, as required. Now assume that for $t = k$ we have $x_{ik} = (1 - T^k_{ii})\mu + T^k_{ii}x_{i0}$, and prove the claim for $t = k + 1$. The learning rule for $t = k + 1$ is $x_{ik+1} = \sum_{j \in N(i)\setminus\{i\}} T_{ij}x_{jk} + T_{ii}x_{ik}$. We substitute in the assumption for $t = k$, multiply out and rearrange terms. Recall that the covariance identity rearranges to $N\,\text{cov}(a, b) = \sum_j a_j b_j - N\bar{a}\bar{b}$ for general $a, b$.
Adding or subtracting a constant does not alter covariance, and a multiplicative constant acts linearly, so $d_i T^k_{jj}\,\text{cov}(x_{j0}, T_{ij}) = \sum_j (x_{j0} - \mu)T_{ij}T^k_{jj}$. By Assumption 1, the covariance term approximately equals zero, so $\sum_j (x_{j0} - \mu)T_{ij}T^k_{jj} \approx 0$. Using this observation, the expression simplifies, and simple rearranging yields $x_{ik+1} = \mu(1 - T^{k+1}_{ii}) + T^{k+1}_{ii}x_{i0}$. This completes the proof.
Proof of Proposition 2. Recall the definition of $\text{var}(x_t)$ from Definition 7. Substituting in the simplified learning rule from Lemma B.2 and rearranging terms yields $\text{var}(x_t) = \frac{1}{n}\sum_i T^{2t}_{ii}(x_{i0} - \mu)^2$. We now show that the belief at time $t$ is always further from the mean in the case with confirmation bias.
Proof by induction. For $t = 1$: $x^*_{i1} = \sum_j x_{j0}T^*_{ij} + T^*_{ii}x_{i0}$, which rearranges to $x^*_{i1} = (1 - T^*_{ii})\mu + T^*_{ii}x_{i0} + \text{Cov}(x_{j0}, T^*_{ij})$. Assumption 1, $\text{Cov}(x_{j0}, T_{ij}) = 0$, coupled with $x_i \sim U[0, 1]$, implies that $\text{Cov}(x_{j0}, T^*_{ij})$ has the same sign as $x_{i0} - \mu$: if $x_{i0} > 0.5$, then after cutting links due to confirmation bias, $i$ listens disproportionately to other agents with $x_{j0} > 0.5$, and conversely for $x_{i0} < 0.5$. Therefore, $|x^*_{i1} - \mu| - |x_{i1} - \mu| = |\text{Cov}(x_{j0}, T^*_{ij})| > 0$. For $t = k$: assume that $x^*_{ik} = (1 - T^{*k}_{ii})\mu + T^{*k}_{ii}x_{i0} + \epsilon_{ik}$, where $\epsilon_{ik} = T^*_{ii}\epsilon_{ik-1} + \text{Cov}(x_{jk-1}, T^*_{ij})$ takes the same sign as $(x^*_{ik} - \mu)$. For $t = k + 1$: $x^*_{ik+1} = \sum_j T^*_{ij}x^*_{jk} + T^*_{ii}x^*_{ik}$. First, we use a covariance expansion, then substitute in the assumption for $x^*_{ik}$. Next, we multiply out, using the covariance expansion and the assumption that $\bar{x}_{j0} = \mu$. Finally, we note that (1) $\text{Cov}(T^*_{jj}, x_{j0}) = 0$, due to the mean-field assumption and the uniform distribution of initial signals, and (2) $\bar{\epsilon}_{jk} = 0$, due to the uniform distribution of initial signals and therefore the symmetry of the problem. Notice that for a given agent $i$, $x^*_{it} - \mu$ has the same sign for all $t$; therefore, $\text{Cov}(x_{jt}, T^*_{ij})$ has the same sign as $x^*_{it} - \mu$, by identical logic to the $t = 0$ case. Defining $\epsilon_{jk+1} = T^*_{ii}\epsilon_{jk} + \text{Cov}(x_{jk}, T^*_{ij})$ yields the final equation, completing the induction.
Therefore, $|x^*_{ik} - \mu| - |x_{ik} - \mu| = |\epsilon_{ik}| > 0$. Higher $q$ increases the magnitude of $\text{Cov}(x_{j0}, T^*_{ij})$: it weakly increases the average opinion that $i$ continues to listen to if $x_{i0} > 0.5$ (and decreases that average opinion if $x_{i0} < 0.5$). In the special case where $x_{i0} = 0.5$, the average opinion that $i$ continues to listen to is unaffected by $q$; the symmetry of the problem in this case means that $\text{Cov}(x_{j0}, T^*_{ij}) = 0$ if $x_{i0} = 0.5$. By similar logic, higher $q$ also increases the magnitude of $\text{Cov}(x_{jt}, T^*_{ij})$ for all $t$. That variance is increasing in $q$ follows from this observation.
B.4 Optimal networks
Lemma B.3. Given a network $T$ and a vector of unallocated beliefs $x$, and taking expectations over the realizations of (allocated) initial beliefs, the probability of cutting a randomly selected link in $T$ is (1) the same for all links, and (2) weakly monotonically increasing in $q$.
Proof of Lemma B.3. Consider an arbitrary link, $T_{ij}$. The probability of cutting this link is the number of pairs of beliefs further apart than $(1 - q)$, as a fraction of the total number of pairs of beliefs:
$$f_i(q) = Pr(\text{reroute} \mid q) = \frac{\#\{x_k, x_l : |x_k - x_l| > (1 - q),\ k \neq l\}}{\#\{x_k, x_l : k \neq l\}}.$$
This does not depend on our choice of link $T_{ij}$, and so is the same for all links; hence $f_i(q) = f(q)$ for all $i$. The numerator $\#\{x_k, x_l : |x_k - x_l| > (1 - q),\ k \neq l\}$ is weakly monotonically increasing in $q$, and the denominator $\#\{x_k, x_l : k \neq l\}$ is unaffected by $q$, so $f(q)$ is weakly monotonically increasing in $q$.
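Empirically, $f(q)$ is just a count over pairs of beliefs; a minimal sketch:

```python
import numpy as np

def cut_probability(x, q):
    """Share of ordered pairs of distinct beliefs further apart than 1 - q."""
    diff = np.abs(x[:, None] - x[None, :])
    off_diag = ~np.eye(len(x), dtype=bool)
    return (diff[off_diag] > 1 - q).mean()
```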
Definition B.2. Information loss, $L$, is the number of agents who cease to have influence in the giant component due to confirmation bias: $L = \#\{i : s^*_{giant,i} = 0,\ s_{giant,i} > 0\}$, where $s_{giant,i}$ is the influence of agent $i$ in the giant component.
Definition B.3 (Vertex-transitivity). A network $T$ is vertex-transitive if for any pair of agents $i$ and $j$ there exists a graph automorphism $\pi : N \to N$ such that $\pi(i) = j$.

Proof of Proposition 3. First, some machinery. Sets: let $g_{k,z}$ denote the $z$th group of $k \in \{1, 2, \ldots, n\}$ agents, and let $g^c_{k,z}$ be its complement; note that $g^c_{k,z} \equiv g_{n-k,z}$. Let $G_k$ denote the set of all groups of agents of size $k$, with $|G_k| = Z_k$, so $G_k$ contains $Z_k$ different groups. Let $A_k$ denote the event that one or more groups of size $k$ become disconnected from the network due to confirmation bias. Probabilities: let $Pr(A_k)$ be the probability that the event $A_k$ occurs, and let $Pr(g_{k,z})$ be the probability that the $z$th group of size $k$ becomes disconnected. Let $Pr(L > 0)$ denote the probability that there is some information loss in the network. Degrees: let $d_{g_{k,z}}$ denote the number of links between the agents in the group $g_{k,z}$ and those in $g^c_{k,z}$. If $T$ is symmetric and there is no information loss, then the consensus is unchanged (by Remark 1). The optimal network is therefore a symmetric network that minimizes $Pr(L > 0)$. By Lemma B.3, the probability of cutting a randomly chosen link is $f(q)$. Under Assumption 2 we apply this to all links and ignore correlations between cutting probabilities. The probability that a group $g_{k,z}$ becomes disconnected is the probability that all of its links to the rest of the network are cut; that is, $Pr(g_{k,z}) = f^{d_{g_{k,z}}}$. The probability that one or more groups of size $k$ become disconnected is then one minus the probability that no group of size $k$ becomes disconnected, $Pr(A_k) = 1 - \prod_z (1 - f^{d_{g_{k,z}}})$. Due to the convexity of taking exponents, this term is minimized by setting $d_{g_{k,z}} = d_k$ for all $g_{k,z} \in G_k$, for any value of $k$. The overall probability of information loss, $Pr(L > 0)$, is increasing in $Pr(A_k)$ for all $k$. So a graph in which all groups of agents of a given size have the same number of outgoing links minimizes the probability of information loss. This requires that all nodes are identical in the unweighted equivalent $T^s$, which is by definition equivalent to the network $T^s$ being vertex-transitive. Regularity follows from vertex-transitivity, and the absence of self-links increases $Pr(g_k)$ for all $g_k$.
B.5 Voting

Proof of Proposition 4. By Lemma B.2, when $q = 0$ we have the simplified learning rule $x_{it} = (1 - T^t_{ii})\mu + T^t_{ii}x_{i0}$ for all $i$. Let $\Delta x_{it} = x_{it} - x_{it-1}$ for $t \geq 1$. Substituting in the simplified learning rule and rearranging yields $\Delta x_{it} = T^{t-1}_{ii}(1 - T_{ii})(\mu - x_{i0})$. Here $t$ enters only as the exponent on a weakly positive number, so for a given agent $i$ the sign of $\Delta x_{it}$ is the same for all $t$.
Therefore, $|x_{it} - \mu|$ is weakly monotonically decreasing in $t$, for all $i$. This means that no agent who votes for the Left-wing candidate can ever change their vote (and all agents who vote for the Right-wing candidate must switch their vote to the Left-wing candidate at some point). Therefore the Left-wing candidate would win at any $t$, and so there cannot be any volatility.
To prove that volatility is possible with $q > 0$, we show sufficient conditions for the Right-wing candidate to win at $t = 1$. Since the Left-wing candidate wins at $t = 0$ and $t = \infty$ by assumption, this is sufficient to prove that volatility is possible.
Choose $q \in (1 - \max\{k_{m+1} - k_m\},\ 1 - \min\{k_{m+2} - k_m\})$. Now suppose that $f_{EL} + f_{CL} < 0.5$ (the Left do not form a majority on their own) and that $f_{CR}x_{CR0} + f_{CL}x_{CL0} > 0.5$ (the center-right can persuade the swing voters more than the center-left). Then $x_{S1} > 0.5$, and the Right-wing candidate wins at $t = 1$.
B.6 Media
Proof of Remark 6. From Eaton and Lipsey [1975] we have $\mu_{fr}(q = 0, M) = \frac{1}{2M - 4}$. When $(1 - q) > \frac{1}{2M - 4}$, a fringe media organization does not lose any (potential) listeners due to confirmation bias. For $q \in [0, \frac{2M - 5}{2M - 4}]$, the fringe media organization is therefore unaffected by confirmation bias, and so its ideology is unaffected by confirmation bias in this range.
"year": 2020,
"sha1": "951ce9c55a2b8042a109bec17ce1a3fb41e40aad",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "951ce9c55a2b8042a109bec17ce1a3fb41e40aad",
"s2fieldsofstudy": [
"Political Science",
"Computer Science"
],
"extfieldsofstudy": [
"Economics"
]
} |
In order to alleviate the environmental problems associated with increasing CO2 emissions, efficient CO2 capture technologies are urgently needed. Several main kinds of capture methods exist, such as absorption, membrane separation, cryogenic separation and adsorption. The principle, advantages and disadvantages of each method are summarized here. Owing to its high adsorption rate, low regeneration energy, good selectivity, high stability and mild operating conditions, adsorption is regarded as the most promising method for industrial application. The core of adsorption is to develop low-cost, high-efficiency adsorption materials, and some typical materials, including carbonaceous adsorbents, silica gel, zeolite molecular sieves, and metal-organic frameworks (MOFs), are also introduced. As a new type of material, MOFs are popular with many researchers owing to their functionalizable pore surfaces and their permanent, highly adjustable porosity. As more potential mechanisms and raw materials are discovered, MOFs may speed up the industrial application of adsorption methods.
Introduction
Since the mid-to-late twentieth century, CO2 levels in the atmosphere have continued to rise worldwide; the relevant data from 2017 to 2022 and from 1980 to 2022 are shown in Fig. 1 (a) and (b). The resulting environmental problems, such as global warming, melting glaciers and rising sea levels, have become increasingly serious, causing severe damage to global resources and to the sustainable development of the world economy. At the same time, CO2 plays an important role in the preservation of agricultural products, supercritical CO2 oil flooding technology, CO2 extraction technology and other fields. Within the last twenty years, especially against the background of carbon peaking and carbon neutrality goals, carbon capture, utilization and storage (CCUS) has made great progress, and many scholars have treated CO2 as a carbon resource for further research.
In the development of CO2-related technologies, carbon capture is the premise and foundation of utilization and storage. Generally, there are three main carbon capture processes: oxy-combustion CO2 capture, pre-combustion CO2 capture and post-combustion CO2 capture. Among them, pre-combustion CO2 capture has a lower cost, but low efficiency and high-temperature requirements limit its industrial application [1]. Oxy-combustion CO2 capture requires specific materials for pure-oxygen combustion equipment and air separation systems, which significantly increase the investment in carbon capture; at present, large-scale pure-oxygen combustion technology is still at the research stage. Post-combustion CO2 capture, by contrast, is a mature technology with a wide range of applications, simple system principles and good compatibility with existing power plants. Owing to these advantages, it has been seen as the most promising technology for industrialization. This paper briefly discusses four post-combustion CO2 capture routes: absorption, membrane separation, cryogenic separation and adsorption, respectively. Thanks to a series of merits and a growing range of newly developed materials, adsorption has become a favored method for many researchers; the distribution of typical articles published in the four fields during the past decade is shown in Fig. 2, and Fig. 3 introduces several common adsorption materials. Additionally, metal-organic frameworks (MOFs), as a kind of adsorption material, bring a new mechanism and perspective to CO2 adsorption separation, and may provide power plants with a viable option.
Absorption method
Absorption separates CO2 by exploiting the difference between the solubility of CO2 in a solution and that of the other components of the gas mixture, as shown in Fig. 4. When the absorbent reaches saturation, the absorbent and CO2 are separated by heating, which provides the energy to break the physical or chemical bonds. This approach offers low input costs and good separation effects.
Physical absorption method.
The physical absorption method refers to a process in which CO2 dissolves in the absorption solution but does not react with it. The principle is that CO2 has a much larger solubility in the absorption solution than the other gas components, which achieves the separation. Since the solubility of solutes varies with pressure and temperature, CO2 can be absorbed into solution at low temperature or high pressure and released by the inverse process. Developed technologies include the low-temperature methanol method and the propylene carbonate method [4].
Chemical absorption method.
Chemical absorption exploits the reversible chemical reaction between an alkaline absorbent and CO2 in the mixture, generating unstable salts such as carbonates, bicarbonates and carbamates. These salts then decompose under conditions such as high temperature, releasing CO2 and thereby achieving its capture and separation. Popular absorbents include ammonia, ammonium salts and potassium carbonate; mixed amine absorbents, phase-change absorbents, ionic liquid absorbents, nanofluid absorbents and other new absorbents are also receiving increasing attention from researchers [5].
Membrane separation method
Membranes made of polymer materials have different permeabilities to different gases, which is the principle of the membrane separation method. The driving force is the pressure difference across the membrane: gas components with higher permeability pass preferentially through the membrane to the outlet side, while gases with lower permeability remain on the inlet side. The two groups of gases are thus divided onto their respective sides of the membrane, and the target gas is separated. Typical membrane materials are cellulose acetate butyrate, cellulose acetate, polyamide, polyimide, polysulfone, polyethersulfone and polyethylene oxide [6].
Cryogenic method
Because different gases condense at different temperatures, they can be separated at specific temperatures in a gradual cooling system. Depending on the refrigeration system providing the cooling capacity, cryogenic separation can be divided into the direct expansion refrigeration method, the external cold source refrigeration method and the mixed refrigeration method [7]. These methods require large equipment and have high energy consumption, so they are generally rarely used; oilfield mining sites mainly use this approach to separate and recover CO2 from semi-gas in the oilfield and to enhance the oil recovery rate [8].
Adsorption method
Some porous materials bind strongly to a particular component of a gas mixture under certain conditions, allowing them to hold that component on the solid surface while the rest remains in the gas phase. Adsorption achieves gas separation by this principle, and a simple adsorption process is shown in Fig. 5. Adsorption is classified as physical or chemical according to the binding force: the former is dominated by van der Waals intermolecular forces, whereas the latter is dominated by chemical bonding. Generally, physical adsorption can be multilayer adsorption with low selectivity, while chemical adsorption is only monolayer adsorption with higher selectivity [4]. According to the mode of operation, adsorption can be divided into thermal swing adsorption (TSA), pressure swing adsorption (PSA), vacuum swing adsorption (VSA) and electric swing adsorption (ESA). This article briefly introduces the first two.
Figure 5. Brief schematic diagram of CO2 separation by adsorption [9]

In TSA, adsorption materials absorb gases at low temperature and desorb them at high temperature. Because of the low thermal conductivity of common adsorbents, heating and cooling times are relatively long, often taking several hours. The equipment is therefore usually large and requires corresponding heating and cooling units, so energy consumption and investment are relatively high. Consequently, TSA is not as widely used as PSA and is only suitable for situations where the impurity content of the gas mixture is low and the required recovery rate is very high.
PSA exploits the variation of adsorption capacity with pressure to capture and separate CO2: CO2 is adsorbed at high pressure and desorbed at low pressure. PSA relies on three mechanisms: the steric hindrance effect, the kinetic effect and the equilibrium effect. The steric hindrance effect mainly applies to the sieving action of zeolite molecular sieves. The kinetic effect requires that the pore size of the adsorbent lies between the sizes of the two gases to be separated. For the equilibrium effect, adsorbents are designed and selected on the basis of the basic physical properties of the adsorbed components, such as polarizability, magnetization coefficient, magnetism, quadrupole moment and permanent dipole moment [4]. Because PSA products have high purity, a high degree of automation and long adsorbent service life, it is a method valued by many researchers [10]. Among the four methods above, absorption performs well but has not been widely used in industry because of its complicated operation, low utilization of absorbents and high energy consumption. Although membrane separation has made great progress in the laboratory, the cost of membranes leaves many obstacles on the way to industrialization, and the cryogenic method suits few applications, as explained above. In comparison, adsorption has many advantages over the other methods; some advantages and disadvantages are summarized in Table 1. Whatever the operating principle, however, the core research direction of adsorption is the study of adsorbent materials.
Carbonaceous adsorbents
Nature offers abundant carbon resources, which are widely used in scientific research. In particular, carbon materials show good performance in the field of adsorption, such as high specific surface area, good porosity, large pore volume, stable chemical properties and good electrical properties [18]. Several simple carbon materials are shown in Fig. 6. Mohd et al. found that gas adsorption on biochar is mainly physical and that pores on biochar surfaces are the storage sites for gas molecules [19]. Najafabadi et al. made carbonaceous adsorbents from cocoa shells and showed that cocoa shell biochar retains good stability over several cycles and has great potential as a biochar adsorbent [20]. They also compared cocoa shell biochar with commercial activated carbon: the biochar had the higher equilibrium CO2 uptake, while the activated carbon had the faster adsorption rate.
Nowadays, scientists attempt to improve the adsorption properties of activated carbon (AC) through pore structure adjustment, AC-based composites, surface modification and nitrogen/amine doping [21]. Boujibar et al. successfully synthesized AC adsorbents from Moroccan coal [22]. The Coal-K-PM sample, chemically activated by KOH and prepared through physical mixing (PM), had a high pore volume of 1.34 cm³/g and the highest surface area of 2934 m²/g, whereas Coal-K-Im, prepared using the impregnation (Im) method, showed the best CO2 adsorption capacity of 5.88 mmol/g at 25 °C.
Silica gel
Silica gel is a cheap adsorbent that has been widely used in many industrial fields, such as the dehydration of cracked or natural gas and the removal of oxidized or chlorinated organic matter from hydrogen. Researchers are currently committed to modifying silica gel to improve its performance.
The most commonly used modification methods for silica gel are ionic liquid modification, organic amine modification and alkali metal ion modification [23]. Anyanwu et al. used organic amine modification to improve the adsorption properties of 150A silica gels (mesh sizes of 200-425) [24]. The improvement involved adding water during grafting to increase the concentration of the amine, N1-(3-trimethoxysilylpropyl)diethylenetriamine. This yielded a CO2 adsorption capacity of 1.83 mmol/g at 0.15 bar and 75 °C, and 2.3 mmol/g at 1 bar and 75 °C. The wet-amine-grafted adsorbent exhibited good amine efficiency, rapid uptake rates and cyclic stability, and owing to its commercial feasibility and excellent performance, this kind of material has great application prospects.
Garip et al. used the sol-gel method to synthesize silica-based aerogels with different proportions of an ionic liquid, 1-ethyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide (IL), and 3-aminopropyltriethoxysilane (APTES) [25]. According to their research, the addition of both APTES and IL enhanced the CO2 adsorption properties of the silica gel. Among the various experimental groups, the sample with APTES and IL mole ratios of 0.24 and 0.28, respectively, showed favorable performance of 5.53 mmol/g (243.32 mg/g).
Zeolite molecular sieve
Zeolite molecular sieves are natural or synthetic crystalline aluminosilicates containing alkali or alkaline earth metal oxides. Owing to their large specific surface area and adjustable pore size, zeolite molecular sieves are regarded as excellent solid CO2 adsorbents [26]. Their adsorption mechanism is generally that kinetically selective molecular sieves readily adsorb molecules with strong polarity or polarizability: the molecular sieve framework and its cations are strongly polar and attract the opposite charge centers of polar molecules, so that molecules can be polarized via electrostatic induction [27].
Gabriele et al. used fly ash as a raw material to synthesize X-type zeolite through a melting and hydrothermal method, achieving an adsorption capacity of 2.18 mmol/g [28]. Chao et al. performed high-throughput grand canonical Monte Carlo (GCMC) simulations to predict the CO2 adsorption of 2625 aluminosilicate zeolite structures [29]. The simulation results revealed that these structures exhibited excellent performance in TSA and PSA processes, with some even exceeding the most efficient zeolite at present, zeolite Na-X.
Runlin Han et al. prepared SSZ-13 via an unconventional process, adding Al(NO3)3 as the aluminium source before the structure-directing agent and base; in this case, SSZ-13 crystallized by an unconventional growth mechanism [30]. Building on this preparation, the authors also optimized the effects of aging time and fluoride content on the adsorption performance. The results showed that SSZ-13 has a bright application prospect: at an F/Si ratio of 0.1, the product showed the best adsorption performance, 2.74 mmol/g at 0.25 bar and 4.55 mmol/g at 1 bar, which was 54.8% and 20.4% higher, respectively, than that of SSZ-13 prepared by typical methods.
Metal-organic frameworks
MOFs are periodic, ordered crystal structures self-assembled from organic or inorganic linkers and metal-containing nodes (also known as secondary building units, or SBUs). They have functionalizable pore surfaces and permanent, highly adjustable porosity [31]. Because of their rich sources of raw materials, high stability, high CO2 capacity and favorable selectivity, MOFs have been regarded as the most promising materials for industrialization.
In 2008, Wang et al. used grand canonical Monte Carlo simulations to demonstrate the feasibility of the MOF Cu-BTC for adsorptive gas separation in three binary mixtures (CO2/CO, CO2/CH4, and C2H4/C2H6) [32]. They also illuminated the potential adsorption mechanism of MOFs at the molecular level, laying a foundation for later research on metal-organic frameworks in CO2 capture.
Researchers have since taken a series of steps to improve the adsorption performance of MOFs; Table 2 summarizes some strengths and challenges of these strategies. One study functionalized the MIP-207 framework with isophthalic acid (H2IPA) [37] and showed that MIP-207-25% exhibited the best adsorption properties, up to 3.96 and 2.91 mmol/g, which were 20.7% and 43.3% higher than those of the unmodified framework at 0 °C and 25 °C, respectively. Hu et al. reported that incorporating heterocyclic ligands can remarkably improve the CO2 adsorption amount [34]. They investigated UiO-type MOFs comprising N-, S-, and O-heterocyclic ligands, and the UiO-67 comprising the O-heterocycle showed the highest CO2 adsorption amount, four times higher than that of the material without the O-heterocycle. These results validated the introduction of heterocycles as a useful method to enhance the performance of MOFs. The authors also pointed out that further study should address the co-adsorption of CO2 with other competitive gas components in MOFs with heterocyclic ligands.
In the field of mixed metal-organic frameworks, Li et al. reported immobilizing alkali metal ions (K+) in MOFs, enhancing the framework's CO2 affinity by 24% [38]. Gao et al. developed a mixed metal-organic framework (M'MOF), [M'(pyz)Ni(CN)4] (pyz = pyrazine), for C2H2/CO2 separation [39]. This material showed a high C2H2 adsorption capacity of up to 4.54 mol/L and a favorable C2H2 selectivity of 24 at ambient temperature and pressure.
Conclusion and future perspective
In this work, the principles, advantages and drawbacks of four kinds of capture methods have been reviewed. Adsorption has broad prospects for industrial application because of its mild operating conditions, low energy consumption and excellent stability, and the research status of common adsorbents has also been summarized. CO2 capture and separation are essential tasks for humanity, and many researchers have achieved good results at the laboratory stage; the remaining problem is how to develop low-cost adsorption materials that can be successfully deployed in the adsorption industry. As far as current research is concerned, the zeolite molecular sieve, one of the most widely researched materials, may reach industrial application in the near future by virtue of its stable chemical properties and high selectivity. In recent years, however, as MOF research has deepened, new materials have been continually developed and the unique mechanisms of MOFs have been progressively explored. The current target is to enhance the adsorption performance of MOFs, especially through functionalized surface modification; there is no doubt, though, that this approach is still a long way from industrialization.
"year": 2022,
"sha1": "d7341c6276faea5cc39977def1c547e2bb4ab77d",
"oa_license": "CCBYNC",
"oa_url": "https://drpress.org/ojs/index.php/HSET/article/download/959/887",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "db023fbf4bf56eb8af4dc83821b1d0270de52c83",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": []
} |
The formation of memories is a complex, multi-scale phenomenon, especially when it involves the integration of information from various brain systems. We investigated the differences between a novel and a consolidated association of spatial cues with amphetamine administration, using an in situ hybridisation method to track short-term dynamics during recall testing. We found that remote recall involves smaller but more consolidated groups of neurons, which is consistent with their specialisation. By employing machine learning analysis, we show that this pattern is especially pronounced in the VTA; furthermore, we also uncovered significant activity patterns in the retrosplenial and prefrontal cortices, as well as in the DG and CA3 subfields of the hippocampus. The behavioural propensity towards the associated location appears to be driven by the nucleus accumbens, further modulated by a trio of the amygdala, VTA and hippocampus as the trained association is confronted with test experience. These results show that memory mechanisms must be modelled with individual differences in motivation in mind, as well as covering the dynamics of the process.
Introduction
Preserving the memory of an important life event requires successful integration of several modalities that constitute a multisensory trace, often referred to as the memory engram [1]. In particular, spatial memory is defined as the brain's ability to encode key features of the external environment and to navigate within the boundaries of this mental representation, also known as a cognitive map [2,3]. On the physiological level, it manifests as populations of neurons exhibiting activity tuned to specific aspects of the external spatial context, in particular, firing correlated with an animal's presence in a certain, unique location within the environment [4,5]. Such neuron populations, called place cells [6,7], are mainly located in the hippocampus, yet their specificity is influenced by several other groups of spatially tuned neurons, most notably grid cells, border cells and head direction cells [8-11]. These cells are in turn distributed among several cortical and sub-cortical structures, and the network is further connected by a plethora of direct and indirect projections, serving both top-down and bottom-up processing of spatial information and associating it with representations stored by other circuits.
In the case of episodic memories, their spatial context can be, among others, linked to the emotional state during learning. This phenomenon is often investigated with aversive states, like those triggered by physical pain or alarming stimuli. However, it occurs regardless of the valence, and can also involve appetitive states; in particular reward-related ones, on which we focus in this work. These are processed by yet another crucial functional element of the brain: the reward system, responsible for cognitive fundamentals of motivation and reinforcement, as well as a crucial agent in many psychological disorders. The physiological connections between these systems have also been established; for instance, it was demonstrated that a projection from the hippocampus to the ventral tegmental area (VTA) mediated relations between context and reward [12,13].
Appetitive states can be naturally evoked by the likes of attractive foods or positive social interactions [14-16], but also induced with substances that directly activate the reward system, that is, pharmacological rewards [17-19]. One of them is amphetamine, which is known to reliably activate the reward system [20-22] and to induce the emission of 50 kHz band ultrasonic vocalisations, a marker of appetitive affective states in rats [23-25]. To this end, it is often used to induce a consistent appetitive emotional state to be linked with spatial cues during an experiment.
The immense complexity of brain mechanics makes their quantification a highly non-trivial endeavour. Hence, contemporary approaches have to focus on narrow aspects of activity, especially those accessible to measurement and relatively straightforward to interpret. On the transcriptomic level, one such convenient manifestation is the transient expression of immediate early genes (IEGs), in particular cFos, Arc, Homer, and zif268 [26-28]. They can be used to visualise and quantify the cellular ensembles involved in the memory trace, as demonstrated in [29,30].
Memory has very rich dynamics [31]. In a global view, it consists of memory formation, consolidation, recognition, recall, re-consolidation and extinction; yet all of these processes have their own local dynamics on variable time scales and involve complex phenomena, for instance, engram shifts between brain structures. Consequently, the investigation of temporal aspects is crucial for the analysis of memory mechanics. Thanks to their previously identified, distinctive patterns of expression in time, IEGs are an important tool for this task.
In this work, we aim to investigate the neural mechanisms involved in the interplay of the spatial memory and reward processing systems, underlying the emotional perception of context, which is in turn crucial to understanding goal-directed behaviour associated with reward-seeking, as well as spatial memory storage, maturation and consolidation. Rodent models offer a way to explore these relationships experimentally on the behavioural and molecular levels. One of the simplest and most commonly used models is the conditioned place preference (CPP) paradigm, where the animal learns to associate a particular area of an experimental enclosure (facilitated by spatial cues) with an emotional experience. We have devised an approach extending CPP with an open-field paradigm, orchestrating a reward-seeking task based on spatial cues in a 5-region cage (four corners and an interconnecting centre space). For conditioning, we used amphetamine administration in one of the corners.
To investigate the mechanism underlying such evoked memory, we have used the CatFISH method (cellular compartment analysis of temporal activity by fluorescent in situ hybridisation) to measure the co-expression of two IEGs, Arc and Homer-1A, in nine brain areas covering both the cortex and sub-cortical structures [32]. In order to elucidate larger-scale temporal dynamics, we have analysed two groups of rats, with recent and remote memory recall. By using machine learning, we were able to pinpoint the circuits responsible for behavioural responses related to reward-seeking based on spatial cues.
Experiment overview
Rats were trained to associate a particular corner of a rectangular enclosure with the effects of an amphetamine injection. In training, the doors in the cage partitions were closed, constraining rats to stay in the corner in which they were placed. For testing, on the other hand, said doors were opened and rats were allowed to freely roam around the cage; yet, they were not given amphetamine.
The testing session included two five-minute entries, separated by a 20-minute break; afterwards, rats were sacrificed and their brains analysed (Figure 1). With this design, we were able to exploit the temporal expression patterns of two IEGs to untangle the brain activity in either entry. Precisely, we were able to measure what fraction of neuronal nuclei within a given brain structure was active during any entry, during each entry, exclusively during one entry and, finally, during both entries (Figure 1E). From these descriptors, we also calculated the co-localisation coefficient, expressing the degree to which the same neurons consistently exhibit activity across entries.
This experiment was repeated in two variants, investigating recent and remote (consolidated) recall; the first was carried out the day after training, while the second took place two weeks after the last training session. Both sessions were video-recorded, which allowed us to quantify the animals' behavioural propensity towards the conditioned corner.
We assumed that placing the animal in the cage context evokes the recall, and possibly reconsolidation, of the trained spatial-emotional association, and consequently that this overall set-up would allow us to elucidate short- and long-term dynamics on a structural level.
Remote recall involves smaller, more consolidated neuronal sub-populations in certain structures
First, we investigated whether there are qualitative signs of strongly consolidated memory in the remote recall group, focusing on the most direct descriptors of inter-entry variability; the outcome of this analysis is summarised in Figure 2A.
When comparing the overall brain activity of the remote group with that of the recent group, expressed as the percentage of any active nuclei, we see a significant (p=0.04) drop in activity, conditioned over structures. Moreover, this drop in overall activity was accompanied by an increase in co-localisation (p<0.001), suggesting the emergence of specialised neuronal sub-populations. These differences were too subtle to be attributed to particular structures with standard statistical methods, however.
The activity was also fairly consistent between individual rats; the notable exception is the nucleus accumbens, which exhibited striking individual variability in activity that was not explained by the group. It can, however, be explained by the animals' behaviour.
Recall of the remote memory has a distinct activity pattern from a recent recall
The Boruta [34] machine learning-based analysis of the differences between recent and remote recall has uncovered additional, more nuanced interactions, presented in Figure 2B. For the VTA, it detected the aforementioned pattern of higher overall activation in the recent group, yet more specialised activity in the remote group. For the hippocampus, the dentate gyrus (DG) activity is higher in the recent recall group, and this effect is especially pronounced at the first entry. In CA3, on the other hand, we see disjoint populations: one active only in the first entry and indicative of the recent group, and one active only in the second entry and indicative of the remote group. We have found that two cortical regions contain specific neuronal populations more active in the remote group; in the prelimbic cortex (PrL), this population was active at the first entry, while in the agranular retrosplenial cortex (RSA), only at the second.
Two parameters of amygdala activity at the first entry were also found to be significant: the overall activity of the lateral part and the exclusive activity of the central part. Generally, they were higher in the recent group, but also in two rats from the remote group with slightly elevated hippocampal activity and high VTA co-localisation.
Fig. 1 Overview of the experimental procedure. A. Each rat is habituated to an experimental cage, trained to associate corner c3 with amphetamine effects and undergoes testing which consists of two entries into the context. Rats are divided into recent and remote recall groups, which differ only in an additional two-week delay between training and testing sessions in the remote group. B. Details of the 6-day training routine; on even days the rat is placed in the conditioned corner c3 and given amphetamine, while on odd days it is placed in each other corner and given a sham, saline injection. C. Scheme of the experimental cage; a 1m × 1m box is partitioned into 5 regions by transparent walls equipped with doors that can be opened (during habituation and testing) or closed (during training). An overhung illumination assembly provides animals with spatial cues. D. Scheme of the CatFISH method used to elucidate changes in brain activity between test entries. mRNAs of two IEGs, Arc and Homer, are detected; the temporal distribution of entries is set up so that at the observation point, we can independently detect expression of IEGs triggered by the activity at either test entry. E. Image analysis pipeline; brain structures are identified and marked as ROIs using shapes from a reference atlas [33]. Within each ROI, we identify nuclei and, according to the IEG expression detected within them, classify them as inactive, active during either entry or active during both entries.
Fig. 2 The differences in overall activity and activity co-localisation in investigated structures between rats with recent and remote recall. A. Comparison of the fraction of active nuclei (top) and the co-localisation coefficient (bottom), both conditioned on the brain structure. In the remote recall, the activity has generally dropped in comparison to the recent recall, while the co-localisation has increased, which suggests the development of specialised neuron sub-populations. Box-plots adhere to the standard Tukey definition; original data is superimposed as jittered points. B. Heatmap showing the values of brain activity descriptors found to significantly differ between groups in a machine learning-based multivariate analysis. The most straightforward effects were identified in the VTA and hippocampus, but also in cortical regions and the amygdala. Values are shown as ranks for clarity, with rank one given to the lowest value (blue) and rank sixteen to the highest (red).
High, memory-remoteness-independent variation in NAcc is explained by behaviour
As mentioned in Section 2.2, there is a substantial variation of activity in certain structures which is not explained by the recent/remote group, in particular in the nucleus accumbens. Using the trajectory data extracted from the video recordings of the behaviour, we quantified the inclination of a rat to stay in the corner previously associated with the amphetamine injection. To rule out the impact of variable exploration tendencies, we expressed it as a correctness score, defined as the fraction of the total time spent in any corner which was spent in the conditioned one. While this score was not significantly correlated with the recent/remote recall group, we have identified significant molecular associations with machine learning; they are collected in Figure 3.
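To make the score concrete, below is a minimal sketch of how it could be computed from a per-frame sequence of region labels derived from the reconstructed trajectories; the label names and the per-frame representation are assumptions of this illustration rather than the published pipeline.

```python
import numpy as np

def correctness_score(regions, conditioned="c3",
                      corners=("c1", "c2", "c3", "c4")):
    """Fraction of the total corner time spent in the conditioned corner.

    `regions` is a per-frame sequence of region labels (hypothetical
    labels, derived here from a DeepLabCut-style trajectory).
    """
    regions = np.asarray(regions)
    corner_frames = np.isin(regions, corners).sum()
    if corner_frames == 0:
        return np.nan  # the rat never left the centre region
    return (regions == conditioned).sum() / corner_frames

# Example: two of three corner frames in the conditioned corner -> 2/3
# correctness_score(["centre", "c3", "c3", "c1"])
```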
High correctness was explained by the activity of hippocampal CA3 as well as both the shell and core of the nucleus accumbens. Moreover, these activity levels were fairly consistent across the two entries, hence almost all pattern parameters were selected. A similar pattern applies to the caudate putamen (CPu), yet to a lesser extent, because CPu activity at the second entry is less discriminating.
Finally, there are also significant correlations between first-entry correctness and the overall activity of the lateral amygdala (LA), as well as activity co-localisation in the granular retrosplenial cortex (RSG).
Fig. 3 The factors associated with the correctness in the first test entry (A) and second test entry (B). Heatmaps show the values of brain activity descriptors found to be significantly associated with correctness in a machine learning-based multivariate analysis. A high first-entry correctness score was related to nucleus accumbens and CA3 activity, while the central amygdala was a key driver in the second entry. Values are shown as ranks for clarity, with rank one given to the lowest value (blue) and rank sixteen to the highest (red); gray encodes missing values.
Behaviour at each entry has distinct neuronal activity correlates
While the overall activity in NAcc and CA3, which are strong correlates of correctness at the first entry, remains fairly consistent between entries, it ceases to strongly explain the correctness at the second entry. Only the co-localisations in CA3 and the NAcc core and the overall activity of CA3 remain important. On the other hand, the activity of the central amygdala (CeA) becomes a key driver; we also see significant interactions involving the infralimbic cortex (IL) and VTA. These are rather complex and involve a structure's state at the first entry.
Strongest inter-structural co-activation occurs at second entry
We have investigated activity correlations across all investigated neuronal sub-populations in all structures. The graphs constructed from the correlations identified as statistically significant are presented in Figure 4; there were none in the Any and Second classes. This analysis has mostly uncovered trivial, intra-structural correlations, in particular within the amygdala (BLA-LA-CeA), NAcc (shell & core), retrosplenial cortex (RSG-RSA) and hippocampus (CA1-CA3, though not DG).
When analysing populations active exclusively during the first entry, we found a significant negative correlation between CeA and PrL. In comparison to the general first-entry activity graph, both the internal coupling within NAcc and that between CA1 and CA3 have disappeared, which indicates that this synchronisation has persisted over both entries. On the other hand, the RSG-RSA link is present in both exclusive views but not in any more general one, suggesting it results from a more temporally localised phenomenon.
Most of the significant correlations identified are between the sizes of neuronal populations active exclusively during the second entry. Here, we see intra-structural coherence in the amygdala, hippocampus, retrosplenial cortex and between the prelimbic and infralimbic cortices, yet not in NAcc. There are also inter-structural correlations: positive between the hippocampus, retrosplenial cortex and, to a lesser extent, amygdala, as well as negative between the prelimbic cortex and both the basolateral amygdala (BLA) and DG. One should note, however, that the sample sizes are quite limited for a correlation network study, and this analysis is likely to have low sensitivity.
Fig. 4 Significant monotonic correlations between values of activity descriptors between investigated brain structures. Each panel presents the graph for a certain activity descriptor; panels with no links (for activity at any entry, any, and at the second entry, 2nd) were omitted. The densest, non-trivial networks can be observed between sizes of neuron sub-populations active exclusively at a certain entry. The weight of a link corresponds to the absolute value of the Spearman correlation coefficient; black links represent positive and red ones negative correlations.
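As a rough illustration of how the networks in Figure 4 could be assembled, a sketch pairing pairwise Spearman tests with the Benjamini-Hochberg correction reported in the Methods; the per-animal input format and library choices are assumptions of this example, not the authors' implementation.

```python
from scipy.stats import spearmanr
from statsmodels.stats.multitest import multipletests

def correlation_graph(activity, alpha=0.05):
    """Edges of significant monotonic correlations between structures.

    `activity` maps a structure name to per-animal values of one
    descriptor (e.g. fraction of nuclei active exclusively at the
    second entry). Returns (structure_a, structure_b, rho) triples.
    """
    names = list(activity)
    pairs, rhos, pvals = [], [], []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            rho, p = spearmanr(activity[names[i]], activity[names[j]])
            pairs.append((names[i], names[j]))
            rhos.append(rho)
            pvals.append(p)
    # Benjamini-Hochberg FDR across all tested pairs
    keep = multipletests(pvals, alpha=alpha, method="fdr_bh")[0]
    return [(a, b, r) for (a, b), r, k in zip(pairs, rhos, keep) if k]
```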
Discussion
Contemporary models of memory generally involve multiple functional tiers. Memories are first formed in short-term storage, which is transient as it must withstand a constant influx of new information. Encoded data is then selectively moved to long-term memory, based on an assessment of its possible usefulness; this process may take substantial time, and it is generally faster when similar memories (often referred to as schemas) are already present in the long-term memory.
Long-term memory, on the other hand, is the first to react during recall, triggered by a stimulus consistent with a certain record. The activation of the short-term system follows, as recall almost always happens in circumstances where novel, relevant information can be acquired, and such information would likely be used to update the recalled memory in a process of re-consolidation.
Physiological changes consistent with memory consolidation
It is reasonable to assume that a given memory utilises a progressively smaller and more consolidated population of neurons as it moves from a fresh, most verbose representation of current stimuli and emotional state to a consolidated, processed record. This is indeed what we see when comparing rats with a remote recall to those with a recent recall; the observed activity is generally lower but more specialised. Machine learning-based mining, capable of uncovering non-monotonic and multivariate interactions, has revealed a more detailed picture, showing significant differences at the level of particular structures. They were especially pronounced in the VTA and DG, but also in the amygdala, prelimbic and retrosplenial cortices, as well as in CA3.
The dynamics in the VTA were closest to the general trend of diminishing but specialising activity. This structure is regarded as an element of the reward system [35,36] as well as being involved in memory processes [37,38], especially where reward-related memories are considered. Hence, the strong rewarding effect of amphetamine might have led to the formation of a specific neuronal population in this structure.
The higher activity of DG in the recent group is quite interesting, especially because it was also the least active part of the hippocampus; this may be connected with its speculated role in refining memories during encoding [39]. This process is expected to be more pronounced in the recent recall group, where animals experience a sudden break of the training routine.
Cortical activity patterns are particularly interesting, since both involve an increase of activity in the remote recall group, suggesting that rats have in fact developed a specialised neuron population during the hold-off period. These populations could not be detected by simple co-localisation analysis, however, because the relevant cortical structures were not uniformly responding in either test entry; in particular, said population in the prelimbic cortex was only active in the first entry, while that in RSA only in the second.
A similar argument can be made about CA3; the literature suggests it should be involved in the investigated phenomena, and indeed this is the case, yet its activity pattern goes beyond a simple consolidation model. Precisely, our results suggest that the recent recall group has a specific CA3 population active exclusively in the first entry, while the remote group has a similar population active exclusively in the second entry. A possible explanation of this pattern considers the fact that test sessions provide an experience conflicting with that from training: amphetamine is not administered at all, not even in the conditioned corner. This fact undoubtedly supports the dissolution of the trained association, which may be aided by CA3. One can hypothesise that fresh memories are more volatile and a single conflicting observation is sufficient to trigger revision, while consolidated ones require stronger support and start to activate at the second entry.
The recent recall group was also characterised by elevated activation of the lateral and central amygdala during the first entry; this is consistent with the aforementioned activity patterns of CA3 and VTA, leading to the conclusion that this trio was jointly involved in the encoding of the freshly formed spatial-emotional association, utilising a network of projections well established in the literature. Our results suggest that this action is transient and can be easily diminished by conflicting experiences, though.
Interestingly, clustering based on ML-selected parameters has uncovered two clear outliers (a05 and a02) which, despite being in the recent recall group, exhibited activity patterns of the remote group. We interpret this as a sign of early consolidation, possibly triggered by an earlier spatial memory, individual differences in emotional state, or the reaction to amphetamine.
Utilisation of a memory depends on different factors than its consolidation
We have used a correctness score to quantify the animal's propensity towards the corner associated with the amphetamine injection, which we believe corresponds to goal-directed drug seeking. There is no group effect in this parameter, which we interpret as an indication that all animals correctly recalled the spatial organisation of the train/test cage and identified the conditioned corner, yet adjusted their seeking and anticipatory behaviours based on other factors. In particular, this involves both areas of NAcc, which express higher activity in rats with a larger initial correctness score; this is also consistent with the very high individual variability of NAcc activity patterns. NAcc is a key hub in the cortico-limbic circuitry directing action selection and decision-making in reward-related tasks [40], and its activity is bound to the expression of emotion, in particular as reflected by ultrasonic vocalisation [18,[41][42][43], and to the processing of current values and reward prediction [44][45][46][47]. We therefore conclude that the experiment likely captured activation of NAcc neurons positively associated with reward anticipation, which was a reaction to the recognition of the substance administration context, possibly induced directly by the cortex through well-established connections [48][49][50][51].
The strength of this action was shaped by individual variability among rats, however, which caused different behavioural outcomes. The very origin of this variability is unclear, yet it likely arises at a network level because of the aforementioned hub nature of NAcc. In particular, first-entry correctness is also positively correlated with the activity of the caudate putamen and hippocampal CA3, as well as, to a lesser extent, with the overall activity of the amygdala LA and activity co-localisation in RSG.
It is crucial to note that the NAcc activity persisted between test entries, despite the fact that it was not indicative of the correctness in the second entry. In general, we have not assumed both test sessions to be equal: the first one establishes a novel and conflicting context for the conditioned corner, since there is no amphetamine injection consistent with previous experience. This is directly visible in Figure 5, which summarises all the machine learning results.
While the overall NAcc activity did not diminish, the behaviour nevertheless changed; we believe this is a consequence of a tertiary mechanism that modulated the NAcc influence. The machine learning analysis of the correlates of the second-entry correctness scores suggests that this can be attributed to the aforementioned CA3-amygdala-VTA trio. In particular, a higher score is predicted by activation of the CeA, but during the first entry; this can be attributed to the hypothesis that the sustenance of anticipatory actions is promoted by the original emotional arousal. The CA3 and VTA activity effects are more complex and require further insights.
Conclusions
Inspired by theoretical models of memory dynamics, we formed a hypothesis that memory consolidation involves the emergence of specialised neuronal populations. To test this, we scanned the brain for traces of such populations, i.e., regions which exhibit lower but more co-localised activity when comparing rats with maturated memories to freshly trained ones, within structures connected with memory functions and emotional processing. We have confirmed this pattern at the whole-brain level, as well as that it is most pronounced in the VTA.
The dynamics of cortical regions, generally regarded to be the prime destination for consolidating memories, proved to be more complex. This may be a sign of higher-order phenomena triggered by an asymmetry between training and testing conditions, namely a need to re-encode the spatial memory with a novel, conflicting experience acquired in testing. This process appears to involve hippocampal CA3 and causes increasing intra-structural coordination, but further insight is required to verify these observations. Finally, we have shown that the behavioural outcome induced by memory recall can be heavily modulated by independent factors, including individual variability, especially when emotional processing is substantially involved. In particular, we have found the nucleus accumbens to be a key structure enforcing action, while the amygdala likely acts as either an integrator or a modulator. In any case, molecular and physiological observations provide a more direct and reliable quantification of the properties of memories. On the other hand, considering individual variability is crucial for the development of effective interventions, both experimental and therapeutic.
Acknowledgments
We wish to thank Tomasz Gomoła for the construction of a 3D model of the experimental setup.
Animals
Adult male Long-Evans rats (n = 18, 180 ± 20 g) were used in the experiment. Animals were purchased from a licensed breeder (the Polish Academy of Science Medical Research Center, Warsaw, Poland) and housed in standard laboratory conditions under a 12h:12h light:dark cycle (lights on at 7 a.m.), at a constant temperature (21 ± 2°C) with 70% humidity. Rats had free access to both food and water. All experiments were performed in accordance with the European Communities Council Directive of 24 November 1986 (86/609/EEC). The Local Ethical Committee approved all experimental procedures involving animal subjects (539/2018).
Behavioural experiment
All experiments utilised the same 1m × 1m cage, partitioned into 5 regions (four corners and a central interconnecting space) by translucent walls equipped with doors. In the training configuration, the doors are closed, hence a rat can be confined in a selected corner while retaining the capacity to observe the whole cage through the wall. In the testing configuration, on the other hand, the doors are opened and a rat can freely roam around the cage. The whole setup is softly illuminated by an overhung asymmetric array of static colour lights, which serves as a visual-spatial cue. Otherwise, corners were identical in terms of surface texture, colour and size; the cage was cleaned with a 70% ethanol solution before each rat placement to remove dirt and scent marks.
Animals were first habituated to the experimenter and the cage; in this phase, the doors were opened and the visual cue array was inactive. After habituation, the cage was switched to the training configuration, and animals were trained in the following manner. On each of the six training days, the animal was confined for 15 minutes in corners c1, c3, c2, c3, c4 and c3, respectively. During this time, the animal was injected with either amphetamine at a dose of 1.5 mg/kg when in corner c3, or saline (1 ml/kg; i.p.) when in the other corners; this way, c3 was the conditioned corner.
After training, animals were, according to the pre-assigned group, either immediately transferred to testing (recent group), or first underwent a hold-off in their home cage for 14 days (remote group), and then tested. The testing procedure was identical for all animals and was performed as follows. The animal was placed in the centre of the cage in the testing configuration, with doors opened. Then, it was allowed to freely move and explore for five minutes; in particular, the rat could visit the conditioned corner. After this first entry, animals were put on hold in a separate cage for 20 minutes. Finally, animals were placed again in the centre of the cage for another 5 minutes, which constituted the second entry.
Immediately after testing, each animal was decapitated under isoflurane anaesthesia; its brain was isolated and flash-frozen in a dry-ice-cooled isopentane bath. Afterwards, brains were stored wrapped in aluminium foil in a deep freezer at -80°C until the CatFISH reaction.
All sessions (habituation, six training entries and two test entries) took place under video surveillance; recordings of the test sessions were later used to reconstruct rat trajectories using the DeepLabCut toolbox [52].
Fluorescent staining of IEGs
To optimise the slicing process, we arranged brains side by side in blocks of four, embedded in optimal cutting temperature medium (OCT; Sakura). The blocks were cut in a cryostat (Leica CM 1850, Germany) into 20 µm sections, which were mounted on gelatin-coated SuperFrost slides (ThermoFisher).
Next, slices underwent fluorescent staining according to an established protocol [53][54][55]. Fluorescein- and digoxigenin-labelled antisense riboprobes for the 3'UTR of, respectively, Homer 1a and Arc mRNA were applied to slices and allowed to hybridise overnight in a single step. Then, both kinds of probes were sequentially detected, first with anti-fluorescein and then with anti-digoxigenin antibodies. Homer 1a probes were stained with a tyramide-fluorescein signal amplification system (TSA-Fluorescein) and Arc probes with TSA-Cy3 (Perkin-Elmer). Furthermore, the slides were incubated with DAPI nuclear counterstain (Invitrogen, ThermoFisher) to visualise the nuclei. Finally, prepared slides were cover-slipped with an anti-fade medium (Vectashield, Vector Labs) and sealed with nail polish. We optimised the imaging settings to obtain bright, intranuclear foci of ongoing IEG transcription. The laser power, gain, offset and exposure times were always set for the whole slide to ensure the best possible signal without substantial spatial variation. The scan was repeated to construct a z-stack; we rejected the three bottom- and top-most layers so that only whole cells were considered later. Signals from individual filters were routed into separate channels of the final image.
Image acquisition and analysis
For each structure, we performed computer-aided manual alignment of a brain atlas to the subsampled, flattened images of the appropriate slices (Figure 1E); this way, we obtained numerically defined ROIs for further automatic analysis.
Next, for each ROI, we used a computer vision pipeline to interpret the geometry of the distribution of reporter signals. We pooled each structure from both cerebral hemispheres. In particular, for the FITC and DIG/Cy3 emission channels, we used the ICY dot-finder to identify localised aggregations of the reporter, marking the locations of mRNA particles of the corresponding IEG. For nuclei identification, we fitted ellipses to coherent circular bright patches in the DAPI emission channel, using custom code based on the watershed algorithm [56]. Finally, for each nucleus region, we counted the number of dots for either IEG; this way, each ROI was quantified as fractions of nuclei of four classes: both Arc- and Homer-1a-positive, Arc-only positive, Homer-1a-only positive, and negative. This analysis was repeated for each z-layer, and we collected the median values of said fractions as the final result.
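A minimal sketch of the classification step, assuming a nucleus counts as positive for an IEG when at least one intranuclear dot is detected within it; the actual threshold used by the pipeline may differ.

```python
import numpy as np

def classify_nuclei(arc_dots, homer_dots, threshold=1):
    """Fractions of nuclei in the four activity classes for one z-layer.

    `arc_dots` / `homer_dots` are per-nucleus dot counts; the one-dot
    positivity threshold is an assumption of this sketch.
    """
    arc = np.asarray(arc_dots) >= threshold
    homer = np.asarray(homer_dots) >= threshold
    n = len(arc)
    return {
        "both": np.sum(arc & homer) / n,
        "arc_only": np.sum(arc & ~homer) / n,
        "homer_only": np.sum(~arc & homer) / n,
        "negative": np.sum(~arc & ~homer) / n,
    }

# Per-ROI result: the median of each fraction across retained z-layers,
# e.g. {k: np.median([layer[k] for layer in layers]) for k in layers[0]}
```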
Additionally, we calculated the co-localisation coefficient, given as the fraction of nuclei positive for both Arc and Homer-1a divided by its expected value under the independent placement hypothesis, which is the product of the fractions of Arc-positive and Homer-1a-positive nuclei. This coefficient is positive but unbounded; it is expected to be one given the independence of neuron activation across both entries, under one when the activation is somehow exclusive to either entry, and over one when neuron activations in both entries are correlated.
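The coefficient itself reduces to a one-line computation on the fractions above; note that the marginal fractions must include the double-positive nuclei (e.g. f_arc = both + arc_only).

```python
def colocalisation_coefficient(f_both, f_arc, f_homer):
    """f_both / (f_arc * f_homer), where f_arc and f_homer are the total
    fractions of Arc- and Homer-1a-positive nuclei (double-positives
    included). ~1 under independent activation, <1 for mutually
    exclusive activity, >1 for correlated activation across entries.
    """
    return f_both / (f_arc * f_homer)
```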
Statistical analysis
For a basic comparison of the activity between the recall groups, we used a stratified, two-sided Mann-Whitney-Wilcoxon test. The Spearman correlation test was used for the reconstruction of inter-structural interaction graphs. We applied 0.05 as a significance threshold for p-values and used Holm's correction [57] for multiple testing, except in the cross-structure correlation analysis, which used the Benjamini-Hochberg FDR correction.
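A sketch of the group comparison, assuming separate per-structure tests followed by Holm's correction; the stratified variant actually used would instead pool the strata into a single test, which is omitted here for brevity.

```python
from scipy.stats import mannwhitneyu
from statsmodels.stats.multitest import multipletests

def compare_groups(recent, remote, alpha=0.05):
    """Per-structure two-sided Mann-Whitney tests with Holm correction.

    `recent` / `remote` map a structure name to per-animal descriptor
    values (hypothetical input format for this illustration).
    """
    structures = sorted(set(recent) & set(remote))
    pvals = [
        mannwhitneyu(recent[s], remote[s], alternative="two-sided").pvalue
        for s in structures
    ]
    reject, p_adj, _, _ = multipletests(pvals, alpha=alpha, method="holm")
    return {s: (p, r) for s, p, r in zip(structures, p_adj, reject)}
```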
The Boruta method [34] was applied for the machine learning search for multivariate and non-linear interactions, using the standard Random Forest importance source with 50,000 trees, wrapped into an imputing adapter to handle missing values; tentative selections were collected together with the confirmed ones. To stabilise the results, we repeated this procedure thirty times and finally reported the features selected in at least half of the runs.
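A scaled-down sketch of this stabilised selection using the Python port BorutaPy; the original analysis used the R package with 50,000 trees and a missing-data wrapper, both of which are simplified away here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from boruta import BorutaPy

def stable_boruta(X, y, n_runs=30, keep_fraction=0.5):
    """Indices of features selected (confirmed or tentative) by Boruta
    in at least `keep_fraction` of `n_runs` repetitions.

    Tree counts are scaled-down assumptions; X and y are numpy arrays.
    """
    votes = np.zeros(X.shape[1])
    for seed in range(n_runs):
        rf = RandomForestClassifier(n_estimators=500, n_jobs=-1,
                                    random_state=seed)
        boruta = BorutaPy(rf, n_estimators="auto", random_state=seed)
        boruta.fit(X, y)
        # count confirmed and tentative selections alike
        votes += boruta.support_ | boruta.support_weak_
    return np.where(votes >= keep_fraction * n_runs)[0]
```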
Code availability
Code for reproducing data analysis is available on GitLab: https://gitlab.com/neuro-reward/reward-memory | 2023-06-17T13:08:39.106Z | 2023-06-13T00:00:00.000 | {
"year": 2023,
"sha1": "cdee1aef6d6f2fb9b22077953cbd74e291c811a8",
"oa_license": "CCBY",
"oa_url": "https://www.biorxiv.org/content/biorxiv/early/2023/06/13/2023.06.12.544632.1.full.pdf",
"oa_status": "GREEN",
"pdf_src": "BioRxiv",
"pdf_hash": "cdee1aef6d6f2fb9b22077953cbd74e291c811a8",
"s2fieldsofstudy": [
"Biology",
"Psychology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
226840779 | pes2o/s2orc | v3-fos-license | Veno-venous extracorporeal membrane oxygenation allocation in the COVID-19 pandemic
Rapid global spread of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) and the resultant clinical illness, coronavirus disease 2019 (COVID-19), drove the World Health Organization to declare COVID-19 a pandemic. Veno-venous Extra-Corporeal Membrane Oxygenation (VV-ECMO) is an established therapy for management of patients demonstrating the most severe forms of hypoxemic respiratory failure from COVID-19. However, features of COVID-19 pathophysiology and necessary length of treatment present distinct challenges for utilization of VV-ECMO within the current healthcare emergency. In addition, growing allocation concerns due to capacity and cost present significant challenges. Ethical and legal aspects pertinent to triage of this resource-intensive, but potentially life-saving, therapy in the setting of the COVID-19 pandemic are reviewed here. Given considerations relevant to VV-ECMO use, additional emphasis has been placed on emerging hospital resource scarcity and disproportionate representation of healthcare workers among the ill. Considerations are also discussed surrounding withdrawal of VV-ECMO and the role for early communication as well as consultation from palliative care teams and local ethics committees. In discussing how to best manage these issues in the COVID-19 pandemic at present, we identify gaps in the literature and policy important to clinicians as this crisis continues.
Introduction
Unfettered global spread of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) prompted the World Health Organization to declare the clinical illness known as coronavirus disease 2019 (COVID-19) a pandemic on March 11, 2020. COVID-19-related acute respiratory distress syndrome (ARDS) has affected a wide spectrum of patients. Higher mortality is seen not only in more vulnerable populations, such as the elderly or chronically ill, but also in young, otherwise healthy patients [1]. This burden of disease has led to high utilization of healthcare resources, particularly with respect to supportive therapies for critical illness. Veno-venous Extra-Corporeal Membrane Oxygenation (VV-ECMO), a resource-intensive approach to managing severe respiratory failure [2], was utilized with some success during the influenza A (H1N1) pandemic of 2009 [3,4] and presumptively may be of value when managing COVID-19 [3,5,6]. However, given the severity of constraints on healthcare resources, the utilization of VV-ECMO as a therapeutic intervention for COVID-19 requires careful deliberation.
The use of VV-ECMO is indicated in severe hypoxemic respiratory failure refractory to conventional mainstays of medical therapy including mechanical ventilation with optimal positive end expiratory pressure (PEEP) [7], neuromuscular blockade [8], and prone positioning [9]. VV-ECMO differs from veno-arterial ECMO (VA-ECMO) as the latter technology is typically initiated for patients in cardiac or circulatory failure with or without concomitant respiratory failure. Despite a lack of definitive data supporting the use of VV-ECMO, there continues to be substantial optimism surrounding its benefit with widespread ongoing utilization of this therapy [10,11]. Importantly, despite its logistical constraints, patients with severe COVID-19-related ARDS have already been managed with VV-ECMO [12][13][14]. However, given the rapid spread of COVID-19, many intensive care units (ICUs) have become overwhelmed; allocation of VV-ECMO must be a carefully adjudicated triage decision. Here we outline the ethical and legal aspects pertinent to allocation of this resource-intensive, but potentially life-saving, therapy in the setting of the COVID-19 pandemic.
Severe Acute Respiratory Syndrome (SARS) and Middle-Eastern Respiratory Syndrome (MERS). Two primary "phenotypes" describing the most common features of COVID-19 have been proposed [15]. The Type L phenotype is characterized by low respiratory system elastance and has been associated with low lung weight, a low ventilation-to-perfusion ratio, and low recruitability. Clinicians have noticed atypical presentations in these COVID-19 patients, including some who present with profound desaturation without loss of mental acuity. Such patients have been successfully treated without mechanical ventilation, instead utilizing non-invasive modalities such as high-flow nasal cannula [16]. By contrast, the Type H phenotype is characterized by high elastance, heavier lung weight, a more significant right-to-left shunt, and greater lung recruitability. Type H patients more often require mechanical ventilation. The disease may evolve from the Type L phenotype to Type H due to COVID-19-related cytokine storm, the stress of injurious mechanical ventilation, and pulmonary edema caused by increased vascular permeability [17,18]. In clinical practice, differentiation of the two phenotypes is challenging. Without large randomized controlled trials to guide clinicians treating this unique disease, there remains no consensus on how to optimally manage critically ill patients with COVID-19-associated respiratory failure [19]. Thus, clinicians must adhere to time-tested therapies for other forms of severe ARDS. VV-ECMO is among these therapies, and may be considered for patients displaying profound deoxygenation despite mechanical ventilation with optimized PEEP, neuromuscular blockade, and prone positioning.
Guidelines for the use of VV-ECMO are imprecise. Directives from the Extracorporeal Life Support Organization (ELSO) suggest judicious use of this technology during a pandemic due to its resource-intensive nature [20,21]. VV-ECMO is most likely to benefit patients when initiated relatively early in a patient's disease course [22]. Once initiated, VV-ECMO is commonly considered a "bridge" to specific endpoints, such as recovery or lung transplantation. Unfortunately, VV-ECMO may also become a "bridge to nowhere" in patients who become dependent on VV-ECMO while lacking a realistic chance of intrinsic recovery. Thus, it is imperative that possible outcomes and goals of care are clearly communicated prior to ECMO initiation.
Early experience with VV-ECMO in COVID-19 was characterized by high mortality rates, raising alarm among clinicians [12,13,23]. A more recent pooled analysis of 331 patients placed on ECMO found a combined mortality rate of 46% [24]. This figure is not dissimilar from the overall 40% mortality rate for extracorporeal life support (ECLS) in pulmonary failure [25] and is an improvement upon reported ICU mortality rates exceeding 60% in mechanically ventilated COVID-19 patients [14,26,27].
Resource allocation concerns in a pandemic
To maximize benefit to a population suffering from a pandemic and to reduce the frequency of "bridge to nowhere" situations, appropriate assessment and triage of VV-ECMO candidates must occur. Triage strategies range from those which focus predominantly on individual benefit to those which prioritize population health at the expense of some especially ill persons (Table 1). Unfortunately, evidence supporting any one particular approach is lacking. Of note, ELSO guidelines state that healthcare providers should be given high priority for access to VV-ECMO, superseded only by the young with minor or no comorbidities [21]. This triage approach essentially endorses a "societal value" paradigm, prioritizing those who may generate the greatest benefit to society at large. However, this approach has not been universally adopted. In Beth Israel Deaconess Medical Center's current critical care resource allocation guideline (created with Massachusetts state government guidance [28]), health care worker status is only used as a tie-breaker between patients with equal prioritization scores. This strategy has already been called into question and may be amended in future versions of the document.
Recent guidance by the Commonwealth of Massachusetts recommends reserving VV-ECMO for those most likely to benefit and avoiding prolonged use when there are no signs of recovery [28]. Ultimately, irrespective of the nomenclature of a given triage strategy, the decision to offer VV-ECMO may be best made by multidisciplinary teams at the bedside based on the principle of distributive justice [29]. Broadly, distributive justice refers to fairness in the distribution of finite resources and benefits. Centers should aim to justly distribute the high-intensity, complex modality of VV-ECMO in a manner which prioritizes the needs of the populations they serve, while withholding therapy from individual patients who realistically are unable to benefit from this specific component of care. As healthcare institutions reach escalating levels of surge capacity, distributive justice approaches generally support the idea that increasingly stringent selection criteria be used to prioritize those most likely to benefit and return to an acceptable quality of life (Table 1) [30].
Centers struggling with designing a comprehensive approach to VV-ECMO triage protocols may benefit from the ELSO guidelines for ECMO in COVID-19 [30]. These list specific contraindications for VV-ECMO, which may help tailor the institutional approach; the contraindications are listed in Table 2. Furthermore, a number of VV-ECMO risk prediction models have been created, which may also help guide decision-making with respect to patient survival [31][32][33][34][35][36] (Table 3). These risk scoring systems vary widely in their included variables and resulting complexity, owing to methodology and the relatively small derivation cohorts expected with this type of therapy. The Respiratory Extracorporeal Membrane Oxygenation Survival Prediction (RESP) score [34] was developed from a cohort of 2355 patients contained within the ELSO registry, by far the largest derivation and validation cohort for VV-ECMO to date. As such, the RESP score is the most widely used tool for risk stratification prior to initiation, although external validation studies have yielded mixed results [35,37,38]. A noteworthy finding of the derivation and validation of the RESP score is the recognition that viral pneumonia was independently associated with hospital survival (odds ratio, 2.26; 95% CI 1.62-3.14; P < 0.0001). Thus, enthusiasm for the use of VV-ECMO in COVID-19 patients may be more warranted than in other populations. As a supportive therapy, VV-ECMO does not treat the underlying disease process but instead provides time for potential organ recovery or transplant. Unlike other supportive therapies (such as mechanical ventilation or renal replacement therapy), tremendous resources are required to initiate and manage VV-ECMO, including specialized healthcare workers trained to care for these patients [39].
VV-ECMO is expensive. The estimated cost of VV-ECMO is roughly $30,000 per quality-adjusted life year (QALY) [40]. Although this figure compares favorably with some chemotherapeutic regimens expected to prolong life for less than one year [41], predicted pandemic-related economic fallout has led to cost concerns [42]. Hospitals, in particular, are reeling from widespread cancellations of revenue-generating elective procedures. Thus, in keeping with the principle of distributive justice, individual centers must consider the future implications for population health (including other expensive care modalities) when designing VV-ECMO pandemic policies. Though VV-ECMO may be resource-intensive, it compares favorably to VA-ECMO in terms of cost, resource utilization and risk of adverse events [43,44]. This consideration is relevant when contemplating conversion from VV- to VA-ECMO in patients who develop myocardial injury and/or distributive shock related to COVID-19 [45,46]. Such patients should be carefully screened for signs of multiorgan failure or other relative contraindications that may portend a poor prognosis (Table 2).
Currently, both large academic centers and many community hospitals have VV-ECMO capability, but these institutions have varied initiation practices [47]. Some thought-leaders have suggested the creation of a centrally-coordinated regional outbreak system, with referral to high-volume centers when smaller centers reach capacity [48]. This strategy may balance the economic realities of a healthcare network's needs while also limiting disparities in access to VV-ECMO.
Setting expectations and establishing goals of care
In many cases, patients will be incapacitated prior to initiation of VV-ECMO. Clinicians must therefore rely on advance directives and/or surrogate decision makers to determine how a patient may wish to proceed with care. One unique circumstance presented by the current pandemic is the separation of hospitalized patients from surrogates due to distancing policies designed to prevent the spread of COVID-19, forcing many important discussions to occur via telemedicine. Disrupted physical presence may complicate decision-making and generate significant psychological and emotional stress [49]. Early palliative-care consultation can assist families and clinicians with complex decision-making processes, reduce conflict, and increase family satisfaction [50]. Ethics consultation, while mandatory at some institutions for all VV-ECMO patients, may be warranted to ensure moral, ethically justifiable care is provided [51]. These discussions and consultations should occur before cannulation or as early as possible following VV-ECMO initiation. Ideally, predefined goals can be set, with tentative plans to withdraw care that is no longer meeting the patient's needs and to allow resources to be directed elsewhere.
Unfortunately, although multiple risk stratification instruments are available to guide initiation of VV-ECMO, no such instruments are available to guide withdrawal. Adverse events and a suboptimal response to therapy are generally considered appropriate reasons to consider withdrawal of VV-ECMO [52]. This is especially true in patients with COVID-19, as they are uniquely predisposed to bleeding and/or thrombotic complications [53]. These realities should be discussed early, or even before using VV-ECMO for a given patient. Setting clear goals, including the expectation that accrued complications or medical futility may warrant early discontinuation of therapy (implying a transition to comfort measures), can help families cope with such decisions [54].
In periods of resource scarcity, particularly after an institution has activated an allocation policy (sometimes referred to as "Crisis Standards of Care"), communication with patients and surrogate decision makers regarding the process for allocation of any scarce resources is critical. This should include an explanation of the possibility that the patient will not receive a scarce resource, or will receive it for a time-limited trial and then have it removed and reallocated prior to recovery. Prior understanding of this general situation should help when facing the specific circumstances of scarce VV-ECMO allocation decisions.
Legal and ethical precedents
A 2016 survey found that experienced physicians favored paternalistic values over patient autonomy when considering the value of complex care for a given patient. Hypothetically, this finding reflects physicians' unwillingness to cede authority to presumably less-knowledgeable care recipients, while also avoiding dispute over the appropriateness of ongoing medical care [55]. States differ in their attitudes toward this physician-patient relationship. For example, the Texas Advance Directives Act of 1999 justifies withdrawing life-sustaining therapy against the wishes of surrogate decision makers so long as physicians account for patient autonomy, ensure good stewardship of patient resources, and avoid harm to patients [56,57]. This could be considered "informed non-dissent," in which surrogates agree that interventions should be limited but prefer to leave the actual decision to continue or withdraw therapy to physicians [58,59]. Informed non-dissent may be a palatable approach for both clinicians and recipients of care, though it must be a legally acceptable strategy in an individual institution's jurisdiction. In Massachusetts, in the absence of a legally appointed surrogate decision maker, providers must generally seek approval from surrogate decision makers before withdrawing care from patients unable to speak for themselves. In cases involving withdrawal of VV-ECMO, agreement among the patient and/or surrogate decision-makers (legally recognized or otherwise) obviates the need for settlement within the court system. Withdrawal despite the objection of one or more family members could be considered battery; to our knowledge, however, no case has established a legal precedent for considering withdrawal of technology as battery. Further, Massachusetts passed legislation during the COVID-19 state of emergency that granted certain liability protections for the acts or omissions of healthcare providers during such state of emergency, so long as the treatment was impacted by the treatment conditions resulting from the COVID-19 outbreak and the providers acted in good faith [60]. Accordingly, during the recent COVID-19 outbreak, any uncertainty regarding the ability of a Massachusetts provider to withdraw VV-ECMO from a patient who was on a "bridge to nowhere" seems to have been resolved.
Importantly, it is commonly considered ethically unacceptable to remove a patient from a life-sustaining therapy to make room for another [61]. Value for autonomy necessitates informed consent not only upon initiation but also upon withdrawal of therapy. It is generally considered legally unacceptable to remove patients from therapy against their will, even if removal would provide greater benefit to another patient. In cases of withdrawal, the principle of nonmaleficence may prompt clinicians to wonder whether not receiving an intense intervention, such as VV-ECMO, is better than receiving that intervention when it is inadequate to reverse a patient's demise. VV-ECMO cannulation, even when intended as life-saving therapy, does bear the risk of unintentionally hastening death if complications occur. COVID-specific complications related to the prothrombotic state and the use of anticoagulation are also pertinent and should be communicated to families.
At our tertiary medical centers, institutional policy dictates that no provider should be forced to provide treatment that is harmful, ineffective, or of no medical benefit. At Beth Israel Deaconess Medical Center, dissenting health care surrogates have the right to a second opinion and may be offered accommodation of patient transfer. If no facility is willing to accept the patient, the surrogate can appeal once more, prompting a committee to deliberate. If the committee reaches a consensus that the requested intervention is harmful, ineffective, or of no medical benefit, then hospital administration generally supports the clinicians' decision not to offer such intervention, even over the surrogate decision-maker's objection.
VV-ECMO is a difficult technology to reconcile under this rubric because it is, at least temporarily, usually effective at prolonging life. However, when a patient's life is sustained with no hope of ever being able to survive independent of the intervention outside the ICU setting (i.e., is receiving therapy as a "bridge to nowhere"), the intervention should be considered to be of no medical benefit. This determination would apply under normal standards of care, as well as crisis standards of care. As mentioned above, the best way to avoid such situations is through discussion of these concepts prior to ECMO initiation. Palliative care consultants and ethics committees can aid in family counseling in situations where families cannot be at the bedside.
In the setting of crisis capacity and activation of scarce-resource allocation policies, our institution endorses consideration of the odds of survival following a therapeutic trial to guide discontinuation of therapy. If a patient either shows signs of decline despite receiving VV-ECMO or does not show signs of improvement after an appropriate trial period, VV-ECMO may be discontinued in favor of another patient more likely to benefit. This is an explicit rejection of the first-come, first-served paradigm, which is likely to result in unjust distribution; namely, it is unlikely to save the most lives and life-years (Table 1), and it is likely to have a disproportionately negative impact on individuals from certain ethnic and racial groups and individuals of lower socio-economic status. In this time of scarcity, the threshold for making this decision may fluctuate with increasing disease burden, and it may be reasonable to consider a paradigm valuing the greatest number of life-years preserved, while ensuring equitable distribution along racial, ethnic, and socio-economic lines.
Conclusion: The future of VV-ECMO allocation
VV-ECMO is a well-established component of support for patients in respiratory failure and has been used successfully in treating COVID-19-associated respiratory disease. However, access to this therapy depends on regional disease prevalence, individual hospital expertise, and the perceived balance of benefit between individual patients and the population as a whole. Prior to the next pandemic, allocation guidelines should be more rigorously defined to prevent injustice in clinical outcomes and to reduce the stress on healthcare providers who are choosing to use this particular resource. These guidelines may be best sourced from professional medical societies. Medical professionals have the responsibility to inform the public regarding the true risks and benefits of VV-ECMO such that these guidelines can be understood and accepted by affected communities. Regional leadership from centers of excellence should engage communities and form outbreak response systems capable of making allocation decisions guided by distributive justice principles. Nevertheless, allocation decisions should be transparent and clearly communicated to patients and surrogate decision makers. There is a clear need for clinicians to effectively communicate when VV-ECMO therapy becomes medically inappropriate care in terms of the patient's values. This process begins with establishing expectations before initiation and can be facilitated by consultants from palliative care and institutional ethics committees. Addressing these allocation concerns will facilitate optimal deployment of VV-ECMO within the current pandemic and for the next healthcare catastrophe we may face.
Declaration of Competing Interest
None. | 2020-11-14T14:06:29.701Z | 2020-11-13T00:00:00.000 | {
"year": 2020,
"sha1": "9dacce1881457f63caada7ab6561b850988b9926",
"oa_license": null,
"oa_url": "https://doi.org/10.1016/j.jcrc.2020.11.004",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "ef6f6d59ef63f486caa6f51105eb15fd2b6f84ce",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
218907885 | pes2o/s2orc | v3-fos-license | The Psychological Impact of Confinement Linked to the Coronavirus Epidemic COVID-19 in Algeria
The COVID-19 pandemic continues to spread in countries around the world. The impact of this virus on populations is very great following the application of total and partial containment measures. Our study examines the psychological impact of the total and partial containment applied in Algeria on 23 March 2020, following the spread of the COVID-19 virus, as well as the habits and behaviors of the Algerian population during this new way of life. We carried out a cross-sectional survey, launched three days after the start of confinement to quickly assess the impacts over the period from 23 March to 12 April 2020, using an online questionnaire which allowed us to obtain 678 responses from Internet users living in confinement in Algeria. According to the gender variable, our sample includes 405 men, or 59.7%, and 273 women, representing 40.3%. The results of the statistical analysis, carried out using SPSS version 22.0 software, showed that 50.3% of the respondents were in an anxious situation during the first three weeks of confinement. In addition, 48.2% felt stressed, 46.6% of the respondents confirmed feeling in a bad mood, and 47.4% did not stop thinking throughout the day about this epidemic and how to protect themselves. Moreover, the study shows that 87.9% of the respondents in Algeria found it difficult to follow the confinement instructions. A significant change in the habits of the population was noted, especially for bedtime, waking time, and the use of the Internet, as well as the hours devoted to daily reading.
Introduction
The coronavirus pandemic COVID-19 has continued to spread to countries around the world since its first appearance in Wuhan, China, on 31 December 2019 [1], and the declaration by the World Health Organization (WHO), on 26 January 2020, of the high risk of the epidemic in China and worldwide [2]. The number of people tested positive continues to increase; by 13 April 2020, it had reached 1,773,084 across several countries, which had also recorded 111,652 deaths [3]. The daily increase in deaths and confirmed cases has prompted countries to take social distancing measures and other actions related to general and partial containment, which are difficult for some countries to enforce. In China, COVID-19 has spread rapidly since its first appearance in Wuhan and has proven to be very dangerous, since some affected patients do not have fever or other symptoms, which complicates diagnosis [4].
The report of the National Health Commission of China indicated on 27 January 2020 that people carrying the virus can infect others by respiratory droplets as well as by direct contact [2]. The severity of the disease lies in the ability of the virus to spread and in the difficulties of identifying those affected in order to care for them and prevent them from infecting other people [5]. Based on these conclusions, the Chinese government reacted quickly by quarantining a very large population. Against this background, the present work addresses the question of the relationship between socio-demographic variables and the psychological impact of containment during the COVID-19 epidemic in Algeria.
This study will allow the Algerian health authorities, and possibly those elsewhere in the world, to better understand the situation in order to take the necessary measures to assist the population during this period of containment, which is likely to lengthen, as well as after the epidemic.
In this study, containment during the coronavirus COVID-19 refers to the containment procedures approved by the Algerian state from 23 March 2020 to deal with the epidemic. The psychological impact refers to the various psychological effects of containment in Algeria on the individual, measured in the current study by the sum of the responses to the questionnaire administered to a sample of respondents. Daily habits represent the totality of practices and behaviors that the individual frequently enacts in his or her daily life, such as washing hands, going to bed and waking up, watching television, and using the Internet.
The Method
In the current study, we used a descriptive survey design with an online questionnaire and a snowball sample of 678 respondents, owing to the conditions of home confinement accompanying the spread of the coronavirus pandemic. The electronic questionnaire includes items addressing the psychological effects of the coronavirus; after data collection, the responses were statistically analyzed with the SPSS program version 22 (SPSS Inc., Chicago, IL, USA), followed by a descriptive interpretation of the indicators underlying the questionnaire items. The data were thus collected using an online questionnaire from different regions of Algeria. Since the sampling was not based on random selection and the study population does not reflect the general population, we used a statistical approach to describe the results, linking their analysis to the qualitative indicators conveyed by the questionnaire items.
Sample and Participants
We adopted a cross-sectional survey to assess the immediate psychological impact on the public during the COVID-19 epidemic using an online questionnaire. With wide dissemination of the questionnaire with the help of university students, our snowball sampling strategy is suitable in exceptional cases where it is difficult to reach the population to study an urgent health problem related to containment. This method allowed us to obtain 678 responses from Internet users living through this first confinement of the coronavirus epidemic COVID-19 in Algeria. According to the gender variable, our sample includes 405 men (59.7%) and 273 women (40.3%) of the total sample. For the age variable, 423 of the respondents were aged between 14 and 34 years old (62.4%), 239 were aged between 35 and 54 years old (35.2%), and 16 were aged between 55 and 74 years old (2.4%) of the total number of respondents in the sample. The composition of this sample therefore represents practically all the age groups of the society concerned by this research issue.
Tools (Online Questionnaire)
The psychological impact of COVID-19 was measured using a global questionnaire assessing the impact of confinement during the COVID-19 coronavirus. This questionnaire of 29 items is composed of three subscales: social impact, psychological impact, and impact on mobility. The impact scale of the coronavirus COVID-19 in Algeria was designed on the standards of the Likert scale, with five response options.
This means that the average score for the questionnaire items is 3, so an item score greater than 3 indicates a negative impact on that variable, and a score below 3 means that there is no negative impact in the sense of that item.
The questionnaire covers the following sections: social impacts (Q1-Q12), psychological impacts (Q13-Q22), and impacts on mobility (Q23-Q29). In this study, we focused on the psychological impacts as the only component (psychological impacts subscale). The questionnaire was designed online to facilitate its dissemination and to obtain respondents' answers immediately.
In the current study, we focused on the psychological effects and daily habits only; we therefore clarify the two concepts. Psychological effects designate the emotional changes occurring in the behavior of the individual during interaction with those around him or her, as measured by self-assessment through the items of the questionnaire according to the measurement scale; in quantitative terms, this is the total score obtained by the respondent on the dimension related to psychological effects. The daily social habits are the behaviors linked to the individual's daily interactions in various social situations. In our study, this means the habits linked to coping with the coronavirus, such as sleep patterns and hand washing, which were measured with direct questions.
The psychological impact section contains 10 items; this means that the total score varies from 10 to 50 with a theoretical average of 30. A score exceeding 30 reflects a negative effect of the psychological factor, while a score below 30 expresses the absence of a negative effect of this factor.
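As a minimal illustration of this scoring rule, the short Python sketch below computes a respondent's subscale total and applies the 30-point cutoff; the responses shown are hypothetical and not taken from the study.

```python
# Hypothetical Likert responses (1-5) of one respondent to the 10
# psychological items; the values are illustrative only.
responses = [4, 3, 5, 2, 4, 3, 4, 3, 2, 4]

total = sum(responses)          # possible range: 10 (all 1s) to 50 (all 5s)
THEORETICAL_MEAN = 10 * 3       # 10 items times the scale midpoint of 3

if total > THEORETICAL_MEAN:
    print(f"score {total}: negative psychological effect")
else:
    print(f"score {total}: no negative psychological effect")
```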
The Reliability of the Questionnaire
We used the Cronbach's alpha coefficient to calculate the reliability on 253 answers. The results gave a coefficient of 0.799 for the 12 questions related to social impacts, a value indicating the reliability of the study tool. The statistical results also show that the Cronbach's alpha coefficient for the 10 questions on psychological impacts is 0.782, again indicating the reliability of the study tool. For the six questions concerning the impacts on the mobility of the population during this period of total and partial confinement, the Cronbach's alpha coefficient is 0.613, a value still indicating the reliability of the study tool. For the entire questionnaire, with its 29 questions on the impacts during the first total and partial containment of the coronavirus epidemic COVID-19, the Cronbach's alpha coefficient is 0.831; this value indicates the reliability of the tool used.
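Cronbach's alpha can be computed directly from an item-score matrix using the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). The Python sketch below (NumPy-based) shows this; the random data are placeholders, not the study's responses.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the total score
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Illustrative call on random Likert-style data (253 answers, 10 items):
rng = np.random.default_rng(0)
print(round(cronbach_alpha(rng.integers(1, 6, size=(253, 10))), 3))
```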
The Validity of the Questionnaire
Validity here signifies the capacity of the questionnaire to measure what it is actually intended to measure, namely the social, psychological, and mobility effects of total and partial confinement on citizens in Algeria. For this, we used the internal validity method of the questionnaire (on 253 individuals), which considers the correlation between the items of the questionnaire and its overall score. The results are presented in Table 2.
From this table, it is clear that all the Pearson correlation coefficients between the items and the total score of the questionnaire are positive and statistically significant at the level of 0.05 and 0.01. This result means that the questionnaire has a considerable degree of internal validity.
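This item-total check can be reproduced with a few lines of Python; the sketch below (NumPy-based, with hypothetical data of the same shape as the validity sample) correlates each item with the questionnaire's overall score.

```python
import numpy as np

def item_total_correlations(items):
    """Pearson correlation of each item column with the total score."""
    items = np.asarray(items, dtype=float)
    total = items.sum(axis=1)
    return np.array([np.corrcoef(items[:, j], total)[0, 1]
                     for j in range(items.shape[1])])

# Example on placeholder data shaped like the validity sample (253 x 29):
rng = np.random.default_rng(1)
r = item_total_correlations(rng.integers(1, 6, size=(253, 29)))
print(r.round(2))
```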
Statistical Analysis
A statistical analysis was performed using SPSS version 22.0 software. Descriptive statistics were calculated for socio-demographic variables and psychological impact factors. The Pearson correlation coefficient was used to measure the correlation between various socio-demographic variables and the psychological impact of coronavirus containment COVID-19. In addition, the multiple regression analysis method was used to measure the effect of socio-demographic variables on the psychological effects of confinement, and the t-test was used to study differences in the psychological impact of coronavirus confinement between men and women.
Habits of Daily Life during the Coronavirus Epidemic COVID-19 in Algeria
The results obtained show the impacts of the first total and partial confinement operations of the coronavirus epidemic COVID-19 in Algeria on certain habits of the daily life of citizens. We note a high rate of hand washing during the day: 51.77% of the study sample reported washing their hands up to 10 times a day, and 36.73% washed their hands between 10 and 20 times a day, while the rest of the sample (11.5%) paid special and somewhat exaggerated attention to hand washing, between 20 and 40 times a day (see Table 3). We also note that 83.63% of those questioned confirm that they go to sleep late, between midnight and 3:00 a.m., that 12.09% go to bed in the regular period between 8:00 p.m. and 11:00 p.m., and that the remaining 4.28% go to sleep between 4:00 a.m. and 7:00 a.m. the next day. We can thus deduce the considerable impact of the first period of total and partial confinement following the coronavirus epidemic COVID-19 on the hour of going to sleep, which is very late. Regarding waking time, the survey results show that 45.72% of the study sample confirmed that they woke up between 10:00 a.m. and 12:00 p.m., 38.5% said they woke up between 7:00 a.m. and 9:00 a.m., and 3.83% of the respondents woke up between 1:00 p.m. and 3:00 p.m. For the time spent watching TV, the survey results showed that 45.28% of respondents spend up to 5 h watching TV every day, 33.18% of citizens confirmed that they watch TV for about 10 h every day, and 21.54% of the respondents watch television programs daily for around 15 h. Television is an important means of passing time during the COVID-19 pandemic thanks to its many programs and channels. The survey results shown in Table 3 also indicate that 36.72% of respondents, during this period of total and partial containment linked to the COVID-19 pandemic, say that they do not read books, 34.47% devote one hour a day to reading books, and 13.87% of the respondents read books for 2 h a day. On the other hand, 14.74% of the population prefer to read between 3 and more than 5 h. What stands out here is little reading of books, which is perhaps compensated by electronic reading on smartphones and computers, as shown later.
The respondents confirmed that they use the Internet for several hours a day: 37.9% of them spend between 10 and 15 h a day, and 33.93% spend between 5 and 10 h a day surfing the Internet, while the remaining proportion uses the Internet less, 0 to 5 h a day. There is indeed a strong dependence on the Internet and related devices for passing the time during the period of the pandemic.
Finally, we find that 51.92% of respondents are interested in the content of social networks (Facebook and Twitter) and of YouTube, while 29.94% prefer reading, including scientific research, by electronic means. In addition, 18.4% of respondents prefer to follow new local and international news related to COVID-19 and other areas on the Internet (see Table 3).
Table 4 above shows the correlation matrix between certain study variables. We note a statistically significant negative correlation between the variables age and waking time (r = -0.216**), which means that young people get up later than people in the older age categories. As shown in the table, there is a statistically significant positive correlation between the variable psychological impacts and waking time (r = 0.145**), which means that increased sleep time and delayed waking are linked to an increased level of psychological effects during the first confinement of the coronavirus epidemic COVID-19.
The Psychological Impact of Confinement during the Coronavirus Epidemic COVID-19 in Algeria
The results in Table 5 above show the relative levels and weights of the psychological impact factors of the coronavirus COVID-19 in Algeria during the first total and partial confinement, knowing that the value 3 signifies the theoretical average. According to the respondents, the item related to the difficulty of voluntary engagement in home confinement is ranked first among the psychological factors, with a mean of 4.11; this signifies a lack of social consciousness and of previous experience of behavior during an epidemic. The rapid spread of the epidemic may not have left the time necessary for better awareness among citizens of the seriousness of the coronavirus COVID-19 and of the usefulness of home confinement as the sole currently available means of prevention.
In second position for the psychological factors, we find anxiety with a mean of 3.22 where the respondents confirmed their feelings of anxiety during the confinement period, perhaps because there are difficulties in accepting confinement itself, or difficulty organizing family life inside the house. In addition, anxiety is strongly present in the event of an epidemic among fragile personalities and contributes to the deterioration of the psychological state of the individual, which affects his or her daily interactions and even his or her physical functions.
The state of psychological stress is the third psychological factor affecting individuals during the coronavirus pandemic COVID-19 in Algeria during this period from 23 March to 12 April 2020, and this is confirmed by the respondents with a mean of 3.18. Admittedly, the spread of the epidemic and the obligation of confinement at home, on the one hand, and the difficulty of coping with it, on the other hand, put the individual in a state of psychological stress, especially with the transformation of daily life into a boring routine. The fourth psychological factor (see Table 5) affecting individuals during confinement is mood fluctuation, with a mean of 3.17, which reflects the individual's entry into an emotionally unstable state that negatively affects him or her and the family environment, not only because of the feeling of limited living space, but also because of a feeling of fear of the pandemic and its various repercussions.
The fifth psychological factor represents dependence on thinking throughout the day about the subject of the epidemic, and this is confirmed by the respondents with a mean of 3.11. This indicates an addiction to thinking about the coronavirus COVID-19, its dangers, and its consequences in an exaggerated way, which leads to psychological, moral, and physical fatigue, especially in relation to the monitoring of new, sometimes incorrect, information about the coronavirus COVID-19. Regarding the rest of the items and psychological factors, the current study did not show any negative effect on our research sample, since their arithmetic means are lower than the theoretical mean of 3.
Psychological Impact and Socio-Demographic Variables
According to Table 6, the multiple regression analysis shows that the variables of sex, age, and family situation were significantly associated (AR2 = 0.019) with the scores of the psychological impact subscale. Through this table, we find that gender was significantly associated with psychological impact scores (B = 0.112, 95% CI). In addition, age was significantly associated with lower psychological impact scores (B = -0.081, 95% CI), and the family situation was significantly associated with psychological impact scores (B = 0.079, 95% CI). Table 7 below presents the study of the differences in psychological impact between men and women during the first confinement of the coronavirus epidemic COVID-19 in Algeria; the results show statistically significant differences in favor of women (M = 31.35) compared to men (M = 29.63) on the psychological impact scale.
Differences in the Psychological Impact of the Coronavirus COVID-19 according to the Gender Variable
This result means that the female population is more affected by the coronavirus COVID-19 than men; to determine the details, we return to the differences in the statistically significant items. Indeed, women were more prone to delusional thoughts than men, more inclined to excessive hand washing, presented more emotional stress, fear, and an unstable mood, and showed more unrealistic optimism that they would never be infected by the coronavirus COVID-19.
Discussion
The results are discussed following the structure in which the data were presented, linking them to previous studies where available, especially since the problem is recent.
The results obtained on the changes in behavior and habits, as well as on the psychological impacts on the Algerian population during the first three weeks (from 23 March to 12 April 2020) of the total and partial confinement applied by the Algerian government, show that the difficulty of voluntary engagement in home confinement is ranked first among the psychological factors, especially since 87.9% of the respondents have difficulty applying the confinement instructions. Our field observations confirm that some people often leave their homes and do not follow, or have difficulty applying, the instructions for containment. The lack of awareness through the dissemination of specialized information affects the population, which remains worried in the absence of reliable information; previous research has revealed a wide range of psychosocial impacts on people at the individual, community, and international levels during the spread of an epidemic [6]. It is also possible that the rapid spread of the epidemic did not leave the time necessary for better awareness among citizens of the usefulness of home confinement as the only current means of prevention. Among other things, 50.3% of respondents indicated that they are in an anxious state for various reasons related to a new organization of daily life, in addition to the measure of confinement or quarantine, which shows that the authorities consider the situation serious and at risk of worsening [8]; this worries the population, and the rapid increase in anxiety is linked to the lack of information on the disease and to the preventive measures that block daily life [14].
It is also found that 48.2% of respondents experience stress during the period of total and partial containment. People are certainly well informed that COVID-19 threatens lives and that there is no treatment in this current period, which has triggered a wide variety of psychological problems [16] in the population; in addition, everyday life has been transformed into very limited activities that over time become very boring. Moreover, 46.6% of the surveyed population confirmed feeling in a bad mood during this first period of confinement, which means that the individual is in an unstable emotional state that negatively affects him or her as well as the family environment. Furthermore, 47.4% of respondents continue to think throughout the day about this epidemic and about ways to protect themselves, and this exaggerated preoccupation leads to psychological, moral, and physical fatigue.
The Chinese government has improved public awareness of prevention measures, and psychologists and psychiatrists use the Internet and social media to share strategies for managing psychological stress [17]. The results of this survey showed that women are more affected than men by the impacts of confinement linked to COVID-19. Women prefer to wash their hands several times, are more stressed, and manifest more fear and mood instability, while also showing more unrealistic optimism that they would never be infected by the coronavirus COVID-19.
The change in population behavior during confinement also affects psychological and physical health. During this 3-week period, 51.77% of respondents indicated that they wash their hands up to 10 times a day, and 36.73% do it between 10 and 20 times a day. On the other hand, 11.5% of the respondents exaggerate in terms of hand washing and do it between 20 and 40 times a day. These people either move frequently outside in regions of partial containment and know the hygiene rules perfectly, which pushes them to react this way, or they live in regions where the confinement is total, which, out of fear, leads them to wash their hands regularly even at home.
Note that 83.63% of those questioned go to sleep late, between midnight and 3:00 a.m., and that only 12.09% go to bed at normal hours, between 8:00 p.m. and 11:00 p.m. On the other hand, 4.28% of respondents say that they go to bed between 4:00 a.m. and 7:00 a.m., which shows that confinement has changed their habits, since schools are closed and life is slowing down. In this same context, 45.72% of respondents woke up in the morning between 10:00 a.m. and noon, and 38.5% indicated that they woke up between 7:00 a.m. and 9:00 a.m., while 3.83% of respondents wake up between 1:00 p.m. and 3:00 p.m.; this last category mainly comprises young people who stay connected to the Internet for a long time. These changes in bedtimes and waking times are a sign of an increased level of psychological effects during this first confinement. In addition, many hours are spent watching television, since 21.54% of respondents say they spend 10 to 15 h watching television programs daily, and 33.18% spend between 5 and 10 h watching television daily. The population is also trying to follow the information associated with the epidemic COVID-19 on the various international TV channels, since the gravity of the epidemic was not widely broadcast or recognized at first, which delayed protection measures and containment [7] and pushes citizens to search for information themselves.
We also find that 45.28% stay up to 5 h in front of the television, which becomes an essential means to follow the new information on the epidemic and the measures taken by many countries which continue to make efforts to minimize the contacts between humans and guarantee good protection for the population [18], especially since it is always difficult to fight against COVID-19 of unknown origin and mysterious biological characteristics with a long period of incubation [19].
Reading books during the period of total and partial confinement in Algeria does not interest 36.72% of respondents, while 34.47% devote one hour per day to reading and 14.74% of the surveyed population prefer to read books for between 3 and more than 5 h a day. Confined people do not pay much attention to reading books, which is certainly linked to reading on digital media via the Internet for many hours.
Thus, 37.9% of respondents say that they devote between 10 and 15 h per day to the Internet, and 33.93% remain connected to the Internet between 5 and 10 h per day, which constitutes a strong dependence on the Internet and its services during the containment period. Social networks (Facebook, Twitter, and YouTube) attract the attention of 51.92% of respondents. On the other hand, 29.94% prefer reading and scientific research on the Internet, and 18.4% of the surveyed population opt to follow new local, national, and international information related to COVID-19 and other subjects, since it would not be surprising if, one day in the near future, broader containment measures were required to protect against this pandemic [20].
For this purpose, it is necessary to use the different means of information and communication so that psychologists increasingly approach the confined population in need of psychological help. The dissemination of information related to COVID-19 must be carried out with complete transparency and by specialized scientific journalists capable of disseminating most of the information with great precision. Identifying confined people through a platform and remote assistance will make it possible to quickly get closer to people in urgent need of psychological support.
Conclusions
The COVID-19 pandemic continues to spread in countries all over the world, and the numbers of people affected and of deaths increase every day. The impact on populations is considerable following the application of total and partial containment measures. Our study evaluated the psychological impact of the total and partial confinement applied in Algeria on 23 March 2020 following the spread of the COVID-19 virus, and we also studied the habits and behaviors of the Algerian population during this new mode of life, through an investigation launched three days after the start of confinement to quickly assess the impacts over the period from 23 March to 12 April 2020, by an online questionnaire.
The results showed that 50.3% of respondents were in an anxious state during these first three weeks of confinement. In addition, 48.2% feel stressed, 46.6% of the respondents confirmed feeling in a bad mood, and 47.4% do not stop thinking throughout the day about this epidemic and how to protect themselves.
In addition, the study shows that 87.9% of respondents in Algeria found it difficult to follow the instructions for full and partial containment. A significant change in the habits of the confined population, especially in bedtimes and waking times, is observed, which indicates an increased level of psychological effects.
Note also that changes in Internet use and daily reading are seen in the results of this study. The limitations of this study are linked, among others, to the sampling strategy, the number of respondents, and the short duration of the study; they do not make it possible to generalize these results to the entire population. However, these results can help the health authorities and other services concerned by the epidemic COVID-19 in caring for the population during this period of confinement, which is likely to lengthen further, knowing that the psychological aspect, which influences behavior, is very important in fighting the coronavirus. It is also necessary to regularly monitor changes in daily habits, since they indicate the level of awareness of citizens about health protection. In this type of situation, psychological support must be provided remotely to families and individuals to alleviate their suffering and encourage them to stay at home during the confinement period and to respect the habits of prevention against the coronavirus. The current study can be extended to examine the effect of confinement on personality characteristics and quality of life and to link them to behavioral habits so as to be more preventive. | 2020-05-23T13:05:32.720Z | 2020-05-01T00:00:00.000 | {
"year": 2020,
"sha1": "36f2351a40e923623be4408fdbdc4d944362c2b6",
"oa_license": "CCBY",
"oa_url": "https://europepmc.org/articles/pmc7277423?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "6eba4c9f955496fdbe60964574ebca5cd39efbd2",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
236943796 | pes2o/s2orc | v3-fos-license | Global optimization of default phases for parallel transmit coils for ultra-high-field cardiac MRI
The development of novel multiple-element transmit-receive arrays is an essential factor for improving B1+ field homogeneity in cardiac MRI at ultra-high magnetic field strength (B0 >= 7.0 T). One of the key steps in the design and fine-tuning of such arrays during the development process is finding the default driving phases for individual coil elements that provide the best possible homogeneity of the combined B1+-field achievable without (or before) subject-specific B1+-adjustment in the scanner. This task is often solved by time-consuming brute-force searches or by optimization methods of limited efficiency. In this work, we propose a robust technique to find phase vectors optimizing the B1-homogeneity in the default setup of multiple-element transceiver arrays. The key point of the described method is the pre-selection of starting vectors for the iterative solver-based search to maximize the probability of finding a global extremum of a cost function optimizing the homogeneity of a shaped B1+-field. This strategy allows for (i) a drastic reduction of the computation time in comparison to a brute-force method and (ii) finding phase vectors providing a combined B1+-field with homogeneity characteristics superior to those provided by the random-multi-start optimization approach. The method was efficiently used for optimizing the default phase settings in the in-house-built 8Tx/16Rx arrays designed for cMRI in pigs at 7T.
Introduction
The implementation of MRI-scanners operating at ultra-high static magnetic fields (UHF, B0 >= 7 T) promises a significant increase of both SNR and the spatial resolution (up to ~200 um in-plane) [1] of clinical MR-images. Therefore, despite technical challenges related to both B0 and B1+ field inhomogeneities, the interest in the application of UHF scanners for cardiovascular MRI (cMRI) grows [2-4]. At the Larmor frequency of protons of about 300 MHz (B0 of about 7 T) and the electrical permittivity of muscle tissue (eps of about 60), the wavelength of the B1+-field (10-12 cm) is smaller than the dimensions of a human thorax. This leads to the establishment of a standing-wave regime and creates interferences of the B1+-field across an imaged field-of-view (FOV). Additionally, the homogeneity of the B1+-field suffers from strong differences in the permittivity of intra-thoracic structures, in particular of the lung parenchyma and cardiac muscle tissue. Thus, the spatial distribution of contrast in MR images becomes essentially heterogeneous, raising demand for methods capable of neutralizing these negative factors. During the last decades, multiple-element transmit-receive arrays (mTx-arrays) combined with parallel transmit (pTX) technology became an emerging tool for the improvement of the B1+-field homogeneity in UHF cMRI. Starting with standard birdcage coils with individual control of the phases of the driving ports, the technology development has led to customized arrays with up to 32 transceiver elements [5]. The type of mTX-array elements also evolved from classical magnetic loops to electrical dipoles, microstrips, and sophisticated loop-dipole hybrids [5-11]. The novel hardware allows for shaping the optimal spatial distribution of the B1+-field to excite a targeted transversal magnetization with the best possible smoothness of both magnitude and phase within a region-of-interest (ROI). This procedure is usually called "B1+-shimming". Additionally, using an RF-power amplifier (RFPA) operating in pTX mode allows for driving individual TX-array elements dynamically, i.e., with varying magnitude and phase of the RF-pulse waveforms, to shape excitation profiles (including 2D and 3D) using different optimization criteria [12-16]. Last but not least, both techniques allow for including safety margins regarding the specific absorption rate (SAR) of electromagnetic energy.
Despite the significant progress in the field of subject-specific static and dynamic B1+-shimming, it remains a rather complex research tool. It requires deep expertise in MR-physics, pulse-sequence programming, and optimization methods to build a pipeline including (i) B1+-mapping, (ii) computation of B1+-shimming parameters taking into account SAR margins, and (iii) integration on the scanner system. For cardiac MRI, additional significant efforts for reliable absolute B1+-mapping are required to overcome limitations related to breathing and cardiac motion [17]. An important step in the development of mTx-arrays for UHF applications is the optimization and fine-tuning of the coil ensuring reasonable image quality and SAR safety for an average subject without additional adjustment procedures. In particular, this includes finding the default set of phase shifts required for the driving voltages of the individual TX-elements such that a uniform B1+ distribution can be achieved for the largest possible field-of-view (FOV). This set, usually called a "phase vector", can be integrated into the hardware as permanent or adjustable phase shifters implemented, e.g., in the form of coaxial cables of defined length. Alternatively, for the last generation of MR-scanners with support of pTX-mode (e.g. Magnetom™ "Terra", Siemens Healthineers), the default phase vector can be fixed in the coil configuration file to set the phases of the driving voltages generated by the RFPA. This provides operation of the mTX-array in the so-called "pTX Compatibility Mode". The default phase vector should shape a reasonable combined B1+-profile for an average subject, making possible a straightforward application of the array in single-Tx mode without RF-shimming as well as simplifying the B1-shimming process for pTX mode. For birdcage volume resonators or dipole antenna arrays with cylindrical symmetry of the element arrangement, this usually corresponds to the "circularly polarized" ("CP") phase distribution based on the geometry of element allocation [18]. However, this straightforward geometry-based approach for setting up default phases is not possible for surface cardiac mTX-arrays having sophisticated shapes of individual elements and their allocation on a thorax. Therefore, for this type of mTX-arrays, the problem of default phasing is formulated as an optimization problem for a cost function maximizing homogeneity of the B1+-field with certain constraints or regularization on SAR and on the efficiency of using the available RF-power [5,8,11,19,20]. For typical numbers of Tx-elements (N = 8...16), the high-dimensional cost function has numerous local extrema. Many computational methods were applied in the context of both static and dynamic B1+ manipulations, including linear and non-linear solver approaches. However, most of these methods are targeted at the application of B1+-shimming during the scan session, to deliver practical solutions under a strong time limitation (~1 min computation time). Therefore, in most cases the result represents a local optimum dependent on the starting point used for the search-process initialization. Moreover, these methods are mostly developed for B1+-shimming using a pTX-RFPA and manipulating both phases and magnitudes of the driving voltages.
For this reason, the optimization techniques developed for subject-specific B1+-shimming, which provide quick but locally optimal solutions within a limited FOV, are suboptimal for the optimization of default array element phases as a part of the coil development and final fine-tuning. In this case, the computation time can be extended to hours, and the main goal is to achieve the globally optimal characteristics of a shaped B1+ within the frame of the formulated optimization problem.
For a small number (N = 4-6) of TX-elements, the most straightforward and universal solution of the discussed "phase only" optimization problem is the global brute-force search over the complete phase-space raster defined with a reasonable discretization over each vector coordinate [21]. However, because the number of values in the raster grows exponentially with the number of elements, this requires an extremely long computation time even for a moderate number of phase elements N >= 8 and discretization step (e.g. 5°). Therefore, prior experience and pragmatic restrictions on the range of the searched phase-vector components are often used to accelerate these computations. As an example, in the work [8] the brute-force approach was used on a highly restricted phase-space raster for a 12-element array (36^3, i.e. about 40,000 vectors, were used). This procedure was considered sufficient for a symmetrical rectilinear array geometry, suggesting that phases should not vary along the z-direction. Alternatively, in the work [6], the search for optimal phases was performed using a non-linear solver (NLS) approach with multiple random starting points (referred to further as "random multi-start"). However, to keep the computation time reasonably short, the number of starting vectors for the NLS-optimization was deliberately limited to 1000. For an mTX array with 16 elements, this number corresponds to ~10^-24 of the whole phase-vector space gridded with 10° steps. As will be shown further in this paper, for an arbitrary array element configuration the straightforward usage of the random multi-start strategy may not allow detecting a sufficiently good approximation of a global optimum of the typical optimization cost functions. This paper aims to propose a flexible pragmatic approach for searching for an optimal phase setting to shape a targeted default B1+-field of mTX arrays. The proposed numerical optimization strategy allows for a flexible trade-off between (i) sufficiently short computation time and (ii) the probability of reaching the global optimum of the targeted B1+ characteristics. It does not involve a priori constraints on the searched phase space (e.g. originating from the symmetry of the coil's geometry). In the current work, the proposed technique was validated using an in-house developed 8TX/16RX dual-part array for 7T cardiac MRI in pigs described in [22].
Theory
The combined field created by an mTX array with constant driving amplitudes is expressed as:

B1c+(r) = sum_{k=1..N} F_k * b1k+(r)  (1)

Here, F_k = exp(i*phi_k) are the complex phasors of the driving currents of the channels, and b1k+(r) are the spatial B1+-maps of the individual coil elements. The phasing of an array is performed by control of the phasor vector {F} = {F_1...F_N} to achieve the targeted spatial homogeneity of the combined field B1c+(r).

Cost functions and optimization problem. In general, the goal of B1+-shimming is to achieve a uniform excitation pulse flip-angle (FA) distribution within a whole imaged FOV.
However, using static B1+-shimming and manipulating only the phases of individual elements, this task would either be non-accomplishable or put very strong limitations on the range of achievable FA. Therefore, a pragmatic approach demands a certain degree of B1c+ homogeneity, characterized by statistical metrics of distribution uniformity, to be reached in a limited region-of-interest (ROI) Dr = (Dr_x, Dr_y, Dr_z).
One of the widely used uniformity metrics for B1+-field shimming is the coefficient of variation:

CV(Dr, {Phi}) = std(B1c+) / mean(B1c+)  (2)

Here mean() and std() denote the mean value and the standard deviation calculated over the voxels within Dr. Similar to the work [23], we introduce a regularization factor D_m to enhance the demand on homogeneity and compose the spatial uniformity cost function F_u as:

F_u(Dr, {Phi}) = 1 / (CV(Dr, {Phi}) + D_m)  (3)

Besides the achievable B1+-homogeneity, an essential characteristic of an mTX-array is the efficient usage of the available radiofrequency power. This can be controlled in the optimization procedure via the array transmit efficiency, the ratio of the "magnitude-of-sum" (MOS) to the "sum-of-magnitudes" (SOM), determined as [18,21,23]:

TX_e(Dr, {Phi}) = mean(|sum_k F_k b1k+(r)|) / mean(sum_k |b1k+(r)|)  (4)

Taking values in the range [0...1], it characterizes the efficiency of power usage for the specific combination of phases phi_k producing B1c+. Finally, the optimization problem for the combined cost function F_c to be solved for determining the phasor vector can be formulated as follows:

{Phi_opt} = argmax(F_c(Dr, {Phi})), with F_c(Dr, {Phi}) = F_u(Dr, {Phi}) * TX_e(Dr, {Phi})  (5)
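A compact numerical sketch of this objective is given below in Python/NumPy. It assumes per-channel complex B1+-maps flattened to the ROI voxels; the explicit forms of F_u and TX_e follow the reading of Eqs. (2)-(4) above, and the regularization value is illustrative, so details may differ from the authors' Matlab implementation.

```python
import numpy as np

def combined_cost(phases, b1_maps, delta_m=0.05):
    """Combined cost F_c = F_u * TX_e for one phase vector.

    phases:  (N_channels,) phase values in radians
    b1_maps: complex array (N_channels, n_voxels), per-channel B1+ in the ROI
    delta_m: regularization factor of Eq. (3) (illustrative value)
    """
    phasors = np.exp(1j * phases)                  # F_k = exp(i*phi_k)
    b1c = phasors @ b1_maps                        # combined B1c+(r), Eq. (1)
    mag = np.abs(b1c)

    cv = mag.std() / mag.mean()                    # coefficient of variation, Eq. (2)
    f_u = 1.0 / (cv + delta_m)                     # regularized uniformity, Eq. (3)
    tx_e = mag.mean() / np.abs(b1_maps).sum(axis=0).mean()  # MOS/SOM, Eq. (4)
    return f_u * tx_e                              # objective of Eq. (5)
```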
Method
Computation over sub-sampled phase space. The essential difficulty of solving the problem (5) is the periodic influence of each component of the phasor vector F_k on the cost function F_c, leading to multiple local extrema (S1 Fig). In practice, this means that an optimization search initiated at a specific point {F_0} ends up in the nearest local extremum {F_opt}. This often makes a straightforward application of both derivative-based and other types of local NLS inefficient.
The most straightforward and universal solution for the problem is using a global exhaustive search varying all possible combinations of the phasor-vector components with a discrete step. However, for a given step d_phi and N Tx-elements, the global full-exhaustive search requires computing the cost function (360°/d_phi)^N times, which for d_phi of about 10° and N = 16 would lead to more than 10^25 multiplications of complex matrices of size (Dr_x x Dr_y x Dr_z). The estimated computation time (up to several months on a high-performance workstation with a multi-core CPU) makes such an approach technically impractical. In this work, we propose a computation strategy with an intelligent fusion of the brute-force approach and the usage of local solvers to find a sufficiently good approximation of the global maximum of the cost function and, thus, a globally optimal combined B1+. The complete flowchart scheme of the proposed computation method is shown in Fig 1. The goal of the first stage (Stage I) is to find a reasonable finite discrete representation of F_c(Dr, {Phi}) using the N-dimensional discrete phase space {Phi}. For this purpose, a discrete grid of the phase vector with N components is formed as

G^N_dphi = {Phi = (phi_1, ..., phi_N): phi_k in {phi^(1), ..., phi^(L)}},  (6)

providing the "full-grid" discrete form of the cost function F_c(G^N_dphi), where L denotes the number of nodes per vector component.
The further approach relies on two observations: (i) the cost function F_c({Phi}) is periodic in each phase-vector component, and (ii) the set of vectors {Phi_top} providing values above 90% of its top (F_c({Phi_top}) > 0.9*max(F_c)) is formed by a relatively tiny (<10^-6) part of all vectors in the full grid G^N_dphi (see S1 Fig). Therefore, as the second step, G^N_dphi is randomly sub-sampled by the sampling operator R_s with a uniform probability density function (PDF). This operation selects L_s = L^N/s vectors from G^N_dphi, where s denotes the scaling factor of the effective node-number reduction. In this work, the sub-sampling was performed using the uniform PDF; the usage of alternative sampling schemes (e.g. with a Poisson-disk PDF) for further improvement of the computation efficiency of the proposed method can also be considered. As the result, this creates a "sub-sampled" grid:

U_s = R_s(G^N_dphi)  (7)

The phasor-vector sub-space with a reasonably chosen sub-sampling factor (s of about 10^3 for N = 8) can be used to find the great majority of the summit points of the cost function. In Stage II, solver iterations find the local optimum of the cost function for each single starting point in {Phi_top}. This procedure is repeated iteratively (see Fig 1) to reduce the set of phasors {Phi_opt} to a sufficiently small amount (N < 10), which can be considered a good approximation of the global optimum. The B1c+ fields shaped by these vectors {Phi_fin} can be analyzed based on various preference criteria (e.g. minimal B1c+ gradients, maximal peak FA, best possible slice profile, etc.) to select the variant used for the array default setup. In this work, Stage II was performed using the "fminsearch" Matlab function implementing the Nelder-Mead algorithm of optimization, known to be less dependent on the starting point in comparison to other local solvers [24]. The preselection of the starting points performed in Stage I allows for starting the NLS-search close to the summits of the cost function and thus essentially increases the probability that the found solution vector will be globally optimal for the problem (5).

B1+-profiles of the transmit elements. The described B1+ optimization strategy relies on the knowledge of the complex B1+-maps of each Tx-element, b1k(r). However, to acquire the experimental B1+-maps, a fully functional prototype of the array is required. Moreover, measuring these maps experimentally with sufficient reconstruction quality is often a non-trivial problem. The destructive interferences lead to attenuation of the MR-signal, producing reconstruction artifacts or voids in the B1+-maps. If this takes place in the region where B1+ shimming is targeted, reliable optimization becomes problematic or requires significant efforts for data curation.
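To make the two-stage flow of Fig 1 concrete, a Python sketch is given below. It reuses the hypothetical combined_cost function from the earlier sketch; the grid size, sub-sample count, and cut-off are illustrative defaults, and scipy's Nelder-Mead minimize stands in for Matlab's fminsearch.

```python
import numpy as np
from scipy.optimize import minimize

def optimized_multistart(b1_maps, n_grid=17, n_sub=200_000, cutoff=0.9, seed=0):
    """Stage I: score a uniform random sub-sample of the phase grid and keep
    the top vectors; Stage II: refine each of them with Nelder-Mead."""
    rng = np.random.default_rng(seed)
    n_ch = b1_maps.shape[0]
    grid = np.linspace(-np.pi, np.pi, n_grid)

    # Stage I: uniform random sub-sampling of the full grid G^N
    candidates = grid[rng.integers(0, n_grid, size=(n_sub, n_ch))]
    scores = np.array([combined_cost(p, b1_maps) for p in candidates])
    starts = candidates[scores > cutoff * scores.max()]

    # Stage II: local Nelder-Mead refinement from every pre-selected start
    results = [minimize(lambda p: -combined_cost(p, b1_maps), p0,
                        method="Nelder-Mead") for p0 in starts]
    best = min(results, key=lambda r: r.fun)
    return best.x, -best.fun
```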
An alternative, efficient approach is to use numerical electromagnetic simulations to compute the 3D distribution of the electromagnetic fields created by the individual coil elements. After validating the simulated data by experimental MR-measurements, they can be further employed for the computation of the combined B1c+ in any arbitrarily placed region of interest.
Moreover, this allows testing the capabilities of a specific array design in terms of tailoring desired B1+ profiles "in silico" and performing the necessary optimizations of both the physical element arrangement and the electrical circuits [25]. In this work, the electromagnetic fields of the mTX-arrays were calculated using CST Studio Suite (3DS, Dassault Systèmes). The time-domain solver was used for the EM-simulation of the array structures and the whole phantom volume within the coil using 4x10^7 mesh cells. The average computation time using a dual-core Intel Xeon(TM) E2650 CPU and two NVIDIA Tesla(TM) K80 GPUs was 48 hours for each simulated structure. The simulated data were exported as 3D complex H-field values with isotropic voxels of 4x4x4 mm. Further computations were performed using in-house developed Matlab scripts (Matlab 2017a/b, Mathworks, USA).
Testing and validation of the phase-space under-sampling strategy. To test and validate the efficiency of the Stage I strategy in terms of the sub-sampling factor, the optimization of the phase vector for the six paired elements (Tx1-Tx4, Tx6, Tx7) was performed. These elements provide ~90% of the total contribution to the combined B1+ field in the center of the ROI targeted for the testing. Reducing the dimension made it possible to perform the computation over a full phase-space grid, and over grids under-sampled by low factors s, within a reasonable time. The full phase-space grid G_L was built according to Eq (6) using L = 17 nodes. With the span of the {phi_k} components within [-180°..180°], this provides a discretization step d_phi of about 20°. The full grid was sampled according to Eq (7) to create sub-sampled grids U_s with densities varying in the range s = [1..10^4]. The grids with the different sub-sampling factors were used to test the fidelity of approximation of the cost function F_c by the sub-sampled version F_c(U_s).

Usage of CPU and GPU. In general, the efficiency of GPU computations depends on the GPU performance index, the dimensionality of the problem, and the size of the processed arrays. For N = 8 elements, no advantage of GPU usage was found. However, the situation changes significantly for the higher-dimensional problem with N = 16 elements. For the CPU, the computation time grows roughly proportionally to the number of voxels in the optimized ROI. At the same time, for the GPU the computation rate for both Stage I and Stage II calculations remains practically independent of the size of the computed arrays up to 2.5x10^4 voxels in the optimized ROI (corresponding to a 12x12x12 cm volume). S2 Fig shows that the computation time with a modern GPU may become drastically shorter compared to the CPU already starting from a targeted volume of ~1300 voxels. Therefore, in practice, the decision on the usage of GPUs within the computation pipeline should be based on benchmarking small portions of the actual phase grid U_s (e.g. ~0.5%) for the particular dimensionality of the problem and optimization volume.
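Such benchmarking is easiest when the cost evaluation is vectorized over a whole batch of phase vectors, as in the NumPy sketch below; swapping the import for CuPy (a NumPy-compatible GPU array library) runs the identical code on the GPU. The function mirrors the hypothetical combined_cost above and is not the authors' implementation.

```python
import numpy as np  # replace with "import cupy as np" to run the same code on a GPU

def batch_cost(phase_batch, b1_maps, delta_m=0.05):
    """Cost of Eq. (5) for a batch of phase vectors.

    phase_batch: (n_vectors, N_channels) phases in radians
    b1_maps:     complex (N_channels, n_voxels) per-channel B1+ maps
    """
    phasors = np.exp(1j * phase_batch)            # (n_vectors, N)
    mag = np.abs(phasors @ b1_maps)               # |B1c+| per vector and voxel
    cv = mag.std(axis=1) / mag.mean(axis=1)       # coefficient of variation
    f_u = 1.0 / (cv + delta_m)                    # regularized uniformity
    tx_e = mag.mean(axis=1) / np.abs(b1_maps).sum(axis=0).mean()  # MOS/SOM
    return f_u * tx_e
```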
Comparison to a traditional optimal-phase computation method. To analyze the improvement of the homogeneity and the mean value of B1+ obtainable with our optimization method, we compared it to the random multi-start optimization [6]. For the N = 8 optimization, a randomly selected set of 1000 phasor seed values with uniform distribution of the components phi_k in the range [-180°..180°] was used as starting points. The NLS-search was performed using the cost function (5). To provide sufficient statistics for the NLS-search comparison, the number of initial phasors {Phi_0} for the pre-optimized start search was set to 250. This corresponded to a cost-function cut-off level of about 70% from the top (the cut-off at 90% typically provides ~20-30 phasors for Stage II). The comparison of the optimization methods was performed for an ROI within the spherical phantom P1 (described below). The ROI with dimensions Dr = 50x50x50 mm was placed as shown in Fig 5. Because the major intrinsic B1+ inhomogeneity for the surface array is the gradient in the anterior-posterior ("Y-axis") direction, the optimized B1c+ profiles were compared using the relative standard deviation of the mean intensity projection, computed as W_r = std(b1m+)/<b1m+>, and its relative absolute gradient, |grad b1m+|_r = |grad b1m+|/<b1m+>. These projections were computed in the z-direction for the slab Dz = 50 mm.
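These two figures of merit are straightforward to compute from the combined map; the sketch below assumes the ROI array is ordered (x, y, z) and projects along z, which is one plausible reading of the slab geometry described above.

```python
import numpy as np

def projection_metrics(b1c_roi):
    """W_r and relative absolute gradient of the z mean-intensity projection.

    b1c_roi: complex combined B1+ over the ROI, shape (nx, ny, nz)
    """
    proj = np.abs(b1c_roi).mean(axis=2)        # mean intensity projection along z
    w_r = proj.std() / proj.mean()             # W_r = std(b1m+)/<b1m+>
    gx, gy = np.gradient(proj)                 # in-plane gradient components
    grad_rel = np.hypot(gx, gy).mean() / proj.mean()
    return w_r, grad_rel
```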
As the final step, the comparison of the optimized and the random multi-start approaches was performed for a high-dimensional problem (N = 16 elements). The optimal phase-vector search was done using the simulated element B1+-maps of the array loaded with the numerical phantom P2 (described below). The dependence of the achieved optimization result on the number of starting phasors N_start was analyzed using 3 repetitions of the NLS-optimization with N_start = 7500. The cumulative maximum of the cost function was computed as max(F_c^opt[1..N_start]), where N_start increments from 1 to 7500.
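In NumPy this cumulative best-so-far curve is a one-liner; the per-start optima below are a random stand-in for the actual results of the 7500 solver runs.

```python
import numpy as np

f_opt = np.random.default_rng(1).random(7500)     # stand-in per-start optima
cumulative_best = np.maximum.accumulate(f_opt)    # max(F_c_opt[1..N_start])
```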
Coils, phantoms and MRI validation of electromagnetic simulation data. The mTX-array used for the validation consists of two physically independent parts, each comprising eight loop elements. The array is connected to the scanner via a dedicated RF-interface (Rapid Biomedical, Rimpar, Germany), which allows for using the connected coils both in sTx and pTX modes. A detailed description of the array design and hardware specifications can be found in [22]. The driving-voltage phases of the elements can be adjusted in two ways: 1. Individually for each of the 16 elements, using phase-shifting coaxial cables connected to the dedicated sockets of the RF-interface.
2. Pairwise for every two elements, using the 8 RFPA channels available on the scanner in pTX regime. Fig 2 shows a sketch of the array element configuration and location in both the anterior and posterior parts, along with the element-pairing scheme for driving by 8 TX channels in pTX regime. Two phantoms were used for testing the proposed method both in the numerical simulations and in the experimental validation. Phantom P1 comprised six rectangular-cross-section PE bottles loading the posterior part, and a plexiglass sphere (diameter, 160 mm) positioned inside the anterior part (Fig 2, left panel). The bottles and the sphere of P1 were filled with a solution of sugar and NaCl mixed in an experimentally defined proportion providing a permittivity close to that of typical muscle tissue (eps of about 58). The second phantom, P2, is an acrylic glass sphere with a diameter of 200 mm filled with PVP solution prepared as described in [22] and [26].
The spherical phantom was chosen based on practical experience to minimize the number of destructive RF-interferences within the ROIs corresponding to the typical position of an animal's heart relative to the array. Simultaneously, this mimics real animal studies where some of the anterior array elements are more distant from the loading tissue than others.
All MRI measurements were performed using a Magnetom(TM) Terra whole-body 7T scanner (Siemens Healthineers, Erlangen, Germany) equipped with an 8TX-channel RFPA. The FA measurement for the B1+-map reconstruction was done with the double-angle mapping (DAM) technique based on a GRE sequence [27]. Additionally, the B1+-maps were cross-validated with the vendor's B1+-mapping pulse sequence (turbo-FLASH with magnetization preparation [28]) available on the scanner. For the GRE-DAM measurements, the parameters were: TE/TR = 1.8/4000 ms, pixel resolution 2 mm, slice thickness 4 mm. The sequence parameters for the turbo-FLASH B1+-mapping were TE/TR = 1.7/9000 ms, FA = 10°, at the same spatial resolution. The coronal slice providing the best overview of the Tx-profiles of the anterior array elements was chosen for comparison with the CST-simulated maps. The FA-map reconstruction was performed using an in-house developed Matlab script. The agreement of simulated and measured data was checked numerically by linear regression between normalized 1D profiles computed along lines crossing the most prominent features of the simulated and measured 2D FA-maps in each channel. Testing the efficiency of the computed phases for B1+ optimization was done using the same GRE sequence as for the DAM measurements. The initial "zero" phases of the array were set by manual adjustment of a geometrically guessed phase distribution [22]. To visualize the effect of the B1+ optimization, the volumes inside the iso-surface SNR > 30 were compared using a 3D stack of GRE images. SNR was computed on a pixel basis as SNR(x,y,z) = S_i(x,y,z)/sigma_m, where S_i denotes the signal intensity in a pixel and sigma_m is the mean standard deviation of the intensity of the pixels in the "noise pool" represented by 20x20-pixel regions in each corner of the FOV for all acquired slices. Fig 7B shows the dependence of the cost-function progression ratio in the optimization process, defined as F_c({Phi_opt})/F_c({Phi_0}), on the norm of the difference between the initial and final phasor vectors, dPhi_opt = ||Phi_opt - Phi_0||. One can see that the length of the "search trajectory" dPhi_opt is nearly constant (about 2 rad) for the pre-optimized starting phasors, while for the random starting phasors one observes a widely spread distribution of dPhi_opt in the range of 2 to 10 rad. The median value of the cost-function growth ratio achieved with the pre-optimized starting phasors is about 25% higher than that of the random starting points (marked by dashed lines of corresponding colors). The combined B1c+-field in the central sagittal and coronal slices of the optimized region is shown for the zero phase vector (left) and for the phase vectors found by the random (middle) and the proposed optimized multi-start optimization (right); the values of the normalized cost function F_c^opt/F_c^0, the coefficient of variation, and the mean gradient demonstrate the improvement of B1c+ homogeneity achieved by the proposed method. Fig 10A and 10B show the GRE MR-images acquired using the phase-vector settings optimized for N = 8 channels in the P1 phantom. The phase vectors were set using the pTX adjustment platform of the scanner, simulating usage of the array in the "pTX Compatibility Mode". The mean SNR value in the optimized coronal region is improved by about 60%. For the optimization region labeled on panel (a), an increase of about 40% of the volume with SNR > 30 was achieved, as shown on panel (b). The average local gradient of the SNR in the anterior-posterior direction was reduced by about 50%.
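The SNR definition used above translates directly into code; the following sketch assumes the GRE stack is stored as a (slices, rows, cols) array and that the four 20x20 corner patches of every slice form the noise pool, which is one plausible reading of the description.

```python
import numpy as np

def snr_map(stack, corner=20):
    """Pixel-wise SNR = S_i / sigma_m from a GRE image stack (z, y, x)."""
    c = corner
    regions = [stack[:, :c, :c], stack[:, :c, -c:],    # top corners
               stack[:, -c:, :c], stack[:, -c:, -c:]]  # bottom corners
    # sigma_m: mean of the per-slice, per-corner standard deviations
    sigma_m = np.mean([r.std(axis=(1, 2)) for r in regions])
    return stack / sigma_m
```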
Fig 11 shows the experimental result of the optimization of the 16-component phase vector for the phantom P2. The optimized phase vector for the driving voltages of the array elements was set using phase shifters (coaxial cables) connected to the dedicated sockets of the RF-interface. An essential inhomogeneity of b1m+ before the optimization is observed.
Discussion
In this paper, we demonstrated a relatively simple way to balance the computation time against the depth of the optimization search targeted at reaching the best possible homogeneity of the combined B1+-field achievable by fine-tuning of the array by the developer before (or without) subject-specific adjustments in the scanner. The efficiency of approaching the phase vector that is globally optimal for the formulated optimization problem is expected to be close to that of the full phase-space exhaustive search, however achieved with a realistic computation time even for high-dimensional problems (N > 16). To the best of our knowledge, the groups reporting the development of in-house designed transceiver arrays for UHF MRI usually use implicit or explicit pragmatic limitations on the depth of exploration of possible phase combinations in their approaches to the adjustment of the default phases. The results of this work demonstrate that arbitrarily set limitations on the number of starting vectors for a plain random multi-start
multi-start optimization may lead to substantially lower B1+-field homogeneity than the proposed optimized multi-start strategy with an adequately defined phase-space raster for Stage I. In particular, with N_start = 1000 starting vectors and N = 8 optimized phase components, random multi-start optimization using the coefficient-of-variation cost function may lead to a relative gradient that is a factor of 10 higher than the one achieved by the optimized multi-start. The analysis of the high-dimensional problem (N = 16) confirms that arbitrarily set limitations of the starting-vector set may limit the potentially achievable B1+ homogeneity. For example, as shown by Fig 8, for N_start = 1000 the maximal cost-function value achieved by random multi-start would be 20% lower than for the described strategy. The increased depth of the random multi-start search (up to N_start = …).

Problems of higher dimension (N = 24-32 elements) should preferably be solved by stepwise iterative optimization. At each step, the dominant array elements for a specific ROI can be selected, optimized, and combined using the found phase vector. This combined section is then treated as a single element for the optimization of the remaining (or next part of the) elements, as proposed in [29]. The experimental validation of the optimized phasing by MRI measurements confirms the increased uniformity of B1+, leading to improved homogeneity of the SNR in the optimized region. The results of testing arrays with default B1-profiles optimized using the described technique were demonstrated both ex vivo and in vivo for dedicated pig cardiac arrays [22,26] as well as for human arrays [30,31].

A limitation of the current approach is that the globally optimal phase settings found are valid only in the mathematical context of the specific optimization problem, i.e., as the "global maximum/minimum of the specific cost function(s) used in the optimization". The achieved B1 is not necessarily optimal across the "global population" (animals or humans). Moreover, as we demonstrated in [30], human-model-based fixed-phase optimization is most probably limited to a specific cohort (a specific patient weight range and sex). A solution globally optimal for a population of human or animal subjects for static B1 shimming would require a machine- or deep-learning approach with a neural network trained on a large number of B1-maps representing the population [32]. Another approach providing B1+ optimization that is global with respect to a population is dynamic pTX-based B1+ shimming using universal 3D kT pTX pulses designed from multiple B1-maps acquired across the population [33]. Both of these approaches, however, require the preliminary acquisition of patient-specific B1-maps using a fully operational array validated for SAR safety, whereas the described method is primarily targeted at optimization during the hardware development and construction stage.
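As a minimal illustration of the comparison discussed above, and not the authors' implementation, the sketch below (Python/SciPy; the channel count, ROI size, synthetic per-channel maps, and the π/2 Stage I raster over a subset of phases are all assumptions made for the example) minimizes the coefficient-of-variation cost of the combined field |B1c+| = |Σn exp(iφn)·bn| from either random starting phasors or raster-screened seeds:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
N, V = 8, 500                                   # channels, ROI voxels
b = rng.normal(size=(V, N)) + 1j * rng.normal(size=(V, N))  # per-channel B1+ maps

def cost(phi):
    """Coefficient of variation of |B1c+| over the ROI (to be minimized)."""
    field = np.abs(b @ np.exp(1j * phi))
    return field.std() / field.mean()

def multi_start(starts):
    """Local optimization from each start; return the best result."""
    best = None
    for phi0 in starts:
        res = minimize(cost, phi0, method="Nelder-Mead")
        if best is None or res.fun < best.fun:
            best = res
    return best

# Stage I surrogate: a coarse exhaustive raster (step pi/2) over three of
# the N phases, screened by cost; the best candidates seed Stage II.
raster = np.stack(np.meshgrid(*[np.arange(0, 2 * np.pi, np.pi / 2)] * 3))
raster = raster.reshape(3, -1).T
coarse = np.hstack([raster, np.zeros((raster.shape[0], N - 3))])
seeds = coarse[np.argsort([cost(p) for p in coarse])[:10]]

random_starts = rng.uniform(0, 2 * np.pi, size=(10, N))
print("random multi-start CoV:   ", multi_start(random_starts).fun)
print("raster-seeded multi-start:", multi_start(seeds).fun)
```

In line with the argument above, the raster-seeded starts typically reach an equal or lower final cost than the same number of purely random starts.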
Conclusion
In this paper, we propose a time-efficient technique for the calculation of globally optimized hardware phases to be used for the construction and subsequent optimization of transceiver arrays for UHF MRI. The proposed methodology combines the advantages of full coverage of the phase space by an exhaustive search with the high computational time efficiency of solver-based optimization. Besides the final hardware adjustment of phases, the proposed approach allows rapid evaluation of the B1+-shimming capability of different array architectures "in silico". The proposed technique has demonstrated its efficiency in the …

Supporting figure (caption): In both stages, use of the GPU brings acceleration of the computations once the number of voxels in the optimization region exceeds ~1300. One can notice that the speed of GPU computation remains practically constant over a 6-fold increase of the optimization volume, whereas for CPU computation the time increases linearly up to ~2200 voxels and even faster for larger array sizes. The CPU used was an AMD Ryzen 9 3950X (16 cores); the GPU was a GeForce RTX 2080 Ti (68 multiprocessors, 1.545 GHz, compute capability index reported by Matlab = 7.5).
"year": 2021,
"sha1": "dc984dc30c9aaa955d920d3cc05a5ae29d026dec",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0255341&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "dc984dc30c9aaa955d920d3cc05a5ae29d026dec",
"s2fieldsofstudy": [
"Engineering",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Comparative performance evaluation of QIAreach QuantiFERON-TB and tuberculin skin test for diagnosis of tuberculosis infection in Viet Nam
Current WHO-recommended diagnostic tools for tuberculosis infection (TBI) have well-known limitations, and viable alternatives are urgently needed. We compared the diagnostic performance and accuracy of the novel QIAreach QuantiFERON-TB assay (QIAreach; index) with the QuantiFERON-TB Gold Plus assay (QFT-Plus; reference). The sample included 261 adults (≥18 years) recruited at community-based TB case-finding events. Of these, 226 underwent tuberculin skin tests (TST; comparator) and 200 returned for interpretation. QIAreach processing and TST reading were completed at lower-level healthcare facilities. We conducted matched-pair comparisons of QIAreach and TST with QFT-Plus, calculated sensitivity, specificity and the area under a receiver-operating-characteristic curve (AUC), and analyzed concordant-/discordant-pair interferon-gamma (IFN-γ) levels. QIAreach sensitivity and specificity were 98.5% and 72.3%, respectively, for an AUC of 0.85. TST sensitivity (53.2%) at a 5 mm induration threshold was significantly below that of QIAreach, while specificity (82.4%) was statistically equivalent. The corrected mean IFN-γ level (0.08 IU/ml) and the corresponding empirical threshold (0.05 IU/ml) of false-positive QIAreach results were significantly lower than the manufacturer-recommended QFT-Plus threshold (≥0.35 IU/ml). Despite QIAreach's higher sensitivity at equivalent specificity to TST, the high number of false-positive results and low specificity limit its utility and highlight the continued need to expand the diagnostic toolkit for TBI.
Discussion
Our study found that QIAreach had a high sensitivity when compared to QFT-Plus (reference standard). In direct comparison with TST using a 5 mm induration threshold, the assay also performed well, achieving a significantly higher sensitivity with a statistically equivalent specificity. However, the assay produced a high number of false-positive results, which resulted in a significantly lower specificity compared to TST using a 10 mm induration threshold, which was the standard of care for the majority of persons with TBI according to national guidelines in Viet Nam at the time of the study [22]. The assay's high sensitivity was concordant with recent studies that reported similarly high sensitivity versus QFT-Plus, ranging from 99.1 to 100% [23-25]. Moreover, QIAreach displayed a high diagnostic performance in our study, which may qualify the assay as sufficiently accurate for a diagnostic test [26]. As studies have shown a better predictive ability of IGRAs than TSTs, the level of concordance with QFT-Plus and the diagnostic accuracy suggest that QIAreach may share this advantage (Table 3).

Table 3. Sensitivity, specificity and ROC AUC of QIAreach. Sensitivity, specificity, positive predictive value, negative predictive value, and area under the receiver-operating-characteristic curve (ROC AUC) of QIAreach were calculated per manufacturer instructions with QFT-Plus as the reference standard (n = 261).

To our knowledge, this was the first study to include a matched comparison between QIAreach and TST. The latter currently represents the programmatic standard for TBI diagnosis in Viet Nam as well as many other HBCs [28]. However, there is urgent need to validate new TBI diagnostic tools following the detection of substantial variability in TST positivity despite sourcing the tuberculin from the same manufacturer (PPD-Bulbio) [10]. This variability has been reflected in a decline in positivity and a subsequent increase in resource requirements for meeting national TPT targets [29]. In response, the Viet Nam NTP issued new rapid guidance in September 2022 to lower the positivity threshold to 5 mm and improve positivity and programmatic scale-up of TPT [30]. Thus, our analysis plan included both 10 mm and 5 mm induration sizes to accommodate the latest developments, with the former being the standard of care at the time of the study. As the NTP has shifted its focus towards the more aggressive strategy of higher sensitivity in exchange for lower specificity, in the scenario of a 5 mm threshold our results show that QIAreach may serve as a viable clinical alternative to TST.

This is also one of the first published studies to validate QIAreach outside of the health-facility setting, in line with its intended purpose of expanding access to IGRA closer to the point of care. Operationally, QIAreach offers many of the key advantages of QFT-Plus compared to TST. Only one visit is required to complete the test, while TST still requires one visit for testing and a second for interpretation [31,32]. In our research, almost a quarter of recruited participants did not agree to take the test or did not present for results interpretation. These data exemplify the hindrance created by the inconvenience of the TST. Clinically, the T-cell-based QIAreach offers protection from the array of confounders commonly affecting TST results, such as age, nutrition, immunology, genetics, BCG vaccination, and cross-reactivity with non-tuberculous mycobacteria [33].
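For readers who wish to reproduce this kind of matched-pair analysis, the sketch below (Python/SciPy) computes sensitivity and specificity against the reference standard and applies the exact McNemar test to the discordant pairs; the counts are placeholders, not the study data:

```python
from scipy.stats import binomtest

def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity from a 2x2 table vs. the reference."""
    return tp / (tp + fn), tn / (tn + fp)

# Placeholder 2x2 counts versus QFT-Plus (reference); not the study data.
sens, spec = sens_spec(tp=65, fn=1, tn=120, fp=46)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}")

# McNemar's test uses only the discordant pairs: b (index+/comparator-)
# and c (index-/comparator+); the exact version is a binomial test.
b, c = 30, 8
p = binomtest(b, b + c, 0.5).pvalue
print(f"McNemar exact p = {p:.4f}")
```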
Simultaneously, QIAreach also aims to resolve key challenges of the QFT-Plus assay by offering simplified operating procedures, faster turnaround time, and greater flexibility of deployment. QFT-Plus requires laboratory infrastructure, technical expertise, and expensive equipment, while the QIAreach eStick-eHub architecture aims to emulate the success of other cartridge-based diagnostic tools for TB in low-resource settings that have little to no access to sophisticated laboratory capacity. The rapid turnaround time that characterizes the eStick-eHub design also enables higher throughput of up to 24 samples per eHub per hour [34].
Nevertheless, our study also exposed key performance issues in our setting that will need to be evaluated further to facilitate uptake of this new tool. QFT-Plus needs four BCTs (TB1, TB2, Nil and mitogen) to optimize results interpretation. Meanwhile, QIAreach relies on only a single BCT, which is designed to maximize sensitivity with antigens optimized to stimulate CD4 and CD8 T-cells. Past studies have found this method may increase IFN-γ levels, which would inflate the number of results considered positive [17,35]. These factors may have contributed to the low specificity of QIAreach observed in this study, as evinced in the analysis of uncorrected (TB2) and corrected (TB2-Nil) mean IFN-γ levels. Specifically, the uncorrected and corrected IFN-γ levels were 1.6-1.7 times higher in participants testing negative on QFT-Plus than on QIAreach. Thus, the absence of the negative (Nil) control likely contributed to the impaired specificity, especially if QIAreach positivity was calibrated using QFT-Plus thresholds.
The lower specificity in our study was discordant with the limited available evidence base. Two hospital-based studies in Italy and Japan detected high concordance between QIAreach and QFT-Plus performance, and specificity in particular. The Italian study observed a specificity versus QFT-Plus of 93.4% and an overall concordance with QFT-Plus of 95.7% (κ = 0.96) among 130 persons with confirmed TB and 174 healthy controls [23]. Similarly, the Japanese study, conducted in 41 persons with active TB and 42 healthy individuals, recorded a specificity of 97.6% among the TB patient cohort with an overall concordance of 98.8% versus QFT-Plus in the sample (κ = 0.98) [24]. Moreover, that study highlighted that the IFN-γ concentration cutoff point for QIAreach was similar to that of the QFT-Plus assay (0.35 IU/mL) for the active TB population [24], which our study did not corroborate. Based on these data, a hypothesis to explain the low specificity of QIAreach compared to QFT-Plus may be the study setting. Contrary to these two examples, our study recruited participants in the community, with a comparatively lower rate of TB infection and disease. This setting may also be exposed to confounding and bias, reflected in the greater variance in the QFT-Plus results as seen in the high indeterminate rate (17/278 = 6.1%).
It is evident that more work is needed to specify the utility and role that QIAreach can play in the global scale-up of TPT. Studies should also incorporate economic and market analyses once the product moves towards commercialization to address the most common criticism of IGRAs, their costs [36], as health economic analyses have estimated IGRAs to be more cost-effective than TST [37]. For now, the test is not recommended in national guidelines. At the current diagnostic accuracy, it may only find limited application in priority groups with elevated risk of progression and in settings where the benefits of aggressive intervention outweigh the health-system and patient costs of unnecessary treatment. An example of such a priority group may consist of household and close contacts of MDR-TB patients, as a recent study reported a strong correlation between results from TST and QFT-Plus when detecting TBI in MDR-TB contacts, concluding TST could be used in place of QFT-Plus [38].
Our study was limited in a number of ways that may affect its generalizability. The aforementioned lack of a formal health economic analysis prevents building an investment case for policymakers and multilateral funding agencies. Another key limitation was the lack of comparison between QIAreach and bacteriologically confirmed individuals with TB, including children, which precluded the determination of the "true" sensitivity and specificity of the assay in Viet Nam. In addition, large gaps in the TBI cascade for TST resulted in a smaller sample size, which may have deleteriously affected the statistical comparison between QIAreach and TST, resulting in their respective specificities showing no significant difference at a 5 mm threshold. These drops in the cascade may also have introduced bias into the results.
Conclusions
Currently, TST and IGRA are the only recommended diagnostics for TBI. However, the limitations of these methods still impair the accuracy, effectiveness and uptake of these tools and the scale-up of TPT overall. This study provided evidence on the performance and accuracy of the QIAreach assay compared to TST and QFT-Plus. Our results showed high sensitivity and AUC classification, but also exposed a suboptimal specificity, thereby potentially affecting its value and utility, particularly in low-resource, high-burden settings. Thus, more evidence for this new IGRA assay and other new diagnostic tools for TBI remains urgently needed. Nevertheless, this study was among the first to evaluate QIAreach along several dimensions and contributes to the available evidence base to inform future research, programmatic implementation and policy development towards reducing the global seedbed of TB.
Table 2. QIAreach, QFT-Plus and TST results. Contingency tables of QIAreach and TST results compared to the reference standard. TST results are presented at 5 mm and 10 mm induration thresholds. Chi-squared tests were used to calculate p-values; p < 0.05 was considered statistically significant. ¥ McNemar's test was used to compare QIAreach and QFT-Plus with TST at thresholds of 5 mm and 10 mm.
Table 4. Sensitivity, specificity and concordance of QIAreach compared to TST in individuals with a valid TST result (n = 200). Sensitivity, specificity, positive predictive value, negative predictive value, and AUC of the tuberculin skin test (TST) at 5 mm and 10 mm induration thresholds for the subset of individuals with a TST result, with QFT-Plus as reference standard (n = 200). Chi-squared tests were used to calculate p-values; p < 0.05 was considered statistically significant.
"year": 2023,
"sha1": "de9f223a541a70d471baba137723dcfefec06a44",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-023-42515-1.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c118e7c2cf3e43908a50779bbd90e2922ec0e96c",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Proton Pump Inhibitors Inhibit Pancreatic Secretion: Role of Gastric and Non-Gastric H+/K+-ATPases
The mechanism by which the pancreas secretes high HCO3− has not been fully resolved. This alkaline secretion, formed in pancreatic ducts, can be achieved by transporting HCO3− from serosa to mucosa or by moving H+ in the opposite direction. The aim of the present study was to determine whether H+/K+-ATPases are expressed and functional in human pancreatic ducts and whether proton pump inhibitors (PPIs) have an effect on them. Here we show that the gastric HKα1 and HKβ subunits (ATP4A; ATP4B) and non-gastric HKα2 subunits (ATP12A) of H+/K+-ATPases are expressed in human pancreatic cells. The pumps have similar localizations in duct cell monolayers (Capan-1) and human pancreas, and notably the gastric pumps are localized on the luminal membranes. In Capan-1 cells, PPIs inhibited recovery of intracellular pH from acidosis. Furthermore, in rats treated with PPIs, pancreatic secretion was inhibited, but the concentrations of major ions in the secretion followed similar excretory curves in control and PPI-treated animals. In addition to HCO3−, the pancreas also secretes K+. In conclusion, this study calls for a revision of the basic model of HCO3− secretion. We propose that proton transport drives secretion and that, in addition, it may provide a protective pH buffer zone and K+ recirculation. Furthermore, it seems relevant to re-evaluate whether PPIs should be used in treatment therapies where pancreatic functions are already compromised.
Introduction
Digestive processes along the gastrointestinal tract are aided by acidic and basic secretions from a number of epithelia. In particular, the pancreas and the stomach are the most avid base (HCO3−) and acid (H+) secretors, respectively. The gastric H+ secretory mechanisms are well established; however, the cellular mechanism by which pancreatic duct cells secrete an almost isotonic HCO3− fluid has long been a challenge to epithelial physiologists.
The current ion transport model for the pancreatic HCO3− secretor, the duct cell, involves two machineries on the two epithelial membranes: first, cells accumulate cellular HCO3− with the help of a basolateral Na+-HCO3− cotransporter (pNBC, NBCe1) and a Na+/H+ exchanger (NHE1), together with carbonic anhydrase; second, HCO3− efflux occurs via co-operation between Cl− channels and Cl−/HCO3− anion exchangers of the SLC26A6 family, e.g., SLC26A6, on the luminal membrane [1]. The Cl− channels are the cystic fibrosis transmembrane conductance regulator (CFTR) Cl− channel, which may have some HCO3− permeability [2,3], and the Ca2+-activated Cl− channels, such as TMEM16A/ANO1 [4]. Furthermore, K+ channels (e.g., KCa3.1, KCa1.1, KCNQ1) maintain the membrane potential and, together with the Na+/K+-ATPase, provide the driving force for anion secretion [5-7]. Na+ and water follow passively. Nevertheless, this model can only explain the production of 80-100 mM NaHCO3 in secreted fluid, yet the human pancreas can secrete up to 140 mM NaHCO3.

In addition to NBCs and NHE, earlier studies have shown vacuolar H+-ATPase (V-ATPase) activity on the basolateral membrane of pancreatic ducts by intracellular pH (pHi) measurements and use of the V-ATPase inhibitor bafilomycin A1 [8-11]. Nevertheless, whether the V-ATPase plays a significant role in pancreatic HCO3− secretion is not clarified, as, for example, in guinea pig pancreatic ducts bafilomycin A1 could not inhibit agonist-stimulated HCO3− and fluid secretion [12,13]. Therefore, in the present study we have focused on the function of H+/K+-ATPases (pumps), which are pharmacologically approachable and physiologically relevant. Such H+/K+ pumps have not been proposed for HCO3−-secreting tissues, except in our earlier study on rat pancreatic ducts [14]; rather, they have well-established roles in acid secretion and in H+ and K+ homeostasis in other tissues.

The H+/K+-ATPases are classified into two subfamilies, gastric and non-gastric (the latter also called colonic), coded by ATP4A and ATP12A. The gastric H+/K+-ATPase is expressed in stomach parietal cells, kidney distal nephrons [15-17] and cochlea [18,19], where it is responsible for H+ secretion, K+ absorption and K+ recirculation, respectively. The non-gastric H+/K+-ATPase is present in several epithelial tissues including colon, kidney, skin, placenta, and prostate, where it is associated with acid-base or K+ and Na+ homeostasis [17,20-22]. Each pump is composed of two catalytic α-subunits and two regulatory β-subunits. The gastric α-subunit (HKα1) assembles with the gastric β-subunit (HKβ), while the non-gastric α-subunit (HKα2) can borrow the gastric β-subunit and the β3/β1-subunits of the Na+/K+-ATPase [20,23-25]. The gastric H+/K+-ATPase is the primary target in treatment of peptic and duodenal ulcers and reflux diseases [26]. Proton pump inhibitors (PPIs), such as omeprazole, are activated in the acid environment of the secretory canaliculus of parietal cells and bind covalently to cysteines of the ATPase [26]. Another, experimental class of ATPase inhibitors are potassium-competitive acid blockers (P-CABs), such as SCH28080, though at high concentrations these may also inhibit the non-gastric H+/K+-ATPase [27,28].
Our hypothesis is that the H+/K+-ATPases may be important in supporting pancreatic function, which may be of particular relevance in the human pancreas. Most simplistically, one could envisage that these ATPases would pump H+ out towards the interstitium and provide HCO3− for luminal transport and thus fluid secretion.
Thus, the aim of this study was to establish whether human pancreatic ducts express functional gastric and/or non-gastric H+/K+-ATPases and whether H+/HCO3− transport and whole-pancreas secretion are sensitive to proton pump inhibitors (PPIs). For this purpose we have used human cells/tissue and performed in vivo studies on the rat pancreas, where H+/K+-ATPases are expressed, as established in our earlier study [14]. We show that human duct cells express subunits of both gastric and non-gastric H+/K+-ATPases and that these exhibit unusual localization patterns. We propose that these pumps have physiological functions in pancreatic H+/HCO3− transport and fluid secretion, and we speculate on their additional role in mucosal protection and K+ recirculation. Most importantly, the present studies show that proton pump inhibitors inhibit pancreatic secretion, and we speculate about the consequences of using these drugs as treatment therapies in several pancreatic diseases.
Ethical Approval
The
Intracellular pH (pHi) was estimated from changes in the fluorescence emission (at 510 nm) from 15-20 cells after excitation at 495 and 440 nm. Signals for each batch of cells were calibrated in situ with 1 μM of the ionophore carbonyl cyanide m-chlorophenyl hydrazone (CCCP), and the fluorescence ratios and pHi were fitted to a calibration curve. A standard ammonium pre-pulse method was used to study H+ transport. Tissues were exposed to ammonium pulses (2-3 min), then ammonium was removed, and pHi recovery rates from acidosis were determined from the initial slopes of the pHi changes and expressed as dpH/dt (i.e., pH units/min). The following common representative acid blockers (PPIs and P-CABs) were used: omeprazole (10 μM) and SCH28080 (10 μM) (Sigma Aldrich). Acidified ethanol (0.15 M HCl in 75% ethanol) was used to prepare the omeprazole stock solutions for these experiments, as omeprazole needs an acid environment to be activated.
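The recovery rate dpH/dt is the initial slope of the pHi trace after removal of ammonium. A minimal sketch of that fit (Python/NumPy; the synthetic trace and the 30 s fitting window are illustrative assumptions):

```python
import numpy as np

def initial_slope(t, ph, window_s=30.0):
    """Fit a line to the first window_s seconds of recovery; pH units/min."""
    m = t <= t[0] + window_s
    slope_per_s = np.polyfit(t[m], ph[m], 1)[0]
    return slope_per_s * 60.0

t = np.arange(0, 120, 2.0)                 # s, sampled every 2 s
ph = 7.2 - 0.5 * np.exp(-t / 60.0)         # synthetic recovery from pH ~6.7
print(f"dpH/dt = {initial_slope(t, ph):.3f} pH units/min")
```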
Reverse transcription polymerase chain reaction (RT-PCR) and real-time PCR
RT-PCR and real-time PCR were carried out as detailed in our recent work [4]. Briefly, cells were cultured to confluence and RNA was isolated with the RNeasy Mini Kit (Qiagen 74104). For real-time PCR, cDNA was synthesized from 5 μg RNA template per reaction using the RevertAid First Strand cDNA Synthesis Kit (Fermentas #K1622) with oligo(dT)18 primers and RevertAid M-MuLV reverse transcriptase, and then purified using the GenElute PCR Clean-Up Kit (Sigma, NA1020). The purified cDNA was quantified by absorbance at 260/280 nm, and 100 ng was used as template for each PCR reaction. The PCR reactions were run using LightCycler 480 SYBR Green I Master (Roche, 04707516001) with the following parameters: preincubation for 5 min at 95°C followed by 45 amplification cycles of 10 s at 95°C, 1 min at 55°C, and 30 s at 72°C. A melting curve was acquired after the PCR with 5 s at 95°C, 1 min at 65°C, subsequent heating up to 97°C, and then cooling for 10 s at 40°C. Reactions were performed in triplicate and repeated four times. Table 1 shows the primers used; these were synthesized by MWG Biotech or TAG Copenhagen A/S (Copenhagen, Denmark). Four house-keeping genes, 18S ribosomal RNA (18SrRNA), β-actin, β-glucuronidase (GUSB) and glutaminyl-tRNA synthetase (QARS), were used for normalization. These genes have relatively stable expression in both normal and cancerous pancreas [30].
In vivo animal experiments
The experiments were performed on male Wistar rats, and the surgical procedures were similar to those described earlier for mice [31]. Briefly, animals were anaesthetized with isoflurane gas, placed on a heated surgical table and maintained at 38°C. The jugular vein was cannulated, and thereafter anaesthesia was maintained with intravenous injection of 2 mg/100 g animal pentobarbital hourly or as needed. The abdomen was opened and the proximal end of the bile duct was ligated. The pancreas and the common pancreatic-bile duct were located, and the duct was cannulated with a polyethylene cannula. The pancreatic juice was collected every 15 minutes. First, the basal secretion was collected for 30 min, and then secretion was stimulated with a constant intravenous infusion (0.03 ml/min) of secretin (10 pmol/min/animal). Pancreatic juice samples were collected on ice into weighed vials; secretion rates were calculated and corrected for animal weight. Pancreatic juice samples were stored at −20°C for further analysis. At the end of the experiment, animals were euthanized with a pentobarbital overdose and samples of the stomach contents were collected.
Administration of PPI
In the acute experiments, single doses of omeprazole and SCH28080 were administered by intravenous injection through the jugular vein. Omeprazole was dissolved in 40% polyethylene glycol (PEG 400) [32] and SCH28080 was dissolved in a 0.4% methylcellulose-saline suspension [33]. The doses were chosen according to previous studies and injected two hours before collecting pancreatic secretion [33,34]. Omeprazole was given at doses of 5 mg/kg and 20 mg/kg; SCH28080 was given at 10 mg/kg. In matched controls, animals were injected with the appropriate vehicle solutions. In the long-term treatment study, animals were treated daily by subcutaneous injections of either 5 mg/kg omeprazole or vehicle (40% PEG) for 30 ± 2 days. The final dose of omeprazole was given the day before the planned operation.
pH and ion concentrations in pancreatic juice
To avoid contamination from basal and bile secretion, the first three samples were excluded from the pH and ion analyses. The pancreatic juice samples were equilibrated with 5% CO2/air for 30 min and the pH was measured using a glass pH combination electrode (Hanna Instruments, no. HI 1083). HCO3− concentrations were calculated using the Henderson-Hasselbalch equation. In order to determine whether the PPIs were working as predicted in rats, the pH of the stomach contents was also measured, similar to published studies [35]. Stomach contents were centrifuged to obtain the liquid fraction, which was diluted with distilled water at a 1:1 ratio before the pH was measured. The following analyses were performed on the pancreatic juice samples.
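With the samples equilibrated at a known 5% CO2, the Henderson-Hasselbalch equation pH = pKa + log10([HCO3−]/(s·pCO2)) can be solved directly for [HCO3−]. A worked sketch (Python; pKa = 6.1 and the CO2 solubility coefficient s = 0.03 mM/mmHg are the standard constants, and the example pH is a placeholder):

```python
def bicarbonate_mM(ph, pco2_mmHg=0.05 * 760, pKa=6.1, s=0.03):
    """[HCO3-] in mM from Henderson-Hasselbalch at a given pCO2 (mmHg)."""
    return s * pco2_mmHg * 10 ** (ph - pKa)

# e.g., a juice sample equilibrated with 5% CO2 measuring pH 7.9:
print(f"{bicarbonate_mM(7.9):.0f} mM HCO3-")   # ~72 mM
```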
Concentrations of Cl− were determined using the QuantiChrom Chloride Assay Kit according to the supplier's guidelines (BioAssay Systems). Samples were diluted 10×, transferred to a 96-microwell plate together with the chloride assay reagent, absorption at 610 nm was measured, and Cl− concentrations were calculated from the standard curve. Na+ and K+ concentrations were measured using an FLM3 flame photometer (Radiometer Copenhagen). Lactate concentrations were measured using a Lactate Assay Kit (Sigma Aldrich) and phosphate concentrations were measured using the QuantiChrom Phosphate Assay Kit (BioAssay Systems), following the manufacturers' instructions.
Statistics
For real-time PCR, relative quantification (2^−ΔΔCt) was used, where Ct denotes the threshold cycle. Transcript levels were normalized to the house-keeping genes (ΔCt) and then to the expression in Capan-1 cells (ΔΔCt). Protein levels were normalized to β-actin and then to the Capan-1 protein level. Differences in gene and protein expression were tested using one-way analysis of variance (ANOVA) in SigmaPlot 11, with P < 0.05 accepted as statistically significant. Data from functional measurements (pHi and pancreatic secretion) are presented as original recordings and summaries showing the mean values ± standard error of the mean (SEM). Control and test pHi measurements were made on the same cells, and n refers to measurements on different batches of cells; a paired Student's t-test was applied. For the ion concentration graphs, raw data were binned into 1 μl/min-kg intervals, and each symbol shows the mean value ± SEM of secretion rate (x-axis) and ion concentration (y-axis). Statistical analysis of the animal-experiment data was performed using Student's t-test and ANOVA, with P < 0.05 accepted as statistically significant and denoted with asterisks.
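The 2^−ΔΔCt step can be written out explicitly. A minimal sketch (Python/NumPy; the Ct values are placeholders, and averaging the Ct of the four house-keeping genes is one common normalization choice, assumed here):

```python
import numpy as np

def fold_change(ct_target, ct_housekeeping, ct_target_ref, ct_housekeeping_ref):
    d_ct = ct_target - np.mean(ct_housekeeping)               # delta-Ct, sample
    d_ct_ref = ct_target_ref - np.mean(ct_housekeeping_ref)   # delta-Ct, Capan-1
    return 2.0 ** -(d_ct - d_ct_ref)                          # 2^-(ddCt)

# Placeholder Ct values: a target gene in one cell line vs. Capan-1,
# normalized to four reference genes.
print(fold_change(27.0, [18.1, 20.3, 24.6, 22.0],
                  25.5, [18.0, 20.1, 24.8, 22.2]))
```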
Human duct cells express gastric and non-gastric H+/K+-ATPases
The human pancreatic duct adenocarcinoma cell lines Capan-1, CFPAC-1 and PANC-1 are commonly used as human pancreatic duct models to study the expression and function of different ion transporters. Capan-1 and PANC-1 cells express functional CFTR, while CFPAC-1 cells carry the F508 deletion in CFTR, so the protein expression and function are defective. We used the three cell lines for RT-PCR and the results are shown in Fig 1A. We found expression of the gastric H+/K+-ATPase α subunit (HKα1, 200 bp) and β subunit (HKβ, 136 bp), as well as the non-gastric H+/K+-ATPase α subunit (HKα2, 339 bp), in all cell lines. Real-time PCR analysis is also shown in Fig 1; expression levels of HKα1, HKβ and HKα2 transcripts in the different cell lines were compared with respect to Capan-1 cells. Expression of gastric and non-gastric H+/K+-ATPases at the protein level was determined in the three cell lines, with protein extracts from mouse stomach and colon as positive controls (Fig 1B). The HKα1 band at ~115 kDa was detected in cell lysates of all three cell lines and in the stomach. In addition, in Capan-1 cells there was a noticeable band at ~100 kDa. HKα2 and HKβ were also detected in all cell lines. For HKα2, a band at ~100 kDa was detected in colon and in the three duct cell lines; a weaker band at ~115 kDa was also detectable. Similar results were observed in the rat pancreas [14]. For the HKβ subunit the expected band size for the core protein is 35 kDa, and bands at higher sizes indicate glycosylated subunits [36], most prominently glycosylated in the stomach sample. In pancreatic duct cell lysates, we detected a band with highest intensity at about 40 kDa, similar to the stomach and also seen in the rat pancreas [14], which may indicate a lower degree of glycosylation of the subunit. Additionally, higher bands from 50 to 65 kDa were observed in all cell lines.
Localization of H+/K+-ATPases in human pancreatic duct cell lines and human pancreas
The expression and localization of the H+/K+-ATPases were further analyzed using immunofluorescence and confocal microscopy. Immunoreactivity for the HKα1, HKα2 and HKβ subunits was observed in all three duct cell lines (data not shown), but here we focus on Capan-1 cells, which form pancreatic duct epithelia with characteristic ion transporters when grown on permeable membranes [4]. Fig 2 shows images of the H+/K+-ATPase α1, β and α2 subunits in non-stimulated cells. For all subunits, some immunoreactivity was detected intracellularly, e.g., in vesicles, but we also observed expression of the pumps on the plasma membranes. The gastric HKα1 subunit was most strongly expressed on and close to the luminal membranes (Fig 2A). The gastric HKβ subunit was predominantly found on the luminal side of the epithelium (Fig 2B). The non-gastric HKα2 subunit localized to the luminal, and importantly also to the lateral, membranes of the epithelium (Fig 2C). Sections of whole human pancreas showed staining similar to Capan-1 cells. Fig 3 shows images of ducts of various sizes and the surrounding pancreatic acini, which did not stain with the HK antibodies. The gastric HKα1 localized on or close to the luminal membrane and in sub-membrane vesicles of human pancreatic ducts. The gastric HKβ subunit was clearly localized to the luminal membrane and, more diffusely, possibly in vesicles, proximal to the basal plasma membrane of pancreatic duct cells. The non-gastric HKα2 subunit was detected intracellularly as well as on the plasma membranes.
Intracellular pH in Capan-1 cells is sensitive to proton pump inhibitors
In order to evaluate the function of the H+/K+-ATPases, we applied a method commonly used to study HCO3−/H+ transport, i.e., monitoring pHi recovery from an acid load with the NH4+/NH3 prepulse technique (Fig 4A and 4C). In order to eliminate the contribution of Na+ and/or HCO3− transporters to pHi recovery, we used Na+- and/or HCO3−-free solutions for bath perfusion. Capan-1 cells were continuously stimulated with secretin (10−9 M) during the experiments to imitate a stimulated ductal epithelium [37]. The pHi recovery rate of secretin-stimulated Capan-1 cells in HCO3−-free physiological buffer was 0.313 ± 0.018 pH units/min, and it was significantly reduced to 0.036 ± 0.003 pH units/min in the absence of extracellular Na+ (n = 15). However, Capan-1 cells were still able to defend pHi even without HCO3− transporters and Na+/H+ exchangers. Importantly, this Na+-independent pHi recovery was reduced by H+/K+-ATPase inhibitors (Fig 4B and 4D). The gastric H+/K+ pump inhibitor omeprazole inhibited 75% of the Na+-independent pHi recovery (n = 6), while SCH28080 reduced the Na+-independent pHi recovery by 52% (n = 5). In addition, the PPIs also reduced pHi recovery when cells were returned to the control, Na+-containing buffer (Fig 4, phase III vs. I).

Fig 1 (legend fragment): antibodies against gastric H+/K+-ATPase α subunits (HKα1, EPR12251), non-gastric H+/K+-ATPase α subunits (HKα2, Sigma HPA039526) and gastric H+/K+-ATPase β subunits (HKβ, Sigma A274) were used. The loading control was β-actin, detected at 43 kDa. All lanes were loaded with 60 μg of protein. Stomach and colon gels were run separately. The lower bar graphs show expression of the subunits normalized to actin, using the bands at 115 kDa (HKα1), 100 kDa (HKα2) and 45 kDa (HKβ). Data are from 3-4 independent experiments; * indicates P < 0.05 and ** P < 0.001 compared to Capan-1. doi:10.1371/journal.pone.0126432.g001
Proton pump inhibitors reduce pancreatic secretory rates
The crucial question was whether the H+/K+ pumps contribute to pancreatic secretion. Therefore, the following acute and long-term in vivo experiments with proton pump inhibitors were performed on rats. In the first series of experiments, animals had free access to food prior to surgery. Omeprazole was given intravenously two hours before pancreatic secretion was induced with secretin. Two doses of omeprazole (5 mg/kg and 20 mg/kg) were tested and the results are shown in Fig 5A. Clearly, 5 mg/kg omeprazole had no significant effect on pancreatic secretion compared to the vehicle infusion (n = 4 and 6, respectively). The high dose of omeprazole (20 mg/kg) tended to reduce the pancreatic secretion rate by about 30% (n = 6); however, statistical significance was not reached with this number of experiments. The pancreatic juice contained a relatively high content of secreted protein, i.e., 29 g/l, indicating enzyme secretion from acini. It is well recognized in pancreatic physiology that non-fasted animals have higher fluid and enzyme secretion, due to endogenous hormones/transmitters that can activate acinar and duct secretion. Therefore, in all following experiments animals were fasted overnight. Fig 5B shows the effect of the low and high doses of omeprazole, which now caused significant reductions in secretion, apparent 30 minutes after stimulation and maintained throughout the experiment. Integrated secretion in the first and second hour after secretin stimulation was 663 ± 42 and 804 ± 39 µl/h/kg in control animals (n = 4), and significantly lower in test animals in the same sample periods, i.e., 484 ± 51 (P = 0.018) and 563 ± 68 µl/h/kg (P = 0.014) after low-dose omeprazole treatment (n = 4), and 487 ± 58 (P = 0.027) and 581 ± 60 µl/h/kg (P = 0.013) after high-dose omeprazole treatment (n = 4). Interestingly, 5 and 20 mg/kg omeprazole had similar effects, indicating that the maximal effective dose was already reached at 5 mg/kg. This dose of omeprazole also effectively inhibited basal gastric acid secretion in rats, though the 20 mg/kg dose was required for inhibition of pentagastrin-stimulated gastric secretion [34]. In our experiments, we also verified that omeprazole was acting on gastric acid secretion by measuring stomach pH: in control animals the stomach pH was 4.16 ± 0.08 and in omeprazole-treated animals it was 5.05 ± 0.13 (n = 15, P = 1.00 × 10−5). In order to test the possible contribution of the non-gastric H+/K+ pump to pancreatic secretion, the acute effects of SCH28080 were examined. SCH28080 inhibits the gastric pump and, at high doses, reportedly also inhibits the non-gastric H+/K+ pump [20,27,28]. Fig 5C shows that administration of 10 mg/kg SCH28080 resulted in a significant reduction of the secretory rates. During the first hour of secretin stimulation, secretion was reduced from 739 ± 50 to 438 ± 65 µl/h/kg (P = 0.002) and during the second hour from 1174 ± 92 to 501 ± 117 µl/h/kg (P = 0.0009), comparing control and treatment groups (n = 7; 6). The inhibition of secretion appeared more pronounced with SCH28080 than with omeprazole relative to their respective controls. Interestingly, given the doses used in our studies, we would have expected weaker inhibition by SCH28080, due to its higher expected ED50 value: an ED50 of 0.8 mg/kg for omeprazole and 3 mg/kg for SCH28080 was determined for rodent gastric function [33,38].
The above experiments show that acute treatment with both types of blockers (for simplicity denoted here PPIs) reduced pancreatic secretion. In order to imitate PPI treatment in humans, it was relevant to investigate the outcome of long-term omeprazole treatment. Animals were treated with omeprazole (5 mg/kg) or vehicle for 30 days, and subsequently pancreatic secretion was monitored in the anaesthetized animals. The results of the long-term study are presented in Fig 5D. Comparing control and treatment groups, during the first hour of secretin stimulation the secretion was reduced from 906 ± 130 to 356 ± 159 µl/h/kg (P = 0.019) and during the second hour from 876 ± 49 to 403 ± 179 µl/h/kg (P = 0.036) (n = 4; 4). Together, these data on short- and long-term treatment with PPIs show that they have a significant inhibitory effect on pancreatic secretion. Do they also affect the electrolyte composition of the pancreatic juice, and can that reveal how the secretion is formed?

Fig 4 (caption): Effect of H+/K+-ATPase inhibitors on pHi recovery. A: Representative recording of a pHi measurement in Capan-1 cells challenged with an ammonium pulse in Na+-containing physiological buffer, then in a Na+-free buffer without or with omeprazole (10 μM). Dotted lines show the slope of the pHi recovery from acidosis, i.e., dpH/dt. The pHi recovery was determined when cells were returned to control conditions (periods I, II and III) and in periods with Na+-free buffer with/without the proton pump inhibitor. B: Summary of recovery rates expressed as dpH/dt upon return to Na+-containing buffer (first three bars) and in Na+-free buffer (next two bars) (n = 6). C, D: corresponding representative recording and summary for SCH28080 (10 μM, n = 5). Cells were stimulated with secretin (10−9 M) and the buffers were HCO3−-free in order to eliminate the contribution of HCO3− transporters. Bars show paired measurements as means ± SEM. Asterisks indicate P < 0.05.

Effect of proton pump inhibitors on pancreatic juice electrolytes

Results of the sample analyses for the acute and long-term omeprazole experiments compared to control experiments are given in Fig 6. Fig 6A shows that HCO3− concentrations depend on secretory rates. Without inhibitors, in the secretory-rate range of 10 to 15 µl/min-kg body weight, HCO3− concentrations were between 60 and 80 mM. With acute omeprazole treatment, secretion rates decreased, and in the secretory range of 7.5 to 12.5 µl/min-kg HCO3− concentrations spread between 45 and 90 mM. Prolonged omeprazole treatment resulted in low secretory rates, below 7 µl/min-kg, with HCO3− concentrations around 10 to 20 mM. Also with SCH28080, HCO3− concentrations decreased at low secretory rates. Nevertheless, all data appear to fall onto a HCO3− excretion curve that follows a characteristic pattern, valid for several animal species and humans [14]. Fig 6B shows that the Cl− excretion pattern in control and acute omeprazole samples is the inverse of the HCO3− pattern, as expected [1]. However, in samples from the long-term omeprazole-treated animals, Cl− concentrations were unexpectedly low and independent of secretory rates. There seemed to be an anion deficit in the pancreatic juice, which could have been due to one or more organic or inorganic anions. We determined lactate in the juice and found that concentrations were low in both types of experiments: 0.79 ± 0.07 mM in control samples (20 samples from 13 animals) and 0.60 ± 0.04 mM in PPI samples (40 samples from 18 animals).
In addition, we estimated inorganic phosphate concentrations, which were between 1.5 and 3.5 mM in both control and PPI samples; given the pH values of the secreted pancreatic juice, we estimate that they contribute 2.5 to 6 mequivalents/l. Fig 6C shows the typical Na+ excretory pattern, which is independent of secretory rates. Most importantly, Fig 6D shows that K+ excretion has a positive linear correlation with the secretory rates (r = 0.642, P < 0.0001). Notably, at higher secretion rates the pancreatic juice K+ concentrations are higher than the plasma values.
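The reported linear relation between K+ concentration and secretory rate corresponds to a standard Pearson correlation over the binned data. A sketch (Python/SciPy; the data points are synthetic placeholders, not the measured values):

```python
import numpy as np
from scipy.stats import pearsonr

rate = np.array([4, 6, 8, 10, 12, 14, 16])               # ul/min-kg (binned)
k_conc = np.array([4.8, 5.1, 5.6, 5.9, 6.4, 6.8, 7.1])   # mM K+
r, p = pearsonr(rate, k_conc)
print(f"r = {r:.3f}, P = {p:.2g}")
```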
Discussion
This is the first study to show the expression of H+/K+-ATPase α and β subunits in human pancreatic duct cells at both the molecular and functional level. Furthermore, the substantial effect of proton pump inhibitors at the whole-pancreas level confirms that H+/K+-ATPases contribute to pancreatic secretion, though the finding of luminally placed ATPases is surprising. These findings may mark a paradigm shift in our understanding of acid/base transport in the pancreas.
Various pancreatic duct cells express H+/K+-ATPase subunits
A number of experimental approaches in the present study show that several human duct cell lines, as well as human pancreatic sections and the whole rat pancreas, express gastric and non-gastric H+/K+-ATPases. First, we observed the expression of both gastric and non-gastric H+/K+-ATPases at the mRNA and protein levels in three different human duct cell lines, and these findings agree with those made on the rat pancreas [14]. The differential expression of the pumps in the three cell lines may reflect the fact that they are cancer cells with different properties. Nevertheless, Capan-1 cells, which can be grown as monolayers, are good models for transepithelial ion transport in human ducts [4,37], and we used these for further functional and immunolocalization studies.
The non-gastric pumps have a relatively wide distribution and function in H+ and cation homeostasis (see Introduction). The targeting of the non-gastric HKα2 subunit to the plasma membranes (apical or basolateral) depends on coding motifs and interactions within the subunit, and possibly its association with the gastric HKβ or a β-subunit of the Na+/K+-ATPase, which probably explains its localization to the apical and/or basolateral membranes of epithelia [24,39-42]. The gastric pump has been reported for H+-secreting epithelia (see Introduction), and its presence in an HCO3−-secreting epithelium, the pancreas, seemed somewhat peculiar. From a number of expression studies it is known that specific motifs in the HKα subunit and the glycosylated HKβ subunit are required for targeting and functional assembly [36,43,44]. Notably, in Western blot analysis of the human material, bands of about 40-65 kDa were detected for the gastric HKβ subunit (Fig 1), which might indicate low glycosylation of the protein, as also detected in rat pancreatic ducts [14]. Nevertheless, we still observe a similar localization of the gastric HKβ subunit with the α subunits close to the plasma membrane (Figs 2 and 3). Thus, from molecular considerations one might expect both the gastric and the non-gastric H+/K+-ATPase to be functional. The next question is how their cellular localization explains pancreatic secretion.
Localization of the H+/K+-ATPases: possible functions in the duct epithelium
The immunohistochemical data (Figs 2 and 3) show that the two types of pumps have somewhat different localizations in the pancreatic epithelium. The results for the human duct cell lines in particular show that the H+/K+ pumps (predominantly the non-gastric type) are expressed intracellularly and on the lateral membrane of pancreatic ducts. The gastric HKα1 subunits are detected on the luminal membranes and in adjacent intracellular vesicles, resembling those in parietal cells. Also, the gastric HKβ subunits show the most distinct placement on or close to the luminal membranes, indicating that a functional gastric H+/K+-ATPase would be formed. This differential pump distribution is similar to that observed in rat pancreatic ducts [14]. Although the immunolocalization shows some overlap between the gastric and non-gastric pump distributions, the most interesting question is whether the basolateral and luminal pumps would have different functions (Figs 4 and 5).

The second observation, that H+/K+ pumps (notably the gastric type) are also on the luminal membrane of pancreatic ducts, seems at first perhaps unusual. However, similar pumps (vacuolar-type proton pumps) are found in fish intestine and insect midgut, which also secrete an HCO3−-rich fluid [45,46]. Airway epithelia also express proton pumps (and show sensitivity to vacuolar, gastric and non-gastric pump inhibitors) on the luminal membrane [47-51], though compared to the pancreas their net HCO3− secretion is lower and the pH of the airway surface liquid layer is below 7.4 [52]. The function of the luminal proton pumps is unclear. Let us consider their possible functions. We propose that the luminal H+/K+-ATPases participate in a protection mechanism in the base-secreting epithelium. One may draw inspiration from the mucus-bicarbonate barrier found in the stomach and duodenum [1], where the barrier provides a near-neutral pH and protection at the epithelial surfaces [53-55]. In the pancreas we have the reverse situation. Human pancreatic duct cells are able to secrete up to 140 mM HCO3− and luminal pH values are above 8 [1,56,57]. Secretion of H+ may provide protection of the luminal epithelial surface. Additionally, several mucin genes have been identified in pancreatic duct cells; pancreatic mucins would be relevant for epithelial protection, and an altered expression pattern of mucins is one of the important factors in the development of, and drug resistance in, pancreatic cancer [58-60]. In a physiological context, we propose that H+ secretion and mucus could provide protection against luminal base, a so-called "pancreatic mucus acid barrier" [1]. In addition, the H+/K+-ATPases would serve to recirculate secreted K+ (see below).
Pancreatic secretion: H+/K+-ATPases at the integrative level and the effect of proton pump inhibitors
The observation that PPIs and P-CABs inhibit secretin-evoked pancreatic secretion in the in vivo rat studies shows that H+/K+ pumps are involved in pancreatic duct secretion. Rat pancreatic secretion is sensitive to omeprazole, as even the low doses were effective (Fig 5).
Omeprazole, the acid-activated pro-drug, would inhibit the gastric pump, while SCH28080, which binds competitively at the K+ site, could most likely inhibit both gastric and non-gastric pumps and seems more effective given the doses used in our experiments and the published ED50 values [33,38]. However, a clear functional and pharmacological distinction between the two types of pumps was not possible in our study. Nevertheless, the most important observation is that the PPIs had significant effects. In the pHi studies, we acidified the pro-drug, so the activated form of omeprazole would be formed. In animals, however, omeprazole would have to be activated by an acid environment, and the simplest theory is that this occurs at the pancreatic H+/K+-ATPases directly. This might be somewhat analogous to what happens in the stomach, and thus provides further evidence for the pumps. The long-term experiments on rats were performed to determine whether omeprazole had a cumulative inhibitory effect on pancreatic secretion, or whether the animals adjusted to the treatment and regained normal secretory rates. Clearly, the former was the case, and pancreatic secretion was further reduced by long-term PPI treatment.
In addition to the secretory rates, the excretory electrolyte curves indicate the underlying secretory mechanisms. First of all, the HCO3− excretory curve follows the predicted relation with secretory rates.
The fact that the PPIs do not significantly alter the form of the curve indicates that H+/HCO3− transport drives the fluid transport and thus secretion. An interesting point to recall here is that rodents, other animals and humans show similar HCO3− excretory curves, indicating similar underlying mechanisms [1,14]. The Cl− excretory curves are the inverse of those for HCO3−, which may be due to Cl−/HCO3− exchange or other, more complex mechanisms [1], and we cannot explain the anion deficit at low secretory rates.
Regarding cation excretion, Na+ is not dependent on secretory rates. The same was thought to be the case for K+ excretion. However, our data show that there is a positive relation between K+ and secretory rate, and moreover that K+ concentrations are higher than plasma values (Fig 6). Similarly elevated K+ concentrations were also reported in a few early studies [61-63]. The increased K+ concentrations in pancreatic juice are most likely due to K+ secretion into the duct lumen via luminal K+ channels, such as KCa3.1, KCa1.1 or KCNQ1 [7,64]. These channels are expressed on luminal membranes and their activation would provide an increased driving force for secretion, as predicted earlier [1,6,7]. In that context, it is reasonable to envisage that the luminal H+/K+-ATPase could thereby operate to recirculate or salvage K+ from the lumen, which would explain the observation that H+ and K+ transport across the luminal membrane is inversely related.
Clinical implications
Proton pump inhibitors (PPIs) are used widely in clinical practice as one of the treatments for various acid-peptic disorders [65]. As adjuvant therapy, PPIs are also prescribed to patients with pancreatic diseases, such as cystic fibrosis and pancreatitis [66]. Moreover, they have been suggested as an adjuvant therapy for patients with type 2 diabetes, supposedly because they elevate plasma gastrin and thereby improve insulin secretion [67,68]. Our acute and long-term experiments on rats show that pancreatic secretion is significantly inhibited by PPIs. Therefore, considering the parallels between H+/K+-ATPase expression in rodent and human pancreatic epithelia, it would be important to reconsider the effect of PPIs on human pancreatic function. Furthermore, our recent data-mining analyses indicate that at the mRNA level ATP4A and ATP4B are down-regulated in pancreatic cancer [69]. Therefore, the role of H+/K+-ATPases, alongside other acid-base transporters, should be considered in the potential development of deregulated acid-base homeostasis in pancreatic ductal adenocarcinoma.
Conclusions
In conclusion, we find that pancreatic ducts express both gastric and non-gastric H+/K+-ATPases. These pumps are functional in human duct cells and in the rat pancreas at the whole-organ level, where they contribute to secretion, as demonstrated with PPIs. The laterally and basolaterally expressed H+/K+-ATPases would export H+ out of the cell, leaving HCO3− for luminal transport and duct secretion. Localization of the H+/K+-ATPases to the luminal membrane is a change in paradigm, and we propose that they may contribute to a protective buffer zone and K+ recirculation. Lastly, the solid evidence that we provide for the H+/K+-ATPases in the pancreas calls for a re-evaluation of the use of PPIs, as they will affect not only stomach secretion but also pancreatic secretion, contrary to what may be wished.
"year": 2015,
"sha1": "cce960e168af8f93677156ab08284e5d3fc4cc41",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0126432&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "17080e7f7c65f8c70b718e1eac46e171eed65e6b",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Effects of Different Inter-Row Soil Management and Intra-Row Living Mulch on Spontaneous Flora, Beneficial Insects, and Growth of Young Olive Trees in Southern Italy
Conservation agriculture (i.e., minimized soil disturbance and permanent soil covering) and living mulches represent two agroecological practices that can improve soil fertility, spontaneous flora, and beneficial insect communities. This research studied the effect of these practices in a young olive orchard in the Mediterranean area. Two Sicilian olive cultivars ('Nocellara del Belice' and 'Nocellara etnea') were used for the field experiment; inter-row minimum and zero tillage and four species of aromatic plants as living mulch along the row were tested. Spontaneous flora and beneficial insect communities, as well as tree growth, were monitored. The inter-row management did not influence the spontaneous flora dynamics. The species adopted for living mulch showed a very different degree of development and soil cover; 69 insect species (pollinators and predators) belonging to five orders (Hymenoptera, Lepidoptera, Diptera, Neuroptera, and Coleoptera) and 17 families were recorded. The growth of the olive trees was not affected by the conservative strategies. In the inter-row, the growth of the spontaneous flora was limited by the high temperatures during the summer. Among the living mulch species, sage and lemongrass guaranteed almost full soil cover, reducing the need for weed management along the row and increasing the beneficial insects without influencing the growth of the young trees.
Introduction
One of the main goals established by the European Commission for the period 2019-2024 is to lay the foundation for making the European Union the first climate-neutral continent by 2050. To achieve this objective, the Commission presented the European Green Deal policy, the most ambitious package of measures intended to enable European citizens and businesses to benefit from a sustainable green transition. Concerning the agricultural sector, this objective is to be reached by a drastic reduction in farm inputs (fertilizers, chemical pesticides, hormones), reducing nutrient losses, and preserving and restoring ecosystems and biodiversity [1].

or perennial intercropping), soil management practices (minimum tillage, zero tillage), and organic fertilization were considered in a comprehensive meta-analysis (187 experiments conducted in the Mediterranean basin with several woody crops, for a total of 46 papers) [29] that highlighted a generally positive effect of the abovementioned strategies on carbon sequestration compared to mono-cropping, conventional tillage, and inorganic fertilization. For olive, consociations with herbaceous or woody species have been described since the last century [30], reflecting the extensive nature of olive orchards as well as consociation with livestock where possible [31]. For other species, such as grapevines, minimum or zero tillage is commonly applied to regulate the vegetative and reproductive balance of the vines and, in some cases, to reduce erosion and land degradation [32-34].
In this context, olive could represent an important source of ecological interest among the numerous Mediterranean species due to its specific characteristics, such as high drought resistance, low chill unit requirement, adaptation to hot and dry climatic conditions, and low pest and disease incidence, all of which are significant characteristics to consider in the establishment of new orchards with an agro-ecological approach [35]. However, it is important to consider that the cultivation of olive trees is very diversified among the Mediterranean countries, and that the social, economic, and agroecological value of the olive orchards is strongly variable according to the different cultivation systems (traditional, intensive, and super-intensive orchards), farming techniques, and genetic resources [36]. In traditional orchards, the social and agroecological characteristics are highly relevant, whereas, in the intensive model, only the agroecological importance of olive remains essential. In both of these categories, olive models are in accordance with the main objectives of the agroecological approach, which aims to reinforce the natural strength of the agroecosystem without using external inputs and to augment the resilience of the crops, encompassing the social, ecological, and economic dimensions of sustainability [37]. In the super-intensive growing system, the economic factor is of greater importance than the social and agroecological factors.
In our research, we tested the impact of some agroecological practices (i.e., conservative soil management and the introduction of agro-ecological service crops (ASCs) as living mulch) on the wild agro-biodiversity (weed and arthropod communities) and vegetative growth of a newly planted olive orchard. We assumed that different floor management (minimum tillage vs. zero tillage) and intra-row management (different living mulch species vs. no living mulch) would differently influence the dynamics of the monitored agro-biodiversity and the young plant response. In particular, we hypothesized that (i) the zero-tillage floor management would guarantee permanent soil cover without selecting higher competitive flora, (ii) the living mulches would positively influence the presence of beneficial insects, and (iii) different living mulches would have a different impact on both arthropods and weed communities, depending on the introduced species.
Entomological Report
The complete list of the 69 recorded species of beneficial insects, as well as their relation to the spontaneous flora or the consociated plants, in the studied olive orchard is reported in Tables 1 and 2. Specimens of pollinators (61 species) and predators (eight species) were collected in the 2 years of field surveys on the wild and cultivated plants. Regarding pollinators, the 33 species of Apoidea reported belong to five different families, Colletidae (one species), Andrenidae (seven species), Halictidae (four species), Megachilidae (five species), and Apidae (16 species), and 15 genera (Table 1). Most of these species nest by digging into the ground (24 species, 72.72%), while 21.21% (seven species) of the taxa nest in pre-existing cavities in the ground, in walls, or in dry, hollow plant material. Two species among the 33 observed (6.06%) belong to the genus Nomada Scopoli, brood parasitic bees whose females lay eggs in the nests of other wild bees. Regarding the behavior, 24 species are solitary (72.72%), five species (15.15%) exhibit a pre-social behavior, two species (6.06%) have a social behavior, and two species (6.06%) are brood parasite species. The 23 species of Lepidoptera reported belong to nine different families, Sphingidae (three species), Sesiidae (one species), Geometridae (two species), Noctuidae (two species), Hesperiidae (one species), Lycaenidae (two species), Nymphalidae (five species), Papilionidae (two species), and Pieridae (five species), and 17 genera (Table 1).
Five species (and five genera) of Diptera were found belonging to the Syrphidae family. The adults of these species are pollinators of spontaneous plants; however, the larvae have different trophic regimes. For example, larvae of Episyrphus balteatus (DeGeer) and Eupeodes luniger (Meigen) are predators of aphids, while those of Eristalinus taeniops (Wiedemann), Eristalis tenax (L.), and Syritta pipiens (L.) are scavengers [38].
Furthermore, regarding predator insects, two species of Neuroptera Chrysopidae and six species of Coleoptera Coccinellidae were found; among these, one species feeds mainly on coccids, while the others feed mainly on aphids.
Spontaneous Flora Distribution and Diversity
The complete list of the spontaneous flora species found in the field, as well as the time (spring or autumn) and the area in which they were recorded (inter-row or intra-row), is reported in Table 3. Some species, such as Portulaca oleracea (POROL), were found in each period and position. In spring, 28 species of plants were detected: 14 of them both in the intra-row and inter-row, and the remaining 14 exclusively in the intra-row. On the contrary, no exclusive species in the inter-row were observed. In autumn, 26 species were observed, and only eight grew both in the inter-row and intra-row. In this period, six species were exclusive to the inter-row, while 11 species were found only along the row.
Regarding the weed monitoring carried out in spring in the inter-row, in MT treatments most of the space (70%) was classified as bare soil, while the predominant spontaneous plants were Portulaca oleracea (POROL) (11%) and Convolvulus arvensis L. (CONAR) (7%), even though their quantity was lower compared to the ZT treatment. In the ZT plots, bare soil covered a smaller area (18%), and the predominant spontaneous plant was Papaver rhoeas L. (PAPRH) (occupying almost 40% of the total space), followed by Beta vulgaris L. (BEAVX) (almost 25%).
In terms of the distribution of the weed community during spring in the intra-rows, in the MT treatment, the prevalent species found were Portulaca oleracea (POROL) (13%) and Cynodon dactylon (CYNDA) (10%), while the remaining weeds showed a distribution more or less constant along the intra-rows. Regarding the frequency and distribution of the spontaneous flora community in the intra-rows in ZT treatments, Papaver rhoeas (PAPRH) was present in a larger proportion (27%) compared to the others, followed by Beta vulgaris (BEAVX) (10%). The presence of other weed species was similar to that observed in the tillage blocks even if, in the control, Papaver rhoeas (PAPRH) covered about 60% of the soil.
In the autumn survey, vegetation developed almost exclusively along the rows due to the presence of irrigation, whereas, in the inter-row, a high percentage of bare soil (MT 96%; ZT 77%) was registered. Along the row, there was a significant increase in the space occupied by ASC species, particularly sage and lemongrass, and, for both MT and ZT, the most represented spontaneous species was Setaria verticillata (L.) P. Beauv. (SETVE).

Table 3. List of the spontaneous flora species detected in spring and in autumn in both the inter-row and the intra-row of the experimental field 'long-term trial on organic olive (BiOlea)' at Palazzelli.
[Table 3 lists each species with its family and EPPO code, and its occurrence in the inter-row and intra-row under minimum and zero tillage.]

A principal component analysis (PCA) was carried out to evaluate the effect of the ASC on the development, quantity, and distribution of the weed community. With respect to the soil management data analysis, Component 1 explained 20.97% of the total variability, while Component 2 explained 15.71% (Table 4). According to the PCA results relative to the spring and autumn analysis, as shown in Figure 1A,B for the spring stage, there were no significant differences in terms of distribution between the plots analyzed. Regarding the distribution of the spontaneous flora community in the inter-row with different soil management (ZT and MT) (Figure 1A), weed species appeared divided into four main groups (Figure 1A) characterizing the community: perennial species (namely CONAR, CYNDA, and CYPRO), AMARE, POROL, and DACGL (group 1) were negatively correlated to POLAV, URTDI, BETVU, FUMOF, and LACSE (group 2), and PAPRH (group 3), whereas two completely independent grass species appeared, AVEST and LOLPE (group 4). Despite this, the PCA did not show clear differences in terms of abundance and distribution. On the other hand, the zero-tillage community was characterized by the presence of AVEST and LOLPE, whereas BETVU (BEAVX) and URTDI showed a higher relationship with minimum tillage (Figure 2A,C). At this stage (spring), the intra-rows with sage, lemongrass, curry plant, and thyme living mulch and the control all presented a weed community where all the specimens had an average distribution, with some peak presence of AMARE in sage mulch rows and of SETVE in the control row (Figure 2B,D). These records are an overview of 1 year of the field trial and still need to be re-evaluated in the long-term management of the orchard. Similar results were obtained for the second assessment in autumn (not shown).
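As an aside for readers who want to reproduce this kind of ordination, the following is a minimal Python sketch of a PCA on a quadrat-by-species cover matrix. The EPPO species codes come from the text, but the cover values, quadrat labels, standardization choice, and use of scikit-learn are illustrative assumptions rather than the authors' actual data or software (the paper used Past 4.03).

```python
# Minimal PCA sketch on a hypothetical weed-cover matrix.
import pandas as pd
from sklearn.decomposition import PCA

# Hypothetical cover matrix: rows = quadrats, columns = EPPO-coded species.
data = pd.DataFrame(
    {
        "PAPRH": [40, 35, 2, 1],
        "BEAVX": [25, 20, 1, 0],
        "POROL": [2, 1, 11, 13],
        "CONAR": [1, 0, 7, 6],
    },
    index=["ZT-1", "ZT-2", "MT-1", "MT-2"],
)

# Center and scale each species column before extracting components.
X = (data - data.mean()) / data.std(ddof=0)

pca = PCA(n_components=2)
scores = pca.fit_transform(X)

print("explained variance (%):", 100 * pca.explained_variance_ratio_)
for quadrat, (pc1, pc2) in zip(data.index, scores):
    print(f"{quadrat}: PC1={pc1:+.2f}, PC2={pc2:+.2f}")
# The loadings (pca.components_) indicate which species drive each component,
# i.e., how groups such as PAPRH/BEAVX separate ZT plots from MT plots.
```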
Plant Growth Analysis
In terms of the biomass removed with winter pruning (in February), the largest quantity was recorded for the NE cultivar in both soil treatments. In September, the quantity of emitted material (suckers and shoots removed from the trunk) was the highest in NE-MT (Figure 3). Concerning the shoot growth monitoring, despite the absence of significant differences among treatments, a better performance for NE in both soil treatments was observed. In general, the growth rate was about 10-12 cm between day of the year (DOY) 145 and 180, about 8-10 cm between DOY 180 and 210, 2-3 cm between DOY 210 and 239, and 2-3 cm between DOY 239 and 272 (Figure 4). This trend is in accordance with the normal development of olive trees during their young phase, as well as with the climatic data and water intake registered during the trial. The plant growth response to the applied soil management is reported in Table 5. The canopy height increase (approximately 30%) was similar among treatments, although NE-ZT showed the highest growth. The trunk cross-sectional area (TCSA) showed more variable results, with NE-ZT and NB-ZT showing the highest growth (+105% and +96%, respectively), while NB-MT showed an expansion of about 48% and NE-MT of just 17%.
Discussion
This study focused on three key indicators in agro-ecosystems: (1) the insect community, (2) the spontaneous flora diversity, and (3) the young olive response in terms of vegetative growth. Therefore, in our study, the entire soil-plant-atmosphere continuum (SPAC) was analyzed.
The entomological study was performed in terms of both pollinators and natural enemies. The research was conducted in an olive orchard located on a farm in a district with high relevance for citrus and other fruit crops. The collected Apoidea were observed on 23 species of wild plants, comprising a total of 23 plant genera within 16 plant families (Tables 1 and 2). The Asteraceae family was the one visited by the greatest number of pollinators (15 species), followed by Brassicaceae (12 spp.) and Ranunculaceae (five spp.) (Table 3). On the consociated plants, 39 species of pollinators were observed, 25 on Thymus vulgaris, 12 on Salvia officinalis (Lamiaceae), and nine on Helichrysum italicum (Asteraceae).
The order Lepidoptera, the second most important group, was present with 23 species, comprising 16 butterflies and eight moths.
In terms of wild bees, it is significant to note that 72.72% (24 species) of the recorded species nest in the ground, so their persistence depends on the type of soil management. In recent years, various regional surveys have focused on the biodiversity of these populations, on the agroecological role of these two groups of insects [40][41][42], and on their role as specific pollinators of crops [43][44][45][46].
In order to maintain Apoidea biodiversity, management practices should take into account that most species of wild bees nest in the ground [47], and different agronomic practices, including tillage of the land, usually render crops an unsuitable habitat for wild bees, especially under intensive management [48]. In particular, deep tillage and total removal of spontaneous vegetation represent a serious problem for the foraging and nesting of these pollinators [49]. Therefore, in agricultural environments, wild bees need semi-natural habitats for nesting, obtaining floral resources, and overwintering. The elements of the landscape, in the field and around the field, also function as habitat for fauna in general and, in this context, as ecological corridors in intensely cultivated and biodiversity conservation areas [50,51]. It is also necessary to consider that such beneficial effects are particularly important in Mediterranean agro-ecosystems subject to desertification [52][53][54][55][56].
The consociated plants in the intra-row were visited by 62.3% (43 species) of collected insects, 62.2% of all pollinators and 62.5% of all predators. Overall, 15.9% (11 species) of all reported insects were found only on consociated plants, 16.3% of pollinators and 12% of predators.
In our trial, conservative models were also proposed to increase soil fertility and biodiversity (insects and spontaneous flora in the inter-row), reducing the costs of soil management and improving the control of spontaneous flora along the row. Our findings show small differences between the two soil management strategies. In particular, minimum tillage showed a greater reduction in weed presence at both sampling times (spring and autumn), as confirmed by the higher bare soil cover than in the zero-tillage system (Figure 3). This result shows that even a single tillage pass is an efficient weed management strategy. On the other hand, ZT showed a higher weed cover than MT and a higher richness (data not shown). Nevertheless, ZT in spring showed the selection of perennial species (namely, CONAR, BEAVX, CYPRO, and LOLPE; Figure 2A,C) and a stronger characterization by some grass-like species (AVEST and LOLPE; Figure 2B,D). This result is in line with previous findings on zero tillage as a filter that shifts the community toward grassy annual and perennial species [57,58], representing a risk in terms of competition with young orchards.
The living mulches realized along the row showed different effects according to the adopted species. In spring, only sage covered the main portion of the soil, due to its habitus. In autumn, 6 months after planting, the sage showed a complete hedgerow, and the consociated flora was observed just at the ground level under the plants. Similarly, lemongrass formed an almost dense hedgerow and, thanks to its strong tillering ability, completely prevented weed growth under the plants while allowing growth between plants. Therefore, these species contributed to creating a wide soil cover before the winter season and improved the soil performance [59]. Thyme and curry plant recorded the lowest growth and showed a reduced ability to compete with the spontaneous flora. However, in these cases, the spontaneous flora had a role in the preservation of these species during summer, since it covered the small plants and permitted them to survive during this season. Perhaps, for these species, two growing seasons are required to reach a complete hedgerow. Therefore, in the intra-row, lemongrass and sage reduced the need for further soil management. The adopted living mulches reduced the propagation of weeds without reducing the vigor and growth of olive trees. The living mulch plants were positioned about 40 cm from the trunks of the young olive trees, a distance that did not significantly affect olive growth. It is important to highlight that the irrigation lines played a strong role for both the olive trees and the consociated species. Since the olive trees were young, full irrigation was useful to reach high growth rates, as shown by the increase registered in the morphological parameters (Figures 3 and 4, and Table 5). Among these, the canopy volume exhibited strong growth. According to our findings, it is possible to hypothesize two drip lines for differentiated irrigation between olive trees and living mulch species. From a practical point of view, in areas with hot and dry summers, planting in the field is possible in autumn or in spring. One plant every 50 cm is enough to boost the growth of the living mulch along the row, but it is important to consider that, after 6 months, the removal of the irrigation lines from the row is very difficult; therefore, positioning them above the ground level is preferred.
In general, the obtained hedgerows could represent an integrative crop for a secondary income for the farmer, such as food, feed, or industrial products, increasing the resilience of the system to pest incidence and market volatility [60].
Site Description, Experimental Design, and Treatments
The study was carried out between June 2019 and October 2021, in the 'long-term trial on organic olive (BiOlea)' of the experimental farm of the Council for Agricultural Research and Economics (CREA), Research Center for Olive, Tree Fruit, and Citrus, located at Palazzelli (Lentini district, Syracuse), Sicily, Italy (latitude 37.17 N, longitude 14.50 E, elevation 45 m a.s.l.). The experiment focused on a young olive orchard, planted with the two main Sicilian dual-aptitude olive cultivars 'Nocellara del Belice' (NB) and 'Nocellara etnea' (NE), grafted onto seedling rootstocks. Trees were planted in May 2019, in north-south-oriented rows, at a spacing of 6 m between rows and 5 m within the row. The adopted training system, since the first winter pruning season (February 2020), was the polyconic vase, aiming to maintain three main branches. Trees were drip-irrigated early in the morning three times per week, from June to September. Irrigation volume scheduling was based on the FAO-56 Penman-Monteith (P-M) approach [61,62], adjusted by the variable crop coefficient (kc), from 0.15 in the first growing season to 0.34 in the second one [63]. Each of the four drippers per tree emitted 2 L·h−1, for a total of 8 L·h−1, with an operational pressure of 1 bar. Plants were fully irrigated, corresponding to 95-98% of crop evapotranspiration, ETc. The electrical conductivity of the water (at 25 °C) was 2.02 dS·m−1, and the pH was 7.30. Only organic fertilization was applied at plantation.
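A hedged sketch of the irrigation arithmetic implied by this paragraph may help: ETc is derived from the reference evapotranspiration ET0 through kc, and the emitter flow then fixes the run time. The spacing, kc, emitter flow, and irrigation fraction are taken from the text; the ET0 value and the mm-to-liter conversion over the full planting area are illustrative assumptions (in practice a wetted-fraction correction is often applied).

```python
# Back-of-the-envelope FAO-56 irrigation scheduling for one tree.
ET0_MM_DAY = 5.8            # assumed reference evapotranspiration (mm/day)
KC = 0.34                   # crop coefficient, second growing season (text)
IRRIGATION_FRACTION = 0.95  # full irrigation, 95-98% of ETc (text)
TREE_AREA_M2 = 6.0 * 5.0    # 6 m x 5 m planting layout (text)
EMITTER_FLOW_L_H = 2.0      # per dripper (text)
DRIPPERS_PER_TREE = 4       # total 8 L/h per tree (text)

etc_mm_day = KC * ET0_MM_DAY                    # crop evapotranspiration
# 1 mm of water over 1 m^2 equals 1 L, hence the direct mm -> L conversion.
liters_per_tree_day = etc_mm_day * TREE_AREA_M2 * IRRIGATION_FRACTION

total_flow_l_h = EMITTER_FLOW_L_H * DRIPPERS_PER_TREE
hours_per_day = liters_per_tree_day / total_flow_l_h
print(f"ETc = {etc_mm_day:.2f} mm/day")
print(f"water demand = {liters_per_tree_day:.1f} L/tree/day")
print(f"dripper run time = {hours_per_day:.1f} h/day")
```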
The trial was designed as a split-plot system with four blocks of 10 rows with five plants each (Figure 5). The main plot was assigned to the soil management practice, comparing two systems: (1) minimum tillage (MT) and (2) zero tillage (ZT). For the specific activity of this study, on 15 March 2021, a living mulch system was established along the row using four officinal species as agro-ecological service crops (ASCs) planted at a distance of 0.5 m: (1) sage (Salvia officinalis L.), (2) thyme (Thymus vulgaris L.), (3) curry plant (Helichrysum italicum (Roth) G. Don), and (4) lemongrass (Cymbopogon citratus (DC) Stapf). As a control (C), no living mulch was planted between trees along the row, and the spontaneous flora was maintained. Inter-row soil management was used as a factor for the field spontaneous flora assessment and for plant growth monitoring in both cultivars. The interactions of soil management and living mulch along the row were used both for the spontaneous flora and for the entomological assessments.
Soil Analysis and Climatic Data
At planting, soil characteristics were analyzed at 20-40 cm depth by three samplings per plot. Soil physical and chemical characteristics are reported in Table 6. Regarding physical characteristics, the quantity and distribution of sand, clay, and silt were obtained by particle-size analysis using the "micro-pipette" method [64]. In terms of chemical properties, total nitrogen (N), organic matter (OM), soil extractable phosphorus (mg/kg), soil exchangeable potassium (meq/100 g), cation exchange capacity, pH, and electrical conductivity (EC) were determined as described in [65][66][67][68][69][70][71]. Total nitrogen was measured by Kjeldahl digestion using a Buchi Labortechnik GmbH N analyzer, and organic matter (OM) was measured by quantifying total organic carbon (TOC, mg·kg−1). TOC was analyzed by means of a LECO elemental analyzer (RC-612; St. Joseph, MI, USA) using a dry combustion method. Soil exchangeable potassium (meq/100 g) was determined in a solution of barium chloride and triethanolamine at pH 8.2 (2 g of soil : 25 mL). Cation exchange capacity was analyzed by the BaCl2 compulsive exchange method. The EC and pH determinations were carried out on a HI 9813 portable EC meter (Hanna Instruments, Woonsocket, RI, USA) and an AB 15 pH meter (Thermo Fisher Scientific, Waltham, MA, USA), respectively. Inductively coupled plasma optical emission spectrometry (ICP-OES) was conducted using an Optima 2000 DV (PerkinElmer Inc., Shelton, CT, USA). According to the United States Department of Agriculture (USDA) scheme, the olive-grove soil is classified as loamy sand [72]. The soil pH is subalkaline, and the electrical conductivity is considered low [73]. Climatic data, namely, monthly minimum, mean, and maximum air temperature, global solar radiation, rainfall, reference evapotranspiration (ET0), cultural evapotranspiration (ETc), and vapor pressure deficit, registered at the experimental field, were collected from an agro-meteorological station located in the experimental farm (Figure 6). The climate of the region is typically Mediterranean, with hot and dry summers. According to the available meteorological data (30 years, not shown), the annual mean rainfall is about 550 mm, and the maximum daytime temperature in summer often reaches 38-40 °C [74]. During the trial, the site's climate was characterized by mild and wet winters, while the summers were semiarid (first and second) and dry (third), with no rainfall recorded from May to August. The annual average temperature was 18.29 °C. The lowest minimum temperatures were recorded in January and February. Mean temperature values were always above 22 °C from April to November.

Figure 6. Monthly minimum, average, and maximum air temperature and solar radiation, rainfall, reference and cultural evapotranspiration, and vapor pressure deficit registered in the experimental field 'long-term trial on organic olive (BiOlea)'.
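One small step left implicit in the soil analysis above is the conversion from measured TOC to reported organic matter. A common convention is the Van Bemmelen factor (OM ~ 1.724 x TOC); whether the cited protocol uses this exact constant is an assumption, so the sketch below is illustrative only.

```python
# Illustrative TOC -> organic matter conversion.
VAN_BEMMELEN = 1.724  # assumed conventional factor; the cited method may differ

def organic_matter_pct(toc_mg_per_kg: float) -> float:
    """Convert total organic carbon (mg/kg) to organic matter (% by mass)."""
    toc_pct = toc_mg_per_kg / 10_000.0  # mg/kg -> percent by mass
    return VAN_BEMMELEN * toc_pct

print(f"OM: {organic_matter_pct(9500.0):.2f} %")  # 9500 mg/kg TOC is invented
```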
Entomological Samplings and Analysis
Entomological studies, regarding pollinators (Hymenoptera Apoidea, Lepidoptera, and Diptera Syrphidae) and predator insects (Neuroptera and Coleoptera Coccinellidae), were carried out twice per month, from March 2020 to October 2021. In particular, from 1 March 2020 to 28 February 2021, insects were collected from 2500 m² for each of the two soil management areas (125 m² per inter-row × 5 rows × 4 blocks = 2500 m²), for a total of 5000 m². From 1 March 2021 to 31 October 2021, a defined linear transect of 25 m in eight replicates (25 m × 8 = 200 m) was used for the assessments of the beneficial insects along the row.
Specimens were collected with the net technique, from 10:00 a.m. to 4:00 p.m., on flowers (pollinators) and vegetative organs (predators) of the spontaneous and planted (intercropping) plant species. All specimens were transferred to the laboratory, dry prepared, and identified, when necessary, through the observation of sexual structures. The month of collection, number of specimens, and visited plants are given for all species. Specimens of wild bees were identified using the taxonomic keys in [75][76][77], as were Lepidoptera [78], Diptera Syrphidae [38], Coleoptera Coccinellidae [79,80], and Neuroptera [81]. The classification followed Michener [47] for supra-specific taxa, and their nomenclature was according to [82,83]. The examined specimens were preserved in the collections of the authors and in the entomological collection of CREA-OFA of Acireale.
Spontaneous Flora Assessment and Analysis
Weed abundance and community composition and diversity were evaluated and monitored twice during the experiment: at the start of spring on 25 March 2021 and in autumn on 6 October 2021, at day of the year (DOY) 141 and 255, respectively, corresponding to the stages of maximum development of the natural cover (i.e., spring and autumn). At each sampling stage, weed cover (i.e., the percentage of the surface area of the quadrat covered by weeds) was evaluated at the species level by randomly placing three 1.0 m² quadrats within each block per soil management in the inter-row (3 quadrats × 4 subplots × 2 soil managements = 24) and three 1.0 m² quadrats for each intercropping species in each intra-row, in all blocks for each soil management (3 quadrats × 5 consociated species or control × 4 subplots × 2 soil managements = 120). Density was evaluated by placing two 0.60 m × 0.60 m quadrats in the intra-row space and four 0.25 m × 0.25 m quadrats in each soil management system per block. The cover and density assessments allowed us to derive the total cover (%) and the total density of the community.
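To make the aggregation concrete, the following sketch shows one plausible way the quadrat records could be reduced to the community cover percentages reported in the results. All records and values in it are invented for illustration.

```python
# Aggregate per-quadrat cover records into mean cover per species.
from collections import defaultdict

# Each entry: percentage of a 1 m^2 quadrat covered by a species or bare soil.
quadrats = [
    {"PAPRH": 38, "BEAVX": 24, "bare": 20},   # hypothetical ZT inter-row quadrat
    {"PAPRH": 42, "BEAVX": 26, "bare": 16},   # hypothetical ZT inter-row quadrat
    {"POROL": 11, "CONAR": 7, "bare": 70},    # hypothetical MT inter-row quadrat
]

totals = defaultdict(float)
for quadrat in quadrats:
    for species, cover in quadrat.items():
        totals[species] += cover

n = len(quadrats)
for species, total in sorted(totals.items()):
    print(f"{species}: mean cover {total / n:.1f}%")
```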
Tree Growth Monitoring
Biometrical measurements of the young olive trees were conducted on 15 December 2020 and on 15 October 2021, and the relative increments were calculated. Measurements included the total height of the tree, the widths of the canopy (in two perpendicular directions from the projection on the ground at noon), and the canopy height, measured from the insertion point of the first primary branch to the top. The canopy volume was calculated assuming an elliptical shape [84]. The trunk cross-sectional area (TCSA) was calculated from the trunk circumference measured at 20 cm from the ground.
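The two derived quantities in this paragraph can be made explicit with a short sketch. The TCSA formula from circumference is standard geometry; the ellipsoid formula shown is one common realization of the "elliptical shape" assumption for canopy volume and may differ from the exact formula in the cited reference [84]. The input values at the bottom are invented.

```python
# Derived tree-size metrics from field measurements.
import math

def tcsa_cm2(trunk_circumference_cm: float) -> float:
    """Trunk cross-sectional area from circumference: A = C^2 / (4*pi)."""
    return trunk_circumference_cm ** 2 / (4.0 * math.pi)

def canopy_volume_m3(height_m: float, width1_m: float, width2_m: float) -> float:
    """Ellipsoid volume from canopy height and two perpendicular widths,
    treating all three measurements as diameters: V = (pi/6) * H * W1 * W2."""
    return math.pi / 6.0 * height_m * width1_m * width2_m

print(f"TCSA: {tcsa_cm2(12.0):.1f} cm^2")
print(f"Canopy volume: {canopy_volume_m3(1.4, 0.9, 0.8):.2f} m^3")
```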
Pruning was performed on 15 February 2020, and the weight of the removed material was recorded, while the weight per tree of newly emitted suckers was recorded in October 2021.
Moreover, the total vegetative growth was obtained by measuring the length increase, from the beginning of vegetative growth (15 April 2021) to the end of the experiment (31 October 2021), of two one-year-old mixed shoots per plant, randomly selected and labeled around the canopy of the trees at 1.0-1.2 m height from the ground.
Statistical Analysis
Analysis of variance (ANOVA) was performed with Jamovi 2.0.0 statistical software (The jamovi project, 2021); a one-way ANOVA was carried out on the differences among the treatments. A post hoc analysis based on Tukey's HSD (honestly significant difference) test was performed at significance levels (p-values) of 0.05, 0.01, and 0.001. Principal component analysis (PCA) was performed with Past 4.03 statistical software (Øyvind Hammer) to assess the effect of the ASC along the row, as well as the role of the tillage used in the inter-row soil management, in the development, abundance, and distribution of the weed community in spring and in autumn.
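For concreteness, here is a minimal sketch of the same ANOVA + Tukey HSD workflow in Python (statsmodels) instead of Jamovi. The treatment labels echo the results section, but the response values are invented and the software choice is an assumption made purely for illustration.

```python
# One-way ANOVA with Tukey HSD post hoc test on hypothetical TCSA increases.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.DataFrame({
    "treatment": ["NE-ZT"] * 4 + ["NE-MT"] * 4 + ["NB-ZT"] * 4 + ["NB-MT"] * 4,
    "tcsa_increase": [105, 99, 110, 101,   # invented replicate values (%)
                      20, 15, 18, 16,
                      96, 90, 99, 94,
                      48, 45, 50, 47],
})

model = ols("tcsa_increase ~ C(treatment)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # one-way ANOVA table

tukey = pairwise_tukeyhsd(df["tcsa_increase"], df["treatment"], alpha=0.05)
print(tukey.summary())                   # pairwise post hoc comparisons
```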
Conclusions
The obtained results, even if preliminary, provide evidence of the role of diversification strategies in recovering, rather than merely halting, the loss of wild biodiversity in agricultural fields. In particular, the agronomical techniques proposed for the young organic olive orchard have been shown to be a valuable option for promoting the presence of pollinators and, thus, supporting the potential production. The inter-row management resulted in a diversified spontaneous flora community, more a service provider than a competitor. In addition, the wild plants on the row had a sheltering effect on the living mulch species during the hot period, demonstrating a flow of services between the components of the agroecosystem. Among the studied living mulch species, sage and lemongrass were able to create an almost continuous hedge along the row and a semi-full soil cover, thus reducing the need for weed management in the intra-row soil strip and increasing the beneficial insects without influencing the plant growth.
In a nutshell, the current results indicate that the agroecological practices adopted increase the richness of the biota and, hence, the complexity of the arthropod fauna in terms of number of species and taxonomic complexity. The knowledge of the two groups of insects investigated is of primary importance for evaluating the local populations of pollinators and predators of wild and cultivated plants.

Acknowledgments: The authors thank "G.S. S.p.A. Group Italia" for the economic support in publishing this study through the agreement "Azioni di studio e divulgazione finalizzate alla riduzione e ottimizzazione dell'uso di agrofarmaci in coltivazioni di pesche e nettarine, e all'individuazione di buone pratiche agronomiche al fine di preservare l'ambiente e le api (GS PES-NET21)", subscribed on 13 November 2020. We also wish to thank Vittorio Nobile (Ragusa, Italy) for confirming the determination of some species of Hymenoptera Apoidea.
"year": 2022,
"sha1": "aab27dbdfd85ba0f7326eebf92d8d431bc893e89",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2223-7747/11/4/545/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e22b6e997fa35016c7a13e11a88d16ad7ba4caec",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Shark Antibody Variable Domains Rigidify Upon Affinity Maturation—Understanding the Potential of Shark Immunoglobulins as Therapeutics
Sharks and other cartilaginous fish are the phylogenetically oldest living organisms that have antibodies as part of their adaptive immune system. As part of their humoral adaptive immune response, they produce an immunoglobulin, the so-called immunoglobulin new antigen receptor (IgNAR), a heavy-chain-only antibody. The variable domain of an IgNAR, also known as VNAR, binds the antigen as an independent soluble domain. In this study, we structurally and dynamically characterized the affinity maturation mechanism of the germline and somatically matured (PBLA8) VNAR to better understand their function and their applicability as therapeutics. We observed a substantial rigidification upon affinity maturation, which is accompanied by a higher number of contacts, thereby contributing to the decrease in flexibility. Considering the static x-ray structures, the observed rigidification is not obvious, as especially the mutated residues undergo conformational changes during the simulation, resulting in an even stronger network of stabilizing interactions. Additionally, the simulations of the VNAR in complex with the hen egg-white lysozyme show that the VNAR antibodies evidently follow the concept of conformational selection, as the binding-competent state already preexisted even without the presence of the antigen. To have a more detailed description of antibody–antigen recognition, we also present here the binding/unbinding mechanism between the hen egg-white lysozyme and both the germline and matured VNARs. Upon maturation, we observed a substantial increase in the resulting dissociation free-energy barrier. Furthermore, we were able to kinetically and thermodynamically describe the binding process and not only identified a two-step binding mechanism but also found a strong population shift upon affinity maturation toward the native binding pose.
INTRODUCTION
Cartilaginous fish, such as sharks, rays, chimeras, and skates, are the phylogenetically oldest group of animals having a canonical adaptive immune system (Cooper and Alder, 2006;Dooley and Flajnik, 2006;Flajnik and Kasahara, 2010). Thus, shark antibodies can provide insights into the molecular evolution of the immune system (Feige et al., 2014). For 500 million years, sharks have dominated the oceans as predators. During that time, their immune system, the oldest adaptive immunity known, evolved and already produced key parts of the immune system, such as T cells, B cells, and major histocompatibility complexes (MHCs), which can also be found in mammals (Frommel et al., 1971;Criscitiello et al., 2006;Feige et al., 2014;Flajnik, 2018). However, sharks have developed unique structural and immunological features, which cannot be found in humans or other mammals, except in camelids. Additionally, it has been shown that immunoglobulin new antigen receptors (IgNARs) reveal the highest potential for antigen-driven affinity maturation, compared with other Ig isotypes in sharks (Diaz et al., 2002;Feige et al., 2014).
Shark immunoglobulins comprise heavy-light chain isotypes, known as IgM and IgW, and one heavy chain homodimeric isotype called IgNAR (Hsu, 2016). The IgNAR antibodies are disulfide-bonded homodimers. The two heavy chains dimerize via five constant domains, while the two variable domains (V NAR s) are unpaired, forming the tips of the IgNARs (Roux et al., 1998; Diaz et al., 2002; Zielonka et al., 2015). Furthermore, it has been shown that dimerization is not required for high-affinity antigen binding of V NAR s, suggesting that ancient V NAR s were already functional single-domain antigen-binding domains, in contrast to the homodimeric IgNARs found in modern sharks. Even though shark V NAR and camelid V H H antibodies have similar structural features, they differ in their evolution, as camelid V H H evolved from an IgG by simultaneously losing the light chain and the C H 1 domain of the heavy chain (Figure 1; Clem and Leslie, 1982; Barelle et al., 2009; English et al., 2020).
Shark antibodies evolved under challenging conditions, which makes them particularly stable. Apart from their high stability and solubility, V NAR s have the ability to recognize and bind hidden functional sites of a target antigen, making them especially attractive as novel therapeutics for human diseases (Barelle et al., 2009; English et al., 2020). A lysozyme-binding antibody variable fragment (Fv) (PDB accession code: 2EIZ) was recently compared with a nurse shark structure complexed with lysozyme (PDB accession code: 1T6V) (Stanfield et al., 2004; Nakanishi et al., 2008). The study revealed that, in contrast to the antibody Fv, the V NAR can recognize the buried substrate pocket of lysozyme with its extended CDR3 loop. V NAR fragments contain only two complementarity-determining region (CDR) loops and are still able to target antigens through a single variable domain. To compensate for this reduced size (∼13 kDa), the binding site is characterized by a long and structurally complex CDR3 loop. Consequently, the highest diversity in length, sequence, and structure in V NAR s is located in the CDR3 loop; however, the number and position of cysteine residues also contribute to determining the structural diversity of V NAR s (Streltsov et al., 2005). In general, V NAR domains consist of two β sheets, which are stabilized by a disulfide bond between two canonical cysteine residues (21C and 82C) located in the framework. Based on the number and position of additional cysteine residues, four types of naturally occurring IgNAR variable domains have been reported (Roux et al., 1998; Rumfelt et al., 2001; Streltsov et al., 2005; Matz and Dooley, 2019). Both type I and type II V NAR s have extended CDR3 loops, in a so-called "upright" position, which allows them to reach and bind buried epitopes. V NAR domains comprise longer CDR3 loops (up to 40 amino acids), compared with CDR3 loops in humans, and lack the CDR2 loop, which generally plays an important role in IgG and camelid V H H antibodies. Instead, V NAR s contain other CDR2-like regions, namely the hypervariable loops 2 and 4 (HV2 and HV4, respectively). The importance of the HV4 loop for antigen recognition has been reported for T-cell receptor variable β domains (Fernández-Quintero et al., 2020g).
In this study, we investigate the consequences and effects of somatic hypermutations of a nurse shark PBLA8 antibody upon affinity maturation and characterize the respective antibody–antigen-binding processes. The PBLA8 is a type II V NAR clone, which is part of a phage-display library derived from a lysozyme-immunized nurse shark, also known as Ginglymostoma cirratum. Type II V NAR antibodies are characterized by their specific, stabilizing disulfide bonds in the CDR3 and CDR1 loops. Both the ancestral and the matured PBLA8 clones were derived from the same ancestral B cell. The matured PBLA8 antibody contains 13 somatic mutations: four in the CDR1, two in the HV2 loop, one each in the HV4 and the CDR3 loops, and the remaining five in the framework regions.
RESULTS
We use a well-established protocol combining enhanced sampling techniques with classical molecular dynamics simulations to elucidate the affinity maturation process (Fernández-Quintero et al., 2019b, 2020c) and describe the antigen-binding mechanism of V NAR antibodies with the antigen, lysozyme. Four crystal structures of the investigated V NAR , before and after affinity maturation, and with and without the presence of the antigen (PDB accession codes: 2I26, 2I27, 2I25, and 2I24, respectively) (Stanfield et al., 2007), were available and were used as starting structures for metadynamics simulations. The matured PBLA8 antibody clone contains in total 13 mutations compared with its germline ancestor. Figure 2 illustrates both the naive (light gray) and the affinity-matured (dark gray) V NAR domains, including a sequence comparison of the mutated residues, color-coded in the table below and in the structure. As described in the methods section, we performed 1 µs of metadynamics simulations for all four available crystal structures to enhance the sampling of the CDR1 and CDR3 loops of both the naive and the matured V NAR s. We did not delete the antigen in our simulations, to be able to structurally characterize the antigen-binding process. Thus, to identify the influence of the 13 somatic hypermutations on both the conformational space and on the antigen-binding process, the dynamic nature and the conformational diversity of the V NAR s have to be considered.

FIGURE 1 | Structural comparison of an immunoglobulin (IgG) structure with an IgG new antigen receptor (IgNAR). The structure of a V NAR with its unique binding site geometry is depicted next to the IgNAR.
To quantify the resulting flexibility between the naive and the matured V NAR s, we performed hierarchical clustering of all four metadynamics simulations individually on the CDR1 and CDR3 loops of both the naive and the matured V NAR s, with and without the antigen present, using the same root mean square deviation (RMSD) cut-off criterion of 1.5 Å. Table 1 summarizes the resulting numbers of clusters and also includes additional clustering results using different input criteria. Independent of the input features and the cut-off criterion applied for the clustering, we observe a substantial decrease in the number of clusters as a consequence of affinity maturation. We also observe that antigen binding results in a decrease of flexibility in the binding site, reflected in a smaller number of clusters for the different input criteria, except for the HV4 loop. The reason for this increase in flexibility of the HV4 loop upon binding is that this loop is not directly involved in the antigen-binding process. Therefore, the rigidification of the CDR1 and CDR3 loops allows a higher variability of the HV4 loop. To reconstruct the kinetics and thermodynamics of the different CDR loop rearrangements, we used the obtained cluster representatives as starting structures for 100 ns of classical molecular dynamics simulations each. These trajectories were then used to construct a time-lagged independent component analysis (tICA) and a Markov-state model based on the backbone torsions of the CDR1 and CDR3 loops. Figure 3 illustrates the Markov-state models and the reweighted free energy surfaces of the naive and the matured PBLA8 V NAR s, which were simulated without the antigen. From the resulting free energy surfaces projected into the combined coordinate system in Figures 3A,C, we observed a substantial rigidification of the conformational space upon affinity maturation, which is accompanied by a strong population shift toward the binding competent state. Interestingly, even without the presence of the antigen, we find for both variants that the binding competent state already preexists in the captured CDR loop ensembles in solution with varying probabilities. The Markov-state models depicted in Figures 3B,D clearly confirm this population shift upon affinity maturation. While the matured PBLA8 antibody shows only one deep and narrow minimum, the naive antibody results in four different CDR loop macrostates, with conformational transitions of the CDR loops in the microsecond timescale. Figure 4 illustrates the free energy landscapes and the respective Markov-state models of the naive and matured V NAR domains, simulated with the antigen present. Furthermore, these free energy surfaces are also projected into the same coordinate system as shown in Figure 3. These results confirm the strong population shift upon affinity maturation toward the binding competent state. Additionally, the naive V NAR strongly supports the conformational selection paradigm, as the binding competent state already preexists with lower probability without the presence of the antigen and was selected as the dominant solution structure upon binding.
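A rough sketch of this featurization -> tICA -> Markov-state-model pipeline, assuming PyEMMA, is given below. File names, the residue selection for the CDR loops, lag times, and cluster counts are all illustrative placeholders, not the authors' settings.

```python
# Sketch of a tICA + MSM analysis on CDR backbone torsions (assumes PyEMMA).
import pyemma

# Featurize the trajectories on CDR1/CDR3 backbone torsions; the residue
# ranges below are hypothetical and must be adapted to the actual numbering.
feat = pyemma.coordinates.featurizer("vnar.pdb")
feat.add_backbone_torsions(selstr="resid 25 to 33 or resid 85 to 100",
                           cossin=True, periodic=False)
data = pyemma.coordinates.load(["run1.xtc", "run2.xtc"], features=feat)

# Project onto the slowest linear coordinates, then discretize.
tica = pyemma.coordinates.tica(data, lag=50, dim=2)
clusters = pyemma.coordinates.cluster_kmeans(tica.get_output(), k=100,
                                             max_iter=50)

# Estimate the MSM and coarse-grain into metastable macrostates (PCCA+).
msm = pyemma.msm.estimate_markov_model(clusters.dtrajs, lag=100)
msm.pcca(4)
print("stationary macrostate populations:",
      [msm.pi[s].sum() for s in msm.metastable_sets])
```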
To compare the interactions of the naive and the matured V NAR with the antigen and to structurally elucidate the antigen-binding process, we visualized the different types of contacts (hydrogen bonds and salt bridges) as individual flare plots (Figure 5). The thickness of the lines in these plots represents the duration of the contacts. Each flare plot is divided into two colors, blue for the antibody and green for the antigen. The CDR1 and CDR3 loops are also highlighted in yellow and red, respectively. The numbering and position of the residues can directly be compared between the naive and the matured V NAR , which facilitates the comparison between the two variants. In agreement with the observed rigidification upon affinity maturation in the presence of the antigen, the decrease in flexibility of the matured V NAR can be structurally explained by the substantially higher number of contacts and long-lasting interactions formed between the hen egg-white lysozyme and the matured PBLA8 V NAR .
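A minimal sketch of the contact-occupancy analysis underlying such flare plots could look as follows, assuming MDTraj. The trajectory files, the Baker-Hubbard frequency cutoff, and the geometric criteria are illustrative assumptions.

```python
# Hydrogen-bond occupancy over a trajectory (assumes MDTraj).
import mdtraj as md
import numpy as np

traj = md.load("vnar_lysozyme.xtc", top="vnar_lysozyme.pdb")

# Baker-Hubbard criterion: keep hydrogen bonds present in >= freq of frames.
hbonds = md.baker_hubbard(traj, freq=0.10, exclude_water=True)

def occupancy(donor, hydrogen, acceptor):
    """Fraction of frames in which a specific D-H...A geometry is satisfied."""
    d = md.compute_distances(traj, [[hydrogen, acceptor]])[:, 0]
    a = md.compute_angles(traj, [[donor, hydrogen, acceptor]])[:, 0]
    return float(np.mean((d < 0.25) & (a > 2.094)))  # 2.5 A and 120 deg cutoffs

for donor, hydrogen, acceptor in hbonds:
    label = f"{traj.topology.atom(donor)} -> {traj.topology.atom(acceptor)}"
    print(f"{label}: {occupancy(donor, hydrogen, acceptor):.0%}")
```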
To further structurally and mechanistically characterize the antigen-binding process, we again performed metadynamics simulations, but used the distance between the center of masses of the antigen and the antibody as a collective variable, ensuring the minimal distortion of the binding interface. These simulations allow us to cover a broad range of unbinding pathways and to elucidate the antigen-binding process in detail. As mentioned in the methods section, three individual runs of metadynamics simulations were started with different initial velocities. We combined and clustered the simulations on the center of mass distances between the antibody and the antigen for each variant separately. The resulting cluster representatives were used as starting structures for short classical molecular dynamics simulations to allow an unbiased view of the mechanism involved in antibody-antigen recognition and binding. To identify kinetically stable states along the binding pathway, we apply tICA on the inverse distances of the native contacts. We chose inverse distances as they are well suited to distinguish small differences between conformations where the V NAR and the antigen are close, but not overemphasizing the differences in unbound conformations (big distances, small inverse distances). Besides, inverse distances are functionally closer to potential energies. In Figure 6A, the resulting free energy surfaces and the Markov-state models of the antigen-antibody binding pathways of both the naive and the matured V NAR s are illustrated. For these two antibody-antigen complexes, we observe three metastable states along the binding pathway. The main difference between the naive and the matured antibody is the populations of these three metastable states. Particularly interesting is the strong population shift upon affinity maturation toward the binding competent conformation, compared with the naive V NAR . Before unbinding in both variants, a so-called "encounter complex" could be identified, which already shows a significantly higher number of electrostatic interactions, compared with the completely unbound conformations ( Figure 6C). The encounter complex of the naive V NAR is even more dominated by electrostatics compared with the complexed state, which is in agreement with the obtained free energy surface and the Markov-state model showing that the encounter complex is the highest populated state. The encounter complex formation is dominated by ionic interactions of Glu 86 with lysozyme Arg 73 (corresponding to R188) (occurrence 60%) and Arg 88 with lysozyme Asp 101 (corresponding to D216) (occurrence 15%) and hydrogen bond interactions of Tyr 89 with lysozyme Trp 63 (corresponding to W178) (occurrence 30%) and lysozyme Asp 52 (corresponding to D167) (occurrence 6%). Additionally, Tyr 92 forms a hydrogen bond with lysozyme Asp 48 (corresponding to D163) (occurrence 10%). Figure 6C clearly shows that the unbinding process of the matured V NAR is strongly governed by electrostatics, while Van der Waals interactions play only a minor role in the association of the antibody and the antigen in the transition from the unbound state to the formation of the encounter complex. Especially interesting is that the formation of the encounter complex of the matured V NAR is favored by ionic interactions formed by the mutated residue Asp 51, which was a Lys 51 before maturation. 
Asp 51 forms ionic interactions with lysozyme Arg 21 (corresponding to R136) (occurrence 15%) and lysozyme Lys 96 (corresponding to K211) (occurrence 14%) and makes an additional hydrogen bond with lysozyme Tyr 20 (corresponding to Y135) (occurrence 20%). Furthermore, we observe hydrogen bond interactions of Ser 48 with lysozyme Asp 101 (occurrence 30%). However, Figure 6C also illustrates that the Van der Waals interactions have a more prominent role in the transition from the encounter complex to the native complexed state.
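The inverse-distance featurization motivated above, used before applying tICA to the unbinding trajectories, can be sketched in a few lines, again assuming MDTraj; the native-contact pair list below is a placeholder that would in practice be derived from the crystal structure of the complex.

```python
# Inverse native-contact distances as tICA input features (assumes MDTraj).
import mdtraj as md
import numpy as np

traj = md.load("unbinding.xtc", top="complex.pdb")

# Hypothetical native contacts: atom-index pairs across the binding interface.
native_pairs = np.array([[512, 2048], [530, 2101], [601, 2210]])

dist_nm = md.compute_distances(traj, native_pairs)   # (n_frames, n_pairs)
inv_dist = 1.0 / dist_nm                             # emphasizes bound frames

# Large separations map to ~0, so unbound poses collapse together while
# near-native poses stay well resolved, which is the property exploited above.
print(inv_dist.shape, inv_dist.mean(axis=0))
```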
DISCUSSION
The rise of antibodies as therapeutics has motivated numerous studies to characterize and understand the antibody binding interface as a prerequisite for rational antibody design and engineering (MacCallum et al., 1996; Schmidt et al., 2013; Di Palma and Tramontano, 2017; Fernández-Quintero et al., 2019a,b, 2020d). Compared with conventional antibodies, small antibodies such as nanobodies and V NAR s are more stable and more soluble. Additionally, they can work inside cells, as their small size allows them to penetrate into tissues, and they can recognize cryptic epitopes (Griffiths et al., 2013).
The transfer across the blood-brain barrier (BBB) remains a challenge in the development of biotherapeutics that affect the central nervous system. However, it has already been reported that V NAR s can reach the brain, making them especially attractive for use as therapeutic, diagnostic, or transport molecules. Just recently, a V NAR targeting the transferrin receptor 1 (TfR1) was shown to be transported through the BBB into the brain parenchyma, highlighting the importance of V NAR s, as they can shuttle molecules across the BBB (Stocki et al., 2019).
Thus, structurally characterizing the peculiar antibody-binding site of V NAR s and understanding antibody-antigen recognition is crucial for the design and engineering of these outstanding proteins. In this study, we thermodynamically and kinetically characterize CDR loop ensembles in solution before and after affinity maturation and explain the observed rigidification in atomic detail. However, apart from the decrease in conformational space, the underlying binding mechanisms were also investigated, including a description of the fundamental factors that contribute to antigen recognition and binding. Conformational rearrangements in the paratope, as well as binding and unbinding events of an antigen, can occur in the microsecond to second timescale, which exceeds routinely performed simulation times by far. To enhance the efficiency of the sampling, we used metadynamics simulations to cover conformational transitions between different CDR loop conformations, but also to capture conformations along
Conformational rearrangements in the paratope, as well as binding and unbinding events of an antigen, can occur in the microsecond to second timescale, which exceeds routinely performed simulation times by far. To enhance the efficiency of the sampling, we used metadynamics simulations to cover conformational transitions between different CDR loop conformations, but also to capture conformations along FIGURE 3 | Free energy surfaces and Markov-state models of the apo naive and matured V NAR CDR3 and CDR1 loops. (A) Free energy surface of the naive V NAR CDR3 and CDR1 loops, the starting X-ray structure (PDB accession code: 2I27) is depicted as a black dot. (B) Results of the Markov-state models with the respective macrostate probabilities. The thickness of the arrows denotes the transition timescale and the width of the surrounding circle represents the state population. (C) Free energy surface of the matured V NAR CDR3 and CDR1 loops projected into the same coordinate system as the naive V NAR and the starting crystal structure is also illustrated as a black dot (PDB accession code: 2I24). (D) Agreement with the obtained free energy surface only one macrostate.
the path between the complex and dissociated V NAR -lysozyme complex.
The comparison of the obtained free energy landscapes of the naive with the matured V NAR s (Figure 3), without the presence of the antigen, clearly shows a substantial rigidification upon affinity maturation as a consequence of the 13 point mutations. The broader conformational space of the CDR loops is governed by the higher flexibility of the CDR3 loop in the naive V NAR , compared with the matured PBLA8 V NAR . The stabilization of the CDR3 loop originates from a salt bridge and a hydrogen bond formed between Arg 28 in the CDR1 and Asp 93 in the CDR3 loop. Both the salt bridge and the hydrogen bond interactions are present in nearly all frames of the simulations (95%). The higher flexibility of the naive V NAR can be explained by the absence of these interactions, as residue 28 is an asparagine before maturation, which forms a hydrogen bond with Asp 93 in only 2% of the frames. Upon antigen binding, the stabilizing intramolecular network of interactions within the matured PBLA8 V NAR , of Asp 93 with Arg 28 and of Asp 93 with Ser 43 (12% occurrence), is changed to a salt bridge of Asp 93 with lysozyme Arg 112 (corresponding to R227 in Figure 5). Another salt bridge and hydrogen bond interaction between the matured PBLA8 V NAR and lysozyme could be identified: Arg 61 and lysozyme Asp 101 (corresponding to D216) and Asn 103 (corresponding to N218) strongly interact with each other, which is unique for the matured variant. Before binding, Arg 61 was interacting with Asn 60 (15% occurrence) and with the backbone of Thr 58 (45% occurrence). Overall, the duration and number of contacts between the antibody and lysozyme are much higher in the matured PBLA8 V NAR , compared with the naive V NAR (Supplementary Figures 1-3). Additionally, the residues Arg 61, Asp 51, and Asp 101 in the matured V NAR turn out to be key determinants for molecular recognition of the antigen. Astonishingly, Asp 93 can form equally strong interactions with Asp 101 in both the matured and the naive V NAR . However, Asp 93, together with Arg 28, contributes substantially to an intramolecular interaction network, and thereby to the significant decrease in flexibility before binding upon affinity maturation.

FIGURE 4 | Free energy surfaces and Markov-state models of the complexed naive and matured V NAR CDR3 and CDR1 loops. (A) Free energy surface of the naive V NAR ; the starting x-ray structure (PDB accession code: 2I26) is depicted as a black dot. (B) Results of the Markov-state models with the respective macrostate probabilities. The thickness of the arrows denotes the transition timescale and the width of the surrounding circle represents the state population. (C) Free energy surface of the matured V NAR CDR3 and CDR1 loops projected into the same coordinate system as the naive V NAR ; the starting crystal structure is also illustrated as a black dot (PDB accession code: 2I25). (D) Markov-state model, in agreement with the obtained free energy landscape, showing two macrostates; again, the transition timescales and state populations are represented by the thickness of the arrows and the width of the circles, respectively.
Furthermore, the structural changes of the CDR3 and CDR1 loops upon antigen binding have been reported to follow the induced fit theory, as it was assumed that the observed conformational changes in the CDR loops were induced by antigen binding (Koshland Daniel, 1995; Stanfield et al., 2007). However, we find that, within the obtained dynamic apo ensemble of the naive V NAR , the binding competent conformation already preexists without the presence of the antigen (Tsai et al., 1999; Fernández-Quintero et al., 2020e). As the antigen recognizes and binds to this conformation, we observe a strong population shift toward the binding competent conformation (Figure 3). The free energy surface of the affinity-matured PBLA8 V NAR simulated without antigen exhibits only one deep narrow minimum, in which the binding competent state already preexists. As the matured PBLA8 V NAR rigidifies substantially upon affinity maturation, with only small structural rearrangements upon antigen recognition being observed, the binding process can be described as lock-and-key binding.
This has already been reported in 1997, where significant conformational changes occurred in the germline antibody upon binding, while the matured antibody was identified to bind the antigen by a lock-and-key-fit mechanism (Koshland Daniel, 1995;Wedemayer et al., 1997).
To better understand the antibody-antigen recognition and the effect of affinity maturation on the complex formation, we investigated the detailed binding and unbinding pathways of the naive and the matured PBLA8 V NAR with the antigen lysozyme (Figure 6). Our results clearly show that the pathway of the binding process can be described as a two-step mechanism. The most critical step represents the association of the binding partners and the formation of the encounter complex which is characterized by a protein-protein interface dominated by electrostatic interactions (Schreiber and Fersht, 1996;Vijayakumar et al., 1998;Sheinerman et al., 2000;Frisch et al., 2001). At this stage, the protein-protein interface is still partially solvated and contains non-optimal sidechain orientations and interactions (Horn et al., 2009;Kahler et al., 2020). Electrostatic interactions are the driving force in directing the binding and pulling of the antibodyantigen interface together. Thereby, electrostatics may also contribute to the early discrimination between potential binding partners. While the encounter complex features the pre-aligned binding partners, further side-chain rearrangements and closer approaching of the two binding partners results in desolvation FIGURE 5 | Visualization of the salt bridges and hydrogen bond interactions between lysozyme and the naive and matured V NAR domains represented as flare plots. The residues of the antibody are colored in blue, while the antigen is depicted in green. The color-coding corresponds to the structure illustrated next to the flare plots. The CDR3 loop is colored red and the CDR1 loop is depicted in yellow. The mutated residues are shown in bold. (Camacho et al., 2000) and a more prominent role of the Van der Waals interactions (Figure 6). Figure 6 shows that for both the naive and the matured V NAR , an energy barrier between the encounter complex and the complex state, which prohibits a fast transition to the native complex state (Alsallaq and Zhou, 2007). The encounter complex in the naive V NAR -lysozyme binding pathway is the highest populated state and also shows the highest electrostatic interaction energy. This observation is confirmed by the high number of electrostatic interactions formed in the encounter complex. Moreover, we find that especially in the naive VNAR tyrosine residues contribute substantially to the formation and stabilization of the encounter complex. Tyrosine residues have been shown to play a privileged role in antigen recognition by contributing substantially to mediating molecular contacts in the binding interfaces (Koide and Sidhu, 2009). The hydroxyl sidechain makes tyrosine significantly more hydrophilic compared with other hydrophobic amino acids. At the same time, increased hydrophilicity may result in less specific binding in the unbound state, which might be a key characteristic of a naive repertoire, which can still be exposed to a diverse number of antigenic surfaces (Koide and Sidhu, 2009). Excitingly, we observe a strong population shift upon affinity maturation toward the binding competent state (naive 38%-matured 75% population). For the matured V NAR , the formation of the encounter complex, as well as the optimization and transition to the native complex state, are dominated by electrostatics (Tworowski et al., 2005). Figure 7 schematically represents and summarizes the observed binding mechanism for both the naive and the matured antibody. 
While in the naive antibody the formation of the encounter complex is energetically more favored compared with the native complex, the matured V NAR reveals a strong population shift toward the complex state. Thus, affinity maturation not only results in a decrease in the conformational diversity of the CDR loops but also strongly favors the formation of the native complex, which is governed by both electrostatic and Van der Waals interactions.
CONCLUSION
In this study, we structurally and functionally characterized the antigen-binding site of V NARs upon affinity maturation. We observed that not only the CDR1 and CDR3 loops, which are directly involved in the antigen-binding process, but also the whole V NAR rigidifies upon maturation, as a consequence of the 13 point mutations. The obtained free energy surface of the naive V NAR is broad and shallow, while the matured PBLA8 V NAR shows only one deep and narrow minimum. This rigidification is accompanied by a strong population shift upon affinity maturation. Additionally, we clearly see that the naive V NAR variant follows the concept of conformational selection, while antigen recognition of the matured V NAR can be described as lock-and-key binding.
Furthermore, we provide a two-step binding mechanism and describe in detail the driving forces of antibody-antigen association. Thereby, we present a comprehensive model of antibody-antigen recognition. Apart from identifying key determinants of antigen recognition, we also elucidate the affinity maturation mechanism, as we observe a significant population shift from the naive to the matured variant toward the binding-competent complex state, which is represented by a deep and narrow minimum in the free energy surface. Thus, these results have broad implications for the rational design of new antigen receptors, i.e., V NARs, since they provide a detailed characterization of the intra- and intermolecular changes upon affinity maturation. Additionally, the insights presented on the binding pathways in different stages of affinity maturation combine a variety of fundamental concepts in molecular recognition, which can be used to improve protein-protein docking and, consequently, the engineering of specific and stable antibody-antigen complexes.

FIGURE 7 | Schematic summary and representation of the binding pathway. Upon affinity maturation, the native complex state becomes the most probable state, while the encounter complex is favored in the binding pathway of the naive antibody.
MATERIALS AND METHODS
A previously published method for characterizing CDR loop ensembles upon antigen binding in solution (Fernández-Quintero et al., 2019a,b, 2020a) was used to investigate the conformational diversity of the CDR3 and CDR1 loops of V NAR variants in different stages of affinity maturation. Experimental structure information was available for the naive and the matured V NARs, crystallized with and without the antigen, hen egg-white lysozyme. The PDB accession codes for the naive V NARs with and without the presence of the antigen are 2I26 and 2I27, respectively (Stanfield et al., 2007). The crystal structures for the matured variant with and without the antigen can be found in the PDB under the accession codes 2I25 and 2I24. All four available x-ray structures were used as starting structures for molecular dynamics simulations. The starting structures for simulations were prepared in Molecular Operating Environment (Chemical Computing Group, version 2020.01) using the Protonate3D tool (Labute, 2009; Chemical Computing Group, 2020). To neutralize the charges, we used a uniform background charge (Hub et al., 2014; Case et al., 2020). Using the tleap tool of the AmberTools20 package (Roe and Cheatham, 2013; Case et al., 2020), the crystal structures were soaked in cubic boxes of TIP3P water molecules with a minimum wall distance of 10 Å to the protein (Jorgensen et al., 1983; El Hage et al., 2018; Gapsys and de Groot, 2019). For all simulations, parameters of the AMBER force field 14SB were used (Maier et al., 2015). The V NAR variants were carefully equilibrated using a multistep equilibration protocol (Wallnoefer et al., 2011).
Metadynamics Simulations
To enhance the sampling of the conformational space, well-tempered metadynamics simulations (Barducci et al., 2008, 2010; Ilott et al., 2013; Biswas et al., 2018) were performed in GROMACS (Pronk et al., 2013; Abraham et al., 2015) with the PLUMED 2 implementation (Tribello et al., 2014). As collective variables, we used a linear combination of the sine and cosine of the ψ torsion angles of the CDR1 and CDR3 loops, calculated with the functions MATHEVAL and COMBINE implemented in PLUMED 2 (Tribello et al., 2014). As discussed previously, the ψ torsion angle captures conformational transitions comprehensively (Ramachandran et al., 1963). The decision to include the ψ torsion angles of these two loops is based on their strong involvement in binding to the antigen, as evident from the x-ray structure of the complex. The simulations were performed at 300 K in an NpT ensemble. The height of the Gaussian was chosen to minimally distort the V NAR systems, resulting in a Gaussian height of 10 kJ/mol and a width of 0.3 rad. Gaussian deposition occurred every 1,000 steps and a bias factor of 10 was used. Metadynamics simulations of 1 µs were performed for each available V NAR crystal structure. The resulting trajectories were clustered in cpptraj (Roe and Cheatham, 2013; Case et al., 2020) using the average linkage hierarchical clustering algorithm with a distance cutoff criterion of 1.5 Å, resulting in a large number of clusters (Table 1). The cluster representatives for the matured and the naive variants, both with and without the antigen present, were equilibrated and simulated for 100 ns using the AMBER 20 simulation package.
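As an illustration of this setup (not the authors' exact input; the residue numbers, file names, and the specific two-component combination of summed sines and cosines below are assumptions), the well-tempered bias could be specified in a PLUMED 2 input roughly as follows:

```
# Minimal well-tempered metadynamics sketch (PLUMED 2 syntax).
# Residue numbers and file names are hypothetical placeholders; a real input
# would list the psi angles of every CDR1 and CDR3 residue.
MOLINFO STRUCTURE=vnar_ref.pdb

psi1: TORSION ATOMS=@psi-28    # illustrative CDR1 residue
psi2: TORSION ATOMS=@psi-90    # illustrative CDR3 residue

sin1: MATHEVAL ARG=psi1 FUNC=sin(x) PERIODIC=NO
sin2: MATHEVAL ARG=psi2 FUNC=sin(x) PERIODIC=NO
cos1: MATHEVAL ARG=psi1 FUNC=cos(x) PERIODIC=NO
cos2: MATHEVAL ARG=psi2 FUNC=cos(x) PERIODIC=NO

cv_sin: COMBINE ARG=sin1,sin2 PERIODIC=NO
cv_cos: COMBINE ARG=cos1,cos2 PERIODIC=NO

# Parameters reported in the text: height 10 kJ/mol, width 0.3,
# deposition every 1,000 steps, bias factor 10, 300 K
metad: METAD ARG=cv_sin,cv_cos HEIGHT=10.0 SIGMA=0.3,0.3 PACE=1000 BIASFACTOR=10 TEMP=300
PRINT ARG=cv_sin,cv_cos,metad.bias FILE=COLVAR STRIDE=1000
```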
To further elucidate the detailed binding mechanism and to investigate the effects of the point mutations on the antigen-binding process, we performed additional non-well-tempered metadynamics simulations using the distance between the two centers of mass of the V NAR and the antigen as the collective variable (Laio and Gervasio, 2008; Barducci et al., 2011). We used a Gaussian height of 1 kJ/mol and a Gaussian width of 0.1 nm; Gaussians were deposited every 1,000 simulation steps. Three individual runs of both the matured and the naive antibody were performed for 10 ns of simulation time each. The obtained trajectories were clustered using the distance between the two centers of mass as the clustering criterion, with a distance cutoff of 1.5 Å. To reconstruct the thermodynamics and kinetics of the binding process, the resulting large number of cluster representatives were again equilibrated and simulated for 100 ns each using the AMBER 20 simulation package (Table 1).
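Similarly, a rough sketch of the distance-based bias (atom ranges and file names are placeholders, not the actual system definition):

```
# Non-well-tempered metadynamics on the antibody-antigen center-of-mass
# distance; atom ranges below are hypothetical placeholders.
MOLINFO STRUCTURE=complex_ref.pdb
vnar: COM ATOMS=1-1700        # all V NAR atoms
lyso: COM ATOMS=1701-3660     # all lysozyme atoms
d: DISTANCE ATOMS=vnar,lyso

# Height 1 kJ/mol, width 0.1 nm, deposition every 1,000 steps, no bias factor
metad: METAD ARG=d HEIGHT=1.0 SIGMA=0.1 PACE=1000
PRINT ARG=d,metad.bias FILE=COLVAR_DIST STRIDE=1000
```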
Molecular Dynamics Simulations
Molecular dynamics simulations were performed in an NpT ensemble using pmemd.cuda (Salomon-Ferrer et al., 2013). Bonds involving hydrogen atoms were constrained by applying the SHAKE algorithm (Miyamoto and Kollman, 1992), allowing a time step of 2 fs. Atmospheric pressure was maintained by weak coupling to an external bath using the Berendsen algorithm (Berendsen et al., 1984). The Langevin thermostat (Doll et al., 1975; Adelman and Doll, 1976) was used to maintain the temperature at 300 K during the simulations.
Additionally, a time-lagged independent component analysis (tICA) was performed using the Python library PyEMMA 2, employing a lag time of 10 ns (Scherer et al., 2015; Pérez-Hernández and Noé, 2016). Thermodynamics and kinetics were calculated with a Markov-state model (Bowman et al., 2014; Chodera and Noé, 2014) using PyEMMA 2, which uses the k-means clustering algorithm (Likas et al., 2003) to define microstates and the PCCA+ clustering algorithm (Röblitz and Weber, 2013) to coarse-grain the microstates into macrostates. PCCA+ is a spectral clustering method, which discretizes the sampled conformational space based on the eigenvectors of the transition matrix. The sampling efficiency and the reliability of the Markov-state model (e.g., the choice of feature mappings) can be evaluated with the Chapman-Kolmogorov test (Karush, 1961; Miroshin, 2016), using the variational approach for Markov processes (Wu and Noé, 2017) and by taking into account the fraction of states used, as the network states must be fully connected to calculate transition probabilities and relative equilibrium probabilities. To capture and quantify the CDR loop rearrangements of the V NAR variants, we constructed Markov-state models based on the backbone torsions of the CDR1 and CDR3 loops, defined 150 microstates using the k-means clustering algorithm, and applied a lag time of 10 ns.
To reconstruct the binding kinetics and thermodynamics, we used the inverse distances of the native contacts between antibody and antigen as input features for both the tICA and the Markov-state model. We chose a lag time of 50 ns for both the tICA and the Markov-state model and defined 200 k-means clusters.
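A condensed sketch of this analysis pipeline in PyEMMA 2 is given below; the trajectory file names, residue selection, and frame-saving interval (0.1 ns, so a 10 ns lag corresponds to 100 frames) are assumptions for illustration:

```python
# Hypothetical file names and residue ranges; a minimal tICA + MSM sketch.
import pyemma

feat = pyemma.coordinates.featurizer('vnar.pdb')
# Backbone torsions of the CDR1/CDR3 loops (placeholder residue ranges)
feat.add_backbone_torsions(selstr='resid 26 to 35 or resid 85 to 100', cossin=True)
data = pyemma.coordinates.load(['traj_000.nc', 'traj_001.nc'], features=feat)

tica = pyemma.coordinates.tica(data, lag=100)  # 10 ns lag under the assumed stride
clusters = pyemma.coordinates.cluster_kmeans(tica.get_output(), k=150, max_iter=100)

msm = pyemma.msm.estimate_markov_model(clusters.dtrajs, lag=100)
msm.cktest(3)                       # Chapman-Kolmogorov validation
pcca = msm.pcca(3)                  # coarse-grain microstates into macrostates
print(msm.stationary_distribution)  # equilibrium microstate populations

# For the binding analysis the text instead uses inverse native-contact
# distances as features, a 50 ns lag, and 200 k-means clusters.
```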
For quantitative analyses of the binding processes, the electrostatic and Van der Waals interaction energies were calculated with the linear interaction energy (LIE) analysis implemented in cpptraj. The images presented in this article were created using the PyMOL molecular graphics system (Schrodinger, 2015).
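For reference, a minimal cpptraj input for such an analysis might look as follows (the topology, trajectory, and residue masks are hypothetical):

```
# Per-frame electrostatic and Van der Waals interaction energies between the
# V NAR (here residues 1-110) and lysozyme (here residues 111-239).
parm complex.parm7
trajin binding_run.nc
lie LIE :1-110 :111-239 out lie_energies.dat
run
```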
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author/s.
AUTHOR CONTRIBUTIONS
MF-Q performed research and wrote the manuscript. CS and PQ performed research and analyzed data. KL supervised the research. All authors contributed to writing the manuscript.
FUNDING
This work was supported by the Austrian Science Fund (FWF) via the grants P30565, P30737, and P30402, as well as DOC 30. Furthermore, this project has received funding from the European Union's Horizon 2020 Research and Innovation Program under grant agreement no. 764958. | 2021-04-20T13:10:32.892Z | 2021-04-20T00:00:00.000 | {
"year": 2021,
"sha1": "d08004bcb83f557267eb619d40e1acf67b01f6cd",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fmolb.2021.639166/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d08004bcb83f557267eb619d40e1acf67b01f6cd",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
255958190 | pes2o/s2orc | v3-fos-license | Variables influencing wearable sensor outcome estimates in individuals with stroke and incomplete spinal cord injury: a pilot investigation validating two research grade sensors
Monitoring physical activity and leveraging wearable sensor technologies to facilitate active living in individuals with neurological impairment has been shown to yield benefits in terms of health and quality of living. In this context, accurate measurement of physical activity estimates from these sensors is vital. However, wearable sensor manufacturers generally provide only standard proprietary algorithms, derived from healthy individuals, to estimate physical activity metrics, which may lead to inaccurate estimates in populations with neurological impairments such as stroke and incomplete spinal cord injury (iSCI). The main objective of this cross-sectional investigation was to evaluate the validity of physical activity estimates provided by standard proprietary algorithms for individuals with stroke and iSCI. Two research grade wearable sensors used in clinical settings were chosen, and the outcome metrics estimated using standard proprietary algorithms were validated against designated gold standard measures (Cosmed K4B2 for energy expenditure and metabolic equivalent, and manual tallying for step counts). The influence of sensor location, sensor type and activity characteristics was also studied. 28 participants (healthy (n = 10); incomplete SCI (n = 8); stroke (n = 10)) performed a spectrum of activities in a laboratory setting using two wearable sensors (ActiGraph and Metria-IH1) at different body locations. Manufacturer-provided standard proprietary algorithms estimated the step count, energy expenditure (EE) and metabolic equivalent (MET). These estimates were compared with the estimates from the gold standard measures. To verify validity, a series of Kruskal-Wallis ANOVA tests (Games-Howell multiple comparisons for post-hoc analyses) were conducted to compare the mean rank and absolute agreement of the outcome metrics estimated by each of the devices against the designated gold standard measurements. The sensor type, sensor location, activity characteristics and the population-specific condition influence the validity of physical activity metrics estimated using standard proprietary algorithms. Implementing population-specific customized algorithms that account for the influences of sensor location, type and activity characteristics when estimating physical activity metrics in individuals with stroke and iSCI could be beneficial.
Background
Ubiquitous estimation of physical activity and quality of life measures collected from real life environments is becoming an imperative component of monitoring the successful translation of clinical research into the patient's own living environment [1-3]. In this context, continuous monitoring paradigms are gaining substantial significance in tracking a patient's compliance with a stipulated exercise regime and in gauging their level of community integration post rehabilitation [1,2,4]. Commonly used methods of measuring mobility include the traditional performance-based or patient-reported measures. These measurements are limited either by rater or recall bias, or in their ability to encompass all aspects of community mobility. Advanced assessment methods like camera-based motion capture, pressure sensor walkways [5] and force plate systems, although significantly reliable, confine data collection to a controlled laboratory space and are expensive [6]. While such controlled environment tests can provide high-resolution information to uncover the underlying biomechanics during in-patient movement assessments, they provide very little to no information about a patient's natural physical activity behavior and compliance in their community or home setting [7].
Evidence suggests that employing wearable sensors provides a means to remotely and continuously track a patient's recovery in real world settings [7,8]. Indeed, activity monitoring with wearables is showing cost benefits for healthcare and is also paving the way for participatory clinical decision making in customized healthcare [1,2,9-11]. Despite the numerous benefits on offer, the validity and reliability of outcome estimates from these wearable sensors in rehabilitation medicine for individuals with chronic conditions remain a daunting challenge for researchers [12,13].
A potential reason for this is that most standard proprietary algorithms (SPAs) provided by sensor manufacturers are derived using empirical data from healthy populations, leading to inaccurate or unreliable estimates when deployed to estimate outcome measures in clinical populations [13]. Research to date acknowledges two major contributing factors to such estimation inaccuracies [14], namely, (i) the sensor location and (ii) variation in acceleration thresholds due to pathology-specific movement signatures in comparison to healthy controls.
In this context, there has been less focus on studying the influence of such factors on outcome metrics other than step count, such as energy expenditure (EE) and metabolic equivalents (MET), especially in individuals with stroke and incomplete spinal cord injury (iSCI) [13,15,16]. Further, there is limited information regarding the relationship between attributes like (i) sensor type (i.e. body-worn vs. skin-adhered, and standalone vs. fusion sensor modalities) and (ii) the characteristics of the activity being studied, and the validity of outcome measures estimated using SPAs in populations with neurological impairments like iSCI and stroke.
Consequently, our primary goal was to investigate the influence of sensor type and sensor location on the validity of physical activity outcome estimates (step count, energy expenditure (EE) and metabolic equivalent (MET)) as provided by SPAs from wearable sensors in a sample of healthy individuals (controls) and individuals with iSCI (ambulatory) and stroke. A secondary goal was to investigate the influence of activity characteristics (intensity) on the validity of each of the physical activity outcome estimates in our sample. In this pilot study, we investigated the influence of the above aspects on the validity of the outcome measures as provided by the respective SPAs from two research grade sensors. The spectrum of activities studied was identical to those encountered in activities of daily living (ADL), but performed in a controlled laboratory setting. Although performed in a controlled environment, such findings have important implications for understanding the factors that need to be considered when estimating outcome measures using wearables in free living conditions. It was postulated that the choice of sensor type, sensor location (arm, waist and ankle), population-specific movement signatures (healthy, iSCI (ambulatory), stroke) and the characteristics of the activity being studied would significantly influence the validity of the physical activity outcome metrics estimated using SPAs in laboratory conditions.
Methods
The participant pool included healthy controls and individuals with iSCI and chronic stroke who could ambulate with or without an assistive device. Detailed group-wise demographic information is provided in Table 1. Exclusion criteria included (i) presence of any known serious cardiac condition, (ii) neurodegenerative pathologies as co-morbidities (such as multiple sclerosis, Alzheimer's disease, Parkinson's disease, etc.), and (iii) inability to sit unsupported. In addition, subjects were requested to stay off any medications known to affect their metabolism during the study period.
Devices used
Currently, a plethora of commercial and research grade wearable devices is available for clinicians to choose from [17]. Testing the validity of outcome metrics from all available devices was outside the scope of this study design. Therefore, to test the postulated hypotheses, two research grade wearable sensor types used in clinical research, namely ActiGraphs [18,19] and the Metria-IH1 [18,20], were chosen (Fig. 1a & 1b).
The goal of the study was to investigate the validity of estimates as provided by the respective SPAs from ActiLife and SenseWear. These SPAs are usually based on data from healthy individuals, and their performance is optimized for specific sensor locations. Therefore, the sensor locations for this study were chosen based on the literature and the respective manufacturers' prescriptions [21]. This procedure was adopted to reduce possible confounding of the outcome estimates that could arise from moving a sensor away from the optimal location for which its SPA was developed. Furthermore, all wearable devices used in this study were obtained from the same manufacturing batch, to minimize measurement differences inherently arising from variations in the manufacturing process.
ActiGraph
The ActiGraph wGT3X-BT devices [22] were worn on the upper arm [23], the waist [24,25] and the ankle [26] (Fig. 1c). The waist sensor location was chosen based on previous literature [13,24]. For consistency, all ActiGraphs were placed on the right side of the body. Adjustable fabric belts securely positioned the ActiGraphs at their respective locations. ActiGraphs measure triaxial acceleration to estimate physical activity metrics [22] (Fig. 1a). The ActiGraphs sampled at 30 Hz.
Each ActiGraph device was assigned to a specific anatomical location. The device to anatomical location was held consistent between participants to minimize confounding factors due to unit calibration and sensor switching [27]. The time on all the three devices were synchronized to the local atomic clock server time before data collection began.
Metria-IH1
Based on the manufacturer's recommendation, the Metria-IH1 patch was adhered to the skin on the back of the upper left arm (Fig. 1b, c). The Metria-IH1 patch houses a variety of sensors measuring four modalities, namely (i) 3-axis acceleration, (ii) skin temperature, (iii) near-body temperature and (iv) galvanic skin response (GSR). The module sampled data at a rate of 5,000 data points per minute; the accelerometer alone sampled at 32 Hz. The Metria-IH1 is a single-use, disposable device.
Outcome measures
Three outcome metrics of relevance, widely used in monitoring physical activity in clinical rehabilitation of individuals with stroke and iSCI, were studied, namely (i) step count, (ii) energy expenditure (EE) in kcal, and (iii) metabolic equivalents (METs) [14,28,29]. These metrics were compared against the designated gold standards (Cosmed K4B2 [30] for EE and MET, and a phone-based counter for step count).
The Cosmed K4B2 provides breath-by-breath metabolic measurements [22,31]. For the EE and MET metrics, the Cosmed K4B2 output was used as the gold standard comparison. For step counts, the steps taken during the 50 step walk test were counted manually using a phone-based counter, and this manual tally was used as the gold standard for step count [32,33].
Experimental data collection
The experimental design and the test protocol are presented in Fig. 1(d) & (e). Consistent with recommended practice, the validation protocol was designed to cover a spectrum of physical activities [27]. Data collection started with the subjects performing a series of activities: lying, sitting, and standing for two minutes each, a 50 step walk along a hallway [34], a six-minute walk test (6MWT) and, finally, two minutes of multi sit-to-stands. During the multi sit-to-stand task, subjects were encouraged to do as many sit-to-stand repetitions as they safely could.
Sufficient rest periods were given to the participants between each activity; the rest duration ensured that the heart rate of the participants returned to resting levels before starting the next activity (refer to Fig. 1(e)).

Fig. 1 (c) The Cosmed K4B2 was body mounted with the rubberized facemask; (d) the experimental design and the spectrum of activities executed during the protocol. To execute the study protocol, participants performed a set of structured indoor activities in a controlled laboratory setting. (e) The spectrum of the performed activities was categorized into three levels: (i) sedentary activities: lying down on a treatment table, sitting and standing (with or without assistive device) for two minutes each; (ii) low intensity activity: walking 50 steps; and (iii) high intensity activities: a six-minute walk test (6MWT) and two minutes of fast paced multi sit-to-stand activity. Sufficient rest and recovery were provided between all the performed activities. All three devices, namely the ActiGraph, Metria and Cosmed K4B2, continuously collected data during the entire protocol.

At the end of each activity, subjects self-reported their perceived effort for each activity using the Borg scale of perceived exertion [35]. These self-reported ratings were later used to classify the intensity of the performed activities based on exertion levels. Participants were given the choice of whether to use their assistive device. All participants completed the entire protocol successfully in a single visit. All three devices, namely the ActiGraphs, Metria and Cosmed K4B2, continuously and simultaneously recorded data during the entire session (i.e. 60 min).
Data analysis
For all data analysis, SPA provided by manufacturers of the respective devices were used for estimating the outcome metrics of interest (EE, MET and step count).
ActiLife: The goal was to study the validity of outcome metrics obtained using SPAs in individuals with stroke and iSCI. The Choi [36] and Freedson [37] proprietary algorithms, based on empirical data from healthy populations, were used to extract the EE and MET estimates for each of the activities in all of the groups. The Harris-Benedict equation [38] was used to include the contribution of BMR to the estimated EE and METs. This data extraction procedure was repeated for each of the wearable sensors positioned at the waist, ankle and arm. The metrics from the ActiGraphs were used to study the effects of sensor location, activity characteristics and population on the validity of the outcome estimates.
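As an illustrative sketch of this step (the function and the per-bout bookkeeping below are our own illustration, not the ActiLife implementation), the classic Harris-Benedict equations can be folded into a device EE estimate as follows:

```python
# Minimal sketch: adding a Harris-Benedict BMR component to an activity EE
# estimate. The per-bout bookkeeping is a hypothetical illustration.
def harris_benedict_bmr(sex: str, weight_kg: float, height_cm: float, age_yr: float) -> float:
    """Basal metabolic rate in kcal/day (classic Harris-Benedict equations)."""
    if sex == 'male':
        return 66.473 + 13.7516 * weight_kg + 5.0033 * height_cm - 6.755 * age_yr
    return 655.0955 + 9.5634 * weight_kg + 1.8496 * height_cm - 4.6756 * age_yr

bmr_per_min = harris_benedict_bmr('female', 70.0, 165.0, 45.0) / (24 * 60)
activity_ee = 3.2                          # hypothetical device estimate for a 2-min bout, kcal
total_ee = activity_ee + 2 * bmr_per_min   # add the BMR contribution for the bout
```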
Metria-IH1: The Metria-IH1's proprietary fusion algorithm, developed on the SenseWear software development kit platform, prompts for anthropometric information (age, height, weight, gender, dominant hand and smoking status) at the time of data processing. This information is then fused with information from the various on-board sensor modules to calculate the outcome metrics of interest [20,39]. The outcome metrics from the Metria-IH1 were used to study the effects of activity characteristics and population on the validity of the outcome estimates.
COSMED-K4B2 [30]: The manufacturer-provided software was used to extract the EE and MET estimates from the Cosmed K4B2 [30,39].
Step count: For agreement, the steps counted manually using a phone-based tally counter were cross-verified against step counts from the video recorded during the 50 step walk test [32,33].
Statistical analysis
All statistical analyses were performed using IBM SPSS 21.0 (SPSS Inc., Chicago, IL). Per the study design, there were no direct between-group comparisons (i.e. no direct comparison between healthy, iSCI and stroke) and no direct between-device comparisons (i.e. no direct comparison between ActiGraph and Metria-IH1). For each sensor, the postulated hypotheses were statistically compared against the designated gold standard's estimate (i.e. EE & MET: device vs. Cosmed K4B2; step count: device count vs. manual tally). The null hypothesis (H0) was that the mean ranks of the groups (device estimates vs. Cosmed estimates) are the same. Therefore, failure to achieve statistical significance does not give sufficient evidence to reject H0.
Statistical significance was set to p < .05 for all hypothesis tests pertaining to the Metria-IH1. To account for multiple comparisons, a corrected (Bonferroni) p value of p < .016 was used for all hypothesis testing pertaining to the ActiLife estimates. For the group with stroke, data were analyzed based on the side of stroke impairment (i.e. Metria-IH1 for stroke (L) and ActiGraphs for stroke (R)).
The following epsilon-squared effect size (E²) thresholds were used for interpreting the strength of the relationship: small effect size (E² ≤ .1), medium effect size (.1 < E² ≤ .3) and large effect size (E² > .5) [40]. In the context of this analysis, the following interpretation was used: a value of E² = 1.0 indicated a large deviation, and a value of E² = 0 indicated a close match in the mean rank of the estimated outcome with the designated gold standards (Cosmed K4B2 for EE and MET, manual step tally for step count).
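For reference, the epsilon-squared effect size can be obtained directly from the Kruskal-Wallis H statistic as E² = H(n + 1)/(n² - 1), where n is the total number of observations; a minimal sketch with synthetic data:

```python
# Synthetic data; Kruskal-Wallis test plus epsilon-squared effect size.
from scipy.stats import kruskal

device = [2.1, 2.4, 1.9, 2.6, 2.2]   # hypothetical device EE estimates (kcal)
cosmed = [3.0, 3.3, 2.8, 3.5, 3.1]   # hypothetical Cosmed estimates (kcal)

h, p = kruskal(device, cosmed)
n = len(device) + len(cosmed)
e2 = h * (n + 1) / (n ** 2 - 1)      # E2 = H / ((n^2 - 1) / (n + 1))
print(f"H = {h:.2f}, p = {p:.3f}, E2 = {e2:.2f}")
```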
Due to the small sample size and the non-normal distribution of the data (based on the Shapiro-Wilk test), nonparametric tests were used to statistically verify the postulated hypotheses. A series of Kruskal-Wallis tests (with Games-Howell multiple comparisons for post-hoc analyses, owing to unequal variances verified using Levene's test of equality of variances) were conducted to compare the mean rank and absolute agreement of the outcome metrics estimated by each of the devices against the Cosmed (designated gold standard measurements) [30]. To verify step count validity, the step count estimates from the ActiLife and Metria-IH1 SPAs were statistically compared with the manually counted steps from the 50 steps walk test.
Results
For brevity only salient results are reported in the text. For detailed statistical results on bias between measures please refer to the Tables 1, 2 & 3. For detailed analysis of absolute agreement to assess reliability refer to the post-hoc analysis (additional file shows this in more detail [see Additional file 1; Supplementary Tables ST1 through ST8]). The results for the outcome metrics trended the same way from both the bias (Kruskal-Wallis) and absolute agreement tests (Games-Howell post-hoc).
Demographics
In total, 28 participants completed the study. All participants completed the protocol successfully within 60 min. There were no adverse events. The descriptive statistics of the study population is provided in Table 1.
Activity type classification (Borg RPE)
Activities were classified as sedentary (lying, sitting and standing), low intensity (50 step walking) and high intensity (6MWT and multi sit-to-stand) (Table 1, Fig. 1(e)) based on the participants' self-reported perceived exertion. Consistent with the literature, individuals with stroke and iSCI rated their activities at a higher exertion level than healthy controls [41].
Sedentary activities
EE estimates from the ActiLife were significantly lower (p < .016; E² > 0.7) than the Cosmed for all sedentary activities, irrespective of the sensor location and population studied (Tables 2 & 3, Fig. 2a).
No significant differences were observed between the Metria-IH1 and Cosmed EE estimates for any sedentary activity in the iSCI group (p > .05; E² ≤ .3) or the stroke group with left side impairment (n = 6) (p > .05; E² < .3). In the healthy group, except for the EE during the lying activity (p < .05; E² > .7), no significant differences were observed between the EE estimates from the Metria-IH1 and Cosmed for the other sedentary activities (p > .05; E² < .1) (Tables 2 & 3, Fig. 2b). The METs followed the same trend as the EEs.
These results show that the sensor type and the characteristics of the population studied can influence the accuracy of estimates during sedentary activity when using SPAs.
Low intensity activity
No significant differences were observed between the manually counted steps (i.e. 50 steps) and the step counts estimated by the ActiLife (irrespective of sensor location) and SenseWear (Metria-IH1) for the healthy control group (p > 0.016) (Fig. 3). In the group with stroke, except for the ActiLife estimates from the ActiGraph located at the ankle (p > .016) (Fig. 3), the ActiGraphs at all other locations and the Metria-IH1 significantly underestimated the step count (p < 0.016). For the group with iSCI, the ActiLife (all sensor locations) and the Metria-IH1 significantly underestimated the step counts (p < .016) (Fig. 3). These observations on step counts are consistent with previous literature, thus benchmarking the quality of the data collected in this investigation [13,42]. These results suggest that (i) step count estimates from standard algorithms can be influenced by factors such as sensor location, and (ii) the optimal sensor location for step count tracking can vary depending on the specific population being studied.
The EE estimates for the 50 step walk from the ActiLife for the healthy controls and iSCI groups were significantly different from the Cosmed when the sensors were placed at the arm and waist locations (p < 0.016) (Table 2, Fig. 2(a), 2(b)). No statistically significant differences were observed in the ActiLife EE estimates at any placement location for the stroke group with right side impairment (p > .016) (Table 2, Fig. 2(c), 2(d)).
Overall, the MET estimates showed validity trends similar to the EE for the iSCI and stroke groups (Fig. 2). For the healthy group, except for the MET from the ActiLife at the ankle, the METs from all other locations showed the same trend as the EE.
These results highlight two main observations for low intensity activity: (1) a sensor location that is valid for estimating step count may not be valid for estimating metrics like EE/MET, and (2) using SPAs to estimate physical activity metrics in populations with neurologic impairments like stroke and iSCI may yield inaccurate estimates.
High intensity activity

Six-minute walk test
In comparison to the Cosmed estimates, no statistically significant differences were observed in the ActiLife EE estimates for the healthy controls at the arm and waist locations (p > 0.016) (Table 2; Fig. 2(a)). However, the ActiLife EE estimate from the ActiGraph at the ankle was significantly overestimated in comparison to the Cosmed (p < .016; E² = .58) (Table 2). These results for the healthy group contrast with the EE trends from the 50 step walk test. Thus, activity intensity and duration can influence the accuracy of SPA estimates even in a healthy group.
Similarly, for the 6MWT in the stroke group with right side impairment, no significant differences were observed between the ActiLife EE estimates from the ActiGraphs at the arm, waist and ankle and the Cosmed estimates (p > .016) (Table 2; Fig. 2(c)). However, for the stroke group with left side impairment, the EE estimate from the ActiGraph at the right ankle was near significant with a moderate effect size (p = 0.04; E² = 0.39). Further, diminished effect sizes were observed for the ActiGraphs on the right side of the body in the group with left side impairment. Similarly, for the stroke group with right side impairment, the EE estimate from the Metria-IH1 located on the left upper arm approached significance in comparison to the Cosmed (p = 0.08; E² = 0.43) (Table 2; Fig. 2(d)). This raises the possibility that, when using a fusion-based sensor type to study EE (Metria-IH1-SenseWear), placing the sensor on the side of impairment may be a conservative approach for estimating EE. Indeed, similar observations regarding the side of impairment and EE estimates have been reported in the stroke literature [43]. The ActiLife EE estimates from all sensor locations for the group with iSCI were significantly different from the Cosmed (p < .016), with effect sizes ranging from medium to large (Tables 2 & 3, Fig. 2(a)). This shows that SPAs might not yield accurate EE results in the iSCI population. Based on the effect sizes for the EE estimates from the 6MWT, the waist seems to be a non-optimal location for placing the ActiGraph sensor in the iSCI group. A potential reason could be the reduced walking speed in the iSCI group (0.5 (0.22) m/s).
A rationale for the Metria-IH1 having performed well with the iSCI sample studied, despite using an SPA based on healthy controls, could be that the SenseWear fuses galvanic skin response (GSR) sensor information, among others, to estimate EE and MET. We speculate that the excess exertion during the prolonged 6MWT could have increased skin conductance due to increased sweating, causing the Metria-IH1's fusion SPA to over-estimate the EE values and thus produce values closer to the Cosmed. It is known that individuals with cervical iSCI are compromised in their autonomic nervous system functioning, which can lead to reflex sweating [44]. A majority (75%) of the iSCI group in this study had injury at the cervical level, where unilateral hyperhidrosis and reflex sweating is a reported phenomenon [45]. Indeed, the literature suggests that higher exertion can lead to higher skin conductance [46]. The group with iSCI reported higher physical exertion during the low and high intensity activities (Table 1).
Multi sit-to-stand
In comparison to the Cosmed EE estimates, no significant differences were observed for the ActiLife EE estimates from the ActiGraphs (Table 2; Fig. 2) (i) at the arm and waist for the healthy group, (ii) at the waist for the stroke groups and (iii) at the arm for the iSCI group. Irrespective of the side of stroke impairment, the waist seems to be a desirable sensor location for estimating EE during the sit-to-stand task in the stroke group. Except for the EE estimates in the iSCI group, the Metria-IH1 underestimated the EE during the multi sit-to-stand task in the stroke and healthy groups. The MET metrics followed a trend similar to the EE for the healthy and stroke groups during the 6MWT and multi sit-to-stand.
These results suggest that (i) the choice of sensor location may depend on the activity type and the outcome metric of interest, and (ii) impairment conditions can significantly impact the outcome metrics estimated by SPAs.
Discussion
This study systematically analyzed the influence of four factors, namely, (i) choice of sensor type (ActiGraph wGT3X-BT using the ActiLife SPA, and Metria-IH1 using the SenseWear fusion-based SPA), (ii) sensor location (ActiGraph wGT3X-BT at the arm, waist and ankle; Metria-IH1 at the arm), (iii) activity characteristics and (iv) population effects (healthy, iSCI (ambulatory), stroke) on the validity of three physical activity outcome metrics estimated by SPAs. Overall, it was found that the physical activity metrics (EE, MET and step count) estimated by SPAs could be significantly influenced by these factors across the spectrum of activity levels studied.
Consistent with previous literature [13], the SPAs from both sensors estimated the step count metric accurately in the control group, irrespective of sensor location (ActiLife) and type. However, our data showed that the SPA estimates for EE and MET significantly diverged from the gold standard estimates at all activity levels. The sensor location, sensor type and activity type seemed to influence the EE and MET estimates provided by SPAs even in the healthy control group. For instance, sensors placed at the arm and waist seemed to estimate EE and MET better during low and high intensity activities than the sensor located at the ankle (Fig. 2(a)). Similar observations in healthy individuals have been reported in the literature [47]. Benchmarking against previous literature supports the consistency of the data collected and analyzed in this study.

Fig. 3 Step count estimates. Estimated step counts during the 50 step walk test from ActiGraphs at the arm, waist and ankle and from the Metria-IH1, compared to the manual count (phone-based manual tally) of 50 steps in the healthy, SCI and stroke groups. * indicates significant differences in estimated step count (*: p < 0.05 (Metria-IH1); p < 0.016 (ActiGraph))
The SPA estimates for the activity data collected from the study groups with iSCI and stroke were mediocre. For instance, in the group with iSCI, irrespective of sensor location and sensor type, the step count estimates were inaccurate. Further, based on the ActiLife SPA estimates for iSCI, irrespective of the sensor location and activity type studied, most of the EE estimates deviated significantly from the estimates produced by the designated gold standards. However, overall, based on a subjective comparison, the SenseWear performed relatively better for estimation in the iSCI and stroke groups [48]. Along the same lines, in comparison to the ActiLife, the fusion algorithm from the SenseWear seemed to perform relatively better for EE and MET estimation during the studied sedentary tasks.
Finally, the trends from the multi sit-to-stand activity showed that the sensor location should be chosen based on the nature of the activity type being studied. For instance, the arm and/or waist seems to be a desirable spot for estimating EE during the sit-to-stand task for the healthy controls and the stroke group. For the iSCI group, sensors located at the arm seemed to capture the EE estimates well.
Overall, there are three possible reasons for the divergences in validity observed when using SPAs based on healthy populations to estimate outcomes from wearable sensors in stroke and iSCI.
Firstly, it is possible that the SPAs estimated the outcome metrics based on movement signatures and acceleration thresholds empirically derived from databases collected from healthy populations. These standard acceleration threshold values from healthy individuals are far higher than those observed in neurological populations [1,2,4,49]. Individuals with iSCI and stroke generally walk at a gait speed much slower than the healthy population [50,51] and also use assistive devices. The average gait speeds during the 6MWT for the different participant groups in our sample were 1.7 (0.23) m/s, 0.5 (0.22) m/s and 0.9 (0.20) m/s for the healthy, iSCI and stroke groups, respectively. Decreased speed and altered gait signatures while performing physical activities change the acceleration profiles relative to the thresholds, leading to underestimates in step counts (an additional file shows this in more detail [see Additional file 2: Figure S1]). Secondly, sensors placed at different body locations (arm, waist and ankle) record different acceleration signatures (cut points) for a given activity type, owing to the dynamic constraints in motion between different body segments and sensor locations [13,52]. Finally, it is reasonable to expect that neurologic impairments like stroke and iSCI alter the metabolic profile; hence, empirically derived EE models based on healthy controls may not work for these populations [11,53-55]. Thus, the choice of sensor type, sensor location and activity characteristics are additional factors that need consideration, on top of population-specific differences, when interpreting the deviation of estimates derived using SPAs.
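To make the first point concrete, the toy sketch below (synthetic signals and a hypothetical cut point; not any manufacturer's algorithm) shows how a fixed threshold derived from the large, fast accelerations of healthy gait can miss an attenuated, slower gait entirely:

```python
# Toy illustration of threshold-based step detection; the signals and the cut
# point are synthetic, not a manufacturer's algorithm.
import numpy as np

def count_excursions(acc: np.ndarray, threshold: float) -> int:
    """Count rising edges of the rectified, mean-removed signal above threshold."""
    sig = np.abs(acc - acc.mean())
    above = sig > threshold
    return int(np.sum(above[1:] & ~above[:-1]))

fs = 30.0                                     # Hz, matching the ActiGraph sampling rate
t = np.arange(0, 30, 1 / fs)
healthy = 1.0 * np.sin(2 * np.pi * 1.8 * t)   # fast gait, large amplitude
slow    = 0.3 * np.sin(2 * np.pi * 0.9 * t)   # slower, attenuated gait

thr = 0.5                                     # hypothetical cut point tuned on healthy data
print(count_excursions(healthy, thr))         # many excursions detected
print(count_excursions(slow, thr))            # 0: slow gait never crosses the cut point
```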
We observed from our results that, for the stroke group, both the ActiLife and Metria-IH1 (SenseWear) SPAs performed relatively well for EE and MET estimation when the sensors were placed on the impaired side as opposed to the unimpaired side (Fig. 2(c), 2(d)). Overall, we also observed that, despite using the standard SPAs, some of the EE and MET estimates turned out to be valid for the groups with stroke and iSCI.
As far as this sample data goes, there are a few possible explanations for the above observations. One possible speculation, from a pathophysiological angle, concerns the asymmetric sweat response that has been reported in individuals with stroke due to compromised functioning of the autonomic nervous system [56-58]. It is possible that increased sweating on the paretic side improved the skin conductance, leading the GSR sensor in the Metria-IH1's (SenseWear) SPA to overestimate the EE values, thus producing values close to the Cosmed. The literature suggests that a higher physical exertion level can lead to higher skin conductance [46]. Indeed, self-reported physical exertion levels were higher for both the stroke and iSCI groups compared to healthy controls (Table 1). We speculate that a similar phenomenon could have led to the overall better EE and MET estimates for the iSCI group while using the SenseWear [44]. It is promising to find support for this observation in the literature [43]. We did not record sweat rate or skin temperature in this study, nor did we have access to the SenseWear's SPA to tease out the weight given to the GSR data in its fusion algorithm. These aspects require more work, and specifically designed studies to understand the influence of such factors are warranted.
We also suspect that, since the participants from the stroke group in our study had mild-to-moderate gait impairments (the mean gait speed was relatively high at 0.9 (0.20) m/s during the 6MWT), the accelerations exceeded the thresholds often enough to create sufficient activity counts for the SPAs to produce better estimates. Similar observations have been noted in the literature [42]. We can only speculate that this is why the ActiLife SPA estimates showed validity for some of the activities in the group with stroke.
Additionally, from a sensor capability standpoint, a potential reason could be that, unlike the ActiLife (the ActiGraph sampled at 30 Hz), the Metria-IH1's SPA estimates EE and MET by fusing information from multiple on-board sensor modules (all sensors together sampling 5,000 data points per minute, of which the accelerometer alone sampled at 32 Hz), namely a tri-axial accelerometer, near-body and skin temperature sensors and galvanic skin response, in addition to customized participant-specific information such as smoking behavior and anthropometrics. Indeed, the literature shows that sensor fusion based approaches yield better measurements of metrics, contingent upon the quality of the sensors [2,48,59-61].
In summary, there were two main findings and recommendations from this pilot investigation: (i) the sensor type, sensor location, activity characteristics and the population studied influence the accuracy of physical activity metrics derived using SPAs; implementing advanced techniques like machine learning and data fusion to create customized population-specific algorithms to estimate physical activity metrics in individuals with neurologic impairments such as iSCI and stroke has the potential to improve reliability and accuracy; and (ii) comprehensive validations including all outcome metrics (EE, MET and step counts) at different activity intensity levels are recommended when validating wearable sensors used in rehabilitation. These findings are in consensus with findings from the literature studying the validity of wearable sensor estimates in different groups [13,14,52,62-65].
Limitations
Despite producing some novel and clinically useful information, this investigation has several limitations. The small sample size limits the generalizability of our findings; however, the sample size in our study is comparable to other studies in this literature [66], and our findings were supported by observations from previous literature. The sedentary activities were recorded only in bouts of 2 min each, and it is not clear whether the same trends would hold for data gathered at a different time scale. To justify our choice of this time scale, however, it is reasonable to assume that there is value in studying such short bouts (< 2 min), as such low/high intensity activities occur throughout the day in community settings. Future studies with a larger sample size, including other types of neurological impairment, are recommended to explore the individual influence of each of these factors on the outcome variables in laboratory as well as free living conditions.
Conclusions
On one hand, the inferences from our results highlight the need for cautious decision making when choosing wearable sensor types and mounting locations for activity measurement in neurologic rehabilitation. On the other hand, implementing customized algorithms using advanced methods like machine learning and data fusion for estimating outcomes from wearables in individuals with neurological impairments like stroke and iSCI could be beneficial [2]. We maintain that incorporating the combined effects of the sensor type used, the placement location, and the activity intensity being studied into the algorithms that estimate outcome metrics from wearable devices may yield reliable physical activity metrics in both in-patient and out-patient environments. | 2023-01-18T14:53:29.158Z | 2018-03-13T00:00:00.000 | {
"year": 2018,
"sha1": "052993cd2e04c0872fb9a96fa08dd279f35402cc",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/s12984-018-0358-y",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "052993cd2e04c0872fb9a96fa08dd279f35402cc",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": []
} |
264036540 | pes2o/s2orc | v3-fos-license | Viral and host proteins involved in picornavirus life cycle
Picornaviruses cause several diseases, not only in humans but also in various animal hosts. For instance, human enteroviruses can cause hand-foot-and-mouth disease, herpangina, myocarditis, acute flaccid paralysis, acute hemorrhagic conjunctivitis, severe neurological complications, including brainstem encephalitis, meningitis and poliomyelitis, and even death. The interaction between the virus and the host is important for viral replication, virulence and pathogenicity. This article reviews studies of the functions of viral and host factors that are involved in the life cycle of picornavirus. The interactions of viral capsid proteins with host cell receptors is discussed first, and the mechanisms by which the viral and host cell factors are involved in viral replication, viral translation and the switch from translation to RNA replication are then addressed. Understanding how cellular proteins interact with viral RNA or viral proteins, as well as the roles of each in viral infection, will provide insights for the design of novel antiviral agents based on these interactions.
Introduction
Picornaviruses are a large family of animal viruses, which are pervasive in nature. Certain members of this family are well known since they importantly affect human health. The family Picornaviridae consists of five genera: enteroviruses, rhinoviruses, cardioviruses, aphthoviruses, and hepatoviruses. Picornaviruses are small icosahedral particles containing a single-stranded plus-sense RNA genome approximately 7,500 nt in length. The genome contains a 3' poly(A) tail of variable length, from 65 to 100 nt. The virion RNA has a virus-encoded peptide, VPg, attached at its 5' terminus, but this protein is rapidly lost in the cell and most of the viral transcripts consequently lack it [1,2]. Picornavirus RNAs lack the cap structure (m7GpppN, where m is a methyl group and N is any nucleotide). The viral RNA encodes a single large polyprotein which undergoes a series of processing events, mediated by virus-encoded proteases, to produce the mature virus proteins (11 mature proteins plus numerous partially processed products, depending on the virus). Four of these proteins (VP1-VP4) constitute the virus capsid, and the others participate in virus replication [3] (Fig. 1).
The infection of cells by enterovirus is an efficient and productive event. To complete the life cycle of the virus, viral proteins are involved in viral replication and translation, in addition to altering host functions, such as cellular gene expression, protein localization, signal transduction and membrane rearrangement. This review focuses on the functions of viral and host factors involved in the life cycle of picornaviruses.
Host factors and capsid proteins involved in receptor binding
The capsid proteins of picornaviruses are encoded by the P1 region of the genome, and the capsid particles comprise 60 copies of four P1-encoded polypeptides, VP1 to VP4. The first three viral proteins (VP1-VP3) reside on the outer surface of the virus, and the shorter VP4 is located completely on the inner surface of the capsid. The capsid proteins mediate the initiation of infection by binding to a receptor on the host membrane. Many picornaviruses have similar receptor molecules that are members of the immunoglobulin superfamily (IgSF), whose extracellular regions comprise two to five amino-terminal immunoglobulin-like domains. For example, the poliovirus receptor (PVR, CD155) contains three amino-terminal domains [4]. Additionally, the coxsackievirus B1-B6 receptor (coxsackievirus-adenovirus receptor, CAR) [5], and the receptor for coxsackievirus A21 and major-group human rhinoviruses (intercellular adhesion molecule-1, ICAM-1), have two and five amino-terminal domains, respectively [6,7]. In all of these receptors, the amino-terminal domain, D1, is involved in binding to the conserved amino acid residues of the picornavirus canyon, which can trigger viral instability and uncoating. Therefore, a single receptor suffices for virus entry, especially for poliovirus and rhinovirus [8]. However, some viruses can use non-IgSF cell surface receptors that bind outside of the canyon. For example, the low-density lipoprotein receptor (LDL-R) is used by the minor-group rhinoviruses [9], and the decay-accelerating factor (DAF or CD55) binds to some echoviruses and group B coxsackieviruses [5,10]. Human P-selectin glycoprotein ligand-1 and scavenger receptor B2 are cellular receptors for enterovirus 71 [11,12]. Furthermore, foot-and-mouth disease viruses can attach to integrin αvβ3 or heparan sulfate, depending on the cell line and the virus isolate [13,14]. These interactions, however, do not cause viral instability or uncoating, but probably result in the aggregation of other receptors, or trigger the subsequent endocytosis. For example, coxsackievirus B3 (CVB3) recruits CAR to the site of infection by binding to a second receptor, DAF, that is expressed in epithelial cells [5]. Also, coxsackievirus A21 binds DAF for entry into cells only in the presence of a coreceptor, ICAM-1 [6]. Indeed, receptor recognition is important in determining cell tropism and host range. However, the interaction of capsid proteins with intracellular host factors is also significant. For example, VP2 of CVB3, but not VP2 of other picornaviruses, may specifically bind to the proapoptotic protein Siva, and affect the induction of apoptosis, viral spread, and the pathological process of CVB3-caused disease [15]. As will be discussed below, factors other than the virus-receptor interaction, including cellular factors (such as polypyrimidine tract-binding protein, PTB) and viral genome elements (such as the internal ribosome entry site, IRES), interact with the 5' untranslated region (5' UTR) and thus influence the efficiency of translation initiation and virus replication.
Fig. 1 Schematic of the enterovirus genome, the polyprotein products and their major functions
Infection by poliovirus and rhinovirus involves the accumulation of nuclear proteins in the cytoplasm. Several components of the nuclear pore complex are degraded in infected cells, and the 2A protease has been suggested to be a factor that blocks nucleo-cytoplasmic trafficking [45-48].
2B/2BC
Viral protein 2B and its precursor 2BC have been suggested to be responsible for membrane alterations in infected cells [56-60]. The cellular COPII proteins have reportedly been used in the virus-induced production of vesicles [61]. 2B and its precursor 2BC contain two hydrophobic regions, including an amphipathic α-helix domain, which is important for multimerization, integration into the membranes of the host Golgi and ER complex, production of virus-induced vesicles, and formation of the viroporin complex [60,62-64]. The accumulation of 2B or 2BC proteins on the Golgi changes the permeability of the plasma membrane [56,62] and causes the disassembly of the Golgi complex [65], leading to cell lysis [57]. The membrane-integrated 2B/2BC complex also reduces the Ca2+ level in the ER and Golgi complex by increasing the efflux of Ca2+ [57]. The disruption of Ca2+ homeostasis by 2B/2BC explains why the transport of protein from the ER to the Golgi is blocked [58,59,66,67]. The 2B-induced intracellular Ca2+ imbalance is also related to its anti-apoptotic property [68]. The hepatitis A virus 2B protein can reportedly inhibit cellular IFN-β gene transcription by blocking the activation of interferon regulatory factor 3 (IRF-3), which has been suggested to be crucial for the survival of the virus [69].
3A
Protein 3A, a membrane-binding protein, plays a role in inhibiting cellular protein secretion and the presentation of membrane proteins during viral infection. The expression of the poliovirus 3A protein in cells disrupts ER-to-Golgi trafficking, which is also observed in poliovirus 2B-expressing cells [58,65,70]. Moreover, the interference with protein trafficking by 3A is caused by the redistribution of the ADP-ribosylation factor (Arf) family, which are important components of the membrane secretion pathway [71]. The cycling of Arf proteins between active GTP-bound and inactive GDP-bound forms is mediated by guanine nucleotide exchange factors (GEFs) and Arf GTPase-activating proteins (GAPs) [72]. Brefeldin A (BFA), a fungal metabolite, blocks protein trafficking from the ER to the Golgi in cells by inhibiting the regeneration of Arf-GTP from Arf-GDP [73]. BFA can also inhibit the replication of poliovirus, implying the participation of Arf proteins in viral RNA replication [74-76]. During poliovirus infection, the Arf family is involved in vesicle formation from various intracellular sites through interactions with numerous regulatory and coat proteins, and translocates to the site of viral RNA replication [71]. Two individual viral proteins, 3A and 3CD, can recruit Arfs to membranes via different mechanisms [77,78]. The expression of 3A results in the recruitment of Arfs to membranes by specifically recruiting the cellular GEF, Golgi-specific brefeldin A resistance factor 1 (GBF1). In contrast, synthesis of 3CD causes other GEFs, brefeldin A-inhibited guanine nucleotide exchange factor 1 (BIG1) and BIG2, to associate with membranes [77].
3AB
According to biochemical data, 3AB is a multifunctional protein. The hydrophobic domain in the 3A portion of the protein associates with membrane vesicles [79,80]; this interaction is believed to anchor the replication complex to the virus-induced vesicles. Recombinant 3AB interacts with poliovirus 3D and 3CD in vitro [81]. The membrane-associated 3AB protein binds directly to the polymerase precursor 3CD on the cloverleaf RNA of the poliovirus, stimulating the protease activity of 3CD, and may serve as an anchor for 3D polymerase in the RNA replication complexes [82]. Adding 3AB stimulated the activity of poliovirus 3D polymerase in vitro [83].
Furthermore, 3AB has been demonstrated to function as a substrate for 3D polymerase in VPg uridylylation [84]. The 3AB protein, rather than 3B (the mature VPg), has been proposed to be delivered to the replication complexes for VPg uridylylation. Poliovirus 3AB exhibits other functions, such as helix destabilization, indicating that 3AB has nucleic acid chaperone activity, destabilizing RNA secondary structures and enhancing the hybridization of complementary nucleic acids during viral replication [85].
3B
The enteroviral and rhinoviral 3B proteins (VPg) are small peptides of 21 to 23 amino acids that are covalently linked to the 5' termini of the picornavirus genome via a tyrosyl-uridine bond at the conserved tyrosine residue of VPg. VPg has been shown to interact with poliovirus 3D polymerase, which incorporates UMP into VPg, yielding VPgpU and VPgpUpU [86]. These products are observed both in poliovirus-infected cells and in crude replication complex extracts [87]. The uridylylated VPg is utilized as a primer in both positive- and negative-strand RNA synthesis [88].
3CD
The 3CD protein, the precursor of the mature 3C protease and 3D polymerase, exhibits protease activity but no polymerase activity [89]. 3CD is capable of processing the poliovirus P1 precursor region [90]. Poliovirus 3CD contributes to viral RNA replication by circularization of the viral genome via interaction with both the 5' and 3' ends of viral RNA [91]. The cellular poly(rC)-binding proteins (PCBPs), which are involved in viral IRES-driven translation, have also been identified in the ribonucleoprotein complex that contains 3CD and the stem-loop I structure at the 5' end of the poliovirus genome [92,93]. Mammalian cells contain four PCBP isoforms (PCBP1-4), but only PCBP1 and PCBP2 have been found to be involved in enterovirus replication [92,93]. PCBP1 and PCBP2 are KH-domain RNA-binding proteins involved in the metabolism of cellular mRNAs in normal cells. PCBP2 binds to both the poliovirus stem-loop I and the IRES, whereas PCBP1 has binding affinity only for stem-loop I [94,95]. The addition of recombinant PCBP1 rescues viral RNA replication in PCBP-depleted extracts, but does not rescue viral translation [95]. Another cellular protein, heterogeneous nuclear ribonucleoprotein K (hnRNP K), interacts with stem-loops I-II and IV of the EV71 5' UTR. During EV71 infection, hnRNP K was enriched in the cytoplasm, where virus replication occurs, whereas hnRNP K was localized in the nucleus in mock-infected cells. Viral yields were significantly lower and viral RNA synthesis was delayed in hnRNP K knockdown cells in comparison with negative-control cells treated with small interfering RNA [96]. Moreover, 3CD has been shown, using a pull-down assay, to interact with heterogeneous nuclear ribonucleoprotein C (hnRNP C) [97]. hnRNP C participates in pre-mRNA processing in normal cells. A mutant form of hnRNP C that is defective in protein-protein interaction inhibits the synthesis of viral positive-strand RNA, implying the participation of hnRNP C in RNA replication [97]. The interactions of 3CD with these cellular proteins, PCBP and hnRNP C, and with the viral protein 3AB, together with the stem-loop I structure of poliovirus, form important complexes in viral RNA replication [98]. Several other cellular proteins have been reported to interact with 3CD. For example, the eukaryotic elongation factor EF-1α, one such cellular cofactor, can interact with the poliovirus 3CD-stem-loop I complex [91]. The transcription factor OCT-1 and the nucleolar chaperone B23 have also been identified as co-localizing in nuclei with HRV-16 3CD during virus infection [19]. Additionally, the recombinant mature HRV-16 3C can cleave OCT-1 in vitro; the mature 3C released from the precursor 3CD may play a role in shutting off host cell transcription in nuclei. As well as exhibiting protease activity, 3CD interacts with viral RNA structures: the stem-loop I, the 3' UTR, and the cis-acting replication element (cre) motif of poliovirus RNA, which is the template for VPg uridylylation [91-93,99,100].
3D
The viral RNA-dependent RNA polymerase 3D is one of the major components of the viral RNA replication complex. Purified poliovirus 3D polymerase from the complex exhibits elongation activity [101]. 3D polymerase can also uridylylate VPg and use VPg-pUpU as a primer during viral RNA replication [86,102]. Polymerase-polymerase interaction of poliovirus has been observed in biochemical and crystal structure studies [81,103,104], and polymerase oligomerization has been proposed to be responsible for efficient template utilization. The host protein Sam68 was identified as interacting with poliovirus 3D using a yeast two-hybrid system [105]; this interaction has also been observed in poliovirus-infected cells. Sam68, an RNA-binding protein, mediates alternative splicing in cells in response to extracellular signals. The details of the functions of Sam68 in viral RNA replication remain to be demonstrated.
cis elements involved in RNA replication
The RNA secondary structures in the viral genome play important roles in the replication of viral RNA. The cis elements comprise stem-loop I at the 5' terminus of the 5' UTR, and the 3' UTR and poly(A) tail at the 3' terminus of the enterovirus RNA. The circularization of the poliovirus template between the 5' and 3' termini of the viral genome is crucial during the initiation of both positive- and negative-strand RNA replication [106,107]. 3CD and PCBP2 on stem-loop I at the 5' terminus of the 5' UTR, together with poly(A)-binding protein (PABP) and 3CD on the 3' terminus of the genome, are involved in the circularization of the RNA genome for the initiation of negative-strand RNA synthesis [106]. The enterovirus 3' UTR serves as the initiation point of negative-strand RNA synthesis. Both 3CD and 3D can interact with the 3' UTR element [91]. The binding of 3CD and 3AB to the 3' UTR does not depend on interaction with host proteins and suffices for viral RNA replication in vitro [91]. Moreover, many studies have reported that host proteins can bind to the 3' UTR of rhinovirus and enterovirus [91,108,109]. Nucleolin, a nuclear factor that accumulates in the cytoplasm of poliovirus-infected cells, interacts strongly with the intact 3' UTR of poliovirus in vitro [110]. The immunodepletion of nucleolin from cell-free extract reduced virus reproduction, indicating that nucleolin may be involved in viral RNA replication. The 3' stem-loop I of the negative-strand RNA is the initiation site of positive-strand RNA synthesis. Positive-strand RNA synthesis is initiated by the recruitment of uridylylated VPg-containing replication complexes close to the 3' stem-loop I of the negative strand. Viral protein 2C has been reported to interact directly with the 3' stem-loop I of the negative strand [111]. The cellular protein hnRNP C, which specifically interacts with either the 3' end of the poliovirus negative-strand RNA or the protein 3CD, is involved in positive-strand RNA synthesis, probably at the initiation step [97]. Some cellular proteins, such as La, can interact with both the 3' and 5' UTRs of CVB3 independently of the poly(A) tail, and may play a role in mediating cross-talk between the 5' and 3' ends of CVB3 RNA for viral RNA replication [112].
Synthesis of the uridylylated VPg is the first step of viral RNA synthesis. Efficient uridylylation of VPg requires 3D polymerase, the 3CD protein, UTP, and the cre motif from the viral genome as the template. 3CD has been shown to stimulate cre-mediated VPg uridylylation [102]. Moreover, the 3AB protein is regarded as the precursor of VPg (3B) that is delivered to the RNA replication complex for VPg uridylylation [84]. The cre motif has been identified in different regions of the picornavirus genome: the cre structures are located in the 2C-encoding region of poliovirus, the capsid-encoding region of human rhinovirus 14 and cardiovirus, the 2A-encoding region of human rhinovirus 2, the 5' noncoding region of the foot-and-mouth disease virus, and the 3D-encoding region of the hepatitis A virus [113-118].
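The scattered cre locations above are easy to lose track of, so the following snippet simply restates them as a lookup table. It is added purely for illustration; the dictionary keys and function name are choices made for this sketch, not identifiers from any library, and the mapping is taken verbatim from the text above.

```python
# cre (cis-acting replication element) locations per virus, as listed in
# the preceding paragraph [113-118].
CRE_LOCATION = {
    "poliovirus": "2C-encoding region",
    "human rhinovirus 14": "capsid-encoding region",
    "cardiovirus": "capsid-encoding region",
    "human rhinovirus 2": "2A-encoding region",
    "foot-and-mouth disease virus": "5' noncoding region",
    "hepatitis A virus": "3D-encoding region",
}

def cre_region(virus: str) -> str:
    """Return the genomic region carrying the cre motif for a given virus."""
    return CRE_LOCATION.get(virus.lower(), "unknown")

print(cre_region("Poliovirus"))  # -> "2C-encoding region"
```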
Switch from translation to RNA replication
The positive-stranded RNA viruses use the same RNA as a template for both translation and replication. Ribosomes move from the 5' end to the 3' end of the RNA during translation, whereas the RNA polymerase binds to the 3' end of the same RNA to initiate replication. In vitro experiments have demonstrated that freezing ribosomes on translated RNAs with cycloheximide inhibits RNA replication, whereas releasing the ribosomes with puromycin allows RNA replication to proceed [119]. Because these two events cannot occur at the same time, the balance between translation and replication is important.
The poliovirus genome contains a conserved 5' UTR, which is important for translation and RNA replication [120]. Gamarnik and Andino have suggested that the binding of 3CD to the cloverleaf at the 5' end of the viral genome promotes the replication of RNA rather than its translation [121]. Furthermore, they found that PCBP2 binds to stem-loop IV during poliovirus translation. When newly translated 3CD binds to stem-loop I RNA, the affinity of PCBP2 for stem-loop I is increased; 3CD thereby induces the dissociation of PCBP2 from stem-loop IV, and viral translation is reduced [98].
Semler et al. found that the proteases 3C/3CD cleave PCBP1 and PCBP2 during the mid-to-late phase of poliovirus infection. The primary cleavage site is between the KH2 and KH3 domains. The cleaved PCBP2 cannot bind to stem-loop IV and loses its function in translation; however, it still binds to stem-loop I and mediates the replication of viral RNA. PCBP2 can therefore mediate the switch from viral translation to RNA replication [122].
Host factors and viral proteins involved in picornaviral IRES-mediated translation
Most picornaviral IRESs are divided into four classes based on homology, secondary structure, and other properties (Fig. 2). The requirement of picornaviral IRESs for canonical translation factors contrasts with that of the IRES elements of other viruses, such as hepatitis C virus and the cricket paralysis virus, which require few or no translation factors to bind to the ribosome [125].
Noncanonical translation factors
Polypyrimidine tract-binding protein (PTB)
PTB (also known as p57 and hnRNP I) is a member of the hnRNP family and shuttles between the nucleus and the cytoplasm in a transcription-sensitive manner [126]. PTB is a 57 kDa mRNA splicing factor with four RNA recognition motifs (RRMs). PTB was originally identified as a protein that binds to the polypyrimidine tracts (Py tracts) of adenoviral major-late and α-tropomyosin pre-mRNAs and was proposed to be a splicing factor [127]. The binding of PTB to the Py tract close to the branch point of an intron has been demonstrated to modulate the alternative splicing of certain pre-mRNAs [128]. Independently, PTB was shown to interact specifically with the IRESs of numerous picornaviruses [129,130], including poliovirus [131,132], FMDV [133], EMCV [134], Theiler's murine encephalomyelitis virus (TMEV) [135], and HAV [136,137].
Experiments on the depletion and repletion of PTB from rabbit reticulocyte lysate (RRL) have revealed that the efficient translation of FMDV mRNA and a mutant EMCV mRNA depends on PTB [133,134]. Intriguingly, PTB is required for the translation of a mutant EMCV mRNA, but not wild-type EMCV [138], suggesting that PTB is involved in maintaining the proper conformation of the mutant EMCV IRES [138]. Supplementation of RRL with PTB enhances the translation of polioviral mRNA [139,140]. The immunodepletion of PTB from HeLa cell lysate inhibited poliovirus IRES-dependent translation.
Repletion of purified PTB to the immunodepleted lysate did not restore poliovirus IRES-dependent translation. This investigation suggested that the depletion process may have removed unidentified translation factors [132]. The effect of PTB on poliovirus IRES-dependent translation was examined using artificial dicistronic mRNAs that contain the PTB gene as the first cistron, the poliovirus IRES in the intercistronic region, and the chloramphenicol acetyltransferase (CAT) reporter gene as the second cistron. When PTB was transfected into HeLa cells, which contain a limited amount of endogenous PTB, its additional expression increased the activity of poliovirus IRES by 2.5-fold [141]. These results suggest that PTB is required for, or at least promotes, the IRES activity of picornavirus.
Poliovirus protein, 3C pro (and/or 3CD pro ), cleaves PTB isoforms (PTB1, PTB2, and PTB4). PTBs contain three 3C pro target sites (one major target site and two minor target sites). PTB fragments that are produced by PV infection are redistributed to the cytoplasm from the nucleus, where most of the intact PTBs are localized. Additionally, these PTB fragments inhibit poliovirus IRES-dependent translation in a cell-based assay system. The authors posit that the proteolytic cleavage of PTBs may contribute to the molecular switching from translation to the replication of polioviral RNA [142].
The attenuated type 3 Sabin and the virulent type 3 Leon poliovirus strains translate equally well in HeLa cells, but translation of the attenuated Sabin virus is restricted in neuroblastoma cells. The C472-to-U mutation in the IRES causes this translation defect. Comparison of the IRESs of the Leon and Sabin serotype 3 polioviruses revealed that PTB and a novel neural-cell-specific homologue of PTB (nPTB) bind adjacent to the attenuating mutation in domain V, but binding is less efficient on the Sabin IRES. The Sabin IRES was demonstrated to have a translation defect in chicken neurons that can be rescued by increased PTB expression in the CNS [143].
PTB has also been shown, most clearly for the type II IRESs such as those of EMCV and FMDV, to function as an RNA chaperone that stabilizes the IRES in an active conformation [144,145].
Lupus autoantigen (La)
Lupus autoantigen (La) is a 52 kDa, predominantly nuclear protein. It is a target of autoimmune recognition in patients with systemic lupus erythematosus and Sjögren's syndrome. The majority of La is localized in the nucleus, where it is involved in the stabilization of nascent RNAs, the nuclear retention of nascent transcripts, and RNA polymerase III transcription termination. During PV infection, La is redistributed to the cytoplasm by 3 h postinfection (p.i.) [146,147]. This redistribution coincides with the appearance of 3C pro in infected cells and is caused by a 3C pro-mediated cleavage event that removes a nuclear localization signal from the C-terminus of La but retains the dimerization domain. The truncated La can still stimulate translation effectively and is relocalized to the cytoplasm during viral protein synthesis [147]. Depletion of La from cells by small interfering RNA reduced IRES translation. Similarly, a dominant-negative mutant of La inhibited 40S ribosomal binding by the PV IRES in vitro [148]. La protein also binds to the CVB3 IRES and stimulates viral translation in a dose-dependent manner in rabbit reticulocyte lysate [149,150]. La protein further binds specifically to distinct parts of the HAV IRES; its role in HAV IRES-mediated translation and replication has been demonstrated by small interfering RNA knockdown in vivo and with purified La protein in vitro [151].
Poly(rC)-binding protein 1, 2 (PCBP1, 2)
PCBPs are RNA-binding proteins that preferentially bind to single-stranded stretches of cytidines. Mammalian cells contain four isoforms, PCBP1-4, but only PCBP1 and PCBP2 have been experimentally demonstrated to have roles in the life cycles of enteroviruses [93]. PCBP2 is a factor required for poliovirus translation and was discovered through its interaction with stem-loop IV of the poliovirus IRES [94]. Depletion and repletion studies of PCBP2 in HeLa cell-free extracts using a stem-loop RNA affinity column revealed that PCBP2 is required for PV IRES-dependent translation [152]. Moreover, an oligo-DNA with high affinity for PCBP1 and PCBP2 was recently used to show that both PCBP1 and PCBP2 function as ITAFs of the PV IRES [153]. PCBP2 also binds to the HAV 5' UTR and stimulates viral translation [154].
The nucleocytoplasmic SR protein SRp20 interacts with PCBP2 and is involved in the internal ribosome entry site-mediated translation of viral RNA. Depletion and in vitro translation studies with HeLa cell-free extracts have shown that SRp20 is required for the initiation of PV translation. Targeting SRp20 in HeLa cells with short interfering RNAs inhibited the expression of SRp20 protein and correspondingly reduced PV translation [155].
In vitro translation reactions were performed in HeLa cell cytoplasmic translation extracts from which the cellular protein PCBP2 had been depleted [156]. Upon depletion of PCBP2, these extracts exhibited a significantly reduced capacity to translate reporter RNAs in which the type I IRES elements of poliovirus, coxsackievirus, or human rhinovirus were linked to luciferase; adding recombinant PCBP2 protein restored translation. RNA electrophoretic mobility-shift analysis demonstrated specific interactions between PCBP2 and both type I and type II picornavirus IRES elements; however, the translation of reporter RNAs containing the type II IRES elements of encephalomyocarditis virus and foot-and-mouth disease virus did not depend on PCBP2. These data indicate that PCBP2 is essential for the internal initiation of translation on picornavirus type I IRES elements, but not on the structurally distinct type II elements [156].
Upstream of N-ras (Unr)
Upstream of N-ras (Unr) is a cytoplasmic protein that contains five cold-shock domains. A depletion study of Unr from reticulocyte lysate revealed that Unr was required for the translation of rhinovirus IRES [139]. Both HRV and PV IRES translation was severely impaired in unr(-/-) murine embryonic stem cells. Translation was restored by the transient expression of Unr in unr(-/-) cells [157].
Heterogeneous nuclear ribonucleoprotein A1 (hnRNP A1)
hnRNP A1, an RNA-binding protein that shuttles between the nucleus and the cytoplasm, is a member of a large group of RNA binding proteins (hnRNPs) which are classified into several families and subfamilies based on conserved structural and functional motifs. The hnRNP A1 protein is composed of 320 amino acids; it contains two RNA-binding domains and a glycine-rich domain, which is responsible for protein-protein interaction.
hnRNP A1 is an internal ribosome entry site (IRES) trans-acting factor that binds specifically to the 5' UTR of HRV2 and regulates its translation. Furthermore, the cytoplasmic redistribution of hnRNP A1 after rhinovirus infection enhances rhinovirus IRES-mediated translation [158]. RNA-protein pull-down, reporter, and viral RNA synthesis assays have revealed that hnRNP A1 also interacts with the 5' UTR of enterovirus 71 (EV71) and regulates viral replication [159].
ITAF45
ITAF45, also known as erbB3-binding protein 1 (Ebp1) or p38-2G4, is a proliferation-dependent protein that is distributed throughout the cytoplasm from metaphase to telophase. Because ITAF45 is undetectable in murine brain cells, it may function as a tissue-specific factor that controls the translation of particular mRNAs. Initiation on the TMEV IRES depends strongly on PTB, whereas initiation on the FMDV IRES depends on both PTB and ITAF45. ITAF45 binds specifically to a central domain of the FMDV IRES and acts synergistically with PTB to promote the binding of eIF4F to an adjacent domain [124].
Nucleolin/C23
Nucleolin/C23 is a 110 kDa nucleolar RNA-binding protein that contains four RNA-binding motifs [160]. Nucleolin is an abundant protein of the nucleolus and participates in rDNA transcription, rRNA maturation, ribosome assembly, and nucleocytoplasmic transport [161]. Nucleolin/C23 has been shown to translocate into the cytoplasm following the infection of cells with PV [110]. It stimulates PV IRES-mediated translation in vitro and rhinovirus IRES-mediated translation in vivo. Nucleolin/C23 mutants that contain the carboxy-terminal RNA-binding domains but lack the amino-terminal domains act as dominant-negative mutants in in vitro translation assays. The translation-inhibitory activity of these mutants is related to their capacity to bind the 5' UTR sequence [162].
dsRNA binding protein 76:NF45 heterodimer
The double-stranded RNA-binding protein 76 (DRBP76) contains two dsRNA-binding motifs and is nearly identical to M-phase phosphoprotein 4, NF90, translation control protein 80 (TCP80), and nuclear factor associated with dsRNA. DRBP76 has been found to bind to the HRV2 IRES in neuronal cells and to inhibit PV-RIPO translation and propagation [163]. Size-exclusion chromatography indicates that DRBP76 heterodimerizes with the 45 kDa nuclear factor of activated T cells (NF45) in neuronal but not in glioma cells. The DRBP76:NF45 heterodimer binds to the HRV2 IRES in neuronal but not in glioma cells. Ribosomal profile analyses have demonstrated that the heterodimer preferentially associates with the translation apparatus in neuronal cells and arrests translation at the HRV2 IRES, preventing the assembly of PV-RIPO RNA into polysomes [164].
Far upstream element binding protein 2 (FBP2)
The far upstream element binding protein 2 (FBP2), also known as the K homology (KH)-type splicing regulatory protein (KSRP), was originally identified as a component of a protein complex that assembles on an intronic c-src neuronal-specific splicing enhancer, and as an important adenosine-uridine element binding protein (ARE-BP) that interacts with several AREs [165,166]. FBP2 is required for the rapid decay of several ARE-containing mRNAs both in vitro and in vivo. It contains four contiguous KH motifs that recognize the ARE, interact with the exosome and the poly(A) ribonuclease (PARN), and promote the rapid decay of ARE-containing RNAs [167].
Biotinylated RNA-affinity chromatography and proteomic approaches were utilized to identify FBP2 as an ITAF for EV71. During EV71 infection, FBP2 was enriched in the cytoplasm, where viral replication occurs, whereas in mock-infected cells FBP2 was localized in the nucleus. The synthesis of viral proteins was promoted in FBP2-knockdown cells infected with EV71, and IRES activity in FBP2-knockdown cells exceeded that in negative-control (NC) siRNA-treated cells. Conversely, IRES activity decreased when FBP2 was over-expressed in the cells. These results suggest that FBP2 is a novel ITAF that interacts with the EV71 IRES and negatively regulates viral translation [168].
Compelling evidence indicates that cellular RNA-binding proteins (ITAFs) play important roles in translation from a variety of IRES elements, and two general mechanisms have been proposed. First, an RNA-binding protein may recruit the translational machinery via a protein-protein or protein-RNA interaction; such a protein binds directly to the ribosomal subunit, to canonical translation factors, or to a putative mediator protein that connects other RNA-binding proteins with the basal translational machinery. Second, RNA-binding proteins may serve as 'clamping proteins' that hold various parts of the IRES RNA in a particular configuration; components of the translational machinery may then bind exclusively to the RNA portion of the RNA-protein complex that is maintained by these clamping proteins.
Concluding remarks
Picornaviruses use multiple RNA-protein interactions to mediate important reactions in their life cycle, including IRES-mediated translation, possible circularization of the genome, and RNA replication. The molecular mechanisms by which host factors participate in RNA-protein and protein-protein interactions have been intensively studied; however, tissue-specific viral virulence remains unclear and demands further investigation. No information is available on whether picornaviruses can be targeted by cellular microRNAs, leading to transcriptional or translational silencing, or to RNAi-mediated degradation of viral RNA; this is another field that needs further study.
The Effect of Chin-cup Therapy in Class III Malocclusion: A Systematic Review
Background:
The treatment of Class III malocclusion has been challenging for orthodontists. Among a plethora of treatment modalities, the chin-cup is considered a traditional appliance for early orthopedic intervention.
Objective:
The present study aims to investigate the current scientific evidence regarding the effectiveness of chin-cup therapy in growing patients with prognathic Class III malocclusion.
Method:
A systematic review of the literature was conducted using PubMed/Medline and the Cochrane Central Register of Controlled Trials from January 1954 to October 2015. Articles were selected based on established inclusion/exclusion criteria.
Results:
The search strategy resulted in 3285 articles. Fourteen studies were selected for the final analysis; all were CCTs, 13 of retrospective and 1 of prospective design. Methodological quality was evaluated by a risk of bias assessment, as suggested by the Cochrane Risk of Bias Assessment Tool for Non-Randomized Studies on Interventions. The reported evidence presented favorable short-term outcomes in both hard and soft tissues, improving the Class III profile, as well as desirable dento-alveolar changes that positively affected the Class III malocclusion.
Conclusion:
There is considerable agreement between studies that chin-cup therapy can be considered for the short-term treatment of growing patients with Class III malocclusion, as indicated by favorable changes both in the hard and soft tissues. The existence of considerable risk of bias in all selected studies and the unclear long-term effectiveness of chin-cup therapy highlight the need for further investigation to draw reliable conclusions.
INTRODUCTION
Skeletal Class III malocclusion is clinically presented as a result of maxillary retrusion, mandibular protrusion, or a combination of both.
Selection Criteria
Articles selected for this study fulfilled the criteria for inclusion (Table 2). The criteria included randomized clinical trials (RCTs) and prospective and retrospective clinical trials (CCTs) with untreated control groups. The retrieved studies had to use cephalometrics to analyze the effects of chin-cup therapy contrasted with untreated Class III control groups. Table 2 also presents the exclusion criteria in detail.
Data Extraction
Two independent reviewers (SM, EF) assessed the articles individually using predefined data extraction forms. Reviewers were not blinded to the authors during data extraction, and any inter-examiner conflicts were resolved by discussion with a third reviewer (IT). The same reviewers performed the risk of bias assessment of the articles, with one author (AT) acting as the coordinator.
Quality Analysis
For the qualitative evaluation of the retrieved studies, the risk of bias was assessed independently by two reviewers (SM, EF). The assessment was based on A Cochrane Risk of Bias Assessment Tool for Non-Randomized Studies on Interventions (ACROBAT-NRSI) [13]. This tool addresses seven domains of bias: bias due to confounding, bias in selection of participants into the study, bias in measurement of interventions, bias due to departures from intended intervention, bias due to missing data, bias in measurement of outcomes, and bias in selection of the reported result.
Important confounders with regard to chin-cup therapy were defined as those that could have an impact on the reported results. Thus, the following confounders were taken into account both for patients and for controls: ethnicity (as Asian populations have a higher prevalence of Class III malocclusion) [1,7,14-16], age in relation to the skeletal maturity stage, pre-treatment skeletal class of malocclusion (when it was not skeletal Class III for both patients and controls), individual variation in soft-tissue profile thickness and tension, and pre-treatment overjet. Moreover, the use of additional appliances, such as an occlusal bite plate, a quad-helix appliance, or a lingual arch, and the magnitude of the chin-cup traction force were considered co-interventions.
Three different outcomes were investigated: the skeletal, dento-alveolar, and soft-tissue effects of chin-cup therapy in Class III malocclusion, both in the short and in the long term. For every outcome of each study, an initial risk of bias was assessed for every domain, as indicated by the ACROBAT-NRSI [13]. Because the same issues applied to all outcomes, a grouped assessment was made. Finally, an overall risk of bias judgement for each study was reached.
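To make the domain-by-domain judgement concrete, the sketch below encodes the seven ACROBAT-NRSI domains and derives an overall rating per study. The convention that the overall judgement equals the most severe domain rating follows the related ROBINS-I guidance and is an assumption here, as are all identifiers in the snippet; this is an illustration, not the authors' actual procedure.

```python
# Hypothetical sketch of an ACROBAT-NRSI-style per-study judgement.
SEVERITY = {"low": 0, "moderate": 1, "serious": 2, "critical": 3}

DOMAINS = [
    "confounding",
    "selection of participants",
    "measurement of interventions",
    "departures from intended intervention",
    "missing data",
    "measurement of outcomes",
    "selection of the reported result",
]

def overall_judgement(ratings: dict) -> str:
    """Overall risk of bias, assumed equal to the most severe domain rating."""
    missing = [d for d in DOMAINS if d not in ratings]
    if missing:
        raise ValueError(f"unrated domains: {missing}")
    return max(ratings.values(), key=SEVERITY.__getitem__)

# Example: a study rated 'serious' only for participant selection -- as was
# the case for all 14 included studies -- is 'serious' overall.
example = {d: "moderate" for d in DOMAINS}
example["selection of participants"] = "serious"
assert overall_judgement(example) == "serious"
```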
RESULTS
Our search strategy resulted in 3285 articles. After selection according to the inclusion/exclusion criteria (Table 2), 46 studies were gathered and read in full text. Finally, 14 studies were retained for the final analysis. The remaining 32 articles were excluded mainly because there were no untreated controls or the controls were not skeletal Class III patients, because treatment was combined with extractions or surgery, or because their objective was outside the scope of the present study. Table 3 summarizes the data of the 14 included studies.
Clinical heterogeneity among studies (different outcome assessment, variable age of patients, and different follow-up duration) and the generally high risk of bias precluded a quantitative synthesis of results in a meta-analysis.
Skeletal Effects
The majority of the studies showed a general improvement of skeletal Class III malocclusion, through increased ANB [17-22] and Wits appraisal [17,22] and decreased SNB [17-22] and SNPg [23]. Moreover, the anterior facial height [17,18,22,24], the mandibular plane angle (SN-MP) [18-20,22], and the FMA [23] were significantly increased, whereas the gonial angle [20,23,25,26] was significantly decreased, indicating a tendency towards a backward and downward rotation of the mandible induced by the chin-cup. Furthermore, restraint of mandibular length was pointed out in five studies [23,25-28] through significant decreases in mandibular body length [23,27] and total mandibular length [23,26-28] and anteroposterior compression of the distance between the condyle and the coronoid process [25]. Significant reduction of the ramus height was also noted [22,23,25,27]. With regard to the skeletal changes in the cranial base and the midface, two studies [8,27] reported significant closure of the cranial flexure angle (N-S-Ba), indicating inhibition of the downward vertical growth of the midface [8] and less downward mandibular displacement relative to the cranial base [27].
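Because the skeletal argument above leans on ANB and SNB, a small worked example may help: by construction ANB = SNA − SNB, so a chin-cup-induced decrease in SNB mechanically increases ANB. The sketch below is illustrative only; the class cut-offs are commonly cited approximations (population norms vary) and the patient values are invented, not data from this review.

```python
def anb(sna_deg: float, snb_deg: float) -> float:
    """ANB angle; geometrically, ANB = SNA - SNB."""
    return sna_deg - snb_deg

def skeletal_class(anb_deg: float) -> str:
    # Approximate, commonly cited cut-offs:
    # ANB < 0 deg -> Class III; 0-4 deg -> Class I; > 4 deg -> Class II.
    if anb_deg < 0.0:
        return "Class III"
    return "Class II" if anb_deg > 4.0 else "Class I"

# A hypothetical patient: SNA stable at 81 deg, SNB reduced from 83 to 80 deg
# by backward mandibular rotation -> ANB moves from -2 deg to +1 deg.
print(skeletal_class(anb(81.0, 83.0)))  # Class III before treatment
print(skeletal_class(anb(81.0, 80.0)))  # Class I after treatment
```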
[Fragments of Table 3 (study characteristics and outcomes) were flattened into the text here. The recoverable entries include: Alarcón et al. — lateral cephalometric radiographs with geometric morphometric analysis, showing a more rectangular corpus-ramus relationship, anteroposterior compression of the condyle-coronoid distance, gonial-area compression, decreased gonial angle, and symphysis narrowing; Arman et al. — soft-tissue changes accompanying the skeletal and dento-alveolar changes, with short-term soft-tissue profile improvement apart from a correlation of pogonion retrusion with a reduced facial convexity angle; Tuncer et al. — sagittal pharyngeal dimensions after chin-cup therapy (lateral cephalometric and hand-wrist radiographs); and a follow-up of more than 2 years after the end of chin-cup therapy examining skeletal changes and post-treatment stability. See Table 3 of the original publication.]
Dento-Alveolar Effects
The main dento-alveolar changes produced by the chin-cup were the achievement of a significant overjet [17,20-22,24,29] and retroclination of the lower incisors [17,19,21,22,24]. More precisely, Ritucci and Nanda [8] reported, based on lateral cephalograms, that the transition in overjet occurred with a marked degree of flaring of the maxillary incisors, followed by a variable amount of uprighting. Overjet correction was also reported by Alarcón et al. [25], achieved mainly by mandibular incisor retroclination. Moreover, Barrett et al. [21] noted the uprighting of the lower incisors, indicated by the decreased IMPA, as the most significant dental change between the chin-cup and control groups. Significant proclination of the upper incisors [8,17,19,20,24] was also pointed out. However, the aforementioned results, especially those regarding the proclined upper incisors, should be interpreted carefully to clarify whether they constitute net effects of the chin-cup alone or of the additional appliances that were used as co-interventions.
Changes regarding overbite varied, depending on the appliance that was used. More specifically, Arman et al. [17] noted a significant decrease in overbite in all the treated groups (chin-cups only, chin-cups with removable bite plate, reverse headgear with rapid maxillary expansion devices).
Regarding the molar relationship after active treatment, Ritucci and Nanda [8] reported that chin-cups accelerate the mesial movement of maxillary molars without affecting their eruption rate, while Wendell et al. [27] reported that the initial Class III occlusion was corrected to a Class I relationship in all patients.
Soft-Tissue Effects
The effects of chin-cup therapy on the soft tissues were reported in five studies [17,19,21,24,28]. Significant forward movement of the upper lip was reported in four studies [17,19,24,28], with a concomitant forward movement of the soft-tissue point A [17], while the movement of the lower lip presented differing results. Arman et al. [17], Alarcón et al. [28], and Barrett et al. [21] stated a decreased distance of the lower lip to the E plane (LL-E Ricketts line [17,21,28]) and lower-lip retraction (LL-VR [17]), with a concomitant backward movement of the soft-tissue point B [17] and the soft-tissue chin (Pg(s) [17,28]). However, Abu Alhaija and Richardson [24] showed significant forward movement of the lower lip. A general soft-tissue facial profile improvement was attributed to the chin-cup by Alarcón et al. [28], who demonstrated similar correlations between the changes in the hard and soft tissues, especially that between a significant reduction of the facial convexity angle and a significant pogonion retrusion in the chin-cup group.
Stability
Two studies [23,24] reported information concerning the stability of treatment outcomes, using cephalometric radiographs at a post-treatment observation. Abu Alhaija and Richardson [24], following a one-year post-treatment cephalometric observation, reported a significant increase in mandibular length, in accordance with Sakamoto et al. [23], who found a forward displacement of the mandible at the one-year post-treatment observation and total relapse to the original mandibular growth pattern after two years. Both studies [23,24] showed a significant increase in the anterior face height.
As for the dental effects, the significantly increased overjet achieved by chin-cups was maintained one year after the end of the treatment [24].
Although stability of the soft-tissue profile was evident at the post-treatment observation, the upper lip, the lower lip, and the chin continued to grow forward, following the skeletal pattern [24].
Quality Analysis
The overall judgement for the risk of bias was serious for all the retrieved studies (Table 4). All had a serious risk of bias concerning the selection of participants into the study (selection bias). Based on the ACROBAT-NRSI [13], all the studies were found to have some important problems in the corresponding domains, indicating that the reported results should be interpreted cautiously.
DISCUSSION
In this systematic review, our primary goal was to search the existing literature for randomized and controlled clinical trials regarding the short- and long-term effects of chin-cup therapy on the hard and soft tissues of growing patients. These had to include untreated patients as controls. Although this is not the first time this issue has been addressed in the literature, previous systematic reviews did not investigate the long-term effects of chin-cup therapy [2,10,12], the soft-tissue changes [2,10,12], or adolescence as a growth period of study [10].
Our search strategy yielded only CCTs, thirteen of retrospective [8,17-28] and one of prospective design [29], with no RCT found. One possible reason is that RCTs are not common in orthodontics, because several requirements are difficult to meet: these include patient/observer blinding to treatment and ethical concerns regarding the control group, whose willingness to participate is negatively affected by receiving no treatment.
The final studies were cohort studies with weaknesses due to serious risk of bias, as described in detail in Table 4. All the studies were found to have selection bias, as the selection of both participants and controls was related to the received intervention and likely to the outcomes.
Furthermore, the studies were judged to have a serious risk of bias concerning the outcome measurements when knowledge of the received intervention by the assessors was likely to influence the outcomes in a way that could cause statistically significant differences. Thus, three studies [8,23,27] received that characterization, as the way the outcome measurement was conducted was considered to have the potential to significantly affect the outcomes. The risk of bias was judged low when blinding of outcome assessors was reported [25,28]; these studies were considered comparable to a well-performed randomized trial with regard to this domain, according to the ACROBAT-NRSI [13]. Consequently, studies pertaining to neither category were judged to have a moderate risk of bias [17-22,24,26,29]. Based on the ACROBAT-NRSI [13], in these studies the outcome measure was only minimally influenced by awareness of the received intervention, and any error in measuring the outcome was only minimally related to intervention status. The methods of outcome assessment were comparable across intervention groups both for the studies with a moderate and for those with a low risk of bias.
Another weakness of the observational studies, both prospective and retrospective, is the presence of confounders. In the present systematic review, we considered as confounders all factors that were possibly related to chin-cup therapy and could cause significant changes in the results. Ethnicity had to be taken into account, as Class III malocclusion is seen more frequently in Asian populations [1,7,14-16], and these patients may consequently be treated with chin-cups more often. Moreover, patients of Asian ancestry may present different baseline characteristics, as well as a different growth pattern, than other populations, thus significantly affecting the results. The age of the participants in relation to their skeletal maturity stage was also accounted for, mainly because prepubertal patients may present different results from patients at the peak of their growth or later. Skeletal class of malocclusion was considered a confounder when there was doubt whether the treated and/or the control group had skeletal Class III malocclusion or when some controls had skeletal Class I. Individual variation in soft-tissue thickness and tension was co-estimated, since it could affect the reported soft-tissue changes, as highlighted by Arman et al. [17] and Alarcón et al. [28]. Finally, pre-treatment overjet was also considered a confounder.
In addition, co-interventions were addressed. More specifically, the use of additional appliances, such as a lingual arch to flare the maxillary incisors [23,26] or a quad-helix appliance [21], was considered a critically important co-intervention that could significantly alter the outcomes. To illustrate this, in four studies [17,19,20,24] the reported proclination of the upper incisors, followed by forward movement of the soft-tissue point A [17] and the upper lip [17,19,24], was probably the result of an additional occlusal bite plate [17,19], an upper removable appliance [24], or the combination of maxillary protraction and chin-cup traction in an occipitomental anchorage appliance [20]. The significantly increased overjet [17,19-21,24] that was noted was expected to be a result of the aforementioned additional appliances; however, it was also reported in studies in which patients treated solely with chin-cups were contrasted with untreated controls [8,28,29]. One possible explanation is occlusal interference during the transition of the occlusion from underjet to overjet [8], which flares the upper incisors; it could also result from the significant retroclination of the lower incisors caused by the chin-cup [8,22]. Finally, the magnitude of the applied force was considered a co-intervention as well, since a significant reduction in ramus height was noted when a lighter chin-cup traction force was used [22].
Patients under chin-cup therapy showed an improved facial profile, induced mainly by the backward and downward rotation of the mandible [17-20,26,28]. This was documented by a decrease in the SNB [17-22] and closure of the gonial angle [20,23,25,26], and it was also correlated with an increase in the anterior facial height [17,18,22,24]. In contrast, Wendell et al. [27] recorded significant decreases in the anterior face height during chin-cup therapy in comparison with untreated controls. This was attributed to the 43% decrease in the downward displacement of pogonion during treatment, which was not stable at the post-treatment observation, when it increased by 60% [27]. The backward and downward rotation of the mandible was correlated with an increase in the ANB angle as well [17,19,21-23]. However, it remains ambiguous whether the mandible alone or both the mandible and the maxilla are responsible for this.
Moreover, there is controversy among researchers regarding the retardation of mandibular growth during chin-cup therapy. A significant reduction of the mandibular length (ramal, body, and total length) was reported in five studies [22,23,26-28], indicating an improvement in the skeletal profile of the treated patients. Most interesting were the findings of Wendell et al. [27], who presented a reduction in absolute mandibular length that continued after the end of active treatment. In contrast, the studies of Gökalp and Kurt [29] and Abu Alhaija and Richardson [24] showed significantly increased mandibular body [24,29] and total mandibular length [24]. Gökalp and Kurt [29] attributed these alterations to the forward bending of the condyle, resulting from bone deposition between the condylar head and neck during chin-cup therapy.
The aforementioned controversy led researchers to further investigations aimed at elucidating the role of chin-cup therapy in the retardation of mandibular growth. Similar attempts were also made to assess the potential influence of chin-cup therapy on the development of temporomandibular joint disorders (TMD). It has been speculated that internal derangement of the TMJ is likely to occur owing to the direct application of the backward chin-cup force to the mandibular condyle [7]. This was recently evaluated in a systematic review by Zurfluh et al. [7], who, interestingly, concluded that despite the craniofacial adaptations induced by chin-cups in patients with Class III malocclusion, chin-cup therapy does not constitute a risk factor for the development of TMD; the insufficient or low-quality evidence in the literature, however, does not allow clear statements regarding the influence of chin-cup treatment on the TMJ. Nevertheless, they related TMD to age and a stressful lifestyle, which seem to modify the effects imposed on the TMJ.
As for the soft-tissue effects, although confounding was evident, the documented results indicate a general soft-tissue profile improvement when the chin-cup is used in skeletal Class III patients. However, in the absence of studies evaluating the long-term stability of these changes, no definite conclusions can be reached.
On the basis of these findings, it is evident that the effects of chin-cup therapy, both in the short and especially in the long term, need further investigation and better substantiation with more high-quality evidence before reliable conclusions can be drawn.
CONCLUSION
In summary, the present systematic review shows that chin-cup therapy can be considered for the short-term treatment of growing patients with Class III malocclusion. More specifically, the following are evident: the skeletal profile is improved, as confirmed by significant changes in measured variables indicating a downward and backward rotation of the mandible; favorable dento-alveolar changes, such as a significant increase in overjet, are also observed, although the data need to be interpreted carefully in the presence of co-interventions, such as additional appliances that could have an impact on the outcomes; and the soft tissues show a general improvement in the facial profile, following the accompanying skeletal and dento-alveolar changes, but with uncertain long-term stability.
Nevertheless, existing limitations that do not permit a clear judgement need to be taken into account. The unclear role of chin-cup therapy in the retardation of mandibular growth, the need for further investigation of its long-term effectiveness, and the general lack of high-quality evidence suggest cautious interpretation of the reported findings and highlight the need for future research with more high-quality, evidence-based clinical trials in order to draw reliable conclusions.
Explorative analyses on spatial differences in the desire for social distance toward people with mental illness in a diverging city
Introduction
Stigma is an individual and societal process based on attitudes and power and relates to both spatial disparities and social distinction. In this study, we examined differences in desire for social distance toward people with mental illness within a city using social and spatial information.
Methods
ANOVAs and Scheffé post-hoc tests analyzed varying desires for social distance toward people with mental illness within Leipzig (East Germany). Joint Correspondence Analyses (JCA) explored correspondences between desire for social distance, socio-economic status, age, life orientation, social support, duration of living in Leipzig, and shame toward having a mental illness in five city districts of Leipzig in LIFE study participants (Leipzig Research Center for Civilization Diseases; data collected 2011-2014 and 2018-2021, n = 521).
Results
Stigma varied among Leipzig's districts (F(df = 4) = 4.52, p = 0.001). JCAs showed that a higher desired social distance toward people with mental illness corresponded with spatial differences, high levels of pessimism, high shame of being mentally ill, low social support, low socio-economic status, and older age (75.74 and 81.22% explained variances).
Conclusion
In terms of stigma, where people with mental illness live matters. The results identified target groups that should be addressed by appropriate intervention and prevention strategies for mental health care.
Introduction
Stigma is embedded in its cultural context and influences decisions and behavior; it shapes and is shaped by society through processes of beliefs, power, inclusion, and exclusion (1, 2). Stigma toward people with mental illness refers to "labeling, stereotyping, separation, status loss, and discrimination" (1), aggravating the consequences of mental illness and posing a barrier to mental health care (3, 4). Staiger et al. (5) investigated the double stigma of unemployment and mental illness and found that intersectionally stigmatized people reported more distress than singularly disadvantaged people. Else-Quest et al. (6) emphasized the importance of investigating many facets of social structures to gain information on the complex characteristics of stigma. Taken further, intersectional approaches condense not only determinants of social inequality, such as gender and age, but also spatial aspects, such as neighborhood, negative representations of places, and accessibility of infrastructure. These aspects additionally represent a part of health disparities and stigmatization processes (7-11). In detail, Wacquant investigates territorial stigma over time with quantitative data (for instance, from local community fact books), in-depth interviews, and ethnographic observation (7). He points out that increasing inequalities in social determinants interrelate with spatial segregation processes and negative representations of places. People feel ashamed of living in a so-called "bad neighborhood" (for instance, because people with low socioeconomic status live there). Building on this, Halliday et al. make clear that such neighborhoods additionally suffer from poor accessibility and social isolation, so they are of remarkable interest for public health research (yet remain under-represented in the body of research) (8). Given the above, and the fact that intersections of stigmatized characteristics lead to stronger distress, it is of particular relevance to understand and overcome complex stigmatization processes.
Nevertheless, there is sparse knowledge about the correspondence between spatial and social aspects and stigma toward people with mental illness. Current research seeking to close this gap provides perspectives on spatial (8) or territorial stigmatization (7, 12), as well as on the social dimensions of stigma. We therefore aim to investigate the desire for social distance toward people with mental illness in cities.
Space is shaped by people and influences people's behavior (13). Hence, cities are realms of experience (14). Leipzig is a major city in East Germany and has areas teeming with opportunities, but it also showcases spaces marked by inequality and disadvantage (15). With more than 600,000 inhabitants in 63 city districts (16), Leipzig is one of the German cities with the fastest-growing populations (17). It is known for its art and culture scenes (18) and also for its heterogeneity (15), the latter quality rendering Leipzig suitable for the current research question. To this end, we chose five city districts to portray the diversity of Leipzig's social and cultural atmosphere: the City Center around Leipzig Central Station and the marketplace is characterized by a flow of people in shopping malls, historical buildings, and renowned concert halls; Connewitz in the south is the district with the highest proportion of forest (19) and has a flourishing independent culture scene with a history of left-wing activism (20); Gohlis-North on the northern periphery has classical modern houses and a growing population (19); Grünau-North in the west of Leipzig is characterized by large-panel system buildings; and Heiterblick is an industrial area with green space.
The focus on districts as smaller units is especially important for research on the progression of social connections, distance, and networks (21). To supplement the spatial data, the current study additionally investigates social features, which determine and constitute the spatial differences among individual city districts. The current analyses explore and condense past research on associations between stigma toward people with mental illness and socioeconomic status (SES) (22), social support (23), and life orientation (24), as well as associations between social distance and SES (25, 26) and social support (27). Life orientation is operationalized through pessimism regarding the recovery potential of people with mental illness (28). As mentioned above, social disparities interrelate with space and mental health.
Furthermore, it is well established that cities are characterized by a higher prevalence of mental illness (29, 30) and lower stigma (31) when compared to rural areas.
Little is known about how social and spatial features correspond with the stigmatization of people with mental illness, especially within cities in Germany. We attempt to close this research gap by condensing ongoing research and adding insights, through explorative analyses, into relevant features that interrelate with stigma toward people with mental illness. To this end, this paper investigates characteristics associated with a desire for social distance, as an expression of mental health stigma, in different city districts of Leipzig.
Research questions
The current paper aims to explore possible cohesiveness and disparities in the five city districts of Leipzig mentioned above, focusing on desired social distance toward people with mental illness by combining social and spatial information on city districts.This led to the following research questions: Are there differences in the desire for social distance toward people with mental illness between Leipzig's city districts?
Which aspects (SES, life orientation, social support, duration of living in Leipzig, and shame of having a mental illness) constitute and correspond with the desire for social distance toward people with mental illness in different city districts of Leipzig?
Sample
The LIFE-Adult-Study is a longitudinal cohort study by the Leipzig Research Center for Civilization Diseases (LIFE) evaluating a broad spectrum of common diseases in 10,000 randomly selected people residing in Leipzig (for further information about the LIFE study, see (32-34)). The LIFE study includes data from psychological and medical examinations, laboratory studies, interviews, questionnaires, and cognitive tests, collected during the first wave of the study from 2011 to 2014 (32, 34). Urban differences were mapped to investigate inner-city differences in attitudes and stigma (36). Leipzig has 63 city districts within nine superordinate areas. City districts, as smaller, homogeneous spatial units, were chosen for the analyses and selected by two criteria: first, a city district had to be part of a superordinate area named after a cardinal point or the city center; second, the districts with the highest numbers of cases were chosen. One exception was made for the south of Leipzig, where Connewitz was selected instead of Südvorstadt, as the participant numbers were nearly identical and Connewitz is not directly adjacent to the City Center. Comparing these two districts on the desire for social distance toward people with mental illness, no significant differences were found (t(df = 212) = -0.292, p = 0.770), justifying this choice. Finally, the analyses include five of the 63 city districts (n = 521): Leipzig's City Center, Connewitz in the south, Gohlis-North in the north, Grünau-North in the west, and Heiterblick in the east of Leipzig.
Data and variables
Research data were drawn from two waves of the LIFE-Adult-Study (32, 34) and open-source shape files for additionally visualized maps (37).
The following measures were elicited in the first wave of the LIFE-study (2011-2014) (32): SES was operationalized according to Lampert et al. (38) through summed educational and professional status and income as social deprivation. The scale's calculated quintiles were summarized into three categories: low, middle, and high SES (38). As life orientation is related to stigma (39, 40), dispositional and generalized pessimism and optimism were rated on a five-point Likert scale (1 "strongly disagree" to 5 "strongly agree") as part of the Life Orientation Test (for instance, "In uncertain times, I usually expect the best") (41; adapted by 42, 43). Higher sum scores on the respective instruments indicated higher levels of optimism or pessimism (44). Optimism and pessimism were seen as stable traits (41). Both scales were dichotomized at the sample's median to depict higher and lower-than-average optimism or pessimism. Social support was operationalized by Likert-scaled answers (1 "none of the time" to 5 "all of the time") on five items of the ENRICHD Social Support Instrument (ESSI) (45; adapted for a German sample by 46, 47). Analogous to Cordes et al. (47), scores were analyzed dichotomously: when two items scored less than four, participants were operationalized as lacking social support, while all other results indicated high social support. Personal master data and spatial information about the city districts the participants resided in completed the dataset.
The second LIFE survey (34) elicited the stigma variables (shame and desire for social distance) toward people with mental illness and the duration of living in Leipzig. The desire for social distance was measured using three questions that referred to acceptance regarding renting a flat to, working with, and living in a neighborhood with a person with mental illness, each on a five-point Likert scale (0 "definitely willing" to 4 "definitely unwilling", with high values indicating a higher desired social distance) (48-50). To describe the desire for social distance, the sum scale was calculated and dichotomized using the sample's median due to a lack of standardized reference values. Values ranged from 0 to 12, with higher scores again indicating higher social distance. An additional question investigated anticipated shame when experiencing mental illness using a Likert scale (0 "not at all" to 4 "strongly") (51). Shame, as the emotional equivalent of self-stigma, is known to be associated with the desire for social distance toward people with mental illness (52, 53). Data on the duration of each participant's residency in Leipzig were part of the analysis, taking the known association between residential stability and the prevalence of depression into consideration (54).
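As a concrete illustration of this scoring, a minimal sketch follows (in Python rather than the Stata actually used by the study; the item column names sd_flat, sd_work, sd_neighbour are hypothetical placeholders):

import pandas as pd

def score_social_distance(df: pd.DataFrame) -> pd.DataFrame:
    # Three items, each coded 0 ("definitely willing") to 4 ("definitely unwilling")
    items = ["sd_flat", "sd_work", "sd_neighbour"]
    out = df.copy()
    out["sd_sum"] = out[items].sum(axis=1)  # sum scale, range 0-12
    # Dichotomize at the sample's median (no standardized reference values);
    # whether the median itself counts as "high" is an assumption here.
    out["sd_high"] = (out["sd_sum"] > out["sd_sum"].median()).astype(int)
    return out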
We utilized Joint Correspondence Analyses (JCA) to combine social and spatial or environmental information for a multifaceted approach to stigma (55).
Analysis
After testing for normal distribution using the Kolmogorov-Smirnov test and for homoscedasticity using the Levene test, an analysis of variance compared city-district-specific mean values of desire for social distance toward people with mental illness to examine area-specific differences (56). For non-normally distributed variables, the Kruskal-Wallis test compared city-district-specific mean values (56). The significance level was set at α = 0.05 (95% confidence) (56). Scheffé's test analyzed and compared post-hoc contrasts (57, 58).
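The test sequence just described can be sketched as follows (Python/SciPy rather than the Stata used by the study; `groups` is a hypothetical list of per-district arrays of social-distance scores, and testing normality against a normal fitted to the pooled sample is one of several reasonable readings of the procedure):

import numpy as np
from scipy import stats

def compare_districts(groups, alpha=0.05):
    pooled = np.concatenate(groups)
    # Kolmogorov-Smirnov test against a normal with the sample's moments
    _, p_norm = stats.kstest(pooled, "norm", args=(pooled.mean(), pooled.std(ddof=1)))
    # Levene test for homoscedasticity across districts
    _, p_var = stats.levene(*groups)
    if p_norm > alpha and p_var > alpha:
        test, (stat, p) = "ANOVA", stats.f_oneway(*groups)
    else:
        test, (stat, p) = "Kruskal-Wallis", stats.kruskal(*groups)
    # Scheffé post-hoc contrasts would follow a significant omnibus result
    # (e.g., via a dedicated post-hoc package), omitted here for brevity.
    return test, stat, p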
We created a map of reported desire for social distance toward people with mental illness in different city districts of Leipzig by combining information from the LIFE-study sample with spatial data in the City of Leipzig (37).
To explore cohering and diverging variables for these variations in desire for social distance toward people with mental illness in city districts, two JCAs were calculated (55). Ordinal and nominal data (city districts, SES, and social support) were chosen, and metric items were condensed to quartiles (referring to the sample's distribution: age and duration of living in Leipzig) or dichotomized (referring to the sample's median: life orientation; desire for social distance toward colleagues, neighbors, and subtenants with mental illness; and shame) (59). JCA followed a weighted least-squares algorithm with steps comparable to factor analyses for non-metric variable categories (60, 61). Data were principal-normalized, as recommended for correspondence analysis with more than two variables, to compare categories (62). The variable category frequencies were listed in a multiway contingency table (similar to chi-squared statistics) (63). The centroid marked the average row and column profiles (64). JCA reduces errors from diagonal values, which would depict correspondences of a category with itself (55). Results were variances, inertias (λ, averaged frequencies) (55, 65), and masses (or weights, w, explaining the categories' contributions to related variables for the whole matrix) (55, 66). By decomposing the JCA's inertia, distinct dimensions were identified that represented deviations from numerical independence (64). These factors or axes were extracted; they structure the matrix of category frequencies. Explained variance for two dimensions reached more than 70%, so using more principal components was not warranted (67). For each dimension, the categories' eigenvalues as contributions (ctr k %) to the dimension were calculated (64).
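The computational core shared by such analyses can be sketched from first principles; JCA proper additionally iterates on the Burt matrix of all variables while down-weighting its diagonal blocks (the error-reduction step mentioned above), a refinement omitted from this plain correspondence-analysis sketch:

import numpy as np

def correspondence_analysis(table: np.ndarray):
    """table: two-way contingency table of category frequencies."""
    P = table / table.sum()                 # correspondence matrix
    r, c = P.sum(axis=1), P.sum(axis=0)     # row and column masses (weights)
    # Standardized residuals from the independence model r c^T
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
    U, sv, Vt = np.linalg.svd(S, full_matrices=False)
    inertias = sv ** 2                      # principal inertia per dimension
    row_coords = (U * sv) / np.sqrt(r)[:, None]   # principal coordinates
    col_coords = (Vt.T * sv) / np.sqrt(c)[:, None]
    return inertias, row_coords, col_coords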
JCAs served to identify characteristics corresponding with varying desired social distance toward people with mental illness and referred to five districts: City Center, Heiterblick in the east, Grünau-North in the west, Connewitz in the south, and Gohlis-North in the north of Leipzig. The first JCA included desire for social distance as a sum score, and the second JCA investigated the three items of the desire for social distance scale separately.
JCA results were graphically represented by a matrix that mapped the resulting dimension 1 (horizontal axis) and dimension 2 (vertical axis) (64), with data points as variable categories. The latter can be interpreted as correspondences (or distances) from the centroid (average) between each category, as well as between categories and axes (62, 63).
Cases with missing values were excluded from analyses, as inherent in the JCA calculation procedure. Overall, there were n = 261 (8.72%) missing values in merged datasets on city district retrieval and n = 107 cases (3.58%) with missing values on the desire for social distance. Following van Buuren (68), we assumed the data were missing completely at random and refrained from imputation. Additionally, Diaz-Bone recommends excluding missing values in JCAs to keep analyses interpretable (59).
Software
All calculations were performed with Stata SE 16.0 (69) with additional packages 'SPMAP' to visualize spatial data (70) and 'grc1leg' to combine similar graphs with one legend (71).
Sample
Of all respondents in the first wave of the LIFE-study (n = 10,589, 51.69% women, age: M = 57.61 y, SD = 12.51 y, Min: 18.24 y, and Max: 87.83 y), information on the desire for social distance was available from those additionally included in the second wave (n = 2,993, 51.35% women; age at the time of the second survey: M = 62.72 y, SD = 12.97 y, Min: 26.00 y, and Max: 86.00 y). See Supplementary Table S2 for detailed sample results. No significant differences could be reported in the desire for social distance toward colleagues with mental illness between city districts. All results are listed in Table 1. Scheffé post-hoc tests can be found in Supplementary Tables S1, S2.
Joint correspondence analyses for the desire for social distance toward people with mental illness
As Figure 1 shows, high desire for social distance toward people with mental illness corresponded with living in Heiterblick or Grünau-North, low optimism, high pessimism, and high shame of having a mental illness. Compared to other city districts, study participants living in Grünau-North reported low social support, low SES, and high social distance toward people with mental illness. Low social distance toward people with mental illness corresponded with high social support, high optimism, low pessimism, low shame, high SES, and living in Connewitz or the City Center.
Figure 2 shows that a high desire for social distance toward subtenants, but also toward neighbors and colleagues with mental illness, corresponded with a high shame of having a mental illness. Living in Heiterblick or Grünau-North, high pessimism, low optimism, low social support, and low SES, as well as older age, corresponded with high social distance toward subtenants with mental illness. Conversely, a low desire for social distance toward colleagues and neighbors with mental illness related to low shame, whereas a low desire for social distance toward subtenants with mental illness corresponded with high optimism, low pessimism, living in Connewitz or the City Center, high SES, and high social support.
Discussion
Results indicate that it matters where people with mental illness live and in what socioeconomic circumstances they are embedded. We found variations in the desire for social distance toward people with mental illness corresponding to both social and spatial characteristics. The desire for social distance toward people with mental illness was lower in Leipzig's City Center compared to other districts. Results support that there still is stigma in cities, even if urban spaces have been connoted as representing postmodern heterogeneity, diversity, and fluidity (72). Current analyses support that cities and city districts are more than spatial units: districts combine social features, which are particularly relevant when investigating social distance toward people with mental illness. Encouraged by Link and Phelan's (1) proposal on multifaceted and multilevel approaches and Staiger et al.'s (5) and Else-Quest et al.'s (6) call for intersectionality in stigma research, micro (individual) and macro (urbanity-related) level factors might help understand, reflect on, and cope with stigma and the desire for social distance toward people with mental illness. Investigating districts as socially constructed concepts adds insight into territorial (7, 12) and spatial stigmatization processes (8).
Because Leipzig is a growing city regarding both population and cultural diversity (15), there are still variations and progressions in and between Leipzig's city districts (see Supplementary Figures S8-S14 in the Supplementary material for the depiction of additional characteristics of Leipzig). The five selected city districts differ not only in desire for social distance toward people with mental illness but also in SES, age, and social support, implying the need for detailed urban and suburban research and comparisons (73). Residents in Heiterblick and Grünau-North reported low SES corresponding with high pessimism, low social support, and a high desire for social distance toward people with mental illness. These correspondences of disadvantages are supported by double stigma research (5) and by Else-Quest et al.'s (6) concept of intersectional, socially constructed categories interfering with mental health stigma. Furthermore, results condensed past findings on higher social distance toward people with mental illness being associated with higher age (74), lower SES (22), pessimism (24), lower social support (23), and higher shame of having a mental illness (52). Distinctions between city districts may partly represent a self-selection bias, as people choose where to live based not only on pragmatic aspects (75). Moving into different city districts as habitats might influence one's identification with prevailing characteristics and habitus, such as values and cultural diversity, as well as with the socioeconomic characteristics of inhabitants (76, 77). This association can be exemplarily demonstrated through Leipzig's city district Connewitz with its long-term, leftist inhabitants (20). In the past, Connewitz was occupied by squatters who established a habitat for left-wing people (please see the election result in Supplementary Figure S13) and space for leftist discourses (78, 79). Current analyses showed high social support as well as low levels of desire for social distance toward neighbors with mental illness, accentuating a district-specific cohesion in Connewitz regarding, for instance, shared values or lifestyles. These assumptions are consistent with past research on social segregation processes [in Leipzig: (80); but also as a postmodern phenomenon: (21)], neighborhood cohesion, and health status (81). These inner-city processes endorse interrelating social and spatial aspects as experience realms in Leipzig and other cities. Results may help establish destigmatization efforts and support people with mental illness when seeking to gain access to health care.
To conceptualize stigma, we compared a sum scale with single items of desire for social distance toward neighbors, colleagues, and subtenants with mental illness. The latter led to more explained variance in the JCA. These results were consistent with previous research, which states that items measuring the desire for social distance refer to different areas of life and that ranges of desire for social distance toward colleagues, neighbors, and subtenants cannot easily be summarized (27).
Data collection
The LIFE-Adult-sample was collected in two different waves. While life orientation is recognized as a stable personality trait (41), possible changes in other data, such as participants moving between city districts, could not be depicted. Due to different questionnaires and information between the two waves, longitudinal analyses and reflections were not possible. Additionally, there were dropouts over time (34).
Despite anonymized data collection, social desirability might influence participants' response behavior to possibly objectionable questions regarding the desire for social distance toward people with mental illness. Furthermore, the desire-for-social-distance items referred to people with mental illness in general, while research has shown varying desires for social distance between different disorders (26, 82), for instance, for depression and schizophrenia (27).
Sample representability is limited, as participants have higher social and health status compared to recruited non-participants (33). As the sample's health status is above average, possible results concerning mental illnesses or other health-related risk factors may be underestimated (33). Leipzig has a unique history as a city of fairs, with significant influence on infrastructure and diverse perspectives from other countries (83). Additional research about past and current sociopolitical progress may help in understanding ongoing developments and problems, for instance, housing shortages caused by bought-up flats or defunct industries (84). Migration processes, spatial distribution, the density of schools in the city, and culturally used areas additionally reshape a district's social structure. Leipzig currently registers remarkable demographic growth compared to other cities, especially in the East but also throughout Germany (85, 86).
Methodological aspects
As the variables were not all normally distributed, we reported results of a non-parametric Kruskal-Wallis test. JCA allows for explorations of cross-sectional data structure and frequencies, although the direction of associations or causality cannot be determined (59). Additionally, data were dichotomized and categorized referring to the sample's median, because there was no reference data for normalization. As with all statistical calculations, correspondence analyses reduce complexity (59). The number of cases in different city districts varied; therefore, generalizations and comparative conclusions were limited (33).
Future directions
Future research should be aware of milieus or lifestyles in cities. Taking target groups into consideration, especially for anti-stigma interventions, may help to overcome social distance and support mental health literacy in marginalized groups, for instance, groups with low SES, low social support, high pessimism, and high shame about having a mental illness.
Leipzig, with its remarkable history and current diversity, enables many possibilities for further investigations, such as comparing Leipzig's population with other urban areas. Future studies should include data over a longer period of time to gain information on fluid and stable markers of social distance and social structure in cities, to detect causes and predict consequences for progressions in stigma toward people with mental illness (87, 88).
As the term 'social distance' refers to interpersonal and spatial information, future research should follow interdisciplinary approaches by combining historical knowledge with political, sociological, psychological, epidemiological, and geographic knowledge (89). Factors that might relate to stigma within cities are higher population densities, access to health care, or intersectional aspects (6, 90).
These approaches may help to identify target groups as well as spaces and areas that should be addressed by appropriate intervention and prevention strategies for mental health care (91, 92), like district-specific health care centers addressing spatial and social help-seeking barriers (93).
FIGURE 1 Joint Correspondence Analysis depicting the sum scale on the desire for social distance toward people with mental illness, Leipzig's exemplary districts (City Center, Heiterblick, Connewitz, Gohlis-North, and Grünau-North), SES, age, life-orientation scales including dichotomized optimism and pessimism scales, the dichotomized ENRICHD Social Support Instrument, duration of living in Leipzig, and shame of having a mental illness, based on LIFE data (n = 521).
FIGURE 2 Joint Correspondence Analysis including single items on desire for social distance toward colleagues, neighbors, and subtenants with mental illness, Leipzig's exemplary districts (City Center, Heiterblick, Connewitz, Gohlis-North, and Grünau-North), SES, age, life-orientation scales including dichotomized optimism and pessimism scales, the dichotomized ENRICHD Social Support Instrument, duration of living in Leipzig, and shame of having a mental illness, based on LIFE data (n = 521).
TABLE 1 Sociodemographic characteristics for each of the five exemplary city districts of Leipzig and the whole sample, frequencies by column, and distributions (n = 2,993).
TABLE 1 (Continued) | 2023-11-25T05:05:46.317Z | 2023-11-09T00:00:00.000 | {
"year": 2023,
"sha1": "f72528ecd6319d812f6abcbf4f400fb89e8117af",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fpubh.2023.1260118/pdf?isPublishedV2=False",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f72528ecd6319d812f6abcbf4f400fb89e8117af",
"s2fieldsofstudy": [
"Sociology",
"Psychology"
],
"extfieldsofstudy": []
} |
118911067 | pes2o/s2orc | v3-fos-license | Quantum Electrodynamics is free from the Einstein-Podolsky-Rosen paradox
I show that Quantum Electrodynamics (QED) predicts a sort of uncertainty principle on the number of the "soft photons" that can be produced in coincidence with the particles that are observed in any EPR experiment. This result is argued to be sufficient to remove the original EPR paradox. A signature of this soft-photons solution of the EPR paradox would be the observation of apparent symmetry violation in single events. On the other hand, in the case of the EPR experiments that have actually been realized, the QED correlations are argued to be very close to those calculated by the previous, incomplete treatment, which showed a good agreement with the data. Finally, the usual interpretation of the correlations themselves as a real sign of nonlocality is also criticized.
I. INTRODUCTION
In their famous 1935 paper [1], Einstein, Podolsky and Rosen (EPR) pointed out that the Quantum Mechanics probabilistic description of Nature apparently leads to some mysterious action at a distance. They eventually deduced that the Quantum Theory itself was necessarily incomplete. This suggested the need for some Hidden Variables, allowing for a causal, local, deterministic description, but such a hypothesis can hardly agree with the results of a series of experiments carried out in the last several years [2,3].
It is now commonly believed that local realism is violated by quantum effects even in the relativistic case [4]. This is so paradoxical that some authors still suggest the need for a "more consistent" theory beyond the present day Relativistic Quantum Mechanics, and argue that a conclusive experimental proof against Hidden Variables is lacking [5].
Actually, it turns out that no new physics is needed. In fact, there is already a very elegant theory, which describes with extreme accuracy all the known phenomena involving the electromagnetic interaction [10]. It is Quantum Electrodynamics (QED) [6,7]. Since it is by construction relativistic, and based on the local U(1) gauge principle, it is not surprising that it is protected against the EPR paradox, as I show in the present paper.
II. THE EPR PARADOX
Let me consider an ideal EPR experiment [1,8]. Two particles, e.g. two photons as in the actual experiments [3], are emitted by a source and travel in opposite directions [11].
Far apart, some conserved observable, such as energy, momentum, or a component of the angular momentum (spin, helicity or polarization), is measured on both of them.
According to the usual interpretation, the measurement carried out on one of the two subsystems (call it A) reduces it to an eigenstate of the measured observable, whose conservation immediately forces also the second particle (call it B) to "collapse" into the corresponding "entangled" eigenstate. Since before the experiments the two particles are not prepared in an eigenstate of the measured quantity (see also point 2, a few lines below), it seems that the observation on A implies an instantaneous change of the state of the distant particle B. The observed quantity would then get "an element of physical reality", according to the original definition: "If, without in any way disturbing a system we can predict with certainty (i.e. with probability equal to unity) the value of a physical quantity, then there is an element of physical reality corresponding to this physical quantity" [1] (the italics and the parenthesis also belong to the original paper).
According to Einstein and collaborators, the problem of Quantum Mechanics is that: 1) the considered observable gets a definite value on B, after a measurement on the distant particle A, and this occurs with certainty, so that the observable gets a "physical reality" on B; 2) such a physical reality depends on the actual measurement that is done on A (for instance, if instead of measuring the component $J_z$ of the angular momentum we decided to measure an observable incompatible with it, such as $J_x$, then the state of the distant particle B would correspond to a different physical reality) [1]. Such a situation is also called a violation of "local realism". This is the original EPR paradox of Quantum Mechanics. Only much later, mainly due to the work of Bell [2], was it reformulated in terms of the correlations (see also sections IV and V), in order to work out predictions that could be tested experimentally. Up to now, the two formulations were thought to be equivalent, but in section IV we shall see that this is not the case. For the moment, let us notice that, according to Einstein and collaborators, the existence of an action at a distance working with perfect efficiency would imply a much harder incompatibility with Special Relativity, as compared with a possible statistical nonlocality as shown in the correlations. I will come back later to the problem of the correlations, and concentrate for the moment on the original EPR paradox.
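To make points 1 and 2 concrete, consider the canonical spin-singlet example (not spelled out in the 1935 paper, which used continuous variables, but standard since Bohm's reformulation):

\[ |\psi\rangle = \frac{1}{\sqrt{2}}\Big( |{\uparrow}\rangle_A |{\downarrow}\rangle_B - |{\downarrow}\rangle_A |{\uparrow}\rangle_B \Big). \]

A measurement of $J_z$ on A yielding $+\hbar/2$ leaves B with $J_z = -\hbar/2$ with certainty; yet the singlet, being rotationally invariant, takes the same form in the $J_x$ eigenbasis, so measuring $J_x$ on A instead fixes $J_x$ on B with certainty. Two mutually incompatible "elements of reality" are thus selected by the distant choice of measurement.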
It is worth mentioning that Einstein and collaborators concluded that the solution to the paradox should necessarily imply that "the wave function does not provide a complete description of the physical reality" [1]. By the wave function, they meant the one associated with the two-particle system, describing A and B and nothing else. We shall see that they were right, even though they would perhaps not have expected the solution to be found in the modern version of the Quantum Theory itself. In fact, the complete description in QED is not limited to the "entangled" state of A and B, since it allows also for the presence of an arbitrary number of "soft photons".
III. THE UNCERTAINTY IN THE NUMBER OF SOFT PHOTONS
The EPR paradox, as described above, originates from the assumption of a two-particle state, which is incorrect in Relativistic Quantum Mechanics. As we shall see, states involving two or more particles are not "stable" in QED. There are no entangled "stationary" states! In other (more correct) words, additional real particles can be created in coincidence with A and B. Which additional species can appear depends on the available energy. Since massless particles can have arbitrarily low energy, the possible presence of real "soft photons" (i.e. very low energy photons) should always be taken into account in the theoretical treatment.
In any case, since soft photons usually escape detection (or they are not looked for), no event can be guaranteed with absolute certainty to involve only the two observed "hard" particles.
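This can be made quantitative with the standard soft-photon (Bloch-Nordsieck) result, quoted here schematically and not taken from the treatment below: emission from accelerated charges is approximately Poisson-distributed, with a mean number of photons in the energy window between $E_{soft}$ and the hard scale $E$ of order

\[ \bar{n}_\gamma \sim \frac{\alpha}{\pi}\, \ln\frac{E}{E_{soft}} \times (\text{kinematic factor}), \qquad P(n_\gamma = 0) = e^{-\bar{n}_\gamma}. \]

Since $\bar{n}_\gamma > 0$ whenever charged particles are produced or deflected, $P(0) < 1$: no single event can be certified to be photon-free.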
Moreover, soft photons can also be created due to the interaction of both A and B with the measuring apparatus. Even though the latter effect will not be used in the rest of the present work, in this section I will mention it since it can be interesting for the Theory of Measurement (for instance, due to possible creation of soft photons during the measurement, the ideal measurement that is used to define the eigenstates of the observables should be considered as a mere approximation).
According to the previous discussion, there are two sources of indetermination on the number of real particles in an EPR experiment: at the production process, or at the measuring apparatus. I will prove this statement using QED perturbation theory (i.e. Feynman diagrams). For simplicity, I will only discuss two kinds of EPR experiments: i) those involving two charged spin-1/2 particles; and ii) those involving two photons. In both cases, I will give explicit examples predicting the creation of an arbitrary number of soft photons. i) In Fig. 1, I have drawn a tree-level diagram where the blob represents the particular elementary process that produces particles A and B. Even without specifying that part of the diagram (involving some "initial" particles), we see that an arbitrary number of real soft photons (three in the particular case of the figure) can be attached to each of the external fermion legs. This is a well-known effect in QED [7]. Soft photons can likewise be radiated during the measurement by any charged particle belonging to the experimental device.
ii) The two photon case, which corresponds e.g. to Aspect et al.'s experiment [3], seems to be a bit more complicated from a theoretical point of view. Since no three-photon vertex exists, we have to look for one-loop effects. In Fig. 3, I show a "box" diagram for the production of two real soft photons [12]. The virtual particle in the loop can be any charged fermion (electron, muon, tau, quarks). On the other hand, the interaction with the measuring apparatus can proceed through diagrams such as the tree-level one of Fig. 4. Here, soft photons can be attached in an arbitrary number to the line of the electron or nucleon belonging to the experimental detector [13].
It is clear that these considerations can be generalized: an arbitrary and unknown number of soft photons can always be created in any experiment, in any step that involves an interaction.
In the following discussion, we will be interested in particular in the soft photons that are created in coincidence with the two (or more) particles observed in an EPR experiment, as described by diagrams such as those in Figs. 1 and 3.

IV. THE SOLUTION OF THE EPR PARADOX

We are now able to understand how QED is protected from the EPR paradox. Since an unknown number of soft photons can be created in coincidence with A and B, the conservation laws no longer constrain the two-particle subsystem alone; a conserved observable such as the angular momentum will not be given a "physical reality" on B after the measurement on A. According to the discussion of section II, this is sufficient to save the theory from the original EPR paradox.
It is important to recall that there is no possibility to control completely the uncertainty on the number of soft photons in a single event. In other words, QED is even less deterministic than Nonrelativistic Quantum Mechanics, due to this underlying sort of Uncertainty Principle on the Number of Particles. In fact, the only predictions that it allows are on probabilities and average values. This greater indetermination protects the theory from the EPR paradox. In other words, it seems that, to remove the paradox, one has to choose between the most extreme possibilities: determinism (hidden variables, the favorite option for Einstein and collaborators), or complete lack of determinism for the single processes (QED, the dice of God) [14].
Let me come back to our EPR experiments, and notice that the conservation laws, including energy and momentum, are not expected to hold strictly for the two-particle (sub)system, A and B. A general single event will show apparent symmetry violations, except when by chance no soft photon is created. In particular, any violation of a discrete variable, such as a component of the angular momentum, would be a clear signature of this soft-photon mechanism.

Notice that in the actual EPR experiments [3] the correlations between the polarizations of the "hard" photons (A and B) are evaluated. Such correlations are statistical averages over the results for the different single events. The data agree with the prediction of a Quantum Mechanics that did not take into account the soft photons, and are incompatible with the predictions of Hidden Variables theories, which were also considered to be the only possible locally realistic theories. This fact was interpreted as a proof that Nature is EPR paradoxical. However, such a conclusion is not correct. As we will see in the next section, in the case of the EPR experiments that have been performed up to now the QED prediction for the correlations is very close to that obtained in Quantum Mechanics by ignoring the soft photons, so that it can still agree with the data within the experimental errors. However, even a very small probability for soft photon creation is sufficient to forbid any certain prediction for the measurement on B as a consequence of the measurement on A, and this is enough to remove the original EPR paradox, as we have seen.

Notice also that this solution to the EPR paradox is based on two points: the existence of massless particles, the photons; and of the fermion-photon vertex, which allows photons to be created in any external line of the relevant Feynman diagrams. But it is well known that both the electromagnetic vertex and the masslessness of the photon are direct consequences of the local gauge symmetry. Not only does the local symmetry define the theory, but it also protects against the EPR paradox and the violation of local realism.
V. THE QED PREDICTION FOR THE CORRELATIONS
Even though we have found that the study of the correlations is not relevant for the original EPR paradox, we have to check that the QED predictions still agree with the experiments. The calculation can be done by using the methods discussed in Ref. [7], and the result depends on the actual selectivity in energy and momentum of the experimental setting. Here, I will just provide a rough argument to show that the correlations are usually not expected to be seriously modified by the creation of soft photons.
For simplicity, I will consider an ideal EPR experiment involving two charged spin-1/2 particles created after the decay of a zero-spin system (let us forget here the difficulty in the measurement of the spin of the charged particles). In this case, the relevant correlation functions are the average values of the products of the components $S_u(A)$ and $S_v(B)$ of the spins of the two particles along arbitrary axes u and v [8]. For instance, let u = v, chosen to be the z axis. If we do not take into account the soft photons, according to Quantum Mechanics the two particles must have opposite spins in order to conserve the total angular momentum. Then we get

\[ \langle S_z(A)\, S_z(B) \rangle = -\frac{\hbar^2}{4}. \tag{1} \]

Notice that this is the maximal correlation (in absolute value) that can be achieved for two observables whose eigenvalues are $\pm\hbar/2$. (If the spins were completely independent, the correlation would vanish.) In general, allowing for soft photon creation through diagrams similar to that of Fig. 1, the correlation will be smaller than maximal. Now, as shown in Chapter 13 of Ref. [7], in the limit where the energy of the soft photons is neglected, the helicities of the two fermions will remain opposite. Therefore, the correction to Eq. (1) due to diagrams such as that of Fig. 1 is suppressed by powers of $E_{soft}/E$, where E is the "hard" fermions' energy and $E_{soft}$ is the typical soft photon energy (essentially, it is the "infrared cutoff" introduced in Ref. [7]). This parameter depends on the experimental settings, but it can be made small by increasing the energy selectivity in the observation of particles A and B. Moreover, in a "selective enough" experimental setting, the two particles will be detected in opposite directions with small angular indetermination; then the total transversal momenta of the soft photons will have a limited phase space available, and this will result in a further suppression of the corresponding diagrams. Diagrams involving an increasing number of soft photons will also be suppressed by the corresponding powers of the fine structure constant α ≃ 1/137.
For all these reasons, in the usual EPR experiments we expect that the correlation will be close to that computed using the "entanglement" theory, and the agreement with the data will not be spoiled within the experimental errors.
However, as we have discussed, even a very small probability for soft photon creation is sufficient to save the theory from the original EPR paradox, since it prevents the possibility of a certain prediction on the single event. For instance, in an event involving a single soft photon travelling close to the direction of the two particles A and B, the two fermion helicities will most probably be found parallel rather than antiparallel, in order to cancel the $\pm\hbar$ helicity of the photon.
A similar result can be found in the case of the actual EPR photon experiments. In fact, the probability due to the relevant diagram (Fig. 3) is suppressed by four powers of the fine structure constant, by the reduced phase space, and by the electron propagators in the loop (the mass of the electron is large compared with the typical energy of the "hard" photons involved in the experiment, which are typically in the eV range). Therefore, the prediction obtained in QED, taking into account the soft photons, is expected to be very close to that of the previous approach, and will still agree with the present experimental results.

VI. ON THE NONLOCALITY INTERPRETATION OF THE CORRELATIONS

In principle, it could be hoped that such an apparent nonlocality of the correlations could be used to save some supposed applications of the EPR paradox, such as teleportation [9], which might thus be interpreted as intrinsically statistical processes. For instance, if teleportation is realized using ("hard") photons in the eV range, the probability for soft photon creation is very small, as we have discussed above, and the existing theory could be thought to be a good approximation. However, I think that a deep study of the measurement problem is needed to prove whether such an interpretation can be correct. This point is of extreme importance and urgency, since teleportation is presently used as a base for Quantum Information Theory and Quantum Computing.
Here, I will provide a possible qualitative argument against the nonlocality interpretation of the correlations, without pretending it to be definitive, in order to stimulate a debate on such an urgent problem. In fact, in QED the correlations are obtained from a covariant Lagrangian density that only involves local interactions; they are causal and the prediction for them is deterministic [7]. The fact that the Feynman diagrams of the kind of Figs. 1 and 3 imply a conservation of energy, momentum, angular momentum, etc., amongst the external legs, is a causal, deterministic consequence of the initial "in" state and the local interaction that occurs at the production point. Therefore, I think that it is not correct to interpret the global conservation law as a result of an "instantaneous agreement" occurring at the moment of the measurement, since the correlations showing such a global conservation are calculated as a deterministic result of the evolution from the common origin of the particles A, B and the soft photons. In other words, the global conservation is merely a causal consequence of the local conservation law. No mysterious action at a distance is then working. The real "quantum mystery" is the wave-particle duality, with the localization of the particles in the single events. But the amplitude of probability is a wave, whose evolution respects causality and locality. I think that for this reason Einstein and collaborators in Ref. [1] were not concerned with the (nonmaximal) probabilities or correlations, and they were so careful in defining the paradox as a problem occurring if A and B were perfectly "entangled" ("with certainty, i.e. with probability equal to one", as they say explicitly). Therefore, since we have found that this original paradox (i.e. the "perfect entanglement") is removed by the soft photons mechanism (even with a very small probability for them to be created), I think that also the nonlocality interpretation of the correlations is undermined. At least, such a "sort of nonlocality", that (roughly speaking) originates from the causal and local propagation of a "wave" from the production point, does not correspond to any mysterious action at a distance, and cannot be used e.g. as a base for teleportation.
VII. CONCLUSIONS
To conclude, I have shown that QED is protected from the original EPR paradox by the local gauge symmetry. This corresponds to the fact that it allows for the creation of an arbitrary number of soft photons in coincidence with the observed particles in an EPR experiment. This mechanism would be confirmed by the experimental observation of an apparent symmetry violation in a single event. On the other hand, the correlations are expected to be smaller than those calculated by ignoring the soft photons, but in the case of the actual EPR experiments that have been realized up to now the correction is expected to be very small, so that the agreement of the Quantum Theory with the present data is not spoiled. Such correlations are usually thought to be by themselves a sign of a "quantum nonlocality". Although here I have already presented an argument against such an interpretation, this problem deserves further research, which is particularly urgent in order to decide about the actual viability of several supposed applications of the Quantum Theory that were based on the EPR paradox. | 2019-04-14T03:18:17.183Z | 2001-10-09T00:00:00.000 | {
"year": 2001,
"sha1": "d12337896874f268eb663cb0744be249656e666d",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "d12337896874f268eb663cb0744be249656e666d",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
230663627 | pes2o/s2orc | v3-fos-license | Effect of the Ripening Period and Intravarietal Comparison on Chemical, Textural and Sensorial Characteristics of Palmero (PDO) Goat Cheese
Simple Summary Palmero PDO (Protected Denomination of Origin) cheese is a typical product of La Palma (Canary Isles, Spain), manufactured from raw goat milk of the Palmera breed. All goat herds must be fed with local vegetal resources: pastures and/or grazing. It is an uncooked, pressed cheese, commercialised fresh (from 8 to 20 days), semi-hard (from 21 to 60 days) and hard (from 60 days). The aim of this study was to evaluate the changes in the physicochemical and sensorial parameters of Palmero PDO cheeses during 90 days of aging, also making an intravarietal comparison between dairy plants. This characterization could lead to a better and more complete cheese definition. Some variations have been observed between artisanal cheese factories, because each cheesemaker has cheese-making particularities that are passed down from parents to children. These differences can be used for purchasing and marketing as added values linked to "terroir" and handmade cheese practices. Abstract Palmero cheese is an artisanal dairy product from the Canary Islands (Spain), awarded the Protected Denomination of Origin (PDO) by the European Union. It is made with raw milk from the Palmera dairy goat on La Palma island. The aim of this research was the physicochemical and sensorial characterization of Palmero cheese along 90 days of ripening. Palmero cheeses from four cheese factories were analysed for basic physicochemical parameters, instrumental texture and colour, and sensorial profile. Most of the basic composition and the texture and colour attributes of Palmero cheese changed significantly along maturation. During the 90 days of ripening, an increase in hardness, fracturability and gumminess (p < 0.001) occurred, while elasticity decreased simultaneously (p < 0.001). The internal lightness value decreased significantly (p < 0.001), while yellowness increased (p < 0.001) during cheese ripening. Ripening time affected six of nine sensorial texture characteristics and all the odour and flavour parameters analysed (p < 0.001). Regarding the intravarietal comparison, in general, cheeses from the four dairy plants showed similar composition, although significant differences were detected in textural, colour and sensorial characteristics.
Introduction
Food quality is a complex notion that includes several aspects: legal, nutritional, hygienic and sensorial [1]. Nowadays, there is a great deal of interest in the definition of quality, especially with regard to the Protected Denomination of Origin (PDO). These products are manufactured in a very traditional way, with differences between the texture, smell, aroma and taste of the cheeses depending on the producers [2]. Therefore, it is of great interest to study the characterization of these products throughout their maturation process, in the different stages of their commercialization. Palmero cheese is a typical product of La Palma (Canary Isles, Spain), manufactured from the raw goat milk of the Palmera breed. Palmero PDO cheese characteristics have been partially described before [21-24], but there is no in-depth study of the major physicochemical and sensory characteristics along maturation. Furthermore, to date, an intravarietal comparison has not been investigated. This paper is written as a part of the report for the strategic project of the Canary Government for increasing the characterization and differentiation of Canarian PDO cheeses, including studies of changes that occur during ripening and consumer preference evaluations. The aim of this work was to describe changes in the physicochemical and sensorial characteristics of the Palmero cheese during ripening, including an intravarietal comparison from four different dairy plants.
Sampling and Cheese-Making Procedure
With the agreement of a panel of professionals (veterinarians, farmers, cheese makers, and marketing agents), 4 artisanal cheese producers were selected for their quality and consistency of Palmero cheese production adhering to PDO regulations [3].
Cheeses were manufactured from raw milk following the specifications of the Palmero Cheese Denomination of Origin Regulatory Board (2002) [3]. A total of 48 Palmero goat cheeses were manufactured (12 from each cheesemaker) on the same day as milking, using Palmera goats' milk with a minimum of 3.80%, 4.00% and 12.50% of protein, fat and total solids, respectively. The milk was coagulated with animal rennet from kid's abomasum, at a temperature between 27 and 33 °C for approximately 45 min. Curds were subsequently cut to obtain grains of less than 3 mm in diameter, after a previous pressing in the vat to eliminate part of the whey. The curd was then moulded into pieces of approximately 2 kg. Afterward, salting with dry sea salt was achieved by rubbing dry salt onto the surface of the cheeses. The cheeses were ripened in the same ripening chamber, located at the PDO Palmero Council, with controlled ripening factors at 10 to 12 °C and 85% to 87% relative humidity. Although it is not a usual practice, the producers of Palmero PDO cheese ripen their cheeses in a collective ripening chamber supervised by the technicians of the PDO regulatory council. From each cheese factory, 3 cheeses were picked up after 15, 30, 60 and 90 days of ripening to study physicochemical and sensorial characteristics (four cheese factories × three cheeses × four ripening periods). These periods were selected by the PDO technicians as the most suitable from a commercial point of view. Cheese samples were coded with a letter representing the respective dairy plant where they were manufactured, and a sample ID number was assigned to all cheese samples. For each aging time, cheeses were sent to the laboratory in refrigerated boxes and analysed immediately to evaluate the changes.
Physicochemical Analysis
Chemical cheese analyses were made in triplicate using near-infrared spectroscopy (Instalab 600, Foss Electric, Slangerupgad, Denmark), calibrated for total solids content by ISO 5534:2004, fat content by ISO 11870:2009 and nitrogen content by ISO 8968-1:2014. Besides, cheese pH was determined at 20 °C using a pH meter InoLab Level 1 from WTW (Weilheim, Germany). The pH of the cheese exterior was measured at a depth of 1 cm from the outer surface, while internal pH was measured in the center of the cheese.
Texture and Colour Measurements
Texture characteristics were determined using a Texture Expert Exceed XT2i (Surrey, England) by carrying out a texture profile analysis (TPA), as described in detail by Fresno and Álvarez [25]. Six parameters were obtained from a double compression: hardness (N), fracturability (N), adhesiveness (N.s), cohesiveness, elasticity (%) and gumminess (N).
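For reference, the textbook TPA definitions can be sketched as follows (a hedged Python sketch, not the Texture Expert software's own algorithm; peak detection is deliberately simplified, and elasticity, which requires the compression-distance extents, is omitted):

import numpy as np

def tpa_parameters(t, f, split_idx):
    """t: time (s); f: force (N); split_idx: boundary between the two bites."""
    t1, f1 = t[:split_idx], f[:split_idx]
    t2, f2 = t[split_idx:], f[split_idx:]
    hardness = float(f1.max())                       # peak force of first bite (N)
    # Fracturability: force at the first local drop before the hardness peak
    peak_i = int(f1.argmax())
    drops = np.where(np.diff(f1[:peak_i]) < 0)[0]
    fracturability = float(f1[drops[0]]) if drops.size else hardness
    adhesiveness = float(np.trapz(np.minimum(f1, 0.0), t1))  # negative area (N.s)
    area1 = np.trapz(np.maximum(f1, 0.0), t1)        # positive area, first bite
    area2 = np.trapz(np.maximum(f2, 0.0), t2)        # positive area, second bite
    cohesiveness = float(area2 / area1)
    gumminess = hardness * cohesiveness              # N
    return {"hardness": hardness, "fracturability": fracturability,
            "adhesiveness": adhesiveness, "cohesiveness": cohesiveness,
            "gumminess": gumminess}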
Internal and external colour was determined using a portable Minolta spectrocolourimeter (Minolta CR-400, Osaka, Japan) following the guidelines previously described by Fresno and Álvarez [25]. Five colour parameters were determined according to the CIELCH and CIELAB colour space: L*, chroma, hue angle, a*, and b*.
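The CIELCH attributes derive from the CIELAB coordinates by a simple polar transform; a small helper makes the relation explicit:

import math

def lab_to_lch(L, a, b):
    """Convert CIELAB (L*, a*, b*) to CIELCH (L*, C*, h in degrees)."""
    chroma = math.hypot(a, b)                      # C* = sqrt(a*^2 + b*^2)
    hue = math.degrees(math.atan2(b, a)) % 360.0   # hue angle, 0-360 degrees
    return L, chroma, hue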
Sensory Analysis
Sensory analysis was carried out after 15, 30, 60 and 90 days of ripening. Samples, coded with 3-digit random codes, were presented in a balanced order to avoid presentation-order effects. The methodology employed has been previously described, with odour and flavour attributes in accordance with those described by Berodier et al. [26], and texture following the guidelines published by Lavanchy et al. [27]. This methodology has been adapted to goat cheeses by Fresno et al. [28]. A panel of seven formally trained and highly experienced judges was used, who already work in collaboration with the Palmero PDO cheese sensory panel. Furthermore, before the beginning of this experiment, five extra training sessions with Palmero PDO cheeses were performed.
The sensorial evaluation was carried out in a specific room for sensory analysis at the Canarian Institute of Agrarian Research (ICIA) [29], following the methodology formerly explained in Álvarez et al. [20]. Each judge received two portions per cheese sample, one for texture and the other for odour and flavour. Using a structured scale from 0 to 7, 9 parameters for texture and 6 attributes for odour and flavour were determined. In addition, each judge noted specific descriptors for odour and flavour.
Statistical Analysis
SPSS version 15.0 (SPSS Inc., Chicago, IL, USA) was the package used for statistical processing of the results. A General Linear Model (GLM), MANOVA, was used to establish statistical differences in the values of the physicochemical parameters and the scores of the sensory analyses according to the maturation time and the cheese factory. Post-hoc multiple comparisons by Tukey's test and multiple regression were performed for the ripening-time factor and the intravarietal comparison. Principal component analysis (PCA) was performed for the varietal factor, and discriminant analysis was performed for the ripening-time factor. Pearson correlations were computed between physicochemical and textural variables. The linearity of the relationships was graphically checked.
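A sketch of this pipeline in Python (the study itself used SPSS 15.0; the column names hardness, moisture, fat, dairy and ripening_days stand for a hypothetical tidy per-cheese table, and only one response variable is shown):

import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multicomp import pairwise_tukeyhsd
from scipy.stats import pearsonr
from sklearn.decomposition import PCA

def analyse(df: pd.DataFrame):
    # Two-factor GLM: ripening time and dairy plant (with interaction)
    model = smf.ols("hardness ~ C(ripening_days) * C(dairy)", data=df).fit()
    aov = anova_lm(model, typ=2)
    # Tukey post-hoc comparison of the ripening periods
    tukey = pairwise_tukeyhsd(df["hardness"], df["ripening_days"])
    # Pearson correlation between a physicochemical and a textural variable
    r, p = pearsonr(df["moisture"], df["hardness"])
    # PCA on standardized variables for the intravarietal comparison
    cols = ["hardness", "moisture", "fat"]
    X = (df[cols] - df[cols].mean()) / df[cols].std()
    scores = PCA(n_components=2).fit_transform(X)
    return aov, tukey, (r, p), scores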
Basic Physicochemical Characteristics
The least square means for physical and chemical composition, together with ANOVA results, are presented in Table 1. Ripening affected all physicochemical characteristics (p < 0.01). Internal pH and fat content increased significantly along the 90 days of ripening, while moisture decreased. The external pH decreased from 6.50 to below 5 in the first 15 days after production (data not shown); it was then maintained for up to 60 days of ripening and finally increased significantly. The pH of the cheese increased during ripening as a consequence of the consumption of lactic acid and the alkalizing effect of the compounds generated during protein degradation [30]. The cheese acidity level has great importance, influencing the growth of micro-organisms and enzymatic activity throughout the maturation process, as well as affecting the rheological properties and flavour [31,32].
Palmero cheese pH values fluctuated between 4.90 and 5.36 during the ripening period. The final average pH value (5.17-5.36) is close to the range of values observed by different authors for other Canarian PDO goat cheeses [25].
The role of pH in cheese texture is particularly important because changes in pH are related directly to chemical changes in the protein network of the cheese [7]. The surface pH value is higher than the internal one, as observed by Fresno et al. [33] when studying the Armada cheese variety. As regards the moisture content, the highest value was measured at 15 days, after which it decreased significantly until 60 days while remaining constant up to 90 days. Fresno and Álvarez [25] obtained a similar moisture behaviour in Majorero cheeses (another Canarian goat PDO cheese), although the Palmero values were considerably lower. Cheese moisture is controlled by the velocity and extent of syneresis and the contraction of the casein structure. In the moulding, pressing and salting processes, which follow coagulation, the decrease in pH plays an important role, resulting in a significant whey diminution [34]. The protein concentration showed irregular values but with small fluctuation, only 4 percentage points between the lowest value (30 days) and the highest value (60 days). These results are similar, although slightly lower, than those determined by other authors for cheeses made with local breed milk [25,33]. The total fat content of the Palmero cheese at the start of ripening was 47.72%; this significantly increased to 52.43% (p < 0.01) at 60 days, and then remained constant until the end of the maturation period. The protein and fat contents observed in Palmero cheese during ripening (expressed as g/100 g of TS) showed values of around 30 and 50, respectively. These values are in the range observed for other Spanish goat's milk cheese types [25,33], although somewhat lower than other reported Palmero cheese protein concentrations [19]. (Notes to Table 1: LSM = least square mean; L*(e), chroma(e), hue angle(e), a*(e) and b*(e) correspond to parameters measured on the cheese surface (external), and L*(i), chroma(i), hue angle(i), a*(i) and b*(i) correspond to parameters measured inside the cheese (internal); a-d: within a row, means marked with different superscripts differ significantly (p < 0.05).)

Texture

Table 1 also shows the values for the textural attributes of Palmero PDO cheeses derived from TPA analysis. Texture profile analysis is an important tool in cheese characterization. All textural parameters of the cheeses were affected by ripening time (p < 0.001), except cohesiveness, which presented similar values for all ripening periods. These results are in accordance with those reported by other authors [25,35,36]. Fracturability and gumminess increased along maturation, as did the fat content [37]. However, Pinho et al. [7], studying "Terrincho" ewe's cheese, recorded an increase in these parameters up to 30 days and then a decrease afterwards to the end of the maturation. Although Palmero PDO cheese textural behaviour is similar to Majorero PDO cheese [12], fracturability, hardness and gumminess values are considerably higher, due to the different technologies applied. The most important factor could be the intensive cut until grains are less than 3 mm in diameter. An increase of 110% in the hardness values has been found at 90 days of ripening when compared with 15-day-old cheese. A direct correlation has been detected between hardness and moisture; this correlation has also been noted by Pompei et al. [38] and Tejada et al. [39]. Decreased water content promotes a greater casein concentration and an increase in the number of casein bonds; both factors increased hardness values.
Moreover, water plays a plasticizer role between the protein molecules, making the cheese softer [40]. Similar hardness value evolution was determined by Fresno et al. [19] on Palmero experimental cheeses made with different types of coagulants.
On the other hand, elasticity, which is the degree of recovery of a deformed piece of cheese after the deforming force is removed [41], decreased until 30 days of ripening and remained constant until 90 days, while adhesiveness showed irregular fluctuations, decreasing in the first two months of maturation and increasing thereafter. These results could be related to the increasing fat content in the ripening process of the cheeses [42,43]. The fat content increment modifies the texture properties. These changes in texture are characterised by increased elasticity, decreased cohesiveness and decreased smoothness of mass [44]. Lawrence et al. [45] reported that the pH of cheese had a large effect on textural properties. A direct correlation of fracturability, hardness (p < 0.001) and gumminess (p < 0.01) with pH has been detected. The observed increases in rheological fracturability, hardness and gumminess can be attributed to the dissociation of calcium phosphate bridges between casein molecules with increasing pH. Decreasing pH towards 5.4 allows greater hydrophobic interaction between the protein molecules and causes the curd to become firmer and more elastic. Everett and Olson [46] found that strain and fracture increased as the pH increased from 5.0 to 5.25 in Cheddar cheese. For Palmero cheese, pH increased from 4.90 to 5.37 over 90 days of ripening, allowing a firmer, more elastic and crumbly curd. Oppositely, Watkinson et al. [31] found that increasing pH resulted in less crumbly and firmer cheeses. Fracturability and hardness, especially due to their high correlation with moisture, fat and protein content, and also gumminess, could be used as interesting ripening predictors.
Colour
The mean values for the L*, chroma, hue angle, a* and b* parameters are shown in Table 1. Cheese colour was statistically affected (p < 0.05) by ripening time. Only external chroma and b*, and internal a* and hue angle, were not affected by this factor. Both external and internal lightness decreased along maturation, according to Rohm and Jaros [47]; this trend was more prominent on the surface than inside the cheese. As noted for certain textural parameters, lightness could be an appropriate parameter for ripening prediction. The 30- and 60-day aged cheeses showed higher internal colour intensity, while external colour tone was significantly (p < 0.05) higher in fresh cheeses (15 days). The increase in colour intensity with ripening time has already been reported in other cheese varieties, such as Cheddar [48], Mahón [49], Emmental [50], and Los Pedroches [39]. An indirect correlation (p < 0.05) was detected between moisture and colour, a finding also noted by Rohm and Jaros [47] and Frau et al. [49] in cow cheeses, and Tejada et al. [39] in experimental ewe cheeses made with different coagulants. An increase in yellowness b*(i) was observed up to 60 days of ripening; however, cheeses became less yellow at 90 days. Palmero PDO cheeses showed lower L* and chroma values than Palmero experimental cheeses [19]. As was observed by other authors [7,50], there was a decrease in lightness and a slight increase in both redness (a*) and yellowness (b*) during cheese ripening in the present study.
Sensorial Analysis
The results of the sensory evaluation of Palmero cheeses are shown in Table 2 for the texture attributes. Ripening time affected six of the nine texture characteristics. Moisture (superficial and in-mouth) decreased (p < 0.001) throughout the maturation period, reaching the lowest values at 90 days. Moisture loss is frequently associated with lower paste elasticity [51]. It has been suggested that a higher moisture content allows greater movement of the casein matrix and reduces resistance to deformation in hard cheeses [52]. Moisture content has been reported to influence the fracture mechanism during biting and mastication [53]. As expected, superficial and mouth moisture were directly correlated with physicochemical moisture and inversely correlated with protein and fat contents (Table 3). Roughness and friability increased until 60 days, and the values stabilized over the last thirty days. Older cheeses were less adhesive than fresh ones; the adhesiveness parameter decreased along ripening and was not related to the fat increment, in contrast with other cow [54] and experimental cheeses [43]. In addition, the firmness, solubility and granulosity values were very similar throughout the ripening period. These results are in accordance with Majorero [55] and Cheddar cheeses [56], although other studies recorded an increase for older cheeses [57,58].
Ripening time affected all the odour and flavour parameters analysed (Table 2). The odour and flavour intensity increased significantly (p < 0.001) during ripening, as also recorded by Agabriel et al. [54]. A general increase in flavour is perceived during ripening; this increment with age is caused by the production of a wide range of volatile compounds during maturation through the metabolism of triglycerides and proteins [59]. In other cheeses, the intensity of the sensory attributes was found to increase with ripening time, even though the increase was not significant for all attributes [60]. The odour and flavour of cheese result from the correct balance and concentration of numerous sapid and aromatic compounds perceived during cheese consumption [13]. Along maturation, proteolysis and lipolysis increase, promoting the appearance of certain volatile compounds responsible for the odour and flavour characteristics [39,61]. These compounds have already been detected in goat milk cheese varieties [62,63].
Fresh cheeses presented lactic odours, mainly associated with raw goat milk, and citric aromas, commonly lemon, while older cheeses developed more complex descriptors such as butter and hay odours, although lactic aromas were still evident. Compared with another Canarian cheese, Majorero PDO cheese [25], Palmero cheese showed higher friability, firmness and granulosity scores, while it was less humid and soluble. Trigeminal stimulation behaved unevenly during ripening: saltiness and pungency values increased, as described by Engel et al. [64], while bitterness decreased up to 90 days and acidity up to 60 days, increasing afterwards. In Cantal cheese [54], saltiness, bitterness and pungency increase along maturation; bitterness was also previously observed by Tejada et al. [65] in Murcia al Vino cheese made with animal rennet. During maturation, bitterness typically increases owing to the formation of bitter peptides from the breakdown of the protein network [66]; contrary to this, in our experiment bitterness values were higher at the 15- and 30-day stages of maturation. The evolution of acidity is consistent with the reports of Gaborit et al. [62] for several goat milk cheeses. Finally, neither sweetness nor astringency was perceived at all.
Relationships between Sensory and Rheological Parameters
Cheese texture is a critical quality attribute. Sensory texture is determined by descriptive analysis, while instrumental texture is determined by rheological and fracture testing. The correlation analysis results are shown in Tables 3 and 4. As expected, superficial and mouth moisture were directly correlated with physicochemical moisture and inversely correlated with protein and fat contents. Hardness was significantly correlated with the sensorial parameters roughness, granulosity and friability, and negatively correlated with superficial moisture, elasticity and mouth moisture. Fracturability showed the same correlations as hardness, but with a weaker association with friability. Although Foegeding and Drake [8] reported clear correlations between sensory and mechanical measures of hardness and firmness, these two parameters were not correlated in Palmero cheeses. None of the sensorial parameters correlated significantly with cohesiveness or adhesiveness. These results are in agreement with Foegeding and Drake's [8] studies, where chewdown sensory terms measuring adhesiveness, cohesiveness and gumminess were poorly correlated with mechanical parameters. Furthermore, elasticity was correlated with superficial humidity, a parameter which in turn was correlated with cheese moisture.
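The sensory-instrumental relationships in Tables 3 and 4 reduce to pairwise Pearson correlations. A minimal sketch of how such a correlation table is computed follows; the column names are hypothetical placeholders, not the study's variable names.

```python
import pandas as pd
from scipy import stats

def correlation_table(df, sensory_cols, instrumental_cols):
    """Pairwise Pearson r and p-values between sensory and instrumental columns."""
    rows = []
    for s in sensory_cols:
        for i in instrumental_cols:
            r, p = stats.pearsonr(df[s], df[i])
            rows.append({"sensory": s, "instrumental": i, "r": r, "p": p})
    return pd.DataFrame(rows)

# Hypothetical usage (column names assumed, not the paper's):
# correlation_table(cheese_df,
#                   ["mouth_moisture", "roughness", "friability"],
#                   ["hardness", "fracturability", "moisture_pct"])
```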
Intravarietal Comparison
The physicochemical composition, the TPA texture attributes and the colour parameters of the 12 samples of Palmero cheese with 30 days of ripening from four different dairy plants are shown in Table 5. This 30-day ripening period was chosen as representing the highest-consumption stage of Palmero cheese. All the values of the parameters analysed remain within the ranges of the Palmero PDO Board [3]. Although all PDO cheeses must maintain similar physicochemical and sensory characteristics within a specific range of variability, each cheesemaker can slightly modify different technological aspects, both in cheesemaking practices and, for example, in the maturation pattern, by controlling the temperature and humidity of the environment.
In general, cheeses from the four dairy plants presented similar compositions, except for internal pH, which showed moderate variability between cheesemakers (p < 0.01), ranging from 4.82 to 5.10. External pH, fat and protein content and moisture percentage presented small variability. This could be due to the similar physicochemical characteristics of the milks used and the similar feeding system (pasture and grazing). Regarding the TPA texture attributes, cheeses from the B and C dairy plants were similar with respect to fracturability, elasticity and gumminess, while the A and D factories showed similar results for cohesiveness, elasticity and gumminess. Hardness (p = 0.349) was the only attribute that did not vary between cheeses from different dairy plants. With respect to colour attributes, no significant differences were observed in L* measured on the exterior and interior surfaces of the cheeses. Cheeses from plant C presented the greatest colour intensity and yellow percentage for the external measures; these results were not repeated for the internal measures. Although significant differences were perceived in most colour parameters, the determined values nevertheless remain within a narrow range of variability.
The sensorial attributes of Palmero goat cheese after 30 days of ripening are given in Table 6. Cheeses from plant A presented the greatest variability in organoleptic characteristics: this cheese was firmer, more crumbly, saltier and more pungent, while C cheeses were more acidic and soluble, showing higher superficial and mouth moisture values. Cheeses from the four dairy plants presented similar values for adhesiveness. With respect to odour and flavour intensity, cheeses from cheese factory B presented the lowest values, while dairy plant A showed the highest scores. It is important to note that all the sensorial values detected were within the ranges determined by the official sensorial panel of Palmero PDO cheese. Palmero PDO cheese is a local artisanal production; even though the cheeses are made under PDO Regulatory Board conditions [3], they are difficult to standardize. All producers work by hand, with know-how passed from parents to children, and each uses its own traditional formula, which makes the cheeses distinct. These variations in sensory attributes can add extra value within the quality and origin guarantee of the PDO.
Finally, factor analysis, using the principal components method for the extraction of factors, was applied to all the cheese samples from the different dairy plants, to obtain a more simplified view of the relationships among the sensorial parameters analysed. It was considered more interesting to carry out the PCA only with the sensory descriptors, without including the instrumental variables, both for the knowledge and description of Palmero cheeses and for the possibility of producers and technicians using these results. Because their eigenvalues were higher than one, four factors were chosen (88.9% of the total variance); they therefore explain more variance than the original variables (Table 7). A Varimax rotation was carried out to minimize the number of variables influencing each factor and thus facilitate the interpretation of the results. The first factor, which explains the highest percentage (45.74%), is associated with the intensity of flavour and odour and with textural attributes such as friability and granulosity. Superficial and mouth moisture and acidity had the highest loadings on the second component, accounting for 22.75% of the total variance. The third component, explaining 13.07% of the variance, was defined mainly by saltiness and acidity. The results of the PCA are depicted on a two-dimensional plot (Figure 1) that explains 68.5% of the total variance. The negative segment of the plot for PC1 was related to three sensorial texture parameters (elasticity, solubility and mouth moisture) and also to bitterness as a basic taste, whereas the positive segment of the plot for that factor was mainly related to odour and flavour intensity, to granulosity, firmness and friability as texture characteristics, and to pungency stimulation. Moreover, the distribution of the cheese samples is also presented, and a high discrimination is registered between the 30-day cheese samples from the four dairy plants. Although the four plants are moderately separated, cheese factories C and D remain closer together than A and B.
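For reproducibility, the extraction described above (principal components on the standardized descriptors, retention of factors with eigenvalues above one, Varimax rotation) can be sketched in a few lines of numpy. This is a generic implementation of Kaiser's varimax algorithm assuming a samples-by-descriptors data matrix, not the authors' statistical-package procedure.

```python
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    """Kaiser varimax rotation of a p x k loading matrix."""
    p, k = loadings.shape
    R, var = np.eye(k), 0.0
    for _ in range(max_iter):
        L = loadings @ R
        u, s, vt = np.linalg.svd(loadings.T @ (
            L ** 3 - (gamma / p) * L @ np.diag((L ** 2).sum(axis=0))))
        R = u @ vt
        if s.sum() - var < tol:
            break
        var = s.sum()
    return loadings @ R

def pca_factors(X, eig_threshold=1.0):
    """PCA on standardized data; keep components with eigenvalue > threshold."""
    Z = (X - X.mean(0)) / X.std(0, ddof=1)
    eigval, eigvec = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
    order = np.argsort(eigval)[::-1]                 # descending eigenvalues
    keep = order[eigval[order] > eig_threshold]
    loadings = eigvec[:, keep] * np.sqrt(eigval[keep])
    return varimax(loadings), eigval[order]
```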
Finally, the stepwise discriminant analysis of all cheese samples according to ripening period (Table 8) showed that cheeses at different ripening stages were very well differentiated. Both the 60- and 90-day cheeses were perfectly classified, with no cheeses assigned to the wrong group. The 15- and 30-day cheeses were less well classified, but a high percentage were still correctly grouped (92.9% and 82.1%, respectively).
Figure 1. Correlation plot with distribution of cheese samples.
Table 8. Stepwise discriminant analysis of cheese samples according to ripening period, numbers and (%).
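The classification rates in Table 8 correspond to assessing how well linear discriminant functions separate the ripening groups. A minimal scikit-learn sketch with leave-one-out validation is shown below; it deliberately omits the stepwise variable-selection step of the study's procedure, and the inputs are assumed, not the study's data.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import confusion_matrix

def classification_table(X, y):
    """X: samples x descriptors; y: ripening labels (15, 30, 60, 90 days).
    Returns the confusion matrix and percent correctly grouped per class."""
    pred = cross_val_predict(LinearDiscriminantAnalysis(), X, y,
                             cv=LeaveOneOut())
    labels = np.unique(y)
    cm = confusion_matrix(y, pred, labels=labels)
    pct = 100.0 * cm.diagonal() / cm.sum(axis=1)
    return cm, dict(zip(labels, pct.round(1)))
```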
Conclusions
These physical results, together with the chemical composition and sensorial properties, will be used for a better and more complete definition of Palmero PDO cheeses at different ripening periods. The variation observed in the texture and colour parameters could be used to estimate ripening time, providing an objective method. Furthermore, some differences in sensorial profile have been detected between artisanal cheese factories. These differences show that Palmero PDO cheese is not a standardised cheese; it is an artisan cheese. Differences in the sensory profile can be considered an advantage, since these cheeses can satisfy the requirements of consumers with different tastes.
Funding: This research was supported by the DOQUECAN Regional Project funded by the Canarian Government with FEDER funds.
Informed Consent Statement: Not applicable.
Data Availability Statement: The data are not publicly available because they were obtained as part of the Canary Islands Government's DOQUECAN project and currently belong to the Regulatory Council of the Palmero Cheese Protected Designation of Origin. The data presented in this study may be available on request from the corresponding author. | 2021-01-06T06:18:53.705Z | 2020-12-31T00:00:00.000 | {
"year": 2020,
"sha1": "cd5c2b18bf2d14f970e38eaf122c2947255afb54",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-2615/11/1/58/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "59e3af4dbc8dfc787dd45737305a614a92b2ff49",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |
139090795 | pes2o/s2orc | v3-fos-license | CFD APPROACH AS DESIGN OPTIMIZATION FOR GAS TURBINE TUBULAR COMBUSTOR
Gas turbine combustor mixing processes are of paramount importance in the combustion and dilution zones. In the primary zone of the combustor, good mixing is essential for high burning rates and to minimize soot and nitric oxide formation, whereas the attainment of a satisfactory temperature distribution (pattern factor) in the exhaust gases is very dependent on the degree of mixing between air and combustion products in the dilution zone. A primary objective of combustor design is to achieve satisfactory mixing within the liner and a stable flow pattern throughout the entire combustor, with no parasitic losses and with minimal length and pressure loss. Accordingly, in the present study an attempt has been made, through a Computational Fluid Dynamics (CFD) approach using CFX 13.0, to analyze the flow patterns within the combustion liner and through the different air admission holes (primary zone, intermediate zone and dilution zone), and from these to obtain the temperature distribution in the liner and at the walls, as well as the temperature quality at the exit, for a tubular combustion chamber designed for a gas turbine engine. The aim is to illustrate what can be done and also to identify trends and those areas where further work is needed.
Introduction
Gas turbine combustor designs are becoming increasingly challenging in order to meet stringent requirements such as lower maximum exit temperatures, lower emissions, higher durability, lower fabrication and maintenance cost, and reduced design and time-to-market cycle times. These requirements place more emphasis on Computational Fluid Dynamics (CFD) simulation of the combustion flowfield to reduce testing and improve performance, which is a complex problem. The flowfield conditions at the combustor exit in real gas turbine engines are highly non-uniform in temperature, pressure, and velocity. These non-uniformities are a function of the combustion chamber flow arrangement and geometry. The major features of these combustion chambers include the chamber liners, entrance swirlers, fuel nozzles, liner cooling slots and holes, as well as primary and secondary dilution holes. One major consideration in the design of a combustor is the division of the inlet mass flow into separate flow paths: a large portion of the combustor inlet flow is used for cooling the metal casing and diluting the combustion products. Several experimental and numerical studies have been performed to analyze and model combustor flows. Snyder et al. [1] used an advanced computational analysis system to optimize the combustor exit temperature in the Pratt & Whitney PW6000 engine. They successfully used this computational design tool to tailor combustor exit temperature profiles to within design limits by performing a parametric study of dilution hole patterns. Although no information on the hole pattern geometry was given, the optimized dilution hole pattern was incorporated into the PW6000 engine to improve durability and prolong turbine life, indicating that the computational effort was successful. B. Zamuner et al. [2] carried out a numerical simulation of reactive flow in a tubular gas turbine combustor with detailed kinetic effects and concluded that flame structures, combustion regimes and pollutant emissions are often difficult to predict in aircraft combustors because they are closely related to the turbulent nature of the flow, requiring a complete iterative procedure to improve the predicted values. Sierra et al. [3] studied a combustion chamber that was part of a 70 MW gas turbine used in an operating combined-cycle power station and concluded that the adopted range of primary-air pressure imbalance applied in the refined model produced an effect interpreted as a flame expansion process caused by inlet air pressure variations. Fureby et al. [4] carried out an experimental and computational study of a multi-swirl gas turbine combustor. The computational approach pursued was large eddy simulation (LES), which provides a compromise between accuracy and cost. LES attempts to capture the dynamics and evolution of the large-scale flow while allowing the inclusion of realistic flow and chemistry parameters. The flow was affected by the exothermicity through volumetric expansion, increased molecular viscosity and baroclinic torque, resulting in flow acceleration downstream of the flame and the development of wall jets. Di Martin et al. [5] studied a reactive CFD analysis of a complete annular gas turbine combustor module for aero-engine applications. All the complex combustor features, such as cooling holes, primary and dilution holes, swirler, and fuel injector,
were considered and fully coupled into the CFD calculation. The internal flow field of the combustor mixture was studied, and it was observed that the effusion holes, which were very small in diameter and very large in number, were not meshed; instead, the effect of the drilled liners was properly modeled by means of source terms and well-tested correlations that link hole mass flow rate with the pressure drop across the liners. Massimo Masi et al. [6] carried out a numerical and experimental analysis of the temperature distribution in a hydrogen-fuelled combustor for a 10 MW gas turbine as part of a ministerial project coordinated by ENEL Ricerca.
The computed temperature profiles showed acceptable matching with measurements in the liner wall, and the low-cost CFD model presented was able to capture the mean temperature field within the combustor, which was the fundamental requirement for the planned thermoacoustic characterization of the analyzed combustor. Daero Joung and Kang [7] designed and developed a small reverse-flow, semi-silo type gas turbine combustor for power generation, operated in a lean premixed mode to achieve stable combustion. In their experiment, a combustor with a multi-swirler arrangement (a main swirler and a pilot swirler at the burner inlet) and an annular nozzle with pilot and main fuel injectors was simulated. The premixed coherent flame model (PCFM) was applied for partially premixed methane/air with an imposed downstream flame area density (FAD) to avoid flashback and incomplete combustion; the combustion efficiency reached about 99.9% at higher inlet temperature. Channiwala & Kulshreshtha [8] presented a three-dimensional model, investigated numerically, to study the flow behavior in the pre-diffuser, dump region, liner, inner and outer annuli, and at the swirler exit. In the present work, an attempt has been made through reacting CFD analysis to achieve proper mixing of the combustion products in the dilution zone and temperature uniformity at the exit, while ensuring that the total pressure remains almost the same. The analysis has been carried out using CFX 13.0 for a tubular gas turbine combustor without casing, with a swirler and fuel injector at the inlet. The results obtained through computation show proper mixing of the combustion products with the air admitted through the holes of the different zones and an almost uniform temperature at exit.
Combustor geometry
In this study, the geometry analysed is a tubular combustor having an axial flow swirler with 8 aerofoil-shaped vanes at the combustor inlet, provided to maximize the inlet air turbulence, and an injector with 6 holes of 250 micron diameter at the entrance of the primary zone. Fig. 1 shows the cross-sectional view of the combustor, which is designed according to the design methodology proposed by Lefebvre A. H. and Ballal D. R. [9].
Combustor mesh and boundary condition
The tetrahedron unstructured grid has been generated using GAMBIT 2.2. Fig. 2 shows the 3-D grid model of the tubular combustion chamber, with 148808 nodes and 781989 elements, selected for the CFD simulation. Total pressure and total temperature have been specified at the combustor inlet; static pressure, along with the target mass flow rate, has been specified at the combustor core exit. Turbulent intensity and hydraulic diameter have been specified as the initial conditions for the inlet turbulence. At the combustor bleeds, where no combustion takes place, a mass flow rate boundary condition has been specified in such a way that the prescribed quantity of flow leaves the domain. All the combustor walls have been treated as adiabatic. Fig. 3 shows the computational domain.
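The inlet turbulence specification above requires a turbulent intensity and a hydraulic diameter. As a rough illustration (the correlation and all numbers below are generic CFD practice, not values from this paper), these are commonly estimated as D_h = 4A/P and I = 0.16 Re^(-1/8):

```python
def hydraulic_diameter(area_m2, wetted_perimeter_m):
    """D_h = 4A/P, the usual definition for non-circular passages."""
    return 4.0 * area_m2 / wetted_perimeter_m

def turbulence_intensity(velocity_ms, d_h_m, rho=1.2, mu=1.8e-5):
    """Core-flow intensity from the common fully-developed pipe-flow
    correlation I = 0.16 * Re^(-1/8)."""
    re = rho * velocity_ms * d_h_m / mu
    return 0.16 * re ** (-1.0 / 8.0)

# Illustrative numbers only, not the actual combustor inlet conditions:
d_h = hydraulic_diameter(0.005, 0.30)
print(f"D_h = {d_h:.4f} m, I = {turbulence_intensity(40.0, d_h):.3f}")
```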
Governing equations
Six equations must be solved to model the flow field: continuity, momentum, energy, species transport, turbulence and combustion. In the present study, the flow is treated as steady, turbulent, compressible and reacting. The governing Reynolds-averaged Navier-Stokes (RANS) equations for the conservation of mass, momentum, energy and species concentration for the gas, together with an equation of state, are approximated for each mesh cell. The resulting sets of equations are solved numerically to obtain the flow field, mixing and combustion data. Table 3 shows the computational model for the combustor analysis.
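For reference, a minimal statement of the mass and momentum conservation equations implied above is given in index notation (a sketch only; the full system solved here additionally includes the energy, species, turbulence and state equations mentioned in the text):

```latex
\frac{\partial (\rho u_i)}{\partial x_i} = 0, \qquad
\frac{\partial (\rho u_i u_j)}{\partial x_j}
  = -\frac{\partial p}{\partial x_i}
    + \frac{\partial}{\partial x_j}\left[
        \mu\left(\frac{\partial u_i}{\partial x_j}
               + \frac{\partial u_j}{\partial x_i}\right)
        - \rho\,\overline{u_i' u_j'}\right]
```

The Reynolds stresses, the last term above, are closed by the turbulence model described in the following sections.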
Flow solution
ANSYS CFX v13.0 is used as the solver. The numerical settings for the solver are described below.
Time stepping
The problem is solved as a steady state flow problem, consistent with the RANS turbulence modelling used, which means that relatively large time steps are used in order to achieve a converged solution as quickly as possible.
Heat transfer
"Thermal energy" model is used, which means that the total energy models the transport of enthalpy including the kinetic energy effects. This model should be used where there is change in density or the Mach number exceeds 0.2; in both of these cases kinetic energy effects are significant. In ANSYS CFX, when one chooses thermal energy the fluid is modelled as compressible, regardless of the original fluid condition, i.e. gases with Mach number less than 0.2. One should know that incompressible fluid does not exists in reality but for the gases with Mach number less than 0.2 the compressible effects are in general negligible.
Turbulence
For turbulence, the k-ε model is used. The k-ε model is one of the most common turbulence models. It is a two-equation model that includes two extra transport equations to represent the turbulent properties of the flow. This allows the model to account for history effects such as convection and diffusion of turbulent energy.
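For reference, the two extra transport equations of the standard k-ε model are, in their usual textbook form with the default model constants (a sketch, not CFX's exact implementation):

```latex
\frac{\partial(\rho k)}{\partial t} + \nabla\cdot(\rho k \mathbf{U})
  = \nabla\cdot\left[\left(\mu + \frac{\mu_t}{\sigma_k}\right)\nabla k\right]
    + P_k - \rho\varepsilon

\frac{\partial(\rho\varepsilon)}{\partial t} + \nabla\cdot(\rho\varepsilon \mathbf{U})
  = \nabla\cdot\left[\left(\mu + \frac{\mu_t}{\sigma_\varepsilon}\right)\nabla\varepsilon\right]
    + \frac{\varepsilon}{k}\left(C_{\varepsilon 1} P_k - C_{\varepsilon 2}\,\rho\varepsilon\right)
```

Here μ_t = ρ C_μ k²/ε, P_k is the shear production term, and the standard constants are C_μ = 0.09, C_ε1 = 1.44, C_ε2 = 1.92, σ_k = 1.0 and σ_ε = 1.3.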
Combustion model
The laminar flamelet model is used as the combustion model; it solves only two transport equations while representing a large number of species (low computational cost). It provides information on minor species and radicals (such as CO and OH). As well as accounting for turbulent fluctuations in composition (presumed PDF), it models local extinction at high scalar dissipation rates or shear strain. The model is only applicable to two-feed systems (fuel and oxidizer) and requires a chemistry library as input. The diesel fuel library was generated using CFX-RIF, with diesel modeled as a two-component surrogate fuel (by mass, 62.44% n-C10H22 and 37.56% A2CH3-C11H10) [12]. The same pressure level must apply to the whole domain, and the model is only for non-premixed systems.
Convergence criteria
In order to determine whether convergence has been obtained, residuals are constantly monitored; when they have reasonably flattened out, the run is stopped and the results are post-processed.
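As an automated stand-in for the "reasonably flattened out" judgement, one can compare the trailing mean of a residual history against the preceding window; the window length and tolerance below are arbitrary illustrative choices, not values used in this work.

```python
import numpy as np

def residuals_flattened(residuals, window=50, rel_tol=0.05):
    """True when the mean residual over the last `window` iterations is
    within `rel_tol` (relative) of the mean over the window before it."""
    r = np.asarray(residuals, dtype=float)
    if r.size < 2 * window:
        return False
    recent = r[-window:].mean()
    previous = r[-2 * window:-window].mean()
    return abs(recent - previous) / previous < rel_tol

# Example: a residual history that decays to a plateau.
hist = 1e-2 * np.exp(-np.arange(400) / 60.0) + 1e-5
print(residuals_flattened(hist))  # True once the tail is flat
```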
Discussion
In this case, the incoming air enters the combustion chamber at the prescribed pressure and temperature and, through the liner holes, reacts with the atomized fuel. The effect of providing an aerofoil swirler at the inlet on the flowfield and on combustor performance is discussed below. Fig. 4 shows the velocity distributions at radial locations of the combustion chamber. Velocity, pressure and temperature measurements were carried out at the centerline of the combustion chamber and in the radial direction at r/R = 0.35.
Low velocities are encountered in the primary zone at both axial and radial locations. These low velocities are beneficial for both combustion stability and mixing, as is evident from Fig. 5, which shows the flame stability. Good mixing and recirculation are observed at the central core of the primary zone, which may offer a stable, narrow flame.
The velocity levels are slightly higher in the dilution zone than in the primary zone: as more air enters through the dilution zone, the velocity levels increase. This may reflect the fact that, in cold flow studies, the pressure drop is manifested as increased velocity levels. This pressure drop is graphically represented in Fig. 6; a higher pressure drop is witnessed in the dilution zone, which leads to higher velocities near the exit of the combustion chamber. The velocity contours for reacting flow in the radial direction are shown in Fig. 7. It is observed that in the primary zone of the combustion chamber the flow velocities are lower near the wall, and at the center a small recirculation zone occurs. The lower velocities are beneficial for combustion stability, and the small recirculation zone benefits the temperature distribution at the exit of the combustion chamber, a narrow flame and good mixing.
Fig. 8 shows the axial CFD result: higher temperatures are found initially, which is beneficial for complete combustion, while the lower temperature at the exit of the combustion chamber benefits turbine blade and nozzle life. The axial and radial CFD temperature distributions differ appreciably from each other, but this does not affect conditions at the exit of the combustion chamber.
Conclusion
The design of a tubular (can-type) gas turbine combustion chamber was carried out using diesel (surrogate) as fuel, and the design was then validated using a numerical approach. The numerical results of the present study were compared with previous experimental work by Channiwala et al., and a close relationship between the results was found. The qualitative and quantitative agreement of the CFD results suggests that the basic assumptions, boundary conditions and problem definition adopted for the CFD analysis can be applied to understand the flow phenomena, temperature contours and air flow distribution in the combustion chamber. The almost consistent temperatures achieved along the centerline of the combustion chamber validate the design methodology proposed and presented in this paper. The maximum centerline temperature recorded by CFD simulation is in the vicinity of 2100 K. The pressure loss along the combustion chamber is 10% of the inlet pressure. The velocity profiles show an increasing trend along the length of the combustion chamber, but low velocities are encountered in the primary zone, which is beneficial for combustion stability. | 2019-04-18T05:41:02.038Z | 2016-04-15T00:00:00.000 | {
"year": 2016,
"sha1": "adada30d82ee7354188406e022529b1d150d9aef",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.46565/jreas.2016.v01i02.004",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "adada30d82ee7354188406e022529b1d150d9aef",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
233284533 | pes2o/s2orc | v3-fos-license | Assessment and Prognostic significance of renal dysfunction in acute stroke patients
Context: During hospitalization due to stroke, patients can develop acute kidney injury (AKI) as a complication that is frequently overlooked and underestimated in clinical trials. Aims: To assess the prevalence of renal dysfunction in acute stroke patients and its prognostic significance. Design, Methods and Materials: A total of 100 patients with a diagnosis of stroke were recruited for this study. Renal dysfunction was evaluated in the form of acute kidney injury and unrecognized renal dysfunction (normal baseline serum creatinine <1.2 mg/dl with eGFR <60 ml/min on admission). The primary functional outcome was measured using the modified Rankin Scale at the time of discharge and 1 month after stroke onset. Statistical analysis: Analysis of data was done using SPSS-17. The independent t-test and chi-square test were used to calculate differences between groups. Results: Our study shows a prevalence of acute kidney injury of 15% in patients with stroke. Moreover, the study showed that AKI after stroke was associated with higher in-hospital mortality and poorer functional outcome at 1 month. Conclusions: Acute kidney injury appears to be a common complication after stroke and is related to increased mortality and disability in stroke. Key Messages: AKI is a common complication after stroke and a frequent cause of poor functional outcome. © This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/) which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Introduction
Stroke is the second leading cause of death worldwide. 1 Stroke is classically defined as "rapidly developing clinical signs of focal (or global) disturbance of cerebral function, lasting more than 24 hours or leading to death, with no apparent cause other than that of vascular origin". 2 Transient ischemic attacks are episodes of temporary and focal dysfunction of vascular origin, variable in duration, commonly lasting from 2 to 15 minutes but occasionally lasting as long as a day (24 hours), with no persistent neurological deficit. 3 The incidence of stroke ranged from 105 to 152/100,000 persons per year, and the crude prevalence of stroke ranged from 44.29 to […]/100,000 persons. Stroke is a major cause of long-term disability among patients and has enormous emotional and socioeconomic consequences. 5 The increasing economic burden that patients with stroke impose, as well as the significant loss of manpower, renders the study of prognostic factors that can affect short- and long-term mortality after stroke indispensable. Renal dysfunction is commonly seen in hospitalized stroke patients. Ischemic stroke is frequently associated with renal dysfunction, and nearly a third of patients hospitalized with intracerebral haemorrhage (ICH) have chronic kidney disease (CKD) (estimated glomerular filtration rate [e-GFR] <60 ml/minute per 1.73 m2). 6,7 The occurrence of acute renal failure (ARF) is more common in patients with intracerebral haemorrhage than in those with other stroke subtypes. 8 Impaired renal function is a significant predictor of both short- and long-term mortality in these patients. 9 Patients with stroke are often at increased risk of dehydration, as they have a reduced level of consciousness, are physically dependent, unable to communicate, have difficulties in swallowing and decreased oral intake. 10 Elderly patients presenting with transient ischaemic attack or acute ischaemic stroke often demonstrate increased plasma osmolality, which likely represents a fluid-depleted state and possibly contributes to cerebral ischaemia and worse neurological outcome in stroke patients. 11 The role of volume depletion or dehydration as a risk factor contributing to early neurological deterioration has been demonstrated: a blood urea nitrogen/creatinine (BUN/Cr) ratio higher than 15 and a urine specific gravity (USG) > 1.010 were more frequently seen in SIE (stroke in evolution) patients. 12,13 Early identification of dehydration is essential for timely intervention to improve outcome. Unfortunately, the clinical assessment of dehydration by physicians is not always accurate, especially in geriatric patients.
Hence, biochemical parameters like plasma osmolality, BUN/creatinine ratio and urine specific gravity have been used by various investigators for assessment of hydration status, but the results have been inconsistent. 10 Additionally, severity of the renal impairment and the requirement of renal replacement therapy for stroke patients in the course of their treatment are important management issues not currently well addressed by the literature.
Hence, proper and routine evaluation of renal function in hospitalized patients with cerebrovascular accident needs to be ensured to improve their outcome. The present study aims to assess the prevalence of renal dysfunction in acute stroke patients and its prognostic significance.
Aims & Objectives
1. To study the prevalence of renal dysfunction in patients with acute stroke, both AIS and ICH.
2. To evaluate the effect of renal dysfunction on stroke morbidity and mortality, both AIS and ICH.
Materials and Methods
This study was conducted in the Department of Neurology, GMC, Kota over a period of one year after obtaining ethical committee clearance. Subjects were recruited from adult patients admitted to the stroke unit and emergency department. All patients were included after written informed consent.
Study population
A total of 100 patients with clinical features and imaging findings consistent with acute ischemic or hemorrhagic stroke were recruited for this study. On admission, patients were assessed by physical examination, neurological examination and scoring with the GCS and NIHSS scores. The severity of stroke was graded as mild (NIHSS ≤8), moderate (NIHSS 9-15) or severe (NIHSS ≥16). The severity of any impaired level of consciousness was rated as mild (GCS 15), moderate (GCS 8-14) or severe (GCS ≤7). Immediately after assessment, patients were sent for imaging studies (CT/MRI brain), ECG and routine blood investigations, including blood urea, serum creatinine, blood urea nitrogen and urine specific gravity.
Renal dysfunction was evaluated in the form of acute kidney injury and unrecognized renal dysfunction. AKI was diagnosed by either an increase in serum creatinine by >0.3 mg/dl (26.5 µmol/l) within 48 hours, or an increase in serum creatinine to >1.5 times baseline known to have occurred within the prior 7 days. 14 The serum creatinine level at admission was taken as the baseline. Blood urea, serum creatinine and blood urea nitrogen were assessed on alternate days. Unrecognized renal dysfunction was defined as a normal baseline serum creatinine (<1.2 mg/dl) with eGFR <60 ml/min. 15 The primary functional outcome was measured using the modified Rankin Scale at the time of discharge and 1 month after stroke onset. An unfavorable functional outcome was defined as an mRS of 3-6 points. Patients were followed up for a period of 1 month after stroke onset to confirm whether recurrent ischemic stroke had occurred.
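Both renal-dysfunction definitions above translate directly into a classification rule; a minimal Python sketch follows. The function names are hypothetical, the creatinine series is assumed to be sorted by time, and the eGFR is taken as an input because the text does not state which estimating equation was used.

```python
def has_aki(creatinine_mg_dl, hours, baseline_mg_dl):
    """KDIGO-style AKI flag as used in this study: a rise of >0.3 mg/dl
    within 48 h of any earlier value, or any value >1.5 x baseline."""
    vals = list(zip(hours, creatinine_mg_dl))
    for i, (t_i, c_i) in enumerate(vals):
        if c_i > 1.5 * baseline_mg_dl:
            return True
        for t_j, c_j in vals[:i]:
            if (t_i - t_j) <= 48 and (c_i - c_j) > 0.3:
                return True
    return False

def unrecognized_renal_dysfunction(baseline_mg_dl, egfr_ml_min):
    """Normal baseline creatinine (<1.2 mg/dl) with eGFR <60 ml/min."""
    return baseline_mg_dl < 1.2 and egfr_ml_min < 60
```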
Statistical analysis
Analysis of data was done using SPSS-17. For categorical (qualitative) variables, frequency and percentage were calculated; for numerical (quantitative) variables, mean and standard deviation (SD) were calculated. p < 0.05 was taken as significant. The independent t-test and chi-square test were used to calculate differences between two groups. In the study, it was observed that hypertension was the major risk factor, present in 88% of stroke patients overall (90% in ischemic and 84.2% in haemorrhagic stroke). Other information in this regard is provided in the table above.
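For illustration, the two tests named above amount to the following scipy calls; every number here is made up for the example and is not study data.

```python
import numpy as np
from scipy import stats

# Continuous outcome (e.g., NIHSS) between AKI and non-AKI groups:
aki = np.array([14.0, 16.5, 12.0, 18.0])
non_aki = np.array([9.0, 11.5, 8.0, 12.0, 10.0])
t, p_t = stats.ttest_ind(aki, non_aki)          # independent t-test

# Categorical outcome (e.g., mortality) vs. renal dysfunction:
table = np.array([[8, 28],   # renal dysfunction: died, survived
                  [3, 61]])  # no renal dysfunction: died, survived
chi2, p_chi, dof, expected = stats.chi2_contingency(table)
print(p_t < 0.05, p_chi < 0.05)
```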
Results
In the study, the prevalence of unrecognized renal dysfunction in stroke patients was 15.68% (16.12% for ischemic and 15.78% for hemorrhagic stroke) (Table 1). The prevalence of AKI in stroke patients was 15% (14.51% for ischemic and 15.78% for hemorrhagic stroke) (Table 2). In the data observed, AKI in relation to the location of bleed in hemorrhagic stroke was highest for putamen and lobar bleeds (33.33%) and lowest for brainstem and thalamus bleeds (16.67%), though none of these differences was statistically significant.
In our study, the mean hematoma volume in patients who had AKI was 29.16±3.97, while in patients without AKI it was 25.9±8.23; the p-value was non-significant.
There was a non-significant difference between AKI and non-AKI patients presenting with mild or moderate NIHSS scores (p > 0.05), but a significant difference among those presenting with severe NIHSS scores (p = 0.0209) (Table 3). Regarding recovery in NIHSS in relation to renal dysfunction, scores were 12.06±3.94 at admission and 11.30±4.34 at discharge in the renal dysfunction group, versus 11±5.56 at admission and 8.37±5.53 at discharge in the group without renal dysfunction.
Recovery in mRS in relation to renal dysfunction was 3.37±0.54 at admission and 3.51±1.17 at 28 days, versus 3.23±0.71 at admission and 2.52±0.97 at 28 days in the group without renal dysfunction. The p-value at admission was 0.11 (non-significant), while at 28 days it was 0.0001 (significant) (Table 4). A significant difference in mortality was observed between stroke patients with and without renal dysfunction (p = 0.0002). Mortality in relation to renal dysfunction was 18.7% in ischemic stroke (p = 0.022, significant) and 27.3% in hemorrhagic stroke (p = 0.10, non-significant). Of the patients with renal dysfunction, 81.3% in the ischemic group and 72.7% in the haemorrhagic group were discharged; the p-value of 0.6 shows this difference is non-significant (Table 5).
Discussion
Renal dysfunction is considered a valuable predictor of poor outcomes, including mortality, in patients with ischemic stroke. 16,17 Very few studies have evaluated the role of renal dysfunction in short-term mortality and morbidity in stroke patients.
In our study, the mean age of stroke patients was 60.36±10.7 years (61.56±11.3 years for ischemic stroke). Our study shows that the prevalence of unrecognized renal dysfunction in stroke patients is 15.68% (16.12% in ischemic and 15.78% in hemorrhagic stroke). In a study by Pereg D et al., 15 around 10.4% of patients had unrecognized renal dysfunction, similar to our study. Mahmood et al. 18 found that 30.2% had renal dysfunction (eGFR <60 ml/min/1.73 m2), which was higher. The presence of baseline renal dysfunction was recorded as an independent predictor of early mortality in the setting of acute ischemic stroke, besides other well-known prognostic factors.
In our study, the prevalence of AKI was around 14.70% in stroke patients overall (14.51% in ischemic and 15.7% in hemorrhagic stroke). Khatri M et al. 19 found that AKI was common, developing in 18% of their overall cohort, with significantly higher rates among hemorrhagic stroke cases than ischemic stroke (21% vs. 14%). Tsagalis G et al. 5 and Covic A et al. 20 showed results similar to ours.
In our study, AKI in relation to the location of bleed in hemorrhagic stroke was highest for putamen and lobar bleeds (33.33%) and lowest for brainstem and thalamus bleeds (16.67%); the p-values were all non-significant. The mean hematoma volume in patients who developed AKI was 29.16±3.97, versus 25.9±8.23 in those who did not (p non-significant). A study conducted by Shrestha et al. 9 found that the location of the intracerebral bleed in the haemorrhagic stroke group did not predispose the patient to renal impairment. In their haemorrhagic stroke group, the mean hematoma volume in patients who developed renal impairment (29.23 ± 24.23 mL) and in patients who did not (28.61 ± 42.24 mL) was also not statistically significantly different (p = 0.966); hence, the volume of bleed did not influence the development of renal impairment. We found a non-significant difference between AKI and non-AKI patients presenting with mild or moderate NIHSS scores (p > 0.05), but a significant difference for severe NIHSS scores (p = 0.0209). Mahmood et al. 18 showed that stroke severity (NIHSS) was higher in those with impaired than with normal renal function (13.4±1.42 vs. 8.98±2.75, respectively). Patients with severe stroke at presentation are often at increased risk of dehydration, as they have a reduced level of consciousness, are physically dependent, unable to communicate, have difficulty swallowing and decreased oral intake, which may predispose them to acute kidney injury. In our study, recovery in mRS in relation to renal dysfunction was 3.37±0.54 at admission and 3.51±1.17 at 28 days, versus 3.23±0.71 at admission and 2.52±0.97 at 28 days without renal dysfunction; the p-value at admission (0.11) was non-significant, whereas at 28 days (0.0001) it was significant. In a study by Saeed et al., 8,17 patients with hemorrhagic stroke and ARF had a higher incidence of moderate to severe disability (41.3% versus 30%; p < 0.0001); in a similar study, patients with ischemic stroke and ARF had a higher proportion of moderate-to-severe disability (49.5% versus 44.2%; p < 0.0001). In our study, there was a significant difference in stroke mortality between patients with renal dysfunction (22.2%) and without renal dysfunction (4.1%) (p = 0.0002). Pereg et al. found that mortality rates were highest in patients with recognized renal insufficiency, followed by patients with unrecognized renal insufficiency, and lowest in patients with normal renal function (9.9%, 9.1%, and 4.4%, respectively; p < 0.0001). The higher mortality rate in our study may be due to the small sample size. The association between reduced renal function and adverse outcomes in patients with acute stroke is not completely understood and appears multifactorial. Factors associated with impaired renal function that may contribute to the adverse outcome of patients with stroke include insulin resistance, oxidative stress, inflammation, endothelial dysfunction, vascular calcifications and increased plasma levels of fibrinogen and homocysteine. 15
Limitations
Our study has a few limitations. The main limitation was the absence of long-term follow-up at 3 months; such follow-up would have given a better picture of post-stroke disability in patients with and without renal dysfunction. The other main limitation was the small sample size.
Conclusion
Renal dysfunction is common in stroke patients, in both ischemic and hemorrhagic stroke. Our study shows that patients who develop acute kidney injury following stroke, or who have unrecognized renal dysfunction (eGFR <60 ml/min with normal serum creatinine), have significantly higher mortality and poorer functional recovery at 1 month. Furthermore, an appropriate approach to patients with renal dysfunction (i.e., adequate hydration, avoidance of nephrotoxic drugs, drug dose adjustment, etc.) should be considered among the preventive and therapeutic strategies for acute stroke, since it can influence overall mortality and morbidity.
Source of Funding
No financial support was received for the work within this manuscript.
Conflict of Interest
The authors declare they have no conflict of interest. | 2021-04-17T19:53:17.649Z | 2021-03-15T00:00:00.000 | {
"year": 2021,
"sha1": "90eb55c656effe032c0705b21e5a5b65ecd12347",
"oa_license": "CCBYNCSA",
"oa_url": "https://www.ijnonline.org/journal-article-file/13355",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "90eb55c656effe032c0705b21e5a5b65ecd12347",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
248218141 | pes2o/s2orc | v3-fos-license | Breakthrough Gastrointestinal COVID-19 and Intrahost Evolution Consequent to Combination Monoclonal Antibody Prophylaxis
Abstract
Breakthrough gastrointestinal COVID-19 was observed after experimental SARS-CoV-2 upper mucosal infection in a rhesus macaque undergoing low-dose monoclonal antibody prophylaxis. High levels of viral RNA were detected at intestinal sites, contrasting with minimal viral replication in the upper respiratory mucosa. Sequencing of virus recovered from tissue at 3 gastrointestinal sites and a rectal swab revealed loss of the furin cleavage site deletions present in the inoculating virus stock, as well as 2 amino acid changes in spike that were detected at 2 colon sites but not elsewhere, suggesting compartmentalized replication and intestinal viral evolution. This suggests that suboptimal antiviral therapies may promote viral sequestration at these anatomical sites.
The coronavirus disease 2019 (COVID-19) pandemic, caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), has been the subject of much recent research regarding therapeutics and viral evolution [1]. A monoclonal antibody (mAb) therapy, produced by Regeneron, currently has emergency use authorization to be administered to COVID-19 patients relatively early in the course of disease, with successful results [2]. Other mAb-based therapies are currently under consideration as well, with one combination undergoing clinical trials (NCT04700163) targeting 2 separate locations within the receptor binding domain (RBD) of spike protein [3]. This mAb combination has been shown to protect prophylactically [4] and therapeutically [5] in a Rhesus macaque (Macaca mulatta; RM) model of SARS-CoV-2 infection, with viral loads in the respiratory and gastrointestinal tract significantly blunted.
Infection by SARS-CoV-2 has increasingly been seen as a gastrointestinal disease in addition to the typical respiratory disease it has been associated with since the start of the pandemic [6], with symptoms including nausea, vomiting, diarrhea, and abdominal pain present in 40% of infected individuals [7]. Intestinal damage has been noted during autopsy of individuals with fatal cases of disease [8], including viral protein detection in multiple locations [6]. Approximately 50% of patients admitted to hospitals for active disease exhibit digestive symptoms, with 5% of patients displaying digestive symptoms in the absence of respiratory complications [9].
Recently, the delta variant of SARS-CoV-2 (B.1.617.2) has outpaced other variants of concern to become dominant [10]. This was preceded by other variants including B.1.1.7 (alpha), B.1.351 (beta), and P.1 (gamma) [11]. Some of the sequence changes seen within these variants include those that aid in immune escape, such as those in E484 of spike RBD, which can lower neutralization capacity by more than an order of magnitude [12]. Mutations arising that display differential antibody binding properties are of significant concern for antibody therapeutic and prophylaxis approaches to combating the ongoing pandemic.
In this report, we focus on one RM, LT54, from a study that used a combination antibody therapy as a prophylaxis against SARS-CoV-2 infection [4]. We identified high levels of viral genome and subgenomic RNA within intestinal compartments after respiratory clearance. Intrahost evolution of virus was seen, with sequences changing in accordance with enhanced replication capacity, as well as site-specific differences potentially indicating preferential replication.
Virus and Cells
Virus used for animal inoculation was strain SARS-CoV-2; 2019-nCoV/USA-WA1/2020 (BEI No. NR-52281) prepared on subconfluent VeroE6 cells (ATCC No. CRL-1586) and confirmed via sequencing. VeroE6 cells were used for live virus titration of biological samples and were maintained in Dulbecco's Modified Eagle's Medium (Thermo Scientific) with 10% fetal bovine serum.
Animals and Procedures
A total of 16 RMs (Macaca mulatta), between 3 and 11 years old, were utilized for this study. All RMs were bred in captivity at Tulane National Primate Research Center (TNPRC). The RMs were infused with 20, 6, or 2 mg/kg mAb cocktail 3 days before challenge. They were then exposed via intratracheal/intranasal instillation of viral inoculum (1 mL intratracheal, 500 μL per nare; total delivery 2×10^6 50% tissue culture infectious dose [TCID50]).
The animals were monitored twice daily for the duration of the study, with collections of mucosal swabs (nasal, pharyngeal, rectal), as well as bronchioalveolar lavage, taken preexposure as well as postexposure days 1, 3, and at necropsy. Blood was collected preexposure, as well as days 1, 2, 3, 5, and at necropsy. Physical examinations were performed daily after exposure, and necropsy occurred between 7 and 9 days postexposure. During physical examination, rectal temperature and weight of each animal was performed. No animals met humane euthanasia endpoints during this study. During necropsy, tissues were collected in media, fresh frozen, or in fixative for later analysis.
Sample Collection and RNA Isolation
Swabs were collected in RNA/DNA Shield (Zymo Research). RNA was isolated using the Zymo Quick-RNA Viral kit, with the addition of the swab into the collection column to ensure complete removal of fluid. Bronchoalveolar lavage (BAL) cells and tissues were collected in Trizol, tissues were homogenized, and RNA was isolated using a RNeasy Mini Kit (No. 74106; Qiagen) after phase separation with chloroform.
Quantification of Viral RNA Using Quantitative Real-Time PCR
Isolated RNA was analyzed in a QuantStudio 6 (Thermo Scientific) using TaqPath master mix (Thermo Scientific) and appropriate primers/probes [4] with the following program: 25°C for 2 minutes, 50°C for 15 minutes, 95°C for 2 minutes, followed by 40 cycles of 95°C for 3 seconds and 60°C for 30 seconds. Signals were compared to a standard curve generated using in vitro transcribed RNA of each sequence diluted from 10^8 down to 10 copies. Positive controls consisted of SARS-CoV-2-infected VeroE6 cell lysate. Viral copies per swab or BAL were calculated by multiplying mean copies per well by the volume of the total swab extract or BAL, while viral copies in tissue were calculated per microgram of RNA extracted from each tissue.
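Quantification against such a serial-dilution standard curve amounts to fitting Ct against log10(copies) and inverting the fit for each sample; a minimal numpy sketch follows. The Ct values and the extract/input volumes are illustrative assumptions, not the study's measurements.

```python
import numpy as np

# Standard curve: in vitro transcribed RNA from 1e8 down to 10 copies.
log_copies = np.arange(8, 0, -1, dtype=float)        # log10 copies, 8..1
ct_standards = np.array([11.2, 14.6, 18.1, 21.5,     # hypothetical Ct values
                         25.0, 28.4, 31.9, 35.3])
slope, intercept = np.polyfit(log_copies, ct_standards, 1)

def copies_from_ct(ct):
    """Invert the linear fit Ct = slope * log10(copies) + intercept."""
    return 10 ** ((ct - intercept) / slope)

# Copies per swab = mean copies/well x (extract volume / volume per well);
# 200 uL extract and 5 uL per reaction are made-up numbers.
total_per_swab = copies_from_ct(24.3) * (200.0 / 5.0)
```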
cDNA Conversion
cDNA was generated using ProtoScript II (New England Biolabs) as follows: 10 µL template RNA, 1 µL 10 µM random hexamers, and 1 µL 10 mM dNTPs were incubated at 65°C for 5 minutes and then placed directly on ice for 1 minute. The following was then added: 4 µL PSII buffer, 2 µL 100 mM DTT, 1 µL RNase inhibitor, and 1 µL PSII reverse transcriptase; the mixture was incubated at 42°C for 50 minutes, then 70°C for 10 minutes, followed by a hold at 4°C.
Sequencing
DNA libraries were made using the standard SWIFT Normalase Amplicon Panels protocol (SWIFT Biosciences) utilizing the SNAP UD indexing primers. The libraries were normalized to 4 nM and pooled. Paired-end sequencing (2 × 150) was performed on the Illumina MiSeq platform.
Data Analysis
Primer sequences were trimmed, and sequence reads were aligned to the SARS-CoV-2 genome (WA1/2020 isolate, accession MN985325) using the built-in mapping function in Geneious Prime software. Variants were called that were present at greater than 10% of reads at that site.
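The >10% variant-calling threshold can be expressed as a simple filter over per-site read counts. The sketch below uses a hypothetical data structure (Geneious exports differ in layout) and adds a minimum-depth guard as a common-sense assumption not stated in the text.

```python
def call_variants(site_counts, min_frequency=0.10, min_depth=10):
    """Keep alternate alleles seen in >10% of reads at a site.
    `site_counts` maps position -> {"depth": int, "alt": {base: count}}."""
    calls = []
    for pos, info in sorted(site_counts.items()):
        if info["depth"] < min_depth:
            continue
        for base, n in info["alt"].items():
            freq = n / info["depth"]
            if freq > min_frequency:
                calls.append((pos, base, round(freq, 3)))
    return calls

# Example: a site where 34% of 500 reads carry the alternate base is kept.
print(call_variants({23525: {"depth": 500, "alt": {"T": 170}}}))
```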
RESULTS
The animal that is the subject of this report (LT54) was an Indian-origin, TNPRC purpose-bred and reared male, approximately 4 years of age and weighing 6.50 kg when assigned to the treatment study, both measures comparable to other animals in the study cohort [4]. The only clinically remarkable aspect of this animal prior to assignment was a history of intermittent soft stool. The antibody combination (C-135-LS and C-144-LS, 2 mg/kg intravenous) was administered prophylactically 3 days before mucosal SARS-CoV-2 challenge. Clinical development of experimentally induced COVID-19 in this animal was generally mild. Daily veterinary physical examination yielded blood oxygen saturation measurements of >98% and auscultation within normal limits throughout the disease course, up until study termination and necropsy at day 7 postinfection. Transient sinus arrhythmia was noted on day 2 postinfection. Mild anorexia early in COVID-19 infection resulted in negligible weight loss (approximately 6% of total body weight at termination of the study). Viral loads via genomic (N gene) and subgenomic (E gene) RNA were followed for 1 week. Swab samples were acquired from the pharynx, nose and rectum, and cells were isolated from BAL fluid. In both real-time quantitative polymerase chain reaction (RT-qPCR) assays (genomic N and subgenomic E), viral loads in LT54 increased 1 to 2 days postinfection at respiratory sites before resolving by necropsy at 7 days postinfection (Figure 1). Swab analysis showed near-complete protection against replicating virus throughout the study, with subgenomic E analysis unable to detect virus at any respiratory site except the pharynx (Figure 1). This indicates robust respiratory protection from challenge after prophylactic administration of antibodies.
This contrasts with the viral RNA loads measured at the rectal site, which increased later than the respiratory viral RNA and did not resolve by necropsy. Controls exhibited an increase at this site as well, though not to the extent of LT54 (Figure 1). Tissues collected at necropsy indicated complete protection at respiratory sites of LT54, with controls still exhibiting high amounts of viral RNA at almost all sites (Supplementary Figure 1A). The intestinal sites of LT54 showed the opposite pattern, with high amounts of viral RNA at all sites except the duodenum. The highest levels were seen in the jejunum and descending colon (Supplementary Figure 1B), potentially indicating preferential replication at those sites, although this could be related to sampling error rather than biological preference. The ileum of LT54 showed minimal changes upon infection (Supplementary Figure 1C), despite positive viral staining via immunofluorescence (Supplementary Figure 1D).
Intrahost viral evolution is of interest due to the continued appearance of variants of concern throughout the pandemic. We sequenced virus at each site where it was present to determine the evolutionary patterns arising in LT54 during this study. Present in our challenge inoculum at low levels were deletions in the furin cleavage site that disappeared during challenge, as has been seen before [13][14][15], presumably owing to a lack of replication favorability. The H655Y and N149K changes were seen in the jejunum and transverse colon (Figure 2). H655Y has been reported before in a primate model of SARS-CoV-2 infection; being near the furin cleavage site, this change likely favors increased replication [15]. N149K is found in the N-terminal domain of the spike protein, in an area less well characterized than the RBD. This site is not targeted by the antibodies administered, so the change is unlikely to have arisen due to this prophylactic approach. Rather, it more likely reflects the low dose of antibody administered, as well as a more rapidly decreasing antibody level in LT54 than in others of the same low-dose cohort. Interestingly, the sequences seen in the descending colon and rectum were identical to the WA1/2020 isolate (Figure 2).
DISCUSSION
Here, we present data from 1 animal that was challenged with SARS-CoV-2 after prophylactic administration of a combination anti-spike antibody therapy. Despite the robust respiratory protection afforded by this antibody combination, high viral loads were observed in the intestinal mucosa. In addition, sequence changes indicating increased replication were present, including site-specific changes that may reflect preferential replication at those sites, allowing those evolutionary patterns to emerge. Finally, we consider the disappearance of the furin cleavage site deletion present in the challenge inoculum to be further evidence of the importance of an intact furin cleavage site in the in vivo infection and replication of SARS-CoV-2. Further work may include introducing these sequence changes into currently circulating strains to determine their effect, if any, on replication and pathogenesis in vivo.
Supplementary Data
Supplementary materials are available at The Journal of Infectious Diseases online (http://jid.oxfordjournals.org/). Supplementary materials consist of data provided by the author that are published to benefit the reader. The posted materials are not copyedited. The contents of all supplementary data are the sole responsibility of the authors. Questions or messages regarding errors should be addressed to the author. | 2022-04-18T06:23:01.967Z | 2022-04-16T00:00:00.000 | {
"year": 2022,
"sha1": "2c10ccb8bb305c7cbebd38a9fb958e232587c6da",
"oa_license": null,
"oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9213849",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "a81895040d4b9386a95b1faa3cd8f6505eccf6f9",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
116341243 | pes2o/s2orc | v3-fos-license | THE EFFECT OF ROOM MATERIALS IN OVERCOMING NOISE FROM MOTORCYCLE WORKSHOP ACTIVITY IN HOME-BASED ENTERPRISES NEAR ROADWAYS
Home-based enterprises are currently booming in Indonesia. A home-based enterprise is a natural choice for people who want a business integrated with their home. Among such businesses, a house with a motorcycle repair workshop has one of the highest noise levels, because the noise sources in the workshop are the engines and other supporting machinery. Home businesses are generally situated near roadways and therefore face acoustic problems: a motorcycle repair service located near a roadway receives noise both from inside and from outside (the roadway). The sound sources distribute noise through building elements such as walls, floors, and ceilings. The room elements that can potentially reduce noise therefore need to be investigated, especially for noise transmitted through the air. To determine the noise levels and the effects of airborne transmission, the study collected physical data through field studies, while the analysis used experimental and simulation methods supported by software such as Ms. Excel and Ecotect to analyze the Sound Pressure Level (SPL) and Transmission Loss. The results showed that the material characteristics (Transmission Loss and Leq) influence airborne noise. An appropriate solution for overcoming and reducing noise in the shophouse was thus obtained through the selection of materials that reduce airborne noise. The selection of appropriate organic and inorganic materials is expected to solve the existing problems. The experimental results show that all of the materials (organic and inorganic) can reach the goal, but the best result is a combination of both organic and inorganic material.
INTRODUCTION
Nowadays, home-based businesses are popular, with residences serving as places of business services, offices, and commerce. The development of home businesses cannot be separated from the public view that a house is not only a place to live but also a workplace. Facilities and infrastructure, as well as the home's locality, become essential elements that must be observed so that work activities at home are more productive while still ensuring good shelter, a paradigm in which living and working apply together (Silas, 1993). Types of home businesses include offices, salons, bakeries, cafés, motorcycle workshops, printing shops, and so on. Among these business types, a home with a motorcycle repair business near a roadway has a major noise problem, and such businesses are easily found in big cities. This accords with the yearly increase in motorcycle users and the correspondingly high demand for motorcycle repair services. In addition, the roadways that motorcycles pass through each day cause noise in the surrounding environment. The noise problems of the motorcycle workshop in a home-based enterprise described above can be solved by determining the main sources of noise in the workshop and its noisy environment; a design application that reduces the noise, whether distributed through the air (airborne) or through the structure (structure-borne), can then be identified. These noise sources spread their noise either through the structure (interior elements such as walls, floors, and ceilings) or through the air.
From the architectural perspective, the space-forming elements (floors, walls, ceilings) of a motorcycle repair business are the first elements to receive sound from the sources in the workshop. These three elements can potentially act as barrier elements, absorptive elements, or reflective elements, depending on the characteristics of each material and its effect on the noise. Comparing them shows which elements are most influential in spreading noise into the living space.
The materials to be tested for the space elements were set in advance, namely organic and inorganic materials. Organic materials were selected for several reasons: they are capable of serving as acoustic materials, as proven in previous studies; the selected organic materials are abundant in Indonesia and have not been used optimally; and the acoustic coating materials commonly available on the market are economically quite expensive. If the acoustic material component can be replaced by organic material, the demand for acoustic coatings can be met at lower prices. Inorganic (manufactured) materials were selected because they are available in various shapes and appearances and are easy to maintain.
Based on the above explanation, the purpose of this study is to analyze the noise conditions in a motorcycle workshop home business located near a roadway. Noise produced by the source spreads into the home through media or elements, especially building elements such as walls, floors, and ceilings. The study asks which room material is appropriate for the motorcycle workshop in order to create acoustic comfort inside, and investigates the influence of each material's characteristics on overcoming the noise caused by the motorcycle workshop home business, namely the noise from the workshop activities and from the roadway.
Noise
Problems in sound control involve three things: the sound source, the receiver, and the path between them, which depends on the medium. The medium can be a gas, a roof, walls, windows, or air. The sound source may come from outside or inside the building, and the resulting problem is always related to frequency.
A good background noise level provides an ideal environment, depending on the space usage. Noise Criteria (NC) numbers specify the background noise level between the desirable minimum value and the allowed maximum value; the dimensions of background noise levels are assigned according to the functions and activities that take place inside (Source: Doelle, L. and Prasetio, L., 1972). According to Mediastika, types of noise propagation can be distinguished by the medium of the sound waves: 1. Airborne sound is the propagation of sound waves through the air. This propagation type enters a building if there are holes, slits, or cracks in the building elements, especially in vertical elements (walls). A barrier works by reflecting sound waves back toward the source or in another direction, or by absorbing the sound waves with a suitable material. Barrier objects made from a material that is soft enough, with a non-slippery surface but sufficient thickness and weight, will work well (Mediastika, 2005). 2. Structure-borne sound is the term used for sound propagation through solid objects. In this context, the solid objects are elements of the building itself, hence the name. Propagation through building elements commonly occurs when the noise sources are attached to, or very close to, those elements, for example bonded to or very close to a wall (Mediastika, 2005).
Several methods can be applied to prevent external noise from entering a room: 1. Solid Wall. A wall one brick thick is basically able to reduce incoming sound by approximately 45 dB(A). Each additional brick of thickness reduces the incoming sound by a further 5 dB(A), so a wall two bricks thick gives a reduction of about 50 dB(A) (45 dB(A) + 5 dB(A) = 50 dB(A)). A cavity created between wall leaves also provides extra reduction.
Window Glass Layer
Basically, a window glazed with two layers of glass is still considered acoustically inadequate with respect to its air gap. For better insulation, it is recommended that the glass sheets be separated by approximately 200 mm, although the usual distance of 60-80 mm is sufficient. Overall, glazing with an air gap performs much better than a single-layer window with no air gap at all.
Sealing All Air Gaps
This is very important because sound waves can pass through any gap, no matter how small. In a place that needs acoustic treatment, the doors should seal like a refrigerator door, so that no air leaks out and no outside air comes in.
Acoustic insulation is an important factor in the comfort of a room. The Noise Criteria (NC) of the building type and room type under review must be considered in acoustic insulation. This is very influential because each building and room has a different function. With the proper use of noise criteria, good sound insulation and comfort in the room can be maintained. In acoustic insulation, the quality of the insulation can be described by the transmission loss. Transmission loss is the amount of insulation provided by a partition; the greater the loss, the greater the ability of a material to insulate sound (Hemond, 1983). Transmission loss values are measured at specific frequencies, from low to high. They can be obtained from the following equations (Barron, 2001):

NR = LP1 − LP2

TL = NR + 10 log(S/A)

where LP1 is the average sound pressure level of the first room, containing the sound source (dB(A)); LP2 is the average sound pressure level of the second room, receiving the sound (dB(A)); S is the transmitting surface area (m²); A is the absorption of the receiving room (m²·sabins); and NR is the noise reduction (dB(A)).
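As a worked illustration of these two relations, the sketch below computes NR and TL from a pair of room levels. The sound pressure levels and the absorption value A are assumed for the example; S = 13.63 m² is the partition area quoted later in the paper.

```python
import math

# Minimal sketch of the relations above: NR = LP1 - LP2 and
# TL = NR + 10*log10(S/A). Input levels and A are placeholders,
# not measured values from this study.
def noise_reduction(lp1_db, lp2_db):
    """NR: difference between source-room and receiving-room SPL, in dB(A)."""
    return lp1_db - lp2_db

def transmission_loss(nr_db, s_m2, a_sabins):
    """TL of the partition, corrected for partition area S and absorption A."""
    return nr_db + 10 * math.log10(s_m2 / a_sabins)

nr = noise_reduction(lp1_db=75.0, lp2_db=60.0)        # hypothetical levels
tl = transmission_loss(nr, s_m2=13.63, a_sabins=8.0)  # A assumed
print(f"NR = {nr:.1f} dB(A), TL = {tl:.1f} dB")
```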
Home-Based Enterprise
According to Johan Silas (1993), the general concept of working and living includes social and cultural dimensions. Houses can be categorized into the following occupancy types: 1. Home: a house used purely as a place to stay, without any other significant activities; this type is usually found in the upper-middle income bracket. 2. Home business: a house in which a portion is used for productive (business) or economic activity; consequently, a relationship arises between the aspects of production and home life.
The existence of a productive home or home business illustrates the functions a home serves in human life, both as a product of human technology providing facilities and as a goal of human life.
The comparison or proportion between the two functions (dwelling and business) yields the following criteria: 1. Mixed type: the residential function shares the same location as the workplace; the home function is still dominant and residence remains the primary function. 2. Shared type: the residence is separated from the workplace within the same building.
Access to the workplace is sometimes emphasized and separated, with people from outside the home also involved. 3. Separated type: the workplace is dominant and takes up most of the space, with the residence sometimes placed at the back.
Road Classes in Surabaya
The government has established rules on the division of road classes, which indirectly serve to limit the noise level for a particular road class: arterial roads serve public transport with long travel distances, high speeds, and an efficiently limited number of entrances; collector roads serve collection and distribution trips with medium travel distances, low speeds, and a limited number of entrances; local roads serve public transport with short travel distances, low speeds, and an unlimited number of entrances.
(Source: UU no. 13/1980 and PP no. 26/1985.) Arterial roads of the first class are passable by motor vehicles, including cargo vehicles, with a maximum width of 2.5 m, a maximum length of 18 m, and a heaviest axle load of more than 10 tonnes. Arterial roads of the next class are passable by motor vehicles with a maximum width of 2.5 m, a maximum length of 18 m, and a heaviest axle load of up to 10 tonnes. Arterial or collector roads are passable by motor vehicles with a maximum width of 2.5 m, a maximum length of 18 m, and a heaviest axle load of 8 tonnes. Collector roads are passable by motor vehicles with a maximum width of 2.5 m, a maximum length of 12 m, and a heaviest axle load of 8 tonnes. Local roads are passable by motor vehicles with a maximum width of 2.1 m, a maximum length of 9 m, and a heaviest axle load of 8 tonnes.
(Source: PP no. 43/1993.) As stated at www.surabaya.go.id, Arif Rahman Hakim road is classified as Type II - Secondary Collector. From the tables above, several factors can be associated with noise: the faster the vehicles travel, the higher the noise level; and the higher the ratio of two-stroke to four-stroke two-wheeled vehicles on a road section, the higher the noise.
Sampling Method
This study determines the research subject and model using a purposive sampling method. The sampling takes subjects not on the basis of strata, randomness, or region, but for a particular purpose.
Selecting the sampling area or region requires population data for Surabaya. The population data were selected based on population density, which is related to the noise level occurring in the environment. As stated at www.smart.surabaya.go.id, in 2011 the population of Surabaya reached 3,022,461 inhabitants, with East Surabaya being the most populous region. Table 6 shows the total population in 2011.
Table 6. Total Population of Surabaya City in 2011
(Source: smart.surabaya.go.id.) From the seven districts, East Surabaya (see Table 6), whose road network is the busiest, was selected. According to www.surabaya.go.id, there are about 120 collector-type roads in Surabaya, followed by 77 arterial roads. From here, seven districts in East Surabaya with collector-class roads were selected. The characteristic of this collector road class is that it serves transport collection and distribution: medium travel distance, medium average speed, and a road width of not less than 7 m. These roadways therefore have the potential to become noise sources due to daily traffic.
The next stage was the selection of collector road sections allocated as commercial areas in the seven districts of East Surabaya. The collector road sections in East Surabaya can be seen on the Surabaya land-use map; an example of the commercial-area designation map for East Surabaya is shown by the pink area.
From the commercial areas with collector-type roads in East Surabaya, several roads considered to have heavy traffic were selected; they also share the same building function, namely homes with motorcycle repair businesses. The obtained data are presented in Table 6. Based on the sampling criteria, from the several categories of home businesses with motorcycle workshops, one sample was chosen to represent a home with a motorcycle repair service located in a dense settlement near the roadway, focusing on East Surabaya. With the existing building determined, the next step concerns the parameters that affect the research variables; the standard parameters in this study are Sound Pressure Level and Transmission Loss.
The variables above are then associated with the room materials that were applied and analyzed. The analysis of these materials consists of comparing the acoustic quality of environmentally friendly materials and conventional materials.
Research Subject and Model Determination
The research subject is a home enterprise located in a dense settlement beside a roadway (secondary collector road), which suffers disruption of acoustic comfort due to motorcycle workshop activity and roadway noise. This study uses purposive sampling to determine the type of building subject. A purposive sample is a sampling technique that does not aim at broad generalizations about a population but focuses on criteria useful within a group or population (Groat and Wang, 2002). The sampling criteria are as follows: 1. Located in East Surabaya. 2. On a roadside of secondary collector class (Jl. Manyar, Jl. Menur Pumpungan, Jl. Arif Rahman Hakim, Jl. Nginden Semolo). 3. Similar activities: a motorcycle workshop.
The next stage was a survey of several types of residential buildings, home businesses, and shophouses, aiming to determine sample research areas. Several widths of business houses were used as references for determining the applied dimensions. Based on the sampling (Table 6), the width typology of motorcycle repair home businesses ranges from 4 m to 10 m. From the comparison of volumes and the ratio of residential zoning to business space, it can be concluded that most home businesses along the roadway share the building typologies summarized below. Based on this sampling, the building research subject was taken with a typology in the range of 6-10 m wide, 18-20 m long, and a business-to-residential zoning ratio of 1:4; as seen in Table 7, 1:4 is the typical home business zoning. From this analysis, the selected building typology is home business no. 3, measuring 6.00 x 18.00 m, with activities that meet the study criteria. This home business is also in a densely populated settlement and is situated on a roadside of secondary collector class.
Data Collection
The data required for this study are determined based on the variables and the sample used. The data types are classified into two: 1. Primary data, obtained directly by the researcher for the preliminary research: (a) sample area data for several home businesses with motorcycle workshops in East Surabaya; (b) road class types in Surabaya; (c) zoning of home businesses based on activity; (d) Sound Pressure Level data inside the room; (e) Sound Pressure Level data outside the room. 2. Secondary data, obtained from relevant sources without a direct measurement process: (a) the noise acoustic parameter requirements (Sound Pressure Level), which map the noise at a motorcycle repair business; (b) literature related to this study; (c) acoustic studies of Transmission Loss that can be correlated with the existing needs.
RESULTS AND DISCUSSION
The research subject in this study was obtained through the sampling process. The subject had to be determined and identified first in order to know which factors are influential. The subject analysis combines qualitative and quantitative data, which are expected to help find the right method.
The research subject is located on Jl. Arif Rahman Hakim, East Surabaya. Its neighborhood is a dense settlement containing various types of home businesses, both similar and dissimilar. The road is a secondary collector type, for which the road width is not less than 7 m and the design speed is at least 20 (twenty) km per hour. A site plan of the research subject is provided. The measurement day was considered representative of every day: based on interviews with the owner, the workshop opens daily from 8:00 AM to 5:00 PM, has 6 employees, and is always full of visitors, so working days and holidays cannot be distinguished.
Field data collection captured the existing condition of the motorcycle repair home business: site plan, furniture layout, furniture dimensions, room dimensions, and material elements. The existing condition is shown in Figure 9. To test the acoustic quality of the habitable room, SPL data were collected both in the workshop and outside the habitable room (roadway). Although the experiments take the original existing building as the baseline, the experimental layout and construction materials were conditioned to the motorcycle repair home business. The zoning and SPL reference points for data collection used an SLM and a vibrometer on tripods connected to software on a laptop.
SLMs were placed at a total of three points: the roadway, the motorcycle workshop, and the habitable room. On the roadway (green zoning), one SLM was placed at the edge of the workshop, with a tripod height of 120 cm and the sensor directed toward the roadway, to collect roadway noise data.
For the workshop zone, one SLM was also placed, positioned 60 cm from the wall bordering the habitable room, 70 cm from the left workshop wall, and 20 cm from the compressor located at the left workshop wall. The front-facing sensor measured the noise generated by workshop activities: the compressor, engine noise, and general workshop activity. SPL data in the habitable room were taken by putting one SLM behind the garage wall. The three SLMs were placed at the same height of 1.2 m from the floor and parallel to each other; the SLM direction during data collection depends on the direction of the receiver being analyzed. The selected data were screened for accuracy and for covering all the design needs of the habitable room.
SPL measurements were conducted three times an hour at 10-minute intervals, simultaneously at the three points. The SPL data were then transferred to the laptop and processed using Microsoft Excel. The SPL values are used to determine the Transmission Loss, which is one of the test parameters in this study.
Noise propagated through the structure (structure-borne) was measured using a vibrometer, because an SLM cannot reach the low frequencies produced by structure-borne propagation.
The vibrometer was mounted at a total of 9 points, each checked three times at half-hour intervals, starting at 2:00 PM.
Results of the Sound Pressure Level Data at the Research Subject
The noise measurements from the SLMs were recorded using Ms. Excel. The Sound Pressure Level (SPL) values obtained at each point were averaged for each time point, so that each measurement time point yields one Leq value; the results were then compared between the three measurement points. Based on the existing Leq measurements, the graphs show that the average dB(A) in the habitable room is above the allowable noise exposure limit, by a margin of around 20 dB(A). The two noise sources, the garage and the roadway, are far above the standard noise level, by around 35 dB(A). With the current existing condition, the noise generated by the two sources is at least partly reduced before entering the habitable room, although the allowable noise limit is still not reached.
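Averaging SPL samples into one Leq value is an energetic (logarithmic) mean rather than an arithmetic one. The following sketch shows the computation that the spreadsheet would perform; the sample levels are illustrative only.

```python
import math

# Energetic averaging of SPL samples into a single Leq value, as done here
# for each measurement time point.
def leq(spl_samples_db):
    """Leq = 10*log10(mean(10^(Li/10))) over equally spaced SPL samples."""
    mean_energy = sum(10 ** (l / 10) for l in spl_samples_db) / len(spl_samples_db)
    return 10 * math.log10(mean_energy)

print(f"Leq = {leq([71.2, 74.8, 73.5]):.2f} dB(A)")  # three 10-minute readings
```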
The graphs show the noise condition of the research subject: both noise sources affect the acoustic comfort of the occupancy. The current condition does not yet provide acoustic comfort, because the dividing partition between the two spaces (workshop and occupancy) currently only achieves an NR of around 10 dB(A), while a good partition should reduce external noise by 25-30 dB(A) in order to reach the standard noise level.
Analysis
Besides being located at the edge of the roadway, the research subject is affected by street noise and also receives noise from inside the building itself (the garage), so there are two noise sources: outside (road) and inside (workshop). Based on the chart, roadway noise (green line) is one of the sources affecting the acoustic comfort of the occupancy; its highest point (red circle) is at 13:10, at 75.41 dB(A). This could be due to several factors, such as a large volume of vehicles, vehicle horns, and vehicles driving at high speed. The lowest point (green circle) is at 11:50, at 70.03 dB(A). When the roadway peaks at 75.41 dB(A) (at 13:10), the receiving room (occupancy) shows a noise level of 60.3 dB(A), meaning the road noise decreased by 15.11 dB(A) on entering the occupancy.
The graph is unstable at low frequencies, but there is a significant increase in noise from 500 Hz to 2000 Hz, with levels in the range of about 51-56 dB(A). The table below presents the frequency ranges and noise levels produced by the noise sources (roadway and workshop). For both sources, the highest noise level lies in the 500 Hz to 2000 Hz range, so this range was set for determining the materials used at the later stage. Because the selected frequencies fall into the high-frequency category, the materials applied in the experiments must reduce noise at those frequencies.
Organic & Inorganic Materials
Walls are the vertical elements of a building or room and transmit sound waves directly, unlike floors and ceilings, which are horizontal and do not transmit the noise directly. Research shows that applying the principle of sound insulation to walls reduces sound propagation more effectively than using a floor or a double ceiling (Templeton and Saunders, 1987). The design calculations refer to organic materials whose acoustic quality has been tested in previous studies, such as Suranto's research on acoustic panels made from corncob waste and bagasse, and Kartikaratri's research on coconut fiber composites with phenol formaldehyde resin. A summary of the material characteristics applied to the design of the occupancy area of the motorcycle repair home business (the research subject) is given below.
Design Process
To determine the material for the design application in the residential space, the existing acoustic condition must be established together with the design conditions in the field. Once the TL of the existing partition is known, the Transmission Loss value achievable by the material design solutions can be calculated. The calculation of the existing partition condition, with walls and a ceramic finish, is illustrated in 3D. Determining TL requires the reduction values, the total partition area (S), and the value of A, obtained by multiplying each material area (s) by its absorption coefficient (α). In the figure (Figure 5.9), S = 13.63 m² is derived from the length of the partition multiplied by its width; the other calculations are described as follows.
TL = NR + 10 log(S/A)
The transmission loss of the existing partition was computed in the 500-2000 Hz frequency bands. The TL value per frequency band is then used to find LP2, the level of each noise frequency in the room. The standard comfort value of LP2 in a family room or home is 30-35 dB(A).
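The LP2 prediction follows by inverting the TL relation given earlier. In the sketch below, the band levels, TL values, and absorption are placeholders used only to show the comparison against the 35 dB(A) comfort limit.

```python
import math

# Predicting the receiving-room level LP2 per frequency band by inverting
# TL = (LP1 - LP2) + 10*log10(S/A), i.e. LP2 = LP1 - TL + 10*log10(S/A).
S, A = 13.63, 8.0  # partition area (m^2) and receiving-room absorption (assumed)
bands = {          # Hz: (LP1 source level, partition TL), hypothetical values
    500: (72.0, 38.0), 1000: (74.0, 40.0), 2000: (70.0, 41.0),
}

for hz, (lp1, tl) in bands.items():
    lp2 = lp1 - tl + 10 * math.log10(S / A)
    ok = "meets" if lp2 <= 35.0 else "exceeds"
    print(f"{hz:>4} Hz: LP2 = {lp2:.1f} dB(A) ({ok} the 35 dB(A) comfort limit)")
```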
Table 14 shows that the LP2 values exceed 35 dB(A) in all frequency bands, both in the manual calculations and in the SLM field data. This shows that the existing partition design is still unable to reach the predetermined acoustic comfort.
Further material design treatments are therefore necessary in order to achieve acoustic comfort in the desired occupancy area.
Modification Design 1
The first design modification uses a wall covered with corncob material; according to Suranto's research, corncobs have α = 0.63 in the range of 0-1800 Hz. The organic wall panelling of corncobs, with a full-coverage composition, covers the wall surface, as shown in Figure 18. Table 17 shows a significant reduction from the existing LP2 values to the modification 1 values, but only the 500 Hz band reaches the 35 dB(A) standard; the 630 Hz-2000 Hz bands remain above 35 dB(A).
Modification Design 2
The second design modification uses wall material made from coconut coir; according to Kartikaratri's research, coconut coir has α = 0.9 in the range of 756-6400 Hz. The partition design uses organic coconut coir panelling with a full-coverage composition over the wall surface, as shown in Figure 21. Table 19 shows a significant reduction from the existing LP2 values to the modification 2 values, with most values at 35 dB(A). It can be concluded that in modification 2, the coconut coir wall material has good absorptive acoustic performance at all frequencies that need to be reduced in this study.
Modification Design 3
The third design modification uses bagasse wall material; according to Anggraini's research in 2010, bagasse has α = 0.8 in the range of 452-2900 Hz. The partition design uses organic bagasse panelling with a full-coverage composition over the wall surface, as shown in Figure 22. Table 21 shows a significant reduction from the existing LP2 values to the modification 3 values, with most values at 35 dB. It can be concluded that in modification 3, the bagasse wall material has good absorptive acoustic performance at all frequencies that need to be reduced in this study. Table 23 compares the full organic (corncob) partition of modification 1 with modification 4 (the ceramic combination); there is a significant difference from 630 Hz upward. It can be concluded that the damping performance of design modification 4 (corncob and ceramic combination) is better and more stable in every frequency band than modification 1, which uses a full corncob layer.
Modification Design 5 (Modification 2 + Ceramic)
Design modification 5 is similar to modification 4 in composing an organic acoustic panel with a manufactured material (ceramic), but it uses modification 2 (coconut coir) instead, based on the existing area ratio, as shown in the figure. Table 25 compares the LP2 values of the full organic (coconut coir) partition, modification 2, with modification 5 (the ceramic combination); there are no significant differences, and the values are relatively stable from 500 Hz to 1250 Hz. The graph shows that all three organic materials (corncob, bagasse, and coconut coir) achieve the standard, reducing the noise to LP2 values of around 35 dB, whether designed as full organic panels or combined with ceramics.
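For the combined partitions of modifications 4 and 5, the receiving-room absorption A is the area-weighted sum over both finishes. The sketch below shows this; the area split and the ceramic absorption coefficient are assumed, while the organic coefficients are those cited in the text (corncob 0.63, coconut coir 0.9, bagasse 0.8).

```python
# Area-weighted absorption for a partition combining two finishes, as in
# modifications 4 and 5 (organic panel + ceramic).
def combined_absorption(surfaces):
    """A = sum(s_i * alpha_i) over (area m^2, alpha) pairs, in m^2 sabins."""
    return sum(area * alpha for area, alpha in surfaces)

corncob_ceramic = [(10.0, 0.63), (3.63, 0.05)]  # hypothetical area split; ceramic alpha assumed
print(f"A = {combined_absorption(corncob_ceramic):.2f} m^2 sabins")
```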
However, among the three organic materials applied to whole partitions (modifications 1, 2, and 3), the best value is modification 3, the bagasse wall. Among the partition designs combining organic material with ceramic, the best acoustic performance is modification 4, the wall combining corncobs and ceramic. Modification 4 is the best design solution among all the experimental design alternatives above: a wall of corncobs combined with ceramic.
CONCLUSIONS
The five design treatments for the partition walls of the research subject showed that applying any of the three organic materials achieves acoustic comfort in the habitable room, reducing noise to LP2 values of around 35 dB(A). However, modification 4 (corncob + ceramic) has the best damping performance, having the lowest and most stable LP2 at all frequencies, followed by modification 5, which also uses a ceramic combination. It is concluded that a combination of organic and inorganic materials is the proper selection for the design of a partition wall in the habitable room of a motorcycle repair home business.
Figure 1. Spreading Noise from the Source of Noise
Figure 6. 1st Floor Plan (Left) and 2nd Floor Plan (Right)
Figure 7. Site Plan of the Research Subject (Source: www.google.com/maps/place/Nogo+Baru+Motor)
Figure 8. Existing Photos of the Research Subject
Figure 9. Airborne noise collection points using the SLM (left); structure-borne noise collection points using the vibrometer (right). Note: black points are noise sources (two points detected in the workshop area: the compressor location and a point in the motorcycle area); red points are SLM placements for SPL data collection; the red rectangle is the vibrometer.
Figure 11. Leq of the Research Subject
Figure 12. Noise Leq of the Research Subject
Figure 13. Graph of the Frequency Slices to Be Analyzed
Figure 14. Example of the Application of a Double Wall with an Air Cavity
Figure 15. Wall Partition Material of the Habitable Room and Garage
Figure 20. Partition Using Bagasse and Ceramic Wall Material
Figure 22. LP2 Modification Graph of the Research Subject (Habitable Room)
Table 1. Background Noise Level: Recommended Noise Criteria
Table 2. Road Classes in Indonesia by Function
Table 4. Vehicle Speed according to Road Type and Class (Source: Direktorat Jenderal Bina Marga, Standar Perencanaan Geometrik Jalan Perkotaan, 1988)
Table 5. Motorcycle Repair Home-Based Business Samples in East Surabaya
Table 8. Size Categories of Home Business Buildings in East Surabaya
Table 9. Results of Structure-Borne Measurement 1 (mm/s)
Table 10. Results of Structure-Borne Measurement 2 (mm/s)
Table 12. Results of the Two Noise Source Ranges
Table 13. Organic Material Characteristics | 2019-04-16T13:26:19.132Z | 2015-10-01T00:00:00.000 | {
"year": 2015,
"sha1": "2ab31e9863d4b381b0e2f32e24a3ca8e54259627",
"oa_license": "CCBYSA",
"oa_url": "http://iptek.its.ac.id/index.php/joae/article/download/2935/2330",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "2ab31e9863d4b381b0e2f32e24a3ca8e54259627",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering"
],
"extfieldsofstudy": [
"Engineering"
]
} |
85701207 | pes2o/s2orc | v3-fos-license | Resistance to Phytophthora cactorum in Diploid Fragaria Species
Sixty accessions/genotypes (entries) of diploid Fragaria sp. were tested for susceptibility to Phytophthora cactorum in greenhouse tests. Four experiments, each with four to 35 entries, were conducted, and each entry was represented by 36 to 45 plants per experiment. The plants were graded according to the number of weeks of survival during the first 4 weeks; for plants surviving beyond the first 4 weeks, scoring was based on the amount of necrosis in the crown. Statistical analysis showed no significant difference among the four experiments. A majority of the accessions (48) were categorized as resistant or moderately resistant to Phytophthora cactorum, with disease scores for this group varying from 1.06 to 3.09. Five accessions with disease scores ranging from 6.25 to 7.43 were considered highly susceptible. Within F. vesca, a highly significant proportion of the total variation in disease scores (57.6%) was attributable to differences between accessions and, hence, of a genetic nature. There was no indication of any Fragaria species being more resistant or susceptible than others, and no systematic differences resulting from geographic origin.
The genus Fragaria, in the rose family (Rosaceae), is well known for its edible fruits, and the economically important octoploid Fragaria ×ananassa Duch. produces large red strawberries and is grown all over the world. In 2007, the world production of strawberries was more than 3.8 million t (FAOStat Agricultural Data, http://www.fao.org). However, strawberry producers annually face serious economic losses as a result of diseases caused by pathogens, one of them being the destructive oomycete Phytophthora cactorum (Lebert & Cohn) J. Schröt., which causes crown rot.
P. cactorum causes disease in more than 200 plant species across 150 genera representing 60 plant families, several of them within the rose family (Erwin and Ribeiro, 1996). The pathogen causes fruit rot, root and crown rot, cankers, leaf blights, wilt, and seedling blight (Nienhaus, 1960). P. cactorum was first reported as the cause of crown rot of strawberry (F. ×ananassa Duch.) in 1952 in Germany (Deutschmann, 1954). It has since become an important disease in most European countries and can be a limiting factor for successful strawberry production worldwide (Maas, 1998).
P. cactorum is homothallic and produces oospores in diseased plant tissue, which enable the pathogen to survive in the soil for many years. There are few means of eradicating it once a field has become infested, and even with fumigation this pathogen is rarely eliminated (Sneh and McIntosh, 1974; Wilhelm and Paulus, 1980). It is therefore also almost impossible to eliminate all sources of infection in strawberry nurseries (Fennimore et al., 2008). Many of the most commonly grown strawberry cultivars in Europe are susceptible to P. cactorum (Eikemo et al., 2003), which promotes the spread of the disease and the severity of disease outbreaks. However, genotypes resistant to crown rot do exist. These include accessions from the octoploid species Fragaria chiloensis and Fragaria virginiana, species that have been used as sources of other useful traits in strawberry breeding (Hancock et al., 2002). van de Weg (1997) postulated a single dominant major gene for resistance to the oomycete Phytophthora fragariae var. fragariae in strawberry. For resistance to Phytophthora root rot caused by Phytophthora fragariae var. rubi in the closely related diploid red raspberry (Rubus idaeus), a two-gene model with dominance has been suggested (Pattison et al., 2007). Previous findings do not support a simple model for P. cactorum resistance in F. ×ananassa: Shaw et al. (2006, 2008) indicated an additive, polygenically inherited resistance, and Denoyes-Rothan et al. (2004) found five putative quantitative trait loci for resistance in an experimental F. ×ananassa population. Focusing on a simpler system than the octoploid strawberry, e.g., a diploid model system, thus appears attractive for understanding the nature and inheritance of Phytophthora crown rot resistance.
F. vesca has several features that make it attractive as a model species. The plants are easily grown and propagated both through seeds and through runners, and they are relatively easy to transform genetically (Oosumi et al., 2006). Moreover, the F. vesca genome is only slightly larger than the genome of Arabidopsis thaliana (Folta and Davis, 2006), and genetic maps exist for both the diploid (Cipriani et al., 2006; Davis and Yu, 1997; Sargent et al., 2004, 2006) and the octoploid strawberry (Lerceteau-Köhler et al., 2003; Weebadde et al., 2008). Finally, a high degree of macrosynteny and colinearity exists between diploid and octoploid strawberry, and no major chromosomal rearrangements seem to have occurred (Rousseau-Gueutin et al., 2008).
The octoploid strawberry progenitors F. virginiana and F. chiloensis are believed to be diploidized allopolyploids, each descending from four diploid ancestors. Their ancestry is not fully known, but the main diploid candidates are F. vesca, F. iinumae, F. nubicola, and F. orientalis (Folta and Davis, 2006; Potter et al., 2000). This conserved organization within the genus Fragaria supports the use of diploid Fragaria as a model system for gaining genetic knowledge that can subsequently be transferred to the more complex and economically important octoploid F. ×ananassa (Davis and Yu, 1997; Sargent et al., 2004).
The work presented here is part of a project whose main goal is to generate basic knowledge about P. cactorum resistance in diploid strawberry species. Secondarily, we aim to identify genes and develop genetic markers that can be used as tools in breeding resistant strawberry cultivars or in developing more effective control measures for disease management. Establishing the general level of resistance/susceptibility in our model species is a natural first step, and the screening of selected genotypes of diverse geographic origin is reported here.
Plant material and plant propagation.
Accessions of wild strawberry were either collected as runners across Norway or obtained as seeds from East Malling Research (Kent, U.K.) or the National Clonal Germplasm Repository (Corvallis, OR). The accessions come from all over the world, with 36 originating from Europe, 14 from Asia, eight from the Americas, and one accession being of unknown origin (Table 1).
The 60 accessions of diploid Fragaria sp. belong to the species F. vesca (48, including different subspecies), F. nilgerrensis (three), F. iinumae (three), F. nipponica (three), F. bucharica (one), F. nubicola (one), and F. pentaphylla (one). Seed was germinated in mist chambers. All accessions were propagated as runner plants for use in the resistance test experiments. One representative plant, originating from either seed or runners, was used as the source of all the runner plants. Propagation was done in a greenhouse with a 16-h day at 20°C and an 8-h night at 14°C. After multiplication and establishment, the plants were grown for an additional 1 to 2 weeks before pathogen inoculation. Artificial light was provided by high-pressure sodium lamps (SON/T, 120 µE·s⁻¹·m⁻²) in periods with less than 16 h of natural light. Before inoculation, the plants were subjectively graded for size relative to each other on a 1 to 3 scale.
Preparation of inoculum, inoculation, and disease scoring. One isolate (Bioforsk isolate ID number 10300) of P. cactorum, originally isolated from the rhizome of a field-grown strawberry plant in Norway, was used in all experiments. Previous tests of aggressiveness revealed no differences between P. cactorum isolates (Eikemo, 1998). In agreement with this, amplified fragment length polymorphism analysis displayed a very low level of molecular variation within the crown rot pathotype of P. cactorum isolates from all over the world (Eikemo et al., 2004). Zoospore suspensions of P. cactorum were prepared as described previously (Eikemo et al., 2000). Plants were gently wounded in the rhizome with a scalpel and inoculated with 2 mL of the zoospore suspension (1 × 10⁵ spores/mL) pipetted onto the crown and lower parts of the plant. This method of inoculation was chosen because previous experience has shown that inoculation of plug plants without wounding can lead to poor disease development (Eikemo et al., 2000). The plants were watered 1 to 2 h before inoculation to ensure that the soil was wet, and postinoculation they were watered only via the pot trays, not directly onto the soil.
Disease was scored on a scale from 1 to 8 (Eikemo et al., 2000;Simpson et al., 1994). The plants that died during the first, second, third, or fourth week after inoculation were given the scores 8, 7, 6, or 5, respectively. After 4 weeks, the remaining plants were bisected longitudinally and scored 1 to 4 based on the degree of necrosis in the crown: 1 = no symptoms, 2 = a few brown/dark speckles, 3 = small patches of necrosis, and 4 = more than 50% of the crown necrotic.
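The 1 to 8 scale maps directly to a small scoring function. The following sketch encodes the rules above; the rules are from the text, while the function itself is ours.

```python
# Direct encoding of the 1-8 scoring scale: plants dying in weeks 1-4 score
# 8 down to 5; survivors are scored 1-4 by the degree of crown necrosis.
NECROSIS_SCORES = {
    "no symptoms": 1,
    "a few brown/dark speckles": 2,
    "small patches of necrosis": 3,
    "more than 50% of the crown necrotic": 4,
}

def disease_score(week_of_death=None, necrosis=None):
    """Return the 1-8 score for a single plant four weeks after inoculation."""
    if week_of_death is not None:
        if not 1 <= week_of_death <= 4:
            raise ValueError("deaths are only scored during weeks 1-4")
        return 9 - week_of_death  # week 1 -> 8, ..., week 4 -> 5
    return NECROSIS_SCORES[necrosis]

print(disease_score(week_of_death=2))                       # 7
print(disease_score(necrosis="small patches of necrosis"))  # 3
```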
Experimental setup and statistical analysis. Four similar experiments were conducted, each testing a varying number of accessions. In the first experiment, 31 genotypes were tested; in the second, 29 additional genotypes as well as six of the extremes (susceptible and resistant) from the first experiment. In Expts. 3 and 4, 24 and four of the extreme genotypes, respectively, were tested again. Each experiment consisted of three replicates, each replicate with 12 to 15 plants, and all experiments were organized in a completely randomized block design. Control plants (wounded and inoculated with water) were included in all experiments. To get within-experiment means, the data from each experiment were analyzed using analysis of variance in which the replicates were considered random and the accessions fixed. When all the experiments were analyzed together, the effect of experiment was also considered random. The statistical model used for the overall analysis was

y_ijkl = μ + exp_i + rep_j(exp_i) + Gen_k + (Gen × exp)_ik + e_ijkl

where y_ijkl is the disease score on a single plant; μ is the grand mean; exp is the random effect of experiment i, i = 1 to 4; rep is the random effect of replicate j, j = 1 to 3; Gen is the fixed or random effect of accession k, k = 1 to 60; and e is the error on the l-th plant, l = 1 to 12 (or 15).
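For readers who want to reproduce the flavor of the variance-component analysis without SAS, the sketch below estimates the between-accession component for a simplified, balanced one-way layout on simulated data. It omits the experiment and replicate terms of the full Proc Mixed model, and all numbers are simulated, not the study's data.

```python
import numpy as np

# Simplified variance-component estimate for a balanced one-way layout
# (accessions x plants): sigma2_acc = (MSB - MSW) / n.
rng = np.random.default_rng(0)
n_acc, n_plants = 48, 36
acc_effects = rng.normal(0, 1.5, size=n_acc)                   # simulated genetic spread
scores = acc_effects[:, None] + rng.normal(0, 1.0, (n_acc, n_plants))

msw = scores.var(axis=1, ddof=1).mean()                        # within accessions
msb = n_plants * scores.mean(axis=1).var(ddof=1)               # between accessions
sigma2_acc = max((msb - msw) / n_plants, 0.0)

frac = sigma2_acc / (sigma2_acc + msw)
print(f"fraction of variance between accessions: {frac:.1%}")
```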
Disease score least square means were estimated from the mixed effects model in which the accessions were considered fixed and the experiments and replications were considered random. For the estimation of the variance components, a completely random effects model was used. The significance of the random effects was tested using the likelihood ratio (Self and Liang, 1987). Finally, the covariate (i.e., plant size) was included. All computations were done using Proc Mixed® in SAS® (SAS, 1999).
Results and Discussion
There are only a few reports on resistance to P. cactorum in wild strawberry species. Harrison et al. (1998) tested wild octoploid strawberries (F. virginiana and F. chiloensis) for resistance to crown rot, and Parikka (1998) included some wild diploid Fragaria genotypes among a wide selection of cultivated strawberry (F. ×ananassa) cultivars. Results from both studies indicated that there is variation in resistance to P. cactorum among wild strawberries and, consequently, that it should be possible to find genotypes with extreme qualities in a larger collection of accessions. The results of the present study show that diploid Fragaria species vary significantly (P < 0.0001) in their expression of P. cactorum resistance. The most resistant genotypes had an average score of 1.06 (only a few plants showing symptoms after 4 weeks) and the most susceptible a score of 7.43 (all plants dead within 2 weeks). None of the control plants showed symptoms of crown rot in any of the experiments. The combined statistical analysis showed that the variance component due to different experiments was not significant, and neither was the effect of replication within experiments. However, the accessions responded somewhat differently to the pathogen in the different experiments, resulting in different grades or ranks of the accessions between experiments, revealed as a significant genotype × experiment interaction (P < 0.0001). The effect of plant size on disease score was not significant and hence was not included in the analysis. The accession means from each experiment and the overall least square means with their corresponding SEs are given in Table 1. Figure 1 shows the distribution of the least square means of the tested accessions across the four experiments. From this distribution, we cannot postulate anything concerning the nature of the Phytophthora resistance.
Despite the seemingly continuous distribution, the possibility of major genes being involved cannot be excluded, because the noise in our data is large, arising both from the unexplained residual variance and from the accession × experiment interaction. Using the F. vesca subset of our data in a variance component analysis, 57.6% of the total variance was attributable to differences between accessions, whereas 13.8% was due to the accession × experiment interaction. Hence, a majority of the observed variation was genetically regulated. In an exploratory experiment like the present one, however, it is not possible to suggest any genetic mechanism for this regulation.
We are unable to demonstrate any difference between the most resistant accession, CFRA1363, with an average score of 1.06, and accession CFRA1856, which has an average of 3.09. The difference between CFRA1363 and CFRA1866, with a disease score of 3.49, is, however, significant (P = 0.0367), indicating that accessions with disease scores of 3.49 and higher belong to a different group as far as susceptibility is concerned. On the susceptible side of the distribution, we could not find any significant differences among the five most susceptible genotypes (CFRA175, Haugastøl 3, CFRA1218, FDP821, and CFRA424), leaving them as a putative distinct group. In conclusion, a majority of the accessions (48) were categorized as resistant or moderately resistant to Phytophthora cactorum, with disease scores for this group varying from 1.06 to 3.09, while five accessions with disease scores ranging from 6.25 to 7.43 were considered highly susceptible.
There was no indication of any Fragaria species being more resistant or susceptible than others, and no systematic differences resulting from geographic origin. The majority of the accessions tested (48 of 60) were F. vesca, and among these the disease scores varied from 1.06 to 7.43. At most three accessions were tested from each of the other Fragaria species; hence, no conclusion could be made about the general resistance level in those species. In general, the distribution of resistance to P. cactorum in diploid Fragaria is comparable to results found in F. ×ananassa, in which the disease score ranged from 1.15 to 6.44 using the same method of disease scoring but a slightly different method of inoculation (Eikemo et al., 2003). The present results also show that wild F. vesca accessions collected from the same location may have very different levels of resistance. The three Norwegian accessions named Haugastøl 1, 2, and 3 were collected in the same area, within only 2 to 3 km of one another. Two of the accessions were quite resistant (1.46 and 1.56), whereas the third, Haugastøl 3, was very susceptible (6.65). All three Haugastøl accessions were tested in two or three experiments. Our own unpublished microsatellite analysis has confirmed the divergence of Haugastøl 3 from the other two Haugastøl accessions. One explanation for these differences is that different accessions have been imported by humans and subsequently naturalized; hence, the three Haugastøl accessions may have quite different origins. Multiple collections within the other sites (mainly from the Norwegian collection) showed more similar degrees of resistance.
In conclusion, we report here the results of testing diploid Fragaria accessions for resistance to Phytophthora cactorum in a greenhouse. Both resistant and susceptible accessions have been identified. This information is necessary for gaining basic knowledge about the P. cactorum resistance mechanism in the F. vesca model system and for the identification of resistance genes and genetic markers for such genes. It is believed that such knowledge will eventually lead to advances in the development of F. ×ananassa cultivars. Moreover, recent research has confirmed that there is a high degree of similarity between more distantly related genera of the Rosaceae family (Shulaev et al., 2008; Vilanova et al., 2008). This indicates that information from the F. vesca model system is also relevant to other economically important crops such as apple or pear. | 2019-03-30T13:11:57.895Z | 2010-02-01T00:00:00.000 | {
"year": 2010,
"sha1": "1e7e67cfe0d88de32975fac2cca7aecb636dd9a7",
"oa_license": null,
"oa_url": "https://journals.ashs.org/downloadpdf/journals/hortsci/45/2/article-p193.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "89d3151ddb45cfd1de5287b29c18a4d3e0a7776f",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
14723824 | pes2o/s2orc | v3-fos-license | PCNA and XPF cooperate to distort DNA substrates
XPF is a structure-specific endonuclease that preferentially cleaves 3′ DNA flaps during a variety of repair processes. The crystal structure of a crenarchaeal XPF protein bound to a DNA duplex yielded insights into how XPF might recognise branched DNA structures, and recent kinetic data have demonstrated that the sliding clamp PCNA acts as an essential cofactor, possibly by allowing XPF to distort the DNA structure into a proper conformation for efficient cleavage to occur. Here, we investigate the solution structure of the 3′-flap substrate bound to XPF in the presence and absence of PCNA using intramolecular Förster resonance energy transfer (FRET). We demonstrate that recognition of the flap substrate by XPF involves major conformational changes of the DNA, including a 90° kink of the DNA duplex and organization of the single-stranded flap. In the presence of PCNA, there is a further substantial reorganization of the flap substrate bound to XPF, providing a structural basis for the observation that PCNA has an essential catalytic role in this system. The wider implications of these observations for the plethora of PCNA-dependent enzymes are discussed.
INTRODUCTION
Structure-specific endonucleases recognise and cleave a variety of branched structures that arise during DNA replication, recombination and repair (1,2). In eukarya, the nuclease xeroderma pigmentosum complementation group F (XPF), which forms a complex with the excision repair cross complementary group 1 (ERCC1) protein, is a component of the eukaryotic nucleotide excision repair (NER) machinery and cleaves a 3′ single-stranded flap structure on the 5′ side of DNA lesions (3). In humans, mutation of XPF can cause xeroderma pigmentosum (XP), which is characterized by extreme sensitivity to UV light and a high frequency of skin cancer (4). In crenarchaea, XPF forms homodimers composed of nuclease and helix-hairpin-helix (HhH2) domains connected by a short linker (5-7). The crystal structure of the Aeropyrum pernix XPF (ApeXPF), both bound to a dsDNA and in an unliganded form, revealed important features of substrate recognition and cleavage of branched DNA structures by these proteins (8). The structure revealed that the nuclease and HhH2 domains independently form tightly associated dimers with their equivalent domains. The dimeric HhH2 domains were predicted to bind the DNA substrate, inducing a 90° bending angle between the downstream and upstream duplexes (8). This is similar to what was observed for the RuvA tetramer bound to a planar Holliday junction (9). This DNA substrate rearrangement is also accompanied by a large inter-domain movement, with a 30 Å shift and 95° rotation occurring in the protein. In a parallel study, Nishino et al. (6) proposed a broadly similar model for the recognition of the fork substrate by the euryarchaeal version of XPF (Hef).
A major conformational change in the DNA structure has also been reported for the interaction between the 5′ flap endonuclease Fen-1 and DNA substrates using fluorescence resonance energy transfer (FRET). The decrease in the distance between the ends of the DNA supported a kink angle of ~90°-100°, with the kink centred at the phosphate opposite the flap junction (10). In vitro, Fen-1 activity is increased by up to 50-fold in the presence of the sliding clamp PCNA (11,12), a ring-shaped protein that encircles DNA and acts as a platform for the recruitment of a variety of non-sequence-specific enzymes including polymerases, nucleases, helicases and glycosylases (13,14). We have shown previously that Sulfolobus solfataricus XPF has significant endonuclease activity only in the presence of the heterotrimeric crenarchaeal PCNA (5,15,16). Crystal structures of heterotrimeric S. solfataricus PCNA on its own (18) and in complexes with Fen-1 (19,20) and DNA ligase (20) have revealed a close resemblance to homotrimeric eukaryotic and euryarchaeal orthologs.
Using a continuous FRET assay we demonstrated that S. solfataricus PCNA activated the XPF and Fen-1 nucleases by two fundamentally different mechanisms and proposed a novel role for PCNA as an essential XPF cofactor (21). For Fen-1, PCNA activation mainly arises from an increased affinity for DNA, which represents the accepted role of PCNA. In contrast, for XPF, PCNA increases the catalytic rate constant by almost four orders of magnitude without affecting the K_M, indicating that PCNA seems to reduce the activation barrier of the catalytic reaction.
In the absence of a crystal structure of the XPF/DNA/PCNA complex, our current knowledge of XPF interactions with DNA and PCNA is limited. Here, we analyse the conformational changes occurring on the flap substrate upon binding to XPF and to the XPF/PCNA complex using FRET. Analysis of these conformational changes has revealed, for the first time, substantial differences in the structure of the flap DNA substrate bound to the XPF nuclease in the presence and absence of PCNA, and suggests a role for the PCNA sliding clamp as an architectural organiser of the XPF/DNA complex.
Protein expression and purification
S. solfataricus wild-type XPF, the C-terminally truncated Δ6 XPF and the PCNA heterotrimer were expressed and purified as described previously (16,17). The HhH2 domain of XPF was amplified from S. solfataricus strain P2 genomic DNA using gene-specific primers. SDS-PAGE confirmed the protein was essentially pure. Protein stocks were stored at −80°C in 15% glycerol until required.
Oligonucleotide labelling and purification
Oligonucleotides were purchased from Integrated DNA Technologies labelled with the donor dye fluorescein and/or an internal amino modifier C6-dT. A succinimidyl ester derivative of the fluorophore Cy3 (GE Healthcare) was used according to the manufacturer's protocol for the specific labelling of the DNA oligonucleotides. Following the labelling reaction, the oligonucleotide was ethanol precipitated followed by a 70% ethanol wash before being allowed to dry. The pellet was re-suspended in 200 µl of 50% formamide and incubated at 55°C for 5 min before loading on to a pre-run 20% denaturing acrylamide gel at 22 W (limited to 55°C using a temperature probe) for 3 h. Bands were visualized by UV-shadowing, cut and then extracted from the gel using an overnight crush and soak protocol at 4°C (CSH Protocols, 2006; doi:10.1101/pdb.prot2936), followed by ethanol precipitation. The absorption spectrum from 600 to 220 nm was taken to determine the DNA concentration and the labelling efficiency of the fluorescent dyes. To provide a DNA scaffold that could act efficiently as a substrate for the XPF/PCNA complex, given that the footprint of XPF on DNA extends ~7-8 nt from the nicked site (8) and that PCNA is expected to require an ~10 bp DNA duplex for binding, the length of each DNA stem was set at 19 and 18 bp for the up- and downstream regions, respectively. The 3′-flap substrate was assembled using 0.1 OD of each strand (Table 1) and hybridized as described previously (21).
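The dye-to-DNA labelling stoichiometry implied by this absorbance measurement can be estimated with the Beer-Lambert law. A minimal sketch, in which the extinction coefficients, the 260 nm correction factor and the absorbance readings are all illustrative placeholders rather than values from this protocol:

# Estimate DNA concentration and dye labelling efficiency from absorbance
# (Beer-Lambert law, 1 cm path length). All numerical values are placeholders.
def labelling_efficiency(a260, a_dye, eps_dna=2.0e5, eps_dye=1.5e5, cf260=0.05):
    """a260: absorbance at 260 nm; a_dye: absorbance at the dye maximum;
    cf260: fraction of the dye's peak absorbance bleeding into 260 nm."""
    a260_corr = a260 - cf260 * a_dye   # remove the dye contribution at 260 nm
    c_dna = a260_corr / eps_dna        # strand concentration, mol/L
    c_dye = a_dye / eps_dye            # dye concentration, mol/L
    return c_dna, c_dye / c_dna        # concentration and dyes per strand

c_dna, eff = labelling_efficiency(a260=0.45, a_dye=0.12)
print(f"DNA = {c_dna * 1e6:.2f} uM, labelling efficiency = {eff:.2f}")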
Fluorescence binding assay
Binding experiments were performed in 30 mM HEPES, pH 7.6, 40 mM KCl, 5% glycerol, 0.1 mg/ml bovine serum albumin with 50 nM DNA substrate. For experiments performed in the presence of PCNA, addition of the clamp loader RFC was not required as PCNA can readily diffuse on to the short synthetic DNA substrates used in this study. Experiments were performed using a Cary Eclipse spectrofluorimeter (Varian Inc., Palo Alto, USA), equipped with a Peltier temperature controller set to 20°C. FRET measurements were performed under magic angle conditions to avoid anisotropy effects and analysed by exciting the donor dye fluorescein at 490 nm and recording the emission spectrum from 500 to 650 nm. The acceptor dye (Cy3) emission spectrum was also recorded using an excitation wavelength of 545 nm, with emission monitored from 557 to 650 nm. Anisotropy measurements were recorded using the automated polariser accessory with a 5 s averaging time and four replicates for each measurement (λex = 490 nm, λem = 535 nm). Differences in the fluorimeter response to vertical and horizontal polarized light (G-factor) were corrected automatically by the spectrofluorimeter. Dissociation constants were calculated by non-linear least-squares fitting of the raw data to the standard equation describing the equilibrium D + E ⇌ DE (D is the oligonucleotide, E is the protein and DE is the oligonucleotide-protein complex).
In Equation (1), A represents the measured signal at a particular protein concentration (E) for a given fluorescent oligonucleotide (D), A_min indicates the minimum signal value, A_max represents the maximum signal value and K_D is the dissociation constant. Assuming that, for a given protein, the changes in FRET efficiency observed for the three flap constructs are induced by the same type of protein-DNA interaction, global analysis was also carried out using Equation (1), but with K_D optimized as a global parameter with a single value for all the flap constructs.
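The displayed form of Equation (1) is not preserved in this copy. A plausible reconstruction, consistent with the parameter definitions above and assuming the protein is in sufficient excess over the 50 nM DNA that free and total protein concentrations coincide (a tight-binding quadratic form would be needed otherwise), is the hyperbolic isotherm:

$$A = A_{\min} + (A_{\max} - A_{\min})\,\frac{[E]}{K_D + [E]} \qquad (1)$$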
Determination of FRET efficiencies and distances
The efficiency of energy transfer from the donor (fluorescein) to the acceptor (Cy3) was calculated following the (ratio)_A method (22). It is commonly found that when proteins bind to fluorescent constructs, changes in the fluorescence intensity of the donor and/or the acceptor may occur due to changes in the local environment of the fluorophores (23,24). Changes in acceptor and donor quantum yield do not interfere with the calculation of the FRET efficiency following the (ratio)_A method. However, because the quantum yield of the donor enters directly into calculations of the Förster energy transfer distance (R_0), this must be taken into account when comparing FRET values in the absence and presence of different protein concentrations and, particularly, when transforming the experimental FRET efficiency into distance values (22,24). Corrected FRET efficiencies and distance values were obtained following procedures reported in the literature (25). Briefly, for each data point in a titration experiment, the percentage of protein-induced donor quenching was assessed under the same experimental conditions using a donor-only construct. The corrected distance can then be calculated from an expression in which R_0 represents the Förster distance in the absence of protein (55.6 Å), f_D^protein/f_D indicates the ratio between the donor emission at each protein concentration and in the donor-only construct, and E_exp is the experimental FRET efficiency obtained following the (ratio)_A method.
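The corrected-distance expression itself is not legible here. A plausible reconstruction, using the fact that R_0^6 scales linearly with the donor quantum yield and the definitions above, would be:

$$R = R_0\left[\frac{f_D^{\mathrm{protein}}}{f_D}\left(\frac{1}{E_{\mathrm{exp}}}-1\right)\right]^{1/6}$$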
Angle calculations using a single- and a double-kink DNA model

The dye-to-dye distance obtained from FRET efficiency measurements was correlated with conformational changes within the DNA substrate upon XPF and XPF/PCNA binding by calculating the DNA kink angle α using a single-kink model (26). The kink centre was assumed to be at the phosphate opposite the ss/dsDNA junction and flanked, for Flap-13, by 11 bp (L_1) and 10 bp (L_2) duplex DNA regions, which represent the positions of the dyes in the upstream and downstream duplexes, respectively. For Flap-12, L_1 and L_2 take the values of 11 bp of duplex DNA and 8 nt of single-strand DNA, and for Flap-23 the values of 10 bp (L_1) and 8 nt of ssDNA (L_2). Duplex DNA length was calculated assuming canonical B-DNA, and for ssDNA length we used the limiting values reported in the literature of 4 and 5.6 Å for the inter-base distance, as well as the value of 5.3 Å obtained from the average of six crystal structures of ssDNA/protein complexes (see text). The angle α between the two arms was calculated for each FRET vector from the law of cosines, cos(α) = (L_1² + L_2² − R_FRET²)/(2·L_1·L_2). To explore the compatibility of the FRET distances obtained with a model including XPF-induced melting of the DNA template, we also applied a double-kink model (23). In this case the DNA substrate is treated as a rigid rod with three segments, L_up (upstream duplex length), L_down (downstream duplex length) and L_m (single-strand DNA region linking the upstream and downstream duplexes), and two 'hinges' that separate the three segments (Figure 4d). The total kink angle between the upstream and downstream regions is α = 2θ, where θ represents the exterior angle at each ss/dsDNA interface. L_up and L_down are given by the expressions L_up = (11 bp − L_m) × 3.4 Å and L_down = 10 bp × 3.4 Å. Assuming no twisting of the construct, all three segments lie in the xy plane with the position of the upstream dye at the origin of a coordinate system, and the position of the downstream dye is defined by trigonometric expressions of the form reconstructed below. The dye-to-dye distance in the protein-DNA complex is given by R_b² = x² + y². By applying these equations, we estimate the L_m value taking α as 90°, assuming a symmetric kink at each ss/dsDNA interface (θ as 45°) and assigning the distance R_b as the distance experimentally obtained by FRET measurements.
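The coordinate expressions for the downstream dye did not survive extraction. For a planar three-segment rod with the upstream dye at the origin and symmetric exterior kinks θ at each hinge, as in the TBP bending analysis cited above, a consistent reconstruction is:

$$x = L_{\mathrm{up}} + L_m\cos\theta + L_{\mathrm{down}}\cos 2\theta,\qquad y = L_m\sin\theta + L_{\mathrm{down}}\sin 2\theta,\qquad R_b^2 = x^2 + y^2$$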
XPF directs PCNA loading onto 3′ flap DNA substrates
Although PCNA is known to associate with DNA flap substrates, it has not yet been possible to determine a structure of an isolated PCNA/DNA complex by crystallographic methods, and its relative upstream or downstream DNA binding orientation in the presence and absence of XPF remains unclear. We have shown previously that a 3′ flap DNA structure labelled internally with fluorescein is an adequate substrate for XPF, with catalytic rates similar to those obtained from an unlabelled substrate (21). Here, we have used PCNA-induced quenching of internally labelled fluorescein emission to determine the affinity constant of PCNA for a DNA flap structure and its relative orientation in the presence and absence of XPF. Fluorescein dyes were located either 11 nucleotides upstream or 10 nucleotides downstream of the 3′ flap junction on the substrate (Figure 1a), and PCNA titrations were carried out in a background of 10 mM Ca2+ to prevent cleavage. As for other divalent metal-ion dependent endonucleases, Ca2+ ions have been demonstrated to inhibit the catalytic step while efficiently stabilizing protein-DNA interactions (3,27). In the absence of XPF, both the upstream and downstream fluorescein dyes exhibited a similar degree of quenching (~30%) on PCNA binding and similar dissociation constants of 8.5 ± 1.6 and 6.9 ± 1.8 µM (Figure 1b), respectively. For comparison, K_D values of 1 µM for the binding of homotrimeric yeast PCNA to a 24 bp dsDNA and ~100 µM to a 50 bp dsDNA carrying a 5-nt overhang have been obtained using a quartz crystal microbalance approach (28). Titration of PCNA with DNA pre-equilibrated with 1 µM wild-type XPF showed a remarkably different behaviour. No quenching was observed for the downstream fluorescein, suggesting that PCNA cannot bind there in the presence of XPF (Figure 1b). For the upstream duplex, ~75% quenching of the fluorescence signal was observed and a K_D of 36 ± 7 nM was measured, representing an ~200-fold decrease in the dissociation constant when compared to PCNA alone. These data confirm that in the XPF-PCNA-DNA ternary complex PCNA binds the DNA duplex that is upstream (5′ to the flap), as predicted by McDonald and colleagues (8).
XPF affinity for 3′ flap substrates using a FRET-based assay

We have previously reported a dissociation constant of 3.8 ± 0.6 µM and a 1:1 stoichiometry for the XPF/PCNA complex using isothermal titration calorimetry (21); however, binding affinities of XPF and XPF/PCNA for DNA substrates have not been reported to date. Here, we used an intramolecular FRET assay to quantify the affinity of wild-type XPF (XPF-wt) and of truncated variants lacking the C-terminal PCNA-interacting peptide (XPF-ΔPIP) or the nuclease domain (XPF-Δnuc) for the 3′ flap substrates summarized in Table 2.
In the absence of PCNA, addition of wild-type XPF induced significant changes in the FRET efficiency for all the vectors analysed, allowing the calculation of dissociation constants. These were of the same order of magnitude, with values of 2.6 ± 0.4, 9.2 ± 3.0 and 5.0 ± 0.6 µM for Flap-12, -13 and -23, respectively. Global modelling involving the three flaps yielded a K_D value of 5.3 ± 0.6 µM that fit accurately (r² = 0.99) all XPF-wt binding isotherms (Figure 2a-c). Although a systematic investigation of XPF binding affinities to different substrates is not available, the observed K_D is in the same range as reported for other HhH domain-containing proteins (7). For example, the C-terminal domain of UvrC binds with an apparent K_D of ~1 µM to specific DNA substrates containing ss-ds junctions (29).
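The global analysis described here, one shared K_D across the three flap constructs with construct-specific amplitudes, can be sketched as follows. This is a minimal illustration, not the authors' fitting code: the titration data are simulated from the quoted plateau values and K_D, with invented noise.

import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 70.0])  # [XPF], uM

def isotherm(c, kd, a_min, a_max):
    # Hyperbolic binding isotherm (Equation (1) as reconstructed above)
    return a_min + (a_max - a_min) * c / (kd + c)

# Simulated E_FRET endpoints for Flap-12, -13 and -23 (values quoted in text)
amps = [(0.66, 0.58), (0.21, 0.57), (0.51, 0.43)]
data = [isotherm(conc, 5.3, a0, a1) + rng.normal(0, 0.01, conc.size)
        for a0, a1 in amps]

def global_model(c_tiled, kd, *pairs):
    # One shared K_D; independent (a_min, a_max) amplitudes per construct
    return np.concatenate([isotherm(conc, kd, pairs[2 * i], pairs[2 * i + 1])
                           for i in range(len(data))])

p0 = [1.0] + [v for pair in amps for v in pair]
popt, _ = curve_fit(global_model, np.tile(conc, 3), np.concatenate(data), p0=p0)
print(f"global K_D = {popt[0]:.2f} uM")  # should recover ~5.3 uM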
Experiments performed under identical conditions, but in the presence of 1 µM PCNA, showed a strong effect on the XPF-wt binding affinity, yielding similar K_D values for the three FRET vectors (Table 2) and a globally fitted value of 0.06 ± 0.01 µM, ~90-fold lower than in the absence of PCNA. This value was very similar to that obtained when doing the reverse titration, 0.037 ± 0.007 µM, where PCNA binding was analysed in a background of XPF using the fluorescence quenching assay (Figure 1b), and to the K_M value of 0.08 ± 0.01 µM reported by us in a previous study (21). A similar affinity enhancement (~73-fold) was observed for the XPF-Δnuc variant (Figure 2g-i), where globally fitted K_D values of 0.11 ± 0.01 and 8 ± 1 µM were obtained in the presence and absence of PCNA, respectively. These data highlight the major contribution of the C-terminal HhH2 domain to DNA binding affinity. In marked contrast to XPF-wt and XPF-Δnuc, XPF-ΔPIP yielded global dissociation constants of 3.1 ± 0.5 µM in the presence and 2.7 ± 0.6 µM in the absence of PCNA (Figure 2d-f), confirming that the increase in affinity observed for wild-type XPF and XPF-Δnuc is due to the formation of a specific complex between these proteins and PCNA. The extent of quenching at saturating protein concentrations depended on the dye position. Thus, the experimental FRET efficiencies and associated inter-dye distances were corrected as described in the 'Materials and Methods' section. For each experimental condition, three titrations were carried out to reduce the error in the observed FRET efficiency. The corresponding dye-to-dye distances are summarized in Table 3 (Supplementary Table S1). Relative changes in inter-dye distance upon association of XPF, with and without PCNA, are shown in Figure 3 for each vector analysed. In the absence of wild-type XPF, the FRET efficiency obtained for the Flap-12 substrate is 0.66 ± 0.02, and it decreased to a value of 0.58 ± 0.01 at saturating concentrations of XPF-wt (~70 µM). This implies a moderate 3 Å increase in distance, from 50.2 ± 2.0 to 53.4 ± 0.8 Å (Figure 3a). A similar decrease in FRET efficiency was also observed for the Flap-23 substrate upon addition of XPF-wt, from 0.51 ± 0.01 (55.6 ± 2.3 Å) in the absence of protein to a minimum value of 0.43 ± 0.02 (58.4 ± 1.3 Å) at saturating concentrations (Figure 3g). We obtained similar trends in FRET efficiencies for Flap-12 and -23 substrates containing 3-nt single-stranded flaps (data not shown), confirming that the observed variations were not due to environmental effects on the dyes upon XPF binding and were also independent of the flap length.
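Converting the quoted efficiencies into distances uses the standard Förster relation. A minimal numerical check with the values above, assuming the uncorrected R_0 = 55.6 Å (the published distances additionally carry the donor quantum-yield correction, so small deviations are expected):

R0 = 55.6  # Forster distance in the absence of protein, Angstrom (quoted above)

def fret_distance(efficiency, r0=R0):
    # R = R0 * (1/E - 1)^(1/6)
    return r0 * (1.0 / efficiency - 1.0) ** (1.0 / 6.0)

for label, e in [("Flap-12, free", 0.66), ("Flap-12, +XPF-wt", 0.58),
                 ("Flap-13, free", 0.21), ("Flap-23, free", 0.51)]:
    print(f"{label}: {fret_distance(e):.1f} A")
# prints ~49.8, ~52.7, ~69.3 and ~55.2 A, close to the corrected values above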
In contrast, Flap-13 undergoes an increase in E_FRET from 0.21 ± 0.03 to 0.57 ± 0.01 upon addition of wild-type XPF. When quantified, the average inter-dye distance for Flap-13 decreased by nearly 17 Å, from 70 ± 1 Å with no added XPF to 53.3 ± 1.3 Å at saturating XPF concentrations (Figure 3d). Given that for Flap-13 both fluorophores are located in internal positions of the DNA substrate, and to ensure that the FRET changes observed were not caused by a breakdown of the 2/3 approximation for the κ² orientational factor when bound to XPF, we measured the donor anisotropy in the presence and absence of added XPF. We obtained values of 0.05 (no XPF-wt) and 0.18 (5 µM XPF-wt), indicating that the donor retains most of its mobility when bound to XPF and confirming that the FRET changes reflect XPF-induced variations in the Flap-13 average inter-dye distance. Assuming that XPF bends the flap substrate at the phosphate opposite the flap junction, and taking into account the position of the dyes, the dye-linker length, the helical structure of B-DNA, and the experimental inter-dye distances obtained by FRET, we obtained a model for the Flap-13 structure in the presence and absence of XPF. From this single-kink model, the kink angle changes from 157° to ~93° upon association of XPF. This value is in very good agreement with the kink angle of ~90° estimated for the XPF/DNA complex using a combination of modeling and X-ray data (8) and with the kink angle observed for a 5′ flap DNA substrate bound to Fen-1 using a similar FRET approach (10). For XPF-ΔPIP, the relative changes in dye-to-dye distance obtained for the three flap constructs were very similar to those observed for XPF-wt (Figure 3b, e and h). Hence, XPF-ΔPIP distorts the DNA flap in the same manner as the wild type. However, similar experiments carried out with XPF-Δnuc showed that whilst the Flap-13 FRET distance (Figure 3f) was similar to that reported for XPF-wt (Figure 3d), Flap-12 (Figure 3c) and Flap-23 (Figure 3i) exhibited an opposite effect. Upon XPF-Δnuc binding, Flap-12 E_FRET increased from 0.66 ± 0.02 to 0.87 ± 0.03 and Flap-23 from 0.51 ± 0.01 to 0.79 ± 0.03. These FRET efficiencies at saturating XPF-Δnuc concentrations correspond to dye-to-dye distances of 41 ± 0.5 Å (Flap-12) and 45 ± 1 Å (Flap-23), much shorter than those induced by XPF-wt. Together these data suggest that the C-terminal HhH2 domains are responsible for duplex DNA bending, whilst the nuclease domains must contribute to organise the flap substrate in a proper conformation for recognition and cleavage.
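The single-kink angle follows from the triangle formed by the two dye-bearing arms (law of cosines, as in the Methods). A sketch that ignores the dye-linker corrections the authors apply, so the angles come out a few degrees away from the quoted 157° and ~93°:

import numpy as np

RISE = 3.4                       # B-DNA rise per base pair, Angstrom
L1, L2 = 11 * RISE, 10 * RISE    # Flap-13 arm lengths, up- and downstream

def kink_angle(r, l1=L1, l2=L2):
    # Angle between the two duplex arms at the flap junction
    cos_a = (l1**2 + l2**2 - r**2) / (2 * l1 * l2)
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

print(f"free DNA  (R = 70.0 A): {kink_angle(70.0):.0f} deg")  # ~157 deg
print(f"XPF-bound (R = 53.3 A): {kink_angle(53.3):.0f} deg")  # ~96 deg (quoted ~93 after linker corrections)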
Flap conformation in the presence of the XPF/PCNA complex

FRET changes upon titration of XPF were also measured in the presence of 1 µM PCNA. Although the FRET efficiency curves as a function of XPF-wt concentration followed a similar pattern as in the absence of PCNA (Figure 2a-c), the plateau values obtained at saturating XPF-wt concentrations (~5 µM) were significantly lower: E_FRET for Flap-12 decreased from 0.58 ± 0.01 with no PCNA added to a value of 0.36 ± 0.01 with PCNA, and from 0.57 ± 0.01 and 0.43 ± 0.02 to values of 0.35 ± 0.01 and 0.31 ± 0.02 for Flap-13 and Flap-23, respectively. The dye-to-dye distances for the three flap substrates bound to the XPF/PCNA complex were extracted from the FRET efficiencies and compared to those observed when bound to XPF alone (Table 3 and Figure 4a). Using the single-kink model and the inter-dye distance obtained from the FRET assay in the presence of the XPF/PCNA complex, we obtained a kink angle of 115° between the up- and downstream duplexes. This represents an increase in the kink angle of nearly 23° with XPF/PCNA when compared to XPF alone. As a control, when the concentration of the XPF-ΔPIP variant was increased from 0 to 70 µM in the presence of 1 µM PCNA, the E_FRET followed the same variation and reached the same plateau values as in the absence of PCNA. We conclude that the observed conformational changes are triggered by the specific interaction of the DNA substrate with the XPF/PCNA complex. The flap structure bound to the XPF-Δnuc variant was also investigated in the presence of PCNA (Figure 2g-i). The three flap substrates exhibited a decrease in FRET efficiency as XPF-Δnuc was titrated in the presence of 1 µM PCNA. At saturating XPF-Δnuc concentrations the E_FRET values obtained were 0.55 ± 0.02 (Flap-12), 0.15 ± 0.03 (Flap-13) and 0.38 ± 0.02 (Flap-23). The inter-dye distances extracted from these values are listed in Table 3. It is clear that XPF-Δnuc/PCNA distorts the flap substrate in a very different manner than the XPF-wt/PCNA complex, and this appears to be attributable to the lack of the nuclease domain.

[Figure 3 legend: relative changes in dye-to-dye distance (Å) for Flap-12, Flap-13 and Flap-23 upon association with XPF-wt, XPF-ΔPIP and XPF-Δnuc, in the presence and absence of PCNA; distances at 20°C in 10 mM CaCl2 were evaluated from (ratio)_A FRET efficiencies, with ΔR = R_protein − R_free.]

[Figure 4 legend: (a) single-kink models of Flap-13, built from canonical B-DNA with energy-minimized dye-linker structures (HyperChem v.8) and with the downstream duplex positioned using the FRET-derived distances; (b) cartoon of the further distortion of the XPF-bound flap upon PCNA association, assuming a single-kink model and a 5.3 Å ss internucleotide distance; (c) structural basis for a double-kink model (8,15), in which the two HhH2 domains engage the upstream and downstream duplexes to induce a ~90° global kink while the uncleaved strand engages the nuclease domain and promotes melting of the upstream duplex; (d) parameters of the double-kink model (segments L_up, L_m and L_down), similar to the model reported for the analysis of DNA bending by TBP (23).]
These findings provide the first, to our knowledge, experimental evidence that PCNA association can lead to a substantial reorganization of a DNA substrate, suggesting that the sliding clamp can have an active role in the modulation of DNA structure in collaboration with partner proteins. This is consistent with a model where PCNA acts not only as a protein recruiter bringing XPF to the flap ss/ds junction but also as a molecular scaffold that enables XPF to further reorganize the DNA substrate towards a more catalytically competent structure, as first proposed on the basis of kinetic studies of this system (21). Similar dual functionalities have recently been identified for TFIIA cofactors in the context of eukaryotic mRNA transcription (23).
XPF and PCNA assemble in a defined orientation on flap DNA substrates
Despite much interest, structural information regarding the interaction of PCNA with the nucleic acid scaffold, and the overall orientation of the sliding clamp relative to the DNA damage site in the presence and absence of a given protein partner, is very limited. Here, we developed a fluorescence assay that used PCNA-induced quenching of fluorescein emission to analyse the organization of the PCNA-DNA complex in the presence and absence of XPF. The extreme sensitivity of fluorescein to its microenvironment has been used previously for the analysis of protein-DNA interactions (30). Our data suggest a model where PCNA on its own can assemble randomly, with a 1:1 stoichiometry and equal affinity, at either side of the 3′ flap substrate, as reflected by the identical quenching (~30%) exhibited by a fluorescein dye located at almost symmetrical positions, either on the upstream (11 nt) or on the downstream duplex (10 nt), and by the similar dissociation constants obtained (Figure 1b). It has been proposed that the Escherichia coli β sliding clamp and human PCNA can traverse small DNA secondary structures, including short flap structures of 10 nt, while being efficiently blocked by longer flaps (~28 nt) (31). Thus, we cannot rule out the possibility that the observed spontaneous assembly of PCNA at either side of the flap substrate could be coupled to PCNA sliding over the eight-nucleotide ss flap used in this study. Although currently there are no data available to further confirm this aspect, the proposed architectural model remains unaffected. A similar random orientation has been observed previously for the assembly of mammalian PCNA on a DNA template-primer containing a ds/ss junction (32).
In the presence of XPF, a priori, one could envisage two possible arrangements for PCNA on the DNA flap substrate: PCNA could be loaded onto the duplex upstream of XPF or downstream. The strong bias in fluorescein quenching by PCNA in the presence of XPF provides clear evidence for an XPF-directed assembly of PCNA exclusively onto the upstream region (Figure 1b). Preferential PCNA loading onto the upstream region also provides a mechanistic explanation for the observed activation of XPF activity by PCNA in gapped and splayed substrates, as the upstream region is the only DNA duplex close to the enzyme complex in these structures (15).
PCNA and XPF cooperate to bind DNA substrates
To dissect the role that the sliding clamp plays in XPF recognition and cleavage, we have used FRET assays to quantify the affinity of XPF for its preferred substrates, 3′ DNA flaps, with and without PCNA. The dissociation constant obtained using a global fit analysis of the three flap vectors (K_D = 5.3 ± 0.6 µM) was very similar to those obtained for the binding of PCNA to the upstream (8.5 ± 1.6 µM) and downstream duplexes (6.9 ± 1.8 µM), and for the interaction between XPF and PCNA determined previously using isothermal titration calorimetry (3.8 ± 0.6 µM) (21). The 90-fold decrease in the dissociation constant for DNA binding by XPF in the presence of 1 µM PCNA reflects the energetically favorable interactions XPF can make with both PCNA and DNA. The C-terminal HhH2 domain of XPF yielded values similar to those for the full-length enzyme, suggesting that the nuclease domain contributes little to DNA binding. In vivo, PCNA is likely to be pre-loaded onto duplex DNA by the RFC complex and will arrive at non-canonical DNA structures by 1D diffusion or during the course of DNA replication. On encountering a branched DNA structure PCNA may stall, or at least pause, allowing recruitment of an endonuclease competent to process the DNA. In the absence of PCNA, XPF will bind very weakly to branched substrates; PCNA is therefore essential in 'marking' potential substrates for nuclease action, but in the case of XPF it has a further role, which is discussed below.
Structural organization of XPF/DNA complexes
For both HhH2 domains to engage in interactions with the upstream and downstream duplexes simultaneously, it has been proposed that the 3′ flap substrate would have to bend by almost 90° (6,8). However, the crystal structure of ApeXPF bound to the DNA substrate contained only the downstream DNA motif, and thus no quantitative experimental evidence of this or other XPF-induced conformational changes on the flap substrate had been reported to date. To investigate the conformational changes taking place on the DNA substrate upon binding of XPF, we translated the observed FRET efficiencies into inter-dye distances. Of all the flap structures investigated, the duplex-bending reporter Flap-13 showed the highest relative change in dye-to-dye distance (~16 Å) upon XPF binding. This was in contrast with the very moderate relative increase in distance (~3 Å) obtained for Flap-12 and -23 (Table 3), which report conformational changes between the duplex and flap regions. The decrease in the distance associated with Flap-13 can be explained by a kink centered at the phosphate opposite the flap junction acting as a flexible hinge, resulting in an angle of 93° between the upstream and downstream regions (Figure 4a). This value is similar to that proposed for Fen-1, whose activity is also known to be PCNA-activated (10). Thus, a kinked DNA conformation seems to arise as a common feature in the recognition mechanism of branched DNA structures. Sharply kinked protein-induced DNA structures have also been proposed for the UvrB helicase (33) and for nick recognition by the NAD+-dependent DNA ligase from T. filiformis (34).
Although the single-kink model provides an appropriate explanation for the conformational changes observed on the DNA upon XPF binding, and agrees with the deformation model proposed for other endonucleases such as Fen-1, it does not account for additional interactions between the XPF nuclease domain and the flap substrate observed in the ApeXPF cocrystal structure and also suggested by biochemical methods (8,15). According to this, a hydrophobic strip on the surface of the nuclease domain is predicted to function as a binding site for the ssDNA linking the upstream and downstream duplexes, generating a small stretch of unpaired ssDNA in the substrate strand that is cleaved in the nuclease active site (Figure 4c). This XPF-induced melting of the DNA substrate is consistent with footprinting data that demonstrate opening of the DNA duplex near the cleavage site by the Hef protein (6), a 'long form' analog of XPF present in most euryarchaea. Thus, the overall organization of the HhH2 and nuclease domains around the DNA substrate could be reminiscent of the double-kink models proposed for DNA bending enzymes such as the TATA binding protein (TBP) (23) and the CENP-B/DNA complex (35), among others.
To assess whether the DNA melting associated with the interaction between the nuclease domain and the flap substrate could account for the changes in dye-to-dye distance observed for Flap-13, we applied a similar model to the XPF/DNA complex (Figure 4d). The proposed double-kink structure is characterized by an overall kink angle α, assumed to be 90° as explained above; two equal kinks of exterior angle θ occurring at the ss/dsDNA hinges, so that α = 2θ; the distance R_b, which corresponds to the inter-dye distance when bound to XPF, experimentally obtained by FRET; and the lengths L_down, L_up and L_m, whose magnitudes correspond to the downstream, upstream and unpaired segments, respectively. An intrinsic uncertainty in this model arises from the inter-base distance to be assigned to the single-strand region (L_m), with literature values ranging from 4 Å (36) to 5.6 Å (37). Thus, to estimate L_m, we first calculated an average inter-nucleotide distance using crystal structures of ssDNA bound to different proteins, including DNA polymerase β (PDB code 9ICM), NS3 helicase (PDB code 1A1V), E. coli Rep helicase (PDB code 1UAA), E. coli SSB (PDB code 1EYG) and human RPA (PDB code 1JMC). We obtained an average value of 5.3 ± 0.7 Å, which is close to the average rise of 5.1 Å per nucleotide reported for ssDNA bound to RecA (38). Using this value and the double-kink model, the experimental FRET distance of 53.3 ± 1.3 Å would be compatible with a ssDNA linker L_m extending ~4 nt between the upstream and downstream duplexes. Interestingly, biochemical data have previously shown that the XPF (15) and Mus81-Mms4 (39) cleavage site is located 4-5 nt from the 5′-end of the downstream duplex. Hence, we suggest that melting of the upstream duplex to position the flap substrate in an optimal conformation ready for cleavage might reflect a fundamental conserved element in the DNA processing pathway for both enzymes.
Surprisingly, the DNA substrate bound to the XPF variant lacking the nuclease domain (XPF-Δnuc) shows a very different organization (Figure 3c, f and i), with Flap-12 and -23 inter-dye distances being much shorter than those obtained when bound to full-length XPF. This suggests that the nuclease domain plays an important role in positioning the flap substrate in the proper conformation.
PCNA alters the conformation of XPF/DNA complexes
In the presence of PCNA, the inter-dye distances obtained for the three flap constructs were higher than in the presence of XPF-wt alone (Table 3). Flap-12 and -13 both showed increases of ~17%, whilst Flap-23 showed only a 9% increase (Figure 3a, d and g). The observed changes in DNA conformation are due to the formation of a specific complex between XPF and PCNA, as the truncated XPF-ΔPIP, unable to interact with PCNA, showed dye-to-dye distances similar to those obtained with XPF-wt alone (Figure 3b, e and h). This is in agreement with our observation that the last six C-terminal residues of XPF mediate the interface with PCNA, and with the lack of PCNA stimulation of nuclease activity previously reported for XPF-ΔPIP (16). Using the single-kink model, we found that the angle for Flap-13 increases from 93° with XPF alone to 115° upon the association of PCNA. Flap-12 and -23 also undergo a PCNA-induced opening of the corresponding angle, from 86° to 101° and from 102° to 117°, respectively (Table 2, Supplementary Data). Hence, we suggest that PCNA association promotes an additional reorganization of the DNA flap substrate beyond that induced by XPF-wt alone. A single-kink model of the flap substrate based on the inter-dye distances and angles obtained for the XPF/DNA complex in the presence and absence of PCNA is shown in Figure 4b, assuming a ss internucleotide distance of 5.3 Å. Alternatively, applying the double-kink model (Figure 4c and d) and the average inter-nucleotide distance of 5.3 Å, we found that the FRET distance obtained for the Flap-13 substrate bound to XPF and PCNA is compatible with a ss melted region covering ~6-7 nucleotides between the up- and downstream duplexes, which represents 2-3 nt of additional melting compared to XPF-wt alone. For comparison, values of 9 and ~5-6 nt were obtained using the limiting values of 4 and 5.6 Å reported in the literature for the inter-base distance. In these calculations we have kept the kink angle constant at 90° and assumed that the distance changes are exclusively caused by increased upstream melting in the presence of PCNA. Our data do not allow us to further evaluate whether the relative changes in distance arise exclusively from unwinding, from a modification of the kink angle induced by PCNA binding, or from a combination of both. However, they demonstrate that PCNA association leads to a different relative positioning of the up- and downstream duplexes when compared to that in the XPF/DNA complex. These data are consistent with the kinetic studies, which demonstrated an essential role for PCNA in the catalytic cycle of crenarchaeal XPF (21).
In summary, we have demonstrated that XPF recognition of the DNA flap substrate requires both the nuclease and HhH2 domains to act in a concerted manner to position the DNA structure in the proper conformation for cleavage. Our data confirm that XPF distorts the substrate mostly by inducing a 90° kink angle between the down- and upstream regions, as predicted from the ApeXPF cocrystal (8). Moreover, for the first time, we provide experimental evidence showing that the structure of the XPF/DNA complex is markedly different in the presence and absence of PCNA. Our model underlines the significant role that PCNA plays as a molecular scaffold, enabling XPF to further distort the DNA structure, and points towards the XPF/PCNA unit as the active complex. The absolute requirement of PCNA for XPF activity, together with this novel dimension of PCNA function coordinating the XPF-induced deformation of the flap substrate, might represent a finely tuned quality control mechanism, designed to ensure that cleavage only occurs when the DNA damage is recognized correctly by the repair machinery. Given the ubiquity of PCNA in DNA processing pathways and the ever-growing number of endonucleases and other proteins whose function is known to be stimulated by PCNA, our findings provide a framework for understanding PCNA function beyond that of a hand-off platform where proteins are transiently exchanged. | 2014-10-01T00:00:00.000Z | 2009-12-11T00:00:00.000 | {
"year": 2009,
"sha1": "8795bc691445dea75cc0057bd22cb35eb6b1c9e9",
"oa_license": "CCBYNC",
"oa_url": "https://academic.oup.com/nar/article-pdf/38/5/1664/16769550/gkp1104.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8f0c4affa904e1d2f38f6162e1c83d11d41b0df2",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
3603227 | pes2o/s2orc | v3-fos-license | Heregulin-HER3-HER2 signaling promotes matrix metalloproteinase-dependent blood-brain-barrier transendothelial migration of human breast cancer cell lines.
HER2-positive breast tumors are associated with a high risk of brain relapse. HER3 is thought to be an indispensible signaling substrate for HER2 (encoded by ERBB2) and is induced in breast cancer-brain metastases, though the molecular mechanisms by which this oncogenic dimer promotes the development of brain metastases are still elusive. We studied the effects of the HER3-HER2 ligand, heregulin (neuregulin-1, broadly expressed in the brain), on luminal breast cancer cell lines in vitro. Treatment of SKBr3 (ERBB2-amplified), MDA-MB-361 (ERBB2-amplified, metastatic brain tumor-derived) and MCF7 (HER2-positive, not ERBB2-amplified) cells with exogenous heregulin increased proliferation and adhesive potential, concomitant with induction of cyclin D1 and ICAM-1, and suppression of p27. All three cell lines invaded through matrigel toward a heregulin chemotactic signal in transwell experiments, associated with activation of extracellular cathepsin B and matrix metalloproteinase-9 (MMP-9). Moreover, heregulin induced breast cancer cell transmigration across a tight barrier of primary human brain microvascular endothelia. This was dependent on the activity of HER2, HER3 and MMPs, and was completely abrogated by combination HER2-HER3 blockade using Herceptin® and the humanized HER3 monoclonal antibody, EV20. Collectively these data suggest mechanisms by which the HER3-HER2 dimer promotes development of metastatic tumors in the heregulin-rich brain microenvironment.
INTRODUCTION
The development of brain metastases is a growing public health problem affecting more than 100,000 patients in the United States every year [1], including 10-30% of breast cancer patients [2,3]. This complication is associated with severe morbidity and virtually 100% mortality, as currently there is no treatment strategy with proven efficacy. HER2-positive breast cancer patients are at particularly high risk [4,5], with around half developing brain metastases during the course of disease [6]. HER2-targeted drug therapies delay the onset of brain metastases in these patients, and improve median survival after a diagnosis of metastatic brain relapse [7-10]. These observations indicate that HER2 plays a critical role in brain relapse. However, the molecular mechanisms underpinning this relationship have not been investigated in detail.
HER2 is an orphan member of the human epidermal growth factor receptor (HER/ERBB) family. It undergoes obligate heterodimerization with HER3, and to a lesser extent HER4 and EGFR [11-14]. The HER3-HER2 dimer is regarded as the major oncogenic unit in HER2-positive breast cancer [11,15,16]. Ligand-activated dimers transduce potent survival and proliferation signals through the PI3K-AKT and ERK1/2 pathways [11,17]. HER3 has two ligands: heregulin (HRG; also known as neuregulin-1 (NRG1)) and neuregulin-2. HRG is the better studied of the two and is broadly expressed in the brain by neurons, glia and the cerebral endothelium, functioning to promote survival, differentiation, migration and cytoprotection [18-20]. At least 17 HRG isoforms have been described, including secreted, membrane-bound and nuclear 'back-signaling' isoforms that are generated through a combination of alternate transcription start sites, splicing and post-translational processing [21]. Importantly, primary breast cancers that over-express HER3 are associated with a significantly higher rate of isolated brain metastases [22], and induction of HER3 is associated with development of brain metastases from both breast and lung cancers [23,24]. Despite this, the functional relationships between HRG, HER3 and HER2 in breast cancer-brain metastases have not been elucidated.
HRG and HER2 signaling can also induce certain matrix metalloproteinase enzymes (MMPs) [25-28]. MMPs are zinc-dependent endopeptidases that degrade extracellular matrix (ECM) proteins. Their activities facilitate various normal physiologic processes (e.g. wound healing and organ development), and their dysregulation can be associated with pathological processes, including progression-associated changes in the tumor microenvironment. Regulation of MMP expression and activity is complex: they are expressed and stored as zymogens, secreted and activated on-demand by 'convertases' including other MMPs, and in vivo, their activities are fine-tuned according to the local balance between MMPs, TIMPs (tissue inhibitors of metalloproteinases) and other physiologic inhibitors [29] (e.g. the metastasis suppressor, RECK [30]). Therefore measurement of MMP expression is not a reliable surrogate for function. MMPs have been strongly implicated in the development of brain metastases from breast cancer [31-35]. For example, expression of MMP-9 is relatively higher in the brain-seeking MDA-MB-231 breast cancer cell line variant compared to parental and bone-homing counterparts [36], and ectopic expression of MMP-2 in MDA-MB-231 cells increased the incidence of brain metastases after intracardiac injection [37].
In order to establish distant brain metastases, disseminated breast cancer cells must initially traverse the blood-brain-barrier: a specialized endothelium that is resistant to diffusion of hydrophilic or large molecules by virtue of endothelial tight-junctions that are unique to the central nervous system. This specialized microvasculature is in close contact with astrocytic foot processes, pericytes and a thick basement membrane, which collectively facilitate the high substrate selectivity that is necessary to protect brain tissue from circulating pathogens and toxins, including chemotherapeutic agents [38,39]. Extravasation of tumor cells across this barrier is therefore thought to be an active process. Following extravasation, tumor cells must establish growth-promoting interactions with the neural niche [40-43].
This study aimed to investigate molecular mechanisms by which HER3-HER2 signaling may promote the development of brain metastases from breast cancer. In light of the inferred associations between HER3, HER2, MMP activity and brain metastases, and the ubiquitous expression of heregulin in the brain, we hypothesized that heregulin may activate molecular mechanisms conducive to the establishment and growth of HER2-positive breast cancer cells in the brain microenvironment.
Expression of ERBB3 and NRG1 isoforms in breast cancer cell lines
To characterize the expression of heregulin (NRG1 gene) and HER3 (ERBB3 gene) in breast cancer cell line models, and to select the most appropriate lines for functional experiments, we investigated the relative baseline mRNA expression levels of ERBB3 and NRG1 (α and β heregulin splice isoforms) in a large panel of breast cancer cell lines by quantitative reverse transcription-PCR (qRT-PCR). The gene expression profile-based molecular subtypes of these cell lines (luminal, luminal-ERBB2-amplified, basal-A and basal-B/claudin-low) were derived from published reports [44-46] and are marked in Figure 1. This screening experiment revealed an inverse association between ERBB3 and NRG1 expression, with the highest levels of ERBB3 in luminal cell lines and the highest levels of NRG1 in claudin-low cell lines, consistent with their mesenchymal-like phenotype [46] (Figure 1). ERBB3/NRG1 expression phenotypes were mixed in basal-A cell lines.

[Figure 1 legend: the cell lines indicated were cultured to sub-confluence, then total RNA was isolated for qRT-PCR analysis of heregulin (NRG1α and NRG1β splice isoforms) and ERBB3; data are means ± standard deviation, showing an inverse association between ERBB3 and NRG1, with reciprocal expression in luminal compared to claudin-low (basal B) breast cancer cell lines.]
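Relative expression screens of this kind are conventionally quantified by the 2^(-ΔCt) method against a housekeeping gene. A generic sketch using HPRT1, the normalization control used elsewhere in this study; the Ct values below are invented placeholders, not measured data:

def relative_expression(ct_target, ct_reference):
    # 2^-(dCt) relative to a housekeeping gene (here HPRT1)
    return 2.0 ** -(ct_target - ct_reference)

# Hypothetical Ct values for a single cell line (placeholders only)
ct = {"ERBB3": 24.1, "NRG1-alpha": 31.5, "NRG1-beta": 30.2, "HPRT1": 22.0}
for gene in ("ERBB3", "NRG1-alpha", "NRG1-beta"):
    print(f"{gene}: {relative_expression(ct[gene], ct['HPRT1']):.4f}")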
Paracrine activation of HER2-HER3 in luminal breast cancer cell lines
We next investigated the responses of three representative ERBB3-expressing cell lines to treatment with exogenous heregulin (HRG): MDA-MB-361, MCF7 and SKBr3. These three cell lines are luminal-like when stratified based on transcriptomic profile [44-46]. Other key features to note are that MDA-MB-361 and SKBr3 harbor ERBB2 amplification, MDA-MB-361 was isolated from a breast cancer-derived metastatic brain tumor, and SKBr3 cells do not express estrogen receptor (ER-negative) [44]. All three lines are capable of colonizing the brain in animal models ([47,48] and unpublished observations).
To begin to examine the effects of exogenous HRG, cells were deprived of serum ('serum-starved') before HRG treatment, since serum contains many growth factors including HRG itself. Forty-eight hours of HRG treatment resulted in noticeable morphological changes, including stellate features and pseudopodia formation by MCF7 and SKBr3 cells (Figure 2A), consistent with other reports suggesting that HRG treatment induces an epithelial-to-mesenchymal phenotypic shift in these cell lines [49,50]. The morphologic change for MDA-MB-361 was consistent with the other two cell lines but more subtle overall, with cells becoming less cohesive and developing some stellate projections.
We also investigated HER3-HER2 downstream signaling 30 min after HRG treatment. All three cell lines responded to exogenous HRG with phosphorylation of HER3 and its preferred dimerization partner HER2, but not the other HRG receptor HER4 (Figure 2B). There was also HRG-induced phosphorylation of AKT and ERK1/2, important downstream targets of HER2 that regulate tumor cell survival, proliferation and invasion [17]. Though of lesser magnitude than the phosphorylation induction, there was also an increase in total HER3 protein levels. The short time frame of this experiment suggests this may involve post-transcriptional mechanisms, such as protein stabilization or translation efficiency.
In contrast to the HER2/HER3-positive luminal cell lines, three representative claudin-low cell lines (Hs578T, MDA-MB-231 and SUM-159-PT; Figure 1) did not show induction of HER3 expression or phosphorylation following treatment with exogenous HRG (Supplementary Figure 1).
Exogenous HRG treatment induces cell line-dependent proliferation and adhesion of luminal breast cancer cells in vitro
Since AKT and ERK1/2 were potently activated by HRG in three luminal breast cancer cell lines, and they are known to induce tumor cell proliferation in other contexts [51], we investigated the effects of HRG on the proliferation of HRG-treated versus untreated MDA-MB-361, MCF7 and SKBr3 cells. As shown in Figure 3A, HRG induced a time-dependent proliferative response in MDA-MB-361 and MCF7, but not SKBr3 cells. Others have demonstrated that HER2 promotes proliferation through deregulation of cell cycle checkpoints [52]. We therefore investigated the effects of HRG on cyclin D1 and p27 protein levels by Western blot analysis. As shown in Figure 3B, HRG treatment attenuated p27 and induced cyclin D1 expression in the two proliferative cell lines.
Tumor cell adhesion to extracellular matrix proteins enhances the survival and metastatic potential of circulating tumor cells [53], and adhesion of circulating tumor cells to the brain endothelium is thought to be a critical step preceding endothelial retraction and active extravasation [54,55]. To investigate whether HER2-HER3 signaling increases the adhesive properties of luminal breast cancer cell lines, we assayed adhesion of HRG-treated cells to collagen I, which is a substrate for a range of cell adhesion molecules. HRG enhanced the adhesion of MDA-MB-361 and SKBr3 cells to collagen I (Figure 4A), concomitant with induction of ICAM-1 (Figure 4B-4C), a β2-integrin receptor associated with enhanced invasion, motility and metastasis in breast cancer [56-58]. Collectively these data show that exogenous HRG promotes proliferation and adhesion of luminal breast cancer cell lines, though these responses could be context-dependent since they were not consistent across the cell lines tested.

[Figure 4 legend: serum-starved cells treated with HRG for 48 h were allowed to adhere to collagen-I-coated dishes for 15 min, and adhesion potential was determined using optical density measurements of treated versus untreated controls; ICAM-1 RNA and protein were analysed by qRT-PCR and Western blot (HPRT1 and α-tubulin as normalization and loading controls, respectively).]
Exogenous HRG induces luminal breast cancer cell line invasion and secreted protease activity
AKT signaling induces aggressive breast cancer cell behavior, and others have reported that HRG induces motility and invasion through ECM proteins (e.g. [50, 59-61]). Consistent with this, transwell assay experiments showed that the three cell lines migrated toward an HRG chemotactic signal (Figure 5A). Moreover, this response was maintained after coating the transwell inserts with matrigel (Figure 5B). These data show that HRG promotes both migratory and invasive behavior of luminal breast cancer cell lines, which otherwise migrate very poorly in vitro.

[Figure 5 legend: transwell migration (A) and matrigel invasion (B) toward HRG supplemented in the lower chamber, with media changed regularly to maintain the concentration gradient; after 48 h, cells on the lower surfaces of the porous membranes were quantified by crystal violet staining and normalized to untreated controls (means ± standard deviation; unpaired, two-tailed Student's t-tests versus untreated controls).]
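Fold-change quantification against untreated controls, with unpaired two-tailed Student's t-tests as used throughout this study, can be sketched as follows; the optical-density readings are invented placeholders, not measured data:

import numpy as np
from scipy import stats

# Hypothetical crystal violet OD readings (placeholders only)
untreated = np.array([0.21, 0.19, 0.23])
hrg_treated = np.array([0.55, 0.61, 0.58])

fold_change = hrg_treated.mean() / untreated.mean()
t_stat, p_value = stats.ttest_ind(hrg_treated, untreated)  # unpaired, two-tailed
print(f"FC = {fold_change:.1f}, p = {p_value:.4f}")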
Since this invasive activity requires remodeling of ECM proteins, and AKT-associated invasion is marked by secretion of several proteases that are important in invasion and metastasis [62], we next investigated expression of MMP-2, MMP-9, the urokinase plasminogen activator receptor (PLAUR; uPAR) and its ligand (PLAU; uPA), and cathepsin B (CSTB) in the HRG-treated cells. Using qRT-PCR analysis we found that HRG induced cell line-dependent increases in expression of MMP2, PLAUR and PLAU, and a modest but significant decrease in expression of CSTB (Figure 6A). MMP9 was consistently induced in all three cell lines (Figure 6A). This was also evident at the protein level, with Western blot analysis confirming induction of MMP-9 protein in all three cell lines, and variable changes for the other proteolytic proteins (Figure 6B).
Ultimately we were interested in common HRG-induced changes in the secretion and extracellular activity of ECM proteases, and so we assessed the proteolytic activities of MMP-9 and MMP-2 in conditioned media from the HRG-treated cells by gelatin zymography (gelatin is a substrate for both enzymes). This experiment confirmed that HRG-mediated induction of MMP-9 was associated with activation of extracellular MMP-9 activity in cultures of all three cell lines, though the total amount was relatively lower in MDA-MB-361 compared to SKBr3 and MCF7 cells (Figure 6C). MMP-2 activity was induced in MDA-MB-361 and SKBr3 cells.

Others have previously reported that in both breast tumors and cell lines there is a negative association between the expression of MMP-9 and one of its natural inhibitors, RECK [30], a key breast cancer metastasis suppressor gene. Furthermore, RECK is repressed in brain metastases compared to primary breast cancers [63]. The KAI1 (CD82) metastasis suppressor has also been implicated in MMP-9 repression [64]. We therefore investigated expression of KAI1 and RECK by qRT-PCR and found that HRG treatment repressed both genes (Figure 6D), suggesting this could be one mechanism by which HRG increases extracellular MMP-9 activity. Interestingly, the two RECK-suppressed cell lines (MDA-MB-361 and SKBr3) are ERBB2-amplified, and RECK is known to functionally oppose oncogenic HER2 signaling by interfering with HER3 dimerization [65].
Finally, we investigated the expression and activity of extracellular cathepsin B, as this has been implicated in mediating the invasive behavior of HER2-positive breast cancer cells [66], and in activation of MMP-9 and infiltrative tumor cell growth in glioma [67-69]. There was no substantial change in cathepsin B expression following treatment with exogenous HRG (Figure 6E), but there was a significant increase in extracellular cathepsin B activity in all three breast cancer cell lines (Figure 6F). Others have shown that PI3K mediates cathepsin B secretion by lysosomal exocytosis [70], and therefore increased secretion of cathepsin B may be one mechanism by which HRG increases its extracellular activity in breast cancer cell lines.

[Figure 6 legend: serum-starved cells were treated with HRG for 48 h; expression of proteolytic cascade genes and proteins was analysed by qRT-PCR and Western blot (A, B; HPRT1 and α-tubulin as normalization and loading controls); secreted MMP-2 and MMP-9 activities were measured by gelatin zymography of concentrated conditioned media, with enzymatic activity proportional to the intensity of the white bands (C); RECK and KAI1 expression was analysed by qRT-PCR (D); cathepsin B protein was analysed by Western blot with β-actin as the loading control (E); and extracellular cathepsin B activity was assayed using a fluorometric enzyme activity assay (F).]
Exogenous HRG induces transmigration of breast cancer cell lines across a tight barrier of primary human brain microvascular endothelial cells
MMP-2 and MMP-9 have been associated with degradation of endothelial tight junction proteins, permeabilization of the blood-brain-barrier (BBB) and subsequent brain colonization in mouse models of leukemia [71]. Therefore we investigated whether active MMP isoforms in conditioned media from HRG-treated breast cancer cell lines are sufficient to stimulate transmigration across an endothelial barrier. We established an in vitro model of the BBB using primary human brain microvascular endothelial cells (HBMECs) and matrigel to simulate the brain endothelium and basement membrane, respectively (Figure 7A). The integrity of this barrier was validated by measuring dextran-FITC diffusion (Figure 7B), and by confirming strong induction of the tight junction proteins claudin-5, ZO-1 and occludin in the HBMEC layer (Figure 7C), as described [72].
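Barrier integrity in dextran-FITC diffusion assays of this kind is commonly expressed as an apparent permeability coefficient. The formula below is the standard one for transwell assays, but the parameter values are placeholders rather than numbers from this study:

def apparent_permeability(dq_dt, area_cm2, c0):
    """Papp = (dQ/dt) / (A * C0), in cm/s.
    dq_dt: tracer flux into the lower chamber (mol/s);
    area_cm2: insert membrane area (cm^2);
    c0: initial upper-chamber tracer concentration (mol/cm^3)."""
    return dq_dt / (area_cm2 * c0)

# Placeholder numbers: a tight HBMEC barrier should yield a low Papp
papp = apparent_permeability(dq_dt=1.0e-15, area_cm2=0.33, c0=2.5e-9)
print(f"Papp = {papp:.2e} cm/s")  # ~1e-6 cm/s for these placeholder inputs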
Treatment of co-cultured breast cancer cell lines with exogenous HRG (Figure 7A) caused active migration of all three cell lines across the tight HBMEC layer (Figure 7D). This response was attenuated upon inhibition of HER2, HER3 or MMP activity, assessed by supplementing the upper chambers with saturating doses of humanized monoclonal antibodies against HER2 and HER3 (Herceptin® and EV20, respectively) or the broad-spectrum MMP inhibitor, GM6001. Treatment with GM6001 did not completely abrogate the response, indicating that MMPs may not be the only transmigration mechanism activated by HRG-HER3-HER2 signaling. It is noteworthy that complete inhibition of transmigratory activity was only achieved through combined blockade of HER2 and HER3 (Herceptin® + EV20; Figure 7D, green lines).

Figure 6: Treatment of luminal breast cancer cell lines with exogenous HRG increases extracellular protease activity. (A, B) HRG increases expression of proteolytic cascade proteins. Serum-starved cells were treated with HRG for 48 h, then total RNA or protein were isolated from the cells for qRT-PCR and Western blot analyses respectively (HPRT1 and α-tubulin were used as normalization and loading controls, respectively). (C) HRG increases secreted MMP-2 and MMP-9 proteolytic activities. Starved cells were treated with HRG as above and then conditioned media was concentrated and analysed for MMP-2 and MMP-9 activity by gelatin zymography (enzymatic activity is proportional to the intensity of the white bands). (D) HRG represses expression of RECK and KAI1 metastasis suppressor genes. qRT-PCR analysis was performed as for (A). HRG treatment does not substantially alter cathepsin B protein expression (E), but increases extracellular cathepsin B proteolytic activity (F). Cathepsin B expression was analyzed by Western blot analysis as for (B), with β-actin as the loading control. Enzyme activity was assayed using a fluorometric enzyme activity assay. *p = 0.01-0.001, **p = 0.001-0.0001, ***p < 0.0001 (unpaired, 2-tailed Student's t-tests comparing treated to untreated control samples). FC, fold-change.
DISCUSSION
The development of brain metastases is a devastating complication that affects 10-30% of women with breast cancer [2], causing challenging neurological symptoms including headaches, cognitive impairment and seizures [73]. Current treatments can prolong life expectancy and improve quality-of-life, though ultimately they are not curative. Molecular targeted drug therapy is critically under-utilized in the clinic, partly because of deficiencies in our understanding of the molecular mechanisms involved in the seeding and subsequent proliferation of disseminated cells in the brain. Research in this area is now beginning to illuminate some of the mechanisms by which tumor cells exploit and remodel the local microenvironment to facilitate metastatic outgrowth [40][41][42][74][75][76].
[Displaced figure legend (cf. Figure 7D): (D) Breast cancer cell line transendothelial migration activity was measured in response to HRG ligand with and without drug treatments as indicated. Data shown are means ± standard deviation (n = 3 from a representative experiment). The statistical significance of differences between treatments and the untreated control was determined using unpaired, 2-tailed Student's t-tests (*p = 0.05-0.01, **p = 0.01-0.001 and ***p < 0.001). BCa, breast cancer; FC, fold-change; mAb, monoclonal antibody; MMPi, matrix metalloproteinase inhibitor.]
Heregulin is critical for the normal development and function of the nervous system. It is expressed by neurons, glia and brain microvascular endothelia, where its functions include promoting glial cell survival and differentiation, neural precursor cell differentiation and migration, and endothelial cell protection from oxidative injury [18][19][20]. Consistent with the idea that HRG is a brain growth factor exploited by metastatic cells, HER3 is induced and activated in brain metastases compared to matching breast and lung cancers [23,24] and patients with HER2-positive breast cancer are at high risk of brain relapse [4,6]. Moreover, primary breast cancers over-expressing HER3 are more likely to relapse as isolated brain metastases than non-HER3-overexpressing tumors [22].
In this study, we found that exposure to HRG stimulated the transendothelial migration of HER2/3-expressing breast cancer cell lines across a tight barrier of primary human brain microvascular endothelia and an associated matrigel layer, and that this was at least partly mediated by MMPs (Figure 7). Specifically, exposure to HRG increased the extracellular activity of MMP-9 in three cell lines, and MMP-2 in two of these lines (Figure 6). Other studies have implicated MMP-2 and -9 in the development of brain metastases [31][32][33][34], and the current study now suggests that this could be at least partly due to enhancing vascular permeability. Since HRG is expressed by brain microvascular endothelia [20], these data raise the possibility that HRG-HER3-HER2 signaling is involved in extravasation from the brain microvasculature in vivo, particularly since we also found that HRG increases breast cancer cell adhesion potential (Figure 4). Indeed, MMPs 2 and 9 can mediate vascular leakage in experimental models of cerebral ischemic injury by degrading endothelial tight junction complexes [77][78][79]. HRG-mediated vascular permeabilization could also be important in established metastases with increasing metabolic demands.

Adjuvant Herceptin® therapy for HER2-positive breast cancer delays the onset of brain metastases [10], and this latency is further extended by the HER2 dimerization blocker Perjeta® [8]. In this context, it is noteworthy that Herceptin® and the humanized HER3 antibody EV20 [80] conferred additive suppression of transmigration in our blood-brain-barrier experiments (Figure 7). There are likely to be multiple mechanisms enabling endothelial transmigration and consequent establishment of micrometastases in vivo (for example, we also found that exposure to exogenous HRG reduces expression of KAI1 and RECK metastasis suppressor genes); however, these in vitro experiments may provide some molecular insight into the aforementioned clinical observations.

The cell lines used in this study all migrated and invaded through extracellular matrix proteins towards an HRG chemotactic signal (Figure 5), concomitant with increased activity of extracellular proteases (Figure 6). Consistent with other reports [60,81], we observed cell proliferation in response to HRG exposure (Figure 3). Collectively, these data suggest that HRG-HER3-HER2 signaling could be involved in several aspects of brain metastasis development. In vivo experiments modeling these steps with inhibition of HER2-HER3 dimer function are required in the future.
This study has potential implications in translational oncology, and warrants further investigation into the possibility of targeting the HRG-HER3-HER2 axis for management of brain metastases from HER2-positive breast cancer.
MATERIALS AND METHODS

Breast cancer cell lines and HRG activation assay
The breast cancer cell lines used in this study were obtained from the American Type Culture Collection (ATCC). The panel of cell lines included three molecular subtypes previously defined by expression array profiling and unsupervised cluster analysis and/or surrogate immunohistochemical markers [44][45][46], and included MDA-MB-361, a commercially available breast cancer cell line that was derived from a metastatic brain tumor. All cell lines were authenticated by STR profiling (Cell ID™ system; Promega) and were routinely checked for mycoplasma infection (MycoAlert™; Lonza). Cell cultures were maintained at 37°C in 5% CO2 in a humidified incubator and cultured according to ATCC recommendations.
For heregulin (HRG) activation experiments, cells were routinely seeded at predetermined densities in regular culture medium, then switched from the recommended amount of fetal bovine serum (FBS) to 0.1% FBS (serum-starved conditions) for 24 hours. Cultures were then supplemented with 50 ng/mL HRG-β1 (Sigma) and cultured for either 30 min (signaling analysis) or 48 h (functional experiments, including assaying expression of downstream targets of HER2-HER3 signaling).

Analysis of gene expression by quantitative real-time reverse transcription-PCR (qRT-PCR)

Trizol (Invitrogen) was used to isolate total RNA from cultured cells. cDNA was prepared using 1 μg of RNA from each sample with the PrimeScript RT reagent kit (Takara Bio Inc.). qPCR was then performed in triplicate on a StepOnePlus instrument using SYBR green (Applied Biosystems) according to standard procedures (see Table 1 for PCR primers). Melt curve analysis was performed to verify amplification of single PCR products. Hypoxanthine phosphoribosyltransferase 1 (HPRT1) was amplified as a normalizer, and the fold change in expression of each target mRNA relative to HPRT1 was calculated according to the 2^-ΔΔCt relative expression formula [82]. HER3 qPCR was performed using the ERBB3 TaqMan® expression assay (Hs00951455_m1; Applied Biosystems).
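As a concrete illustration of the 2^-ΔΔCt calculation, the following minimal Python sketch (ours, not from the study; all Ct values are hypothetical) computes a fold-change from cycle-threshold means normalized to HPRT1:

# Minimal sketch of the 2^-ΔΔCt (Livak) relative expression calculation.
# All Ct values below are hypothetical triplicate means.
def ddct_fold_change(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Return the fold-change of the target mRNA, treated vs. control."""
    dct_treated = ct_target_treated - ct_ref_treated  # delta-Ct, treated
    dct_control = ct_target_control - ct_ref_control  # delta-Ct, control
    ddct = dct_treated - dct_control                  # delta-delta-Ct
    return 2 ** (-ddct)

fc = ddct_fold_change(24.1, 21.0, 26.3, 21.2)
print(f"fold-change = {fc:.2f}")  # 4.00 in this made-up example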
Western blot analysis
Total protein extracts were prepared in RIPA buffer (50 mM Tris-HCl, pH 8.0, 150 mM NaCl, 1.0% NP-40, 0.5% sodium deoxycholate and 0.1% SDS) containing fresh protease and phosphatase inhibitors (Thermo Scientific) for 30 min at 4°C. Thirty to fifty μg of lysate was resolved by SDS-PAGE, transferred to PVDF membrane (Immobilon-P, Millipore), then probed with primary antibodies (Table 2) followed by horseradish peroxidase (HRP)-conjugated secondary antibodies (Sigma). Two different HER3 antibodies were used to confirm the increase in HER3 protein levels 30 min after HRG treatment. Blots were probed with α-tubulin and β-actin as loading controls.
Cell proliferation assay
A microculture tetrazolium test (MTT) was performed to determine cell proliferation after treatment of the cells with recombinant HRG. Briefly, cells were plated onto 96-well plates at a density of 4×10^4/well for 24 h and then starved in 0.1% FBS for 24 h. The cells were then treated with 50 ng/mL of HRG for 24, 48 and 72 h. Untreated cells were used as the control group. 100 μL of MTT (0.5 mg/mL) (Sigma) was added to each well and the cultures were further incubated at 37°C for 2 h. After dissolving the precipitated formazan with 100 μL of dimethyl sulfoxide (DMSO), the optical density was measured at 570 nm.
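For illustration, a fold-change and its significance for such optical-density readings can be computed as below; the OD values are invented, and scipy's ttest_ind is used for the unpaired, two-tailed Student's t-test applied throughout the study.

# Illustrative analysis of MTT OD570 readings (hypothetical values only).
from statistics import mean
from scipy.stats import ttest_ind

od_untreated = [0.41, 0.39, 0.43]  # control wells (triplicate)
od_hrg = [0.62, 0.66, 0.59]        # HRG-treated wells (triplicate)

fold_change = mean(od_hrg) / mean(od_untreated)
t_stat, p_value = ttest_ind(od_hrg, od_untreated)  # two-sided by default
print(f"FC = {fold_change:.2f}, p = {p_value:.4f}")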
Cell adhesion assay
Adhesion experiments were conducted as described [14]. Cells were seeded into 6-well plates and, after 24 h, washed three times with PBS and starved in 0.1% FBS overnight. The starved cells were treated with 50 ng/mL HRG for 48 h and seeded on collagen I-coated 60 mm dishes (Biocoat Cell Environments; Becton Dickinson). After 15 min, cells were washed with cold PBS, stained with 0.5% crystal violet, lysed with 30% acetic acid and the optical density was measured at 590 nm.
Gelatin zymography
Gelatin zymography was carried out as described [83]. Briefly, conditioned media from HRG-treated and untreated cells was collected and centrifuged at high speed for 10 min to pellet cell debris. Protein from the conditioned media was then concentrated as appropriate using a Vacufuge ® plus (Eppendorf). Five μg of secreted protein were applied to 10% polyacrylamide gels copolymerized with 1 mg/mL gelatin (Sigma). After electrophoresis, gels were rinsed in 2.5% Triton X-100 (3×30 min) to remove SDS, followed by incubation at 37°C overnight in incubation buffer (0.15 M NaCl, 10 mM CaCl 2 , 0.02% NaN 3 in 50 mM Tris-HCl, pH 7.5). Gels were then stained (0.5% Coomassie Brilliant Blue) and destained with 7% methanol and 5% acetic acid. Areas of enzymatic activity appeared as clear bands over the dark background.
Cell migration and invasion assays
Cell migration was assayed in 24-well transwell plates of 6.5-mm internal diameter (8.0 μm pore size; Costar Corp.). Serum-starved cells were placed in the upper chambers, and the lower chamber was filled with media containing 0.1% FBS (control), with or without HRG supplementation (50 ng/mL). Media in both chambers was changed every 6 hours to maintain an HRG gradient. Cells were allowed to migrate for 48 h. After this time, cells on the upper surfaces of the filters were removed by wiping with a cotton swab, and migrated cells on the undersides of the filters were fixed with methanol, stained with crystal violet, lysed with 30% acetic acid and the optical density was measured at 590 nm. For invasion, experiments were essentially conducted as above, except that transwell filters were pre-coated with matrigel (1:10 dilution in media; BD Biosciences).
Cathepsin B activity assay
To investigate the effect of HRG on the activity of secreted cathepsin B, conditioned media from HRG-treated and untreated cells was centrifuged at 10,000 rpm for 15 min to remove cell debris, then concentrated appropriately using the Vacufuge® plus (Eppendorf). We used a fluorometric cathepsin B activity assay (Abcam) according to the manufacturer's instructions.
Blood-brain-barrier transendothelial migration assay
Primary human brain microvascular endothelial cells (HBMECs) and HBMEC culture reagents were purchased from Cell Systems, and cells were routinely cultured according to the supplier's recommendations. For transmigration assays, 2×10^4 HBMEC cells were seeded into matrigel-coated (BD Biosciences) 24-well transwell inserts (8 μm pores; Costar Corp.). The cells were maintained for 4 d to allow tight junction formation, then media from upper and lower chambers was changed to 1% FBS-CSC media plus CultureBoost for 24 h (as above). Serum-starved breast cancer cell lines (1×10^5 for MCF7 and SKBr3, or 2×10^5 for MDA-MB-361) were seeded into the upper chamber in 100 μL of their regular media containing 0.1% FBS, then allowed to attach for 2 h. Cultures were then treated with HRG (50 ng/mL; Sigma), GM6001 (20 μg/mL; Calbiochem), Herceptin® (20 μg/mL; Experimental Pharmacology Oncology, Berlin) and/or EV20 (20 μg/mL; Mediapharma) for 48 h. The upper surfaces of the transwell filters were then gently wiped clean with a cotton swab to remove non-migrating cells, and cells on the undersides were fixed with methanol, stained with crystal violet, lysed with 30% acetic acid and the optical density was measured at 590 nm.
Statistical analysis
Data are expressed as mean ± standard deviation (SD). All experiments were performed in triplicate. For statistical analysis, unpaired, two-tailed t-tests were applied. P values of less than 0.05 were considered significant. | 2015-09-23T00:31:53.000Z | 2015-01-09T00:00:00.000 | {
"year": 2015,
"sha1": "7fcb1a14b35beef7b8a5904cc23f19a162a601ec",
"oa_license": "CCBY",
"oa_url": "http://www.oncotarget.com/index.php?journal=oncotarget&op=download&page=article&path[]=2846&path[]=5817",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7fcb1a14b35beef7b8a5904cc23f19a162a601ec",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
250689858 | pes2o/s2orc | v3-fos-license | Studies of astrophysically interesting nucleus 23Al
We have studied the β-delayed proton decay of 23Al with a novel detector setup at the focal plane of the MARS separator at Texas A&M University to resolve existing controversies about the proton branching of the IAS in 23Mg and to determine the absolute proton branchings by combining our results with the latest βγ-decay data. We have also made a high-precision mass measurement of the ground state of 23Al to establish a more accurate proton separation energy of 23Al. Here, a description of the techniques used is given, along with preliminary results of the experiments.
Introduction
Classical novae are relatively common events in our galaxy, with a few detected per year. The present understanding is that novae occur in interacting binary systems where hydrogen-rich material accretes on a white dwarf from its low-mass main-sequence companion. At some point in the accretion, the hydrogen-rich matter compresses, leading to a thermonuclear runaway [1]. Understanding the dynamics of the nova outbursts and the nucleosynthesis fueling it is crucial in understanding the chemical evolution of the galaxy.
The key parameters in determining the astrophysical reaction rates dominated by resonant proton capture, as is the case with many sd-shell nuclei, are the energies and decay widths of the associated nuclear states. Accurate determination of these energies can be achieved through traditional decay spectroscopy, accompanied by high-precision mass measurements of the ground states using Penning trap mass spectrometry. One of the key reactions that possibly deplete 22Na produced in the so-called NeNa cycle, and for which the reaction rates are known only with large uncertainties, is the radiative proton capture 22Na(p,γ)23Mg [2,3]. This reaction rate is dominated by capture through low-energy proton resonances which correspond to excited states in the 23Mg nucleus above the proton separation threshold.

The relevant states in 23Mg can be studied via the β-decay of 23Al, populating the excited states of 23Mg that decay by both proton and γ-emission. Earlier works on the β-decay of 23Al show contradicting results for the lowest states above the 22Na+p threshold [4,5]. The scope of the present work is to resolve this controversy and to deduce the absolute proton branchings from the excited states of 23Mg by combining our data with existing decay data [3].
β-decay of 23Al
The β-decay of 23Al was studied at the Cyclotron Institute of Texas A&M University. In this experiment the 23Al beam was produced in the inverse-kinematics reaction 1H(24Mg, 23Al)2n by bombarding a hydrogen gas target with a 24Mg beam at 48 MeV/u. The recoil products were separated with the Momentum Achromat Recoil Separator (MARS) [6], resulting in a beam of 23Al with a typical intensity of 4000 pps and a purity of better than 95%. The beam was taken into the detector setup, illustrated in Fig. 1, consisting of a 65 µm thick Double-Sided Silicon Strip Detector (DSSSD) with 16+16 strips of 3 × 50 mm^2, a 1 mm thick Si-pad detector and a high-purity germanium detector (HPGe). The beam implantation depth was controlled by using a rotatable 300 µm Al degrader, allowing us to tune the beam into the center of the DSSSD. The beam was pulsed with an implantation period of 1 second and a decay period of 1 second. Data were collected only during the decay part of the cycle.
The particle detectors were calibrated online with beams of 20Na, 21Mg and 22Mg, and the germanium detector with 24Al. Both the DSSSD and the HPGe were gated with the β-spectrum from the Si-pad detector. As the DSSSD used had a fairly large pixel size, the β-response extends up to several hundred keV even with a pure source. One way to extract the meaningful data is to measure the actual β-response of the detector by using an implanted source that does not emit any other charged particles and to subtract this contribution from the data of interest. In this case the β-response was measured with 22Mg. The measured β-spectrum was smoothed to remove statistical fluctuations and then scaled so that it matched the 23Al spectrum at 150 keV. The resulting spectrum is illustrated in Fig. 2.

[Figure 2 caption: The background-reduced total decay energy spectrum from this work (black) compared with the proton spectrum from Ref. [5]. The old Jyväskylä data are multiplied by a factor of 20 to make them more visible (red).]

Even from the raw spectrum, it is clear that there is no anomalously large proton branch from the IAS in 23Mg, as reported in Ref. [4]. From the background-reduced spectrum, illustrated in Fig. 2, it is clear that our results agree with Ref. [5]. From these proton and γ-data one can extract the relative proton intensities and then, by using the existing absolute γ-branchings [3], one can assign the absolute proton branchings from the states studied.
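A minimal sketch of this background-subtraction procedure is given below; the binning, array names and smoothing window are our assumptions, and only the smooth-scale-subtract logic mirrors the text.

# Sketch of the beta-background subtraction described above (assumptions:
# identical binning for both histograms; a simple moving average as smoothing).
import numpy as np

def smooth(counts, window=5):
    kernel = np.ones(window) / window
    return np.convolve(counts, kernel, mode="same")

def subtract_beta_response(energies, counts_23al, counts_22mg, match_kev=150.0):
    beta = smooth(counts_22mg)                        # smoothed 22Mg beta-response
    i = int(np.argmin(np.abs(energies - match_kev)))  # bin closest to 150 keV
    scale = counts_23al[i] / beta[i]                  # match the spectra at 150 keV
    return counts_23al - scale * beta                 # background-reduced spectrum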
Mass of 23Al
A high-precision mass measurement of 23Al was conducted using the JYFLTRAP setup at the IGISOL facility in the Accelerator Laboratory of the University of Jyväskylä. The 23Al beam was produced using the Ion Guide Isotope Separator On-Line (IGISOL) method [7]. From the IGISOL facility, ions having the same A/q = 23 were sent into a gas-filled radiofrequency quadrupole (RFQ) cooler-buncher [8] to prepare the samples for injection into the JYFLTRAP Penning trap setup, consisting of two identical, cylindrical traps inside the same superconducting 7 T magnet [9]. The first trap is filled with low-pressure helium gas and works as a purification trap with a mass resolving power of a few ×10^5. The second trap is a precision trap where the mass of the ion is determined using the time-of-flight ion cyclotron resonance (TOF-ICR) method [10,11]. The absolute mass of the ion of interest is determined from the ratio of the measured cyclotron frequencies of the sample and a well-known reference case: for singly charged ions, the frequency ratio r = ν_c,ref/ν_c yields the atomic mass as m = r(m_ref - m_e) + m_e, where m_e is the electron mass. Figure 3 illustrates typical TOF resonance curves from this experiment. In this experiment 23Na was used as the reference for determining the masses of 23Al and 23Mg. The full analysis and the implications for the Isobaric Multiplet Mass Equation are discussed in a separate article [17]. One can calculate a new value for the 23Al proton separation energy, S_p(23Al) = 141.11(43) keV, by combining our result for the mass excess of 23Al, 6748.07(34) keV, with the mass excesses of 22Mg, -399.79(25) keV [18], and 1H, 7288.97050(11) keV [12].
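The quoted separation energy follows directly from these mass excesses via S_p = ME(22Mg) + ME(1H) - ME(23Al); the short sketch below reproduces the arithmetic, with the uncertainties added in quadrature (assuming independence):

# Reproducing S_p(23Al) from the quoted mass excesses (keV).
from math import sqrt

me_23al, d_23al = 6748.07, 0.34
me_22mg, d_22mg = -399.79, 0.25
me_1h, d_1h = 7288.97050, 0.00011

s_p = me_22mg + me_1h - me_23al
d_sp = sqrt(d_23al**2 + d_22mg**2 + d_1h**2)
print(f"S_p(23Al) = {s_p:.2f} +/- {d_sp:.2f} keV")
# 141.11 +/- 0.42 keV, consistent with the quoted 141.11(43) keV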
Conclusions and outlook
From the β-decay data, it is clear that there is no exceptionally large proton branching from the IAS in 23Mg, and thus no strong isospin mixing as proposed by Tighe et al. in Ref. [4]. Our decay data allow us to extract the proton branchings from the excited states of 23Mg that are close to the 22Na+p threshold and thus crucial to the 22Na(p,γ)23Mg reaction rate. The detailed analysis is to be finished and published in the near future. The same setup has been improved and optimized for further experiments, which include the β-decay of 31Cl and similar studies being planned.

[Displaced figure caption: …Mass Evaluation (AME03 [12]) value. MSU74 refers here to [13], MSU01 refers to [14] and JYFL08 is from the work presented in this paper. The AME value for 23Mg is a combination of two spectrometric studies [15,16].]

The S_p value resulting from our mass measurement is higher than the previous value, indicating a reduced halo nature. This new value influences the calculated astrophysical S-factor for the proton capture reaction 22Mg(p,γ)23Al and its corresponding reaction rate in stellar environments. It also shows 23Al to be more resilient to destruction through photodissociation, making this isotope a more important player in the reaction networks of explosive H-burning processes, such as novae and X-ray bursts.
"year": 2010,
"sha1": "28fed35ddc5b3d6c50c5714f9d8a2ddafa9b0d26",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/202/1/012010",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "28fed35ddc5b3d6c50c5714f9d8a2ddafa9b0d26",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
26321552 | pes2o/s2orc | v3-fos-license | Scania Swedish - A Basis for Multilingual Translation
This paper describes ongoing work on the definition and checking of a controlled language for technical text at Scania. The controlled language, Scania Swedish, will serve as a basis for machine translation into Scania's market languages. Machine translation will be handled by a modular, transfer-based MT system, Multra, taking Swedish as its source language. The analyser of Multra serves as the engine of ScanCheck, a language checker embodying a specification of Scania Swedish as the language is being defined. A first version of ScanCheck was installed for evaluation at Scania in October 1997. The current state of Scania Swedish and ScanCheck, and the operation of Multra, are briefly described.
INTRODUCTION
The documentation of truck and bus maintenance at Scania is extensive. In 1997 the production of text amounted to approx. 10,000 pages. To this should be added the already existing documentation, which consists of some 7,000 pages. The handling of documentation and translation processes involves both controlling the quality of new texts and a continuous updating of old texts.
The documentation is written in Swedish by technical writers at Scania and is currently translated in full into nine languages: English, German, Dutch, French, Italian, Spanish, Finnish, Polish and PortoBras (basically, Brazilian on a Portuguese foundation, developed at Scania). Parts are also translated into Norwegian and Danish. The major part of the texts is translated by an external translation office. Scania has decided to use Swedish, the mother tongue of the technical writers, as the source language in the translation process. In doing so, Scania believes that the quality of the translation is firmly grounded.

The quantity of source-language text steadily increases, as does the number of target languages. Furthermore, Scania provides the multilingual documentation simultaneously on the global market. This, combined with shorter production cycles, increases the demand for a consistent, comprehensive and controlled source language as a means to speed up the documentation and translation processes while meeting the demands of quality assurance. Efforts are being made; one of them is the development of a controlled language with a language checker, focusing on the translatability of the documentation language. Thus Scania Swedish aims to be a full-fledged language that translates easily and therefore enables easy localisation.
BASIC APPROACH
According to our approach, multilingual translation should be based on a controlled source language maintained by means of a language checker. The checker should fully cover the controlled language and guarantee a text in conformity with the specification of the controlled language. It should base its work on full parsing, generating grammatical structures that can be forwarded to the transfer and generation components. With this approach, the first, and heaviest, step of the translation process will be taken by the language checker, and there will be a firm ground for translation. We base the implementation of this approach on the Multra machine translation system.

DEFINING SCANIA SWEDISH

Scania Swedish will be defined with regard to vocabulary, phraseology, grammar, punctuation, and general writing conventions. It will be based on an examination of the unrestricted Swedish used in a corpus of maintenance text from 1995. The corpus comprises 80 documents (15,000 pages, 206,990 tokens). On this language, systematic restrictions will be imposed, which aim at eliminating unnecessary linguistic variation while keeping the required expressive power.
Vocabulary
The vocabulary of the corpus was analysed and 9,184 lemmas were identified and approved for the first version of Scania Swedish, see Almqvist and Sågvall Hein (1), Sågvall Hein (11). Among the approved words we find not only single words such as växellåda [gearbox] but also phrasal words such as Electronic Diesel Control, i förhållande till [in relation to], så gott som [almost], and ta bort [remove]. A dictionary of stems and indeclinable words and phrases covering this set of words was established, the Scania plus dictionary. 135 lemmas (e.g. AChäfte [AC-booklet], for which AC-häfte is the approved form) and 940 inflectional forms (e.g. medbringarn [the driver], for which medbringaren is the approved form) were not approved for Scania Swedish but referred to a dictionary of minus words, the Scania minus dictionary.
As new documents have been analysed, more words have been added to the Scania vocabulary. Currently, it comprises 13,273 lemmas, 9,483 of which are domain specific or Scania specific, whereas 3,790 belong to the general language. The minus dictionary comprises 364 lemmas, and 1,231 minus forms. 289 of the minus lemmas and 579 of the inflectional forms have recommendations for replacement. Among the minus lemmas there are a few (27) that are approved in certain contexts only. An example of such a word is bränslematartryck [fuel feed pressure]. It is approved in sentence fragments only. In full sentences matartryck för bränsle is recommended. Words of this kind are marked with an asterisk in the dictionary, and henceforth referred to as asterisk words.
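To make the plus/minus/asterisk distinctions concrete, a minimal word-level check might look as follows (a Python sketch; the tiny example lexicons are drawn from the words cited above and are not ScanCheck's actual data or implementation):

# Illustrative word-level check against plus/minus lexicons (not ScanCheck).
PLUS = {"växellåda", "medbringaren"}                         # approved words
MINUS = {"medbringarn": "medbringaren"}                      # minus word -> replacement
ASTERISK = {"bränslematartryck": "matartryck för bränsle"}   # fragment-only words

def check_word(word, in_full_sentence=True):
    w = word.lower()
    if w in MINUS:
        return f"minus word; use '{MINUS[w]}'"
    if w in ASTERISK:
        if in_full_sentence:
            return f"approved in sentence fragments only; use '{ASTERISK[w]}'"
        return "approved (sentence fragment)"
    if w in PLUS:
        return "approved"
    return "unknown word; to be registered and checked by an authorised person"

print(check_word("medbringarn"))  # minus word; use 'medbringaren'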
Phraseology and grammar
Phraseology is an important aspect of a controlled language. As mentioned above, indeclinable phrases, phrasal names and phrasal verbs are included in the Scania vocabulary. In addition to these phrasal types, we also have to care about valency. Currently, the verb valencies in the corpus are being systematically investigated. Concerning noun valencies, so far only one type of restriction has been imposed. It concerns the choice of preposition in post attributes in those cases where an error/violation may be foreseen and a replacement may be recommended. For instance, the preposition för 'for' is proposed as the preferred alternative to till 'to' in noun phrases such as specialverktyg till cylinderfoder 'special tool for cylinder liner'.
As a preparation for a systematic study of the grammatical structure in the corpus at sentence level, the text was segmented into sentences and sentence-like segments that are to function as translation segments in the translation process. The most typical translation segment is the sentence, as it can be distinguished in the text by means of punctuation and capital letters. However, headers (major and minor), list elements, list element labels, and table cells also have a fairly independent status in the text and should be treated as translation segments. In order to recognise them, we have to use typographic information in the documents. Consequently, software has been developed that converts the FrameMaker version of the documents into TEI Lite SGML, see Tjong Kim Sang (14). The SGML version of the documents is marked in such a way as to allow for the segmentation into sentences and sentence fragments. Based on this segmentation, statistics about sentence lengths in the corpus have been calculated.
In order to facilitate systematic studies of phraseology and other aspects of grammatical structure, the corpus is being tagged. The tagger assigns not only part of speech information to the words but also lemma information. Tagging is performed by means of a Brill tagger, see Brill (4,5), trained for Swedish and highly structured technical text. It bases its work on the Scania dictionary and thus has full coverage of the vocabulary.
In future work to define the grammar of Scania Swedish, it is obvious that some syntactic constructions or features will have to be restricted:

CONDITIONAL CLAUSES In Swedish the conditional subordinate clause either begins with the subjunction om 'if' (a), or it begins with the predicate verb followed by the subject, i.e. the word order is inverted (b). The a-type, with the overt conditional trigger, will be the preferred one.

TEMPORAL CLAUSES There are also two equivalent temporal conjunctions: då, när 'when'. när is clearly the most frequent one, and, furthermore, it is distinctly temporal, and will therefore be the preferred one. The conjunction då has both a temporal and a causal interpretation, which may cause inconsistencies.
Då kärnan och spolen inte är i mekanisk kontakt med varandra kommer det inte att bli någon mekanisk nötning mellan dessa två delar. [When / since the core and the coil are not in mechanical contact with each other, there will be no mechanical wear between these.]

ELLIPSES Ellipses often cause unnecessary uncertainty in a technical text. They are also difficult to handle in machine translation. The deletion of e.g. a coreferential NP will therefore not be allowed:

The Scania language checker, ScanCheck, should cover all the aspects characterising Scania Swedish. It must be capable of handling deviations from Scania Swedish at the lexical, the morphological, and the syntactic level, respectively. So far, we have a full specification of Scania Swedish only with regard to its vocabulary (incl. morphology, spelling, and abbreviation standard), and this specification has been built into the checker. Thus the checker has a dictionary of approved words, plus words, and a dictionary of non-approved words, minus words; most of the minus words have recommendations for replacement. The checker also has a morphological grammar that knows about approved and non-approved inflection. The coverage of the syntactic grammar is, so far, limited to the recognition of phrase constituents; the NP rules account for the detection of agreement errors and foreseen non-approved use of prepositions in post attributes. Errors found in the NP are propagated to the PP. It is a characteristic feature of the ScanCheck parser that its language description embodies both approved and unapproved language.
Architecture and Basic Operation
ScanCheck has two basic modules, a chart parser, Ucp, and an error reporting program, CheckChart, see Starbäck (12).
Ucp is a chart parser generating grammatical descriptions in terms of attribute value structures. It uses a procedural formalism, and rule invocation is triggered from the grammar and the dictionaries. The same formalism is used both in the dictionary and in the grammar. See further Carlsson (6), Dahllöf and Sågvall Hein (7), Sågvall Hein (9). This allows for the implementation of a flexible rule invocation strategy mixing top-down and bottom-up rule invocation.
Dictionary-search, morphological analysis, and syntactic analysis are handled in a common chart framework, and processing proceeds task by task. A unique start rule in the grammar specifies (for each application) what rules should be applied to get the process going. The inclusion of a dictionary search rule in the start rule will lead to the recognition of words and phrases. For instance, at the recognition of a nominal stem, a noun rule is triggered, which in its turn invokes an NP rule, if the morphological analysis of the noun succeeds. Basically, phrase constituents are invoked bottom-up and sentence rules are invoked top-down.
The Ucp parsing machinery is also used by the Multra analyser, the main difference being that the Multra analyser performs full parsing, whereas the ScanCheck parser has to rely on partial parsing, until the grammar of Scania Swedish has been fully defined. Partial parsing can be readily implemented in the Ucp framework due to the procedural nature of the Ucp formalism, and the option of specifying different start rules, see http://stp.ling.uu.se/~starback/checker.html [in Swedish].
The ScanCheck parser makes a partial analysis of the input, building as much structure as the grammar allows. Typically, it builds representations of words and phrase constituents, some of them correct, some of them with foreseen violations/errors. Word recognition is based on morphological analysis and the Scania stem dictionary with its plus words and minus words, and the morphological grammar accounts for the detection of non-approved inflectional forms. When an unknown string appears, i.e. a string that is not found in any of the dictionaries, the parser goes on to find the next word.
CheckChart checks the chart for errors and uncovered character sequences, and generates error messages to be presented to the human user.

;; An example of a gender error: in unrestricted Swedish, the word test and its compounds may
;; be used as neuter or non-neuter nouns. In Scania Swedish the neuter gender has been fixed.
;; The error is identified as an agreement error between the article (quantifier) and the noun.
…
WORD.CAT = NOUN)))))
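In the same spirit, though not in the Ucp formalism, a toy version of such an agreement check could be written as follows (the two-entry lexicons are illustrative only):

# Toy article-noun gender agreement check (illustrative; not Ucp/ScanCheck).
GENDER = {"test": "neuter", "växellåda": "non-neuter"}  # genders fixed in Scania Swedish
ARTICLES = {"ett": "neuter", "en": "non-neuter"}

def agreement_errors(tokens):
    errors = []
    for article, noun in zip(tokens, tokens[1:]):
        if (article in ARTICLES and noun in GENDER
                and ARTICLES[article] != GENDER[noun]):
            errors.append(f"agreement error: '{article} {noun}' "
                          f"({noun} is {GENDER[noun]} in Scania Swedish)")
    return errors

print(agreement_errors("gör en test av systemet".split()))
# ["agreement error: 'en test' (test is neuter in Scania Swedish)"]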
Integration and evaluation at Scania
A first version of ScanCheck was installed at Scania in October 1997 and it is currently being evaluated. It is developed for UNIX, but it will be transferred to NT for PCs. The programs are written in Common Lisp and Perl/Tk. The checker operates on SGML versions of the original FrameMaker documents.
Basically, it offers the technical writer assistance in two ways: specific word checks and language checking of a completed document. While writing, the writer may want to look up a specific word to check whether or not it is accepted in Scania Swedish. This decreases the risk of writing a complete document using the wrong terminology. After having completed a document, the writer should activate ScanCheck for a complete language check. The checker produces a protocol where the deviations from Scania Swedish are listed together with the proposed corrections. Nothing is corrected automatically in the original file, since we feel it is important that the technical writer make the final decision about the text. As illustrated above, the deviations are listed under the heading of the section where they were found in the original document. This makes it possible to trace them and make the corrections. In App. I we present the interface to the word control program and to ScanCheck.
To implement the use of a language checker in a production environment that is already described as technically complex and hectic, we feel it is vital that the writers have confidence in the linguistic competence of the checker. Therefore, we have opened up a discussion about linguistic matters in the internal "Language News", where decisions taken about Scania Swedish standards have been distributed. The general opinion seems to be that it is good not to have to hesitate about how to write abbreviations, when to use the hyphen, what word to use when having a choice, etc. Our overall intention is to support the writer in the process of writing without restricting his possibilities to use a natural and expressive language.
Updating the vocabulary

ScanCheck also includes a tool, DefineLex, for updating the vocabulary, see Tiedemann (13). It is important that the words that are not identified when the checker goes through a document are registered and checked by an authorised person. When it is decided that a word should belong to Scania Swedish, this function in ScanCheck makes it possible to define the word morphologically in agreement with the existing system. An illustration of the interface to DefineLex is given in App. I b.
MULTRA

Multra is a transfer-based machine-translation system with three main components: an analyser, a transfer component, and a generation component, see Sågvall Hein (10). In addition, there is a separate component ordering the analysis alternatives by preference before passing them on to the transfer component. Preferences are expressed by means of linguistic rules defined over feature structures. As regards the Multra analyser, see the description of ScanCheck above.
Transfer is implemented as unification of feature structures. Generation, in addition, involves concatenation. Also in the analysis, unification plays an important role. Thus we may say that Multra is a unification-based machine-translation system. Transfer rules are expressed in a PATR-like formalism, and there is no formal difference between lexical and structural transfer rules, see Beskow (3). Also for the formulation of syntactic generation rules, a PATR-like formalism was defined. Morphological generation rules are formulated in a PROLOG-like style.
Alternative transfer rules are applied according to specificity; a specific rule takes precedence over a general one. The specificity principle also governs the application of alternative generation rules. The linguistic preference rules along with the specificity principle of the transfer and generation processes constitute the Multra preference machinery. The MT system as a whole, as well as its constituent components, can be tuned to present the best alternative only, or the complete set of alternatives in the preferred order.
For the design and testing of translation rules, a special environment, Multra Developer's Tool (MDT), was developed, see Beskow (2). In this environment each component can be tested independently. In particular, MDT provides rich tracing facilities. In App. II we present an example of the operation of Multra, using the MDT interface. The sentence to be translated is Fyll på olja i växellådan. [Fill the gearbox with oil.] The example illustrates a case where a shift of argument structure takes place during translation (in accordance with the model translation that was found in the Scania multilingual database; as regards this database, see further http://strindberg.ling.uu.se/~corpora/scania/). In addition to the MDT interface, there is an interface for supervised translation of full texts or parts of them.
CONCLUSIONS
A full implementation of our approach implies that, once a source document has been produced and checked, the first and heaviest step in the machine translation process has been taken: the analysis step. As a result of the operation of the grammar checker, the text will be available not only as a text document in agreement with the specification of the controlled language, but also as a sequence of grammatical structures that can readily be forwarded to the transfer and generation components.
Defining transfer and generation rules for the target languages implies a standardisation of them too.
Multra provides an adequate basis for the implementation of our approach.
APPENDIX I a: The interface to ScanCheck
Comment. During the writing of a document individual words can be looked up in the lexical database by entering their main forms. In the illustration egentest is looked up, and is reported to exist in one particular dictionary domain.
When the document is ready the grammar checker is started from the File menu and the whole file is checked for lexical, morphological and syntactic errors. | 2017-08-03T21:15:07.773Z | 1997-01-01T00:00:00.000 | {
"year": 1997,
"sha1": "e5d4ab618ebf25208eef6e9ff66b292298e2d0bc",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "ACL",
"pdf_hash": "e5d4ab618ebf25208eef6e9ff66b292298e2d0bc",
"s2fieldsofstudy": [
"Linguistics"
],
"extfieldsofstudy": [
"Engineering"
]
} |
265850461 | pes2o/s2orc | v3-fos-license | Polymyalgia rheumatica: An update (Review)
Polymyalgia rheumatica (PMR) is a chronic inflammatory disease which affects the connective vascular tissue, characterized by pain accompanied by morning stiffness, predominantly of the neck muscles, hip and shoulder girdle. Usually, patients with this disease are >50 years of age and biological inflammatory syndrome is present with an increase in both the erythrocyte sedimentation rate and C-reactive protein levels, aspects similar to giant cell arteritis. The aim of the present review was to depict the current pathogenic hypothesis, diagnostic and treatment approach for patients with PMR, and novelties since the development of the currently used 2012 European League Against Rheumatism and American College of Rheumatology provisional classification criteria. PMR is a prevalent disease that can occasionally prove difficult to diagnose and treat. Possibly, the most abundant type of evidence and data revealed over the past decade have been acquired through musculoskeletal imaging, with implications in diagnosis, disease monitoring and relapse, prognosis and changes with treatment. Further research on pathophysiology is required to gain a deeper understanding of the underlying processes, which will serve as the foundation for future personalized treatments. In addition, there is an increasing demand for improved diagnostic techniques, which should include a further development of various imaging modalities, in order to provide accurate diagnosis and appropriate therapy.
Introduction
Polymyalgia rheumatica (PMR) is a chronic inflammatory disease which affects the connective vascular tissue, characterized by pain accompanied by morning stiffness, predominantly of the neck muscles, hip and shoulder girdle. The main characteristics included in the majority of definitions are pain and morning stiffness of the hip and shoulder girdle and/or the neck muscles, lasting for >30 min, with a disease onset of >1 month. Usually, patients are aged >50 years and the biological inflammatory syndrome is present, with an increase in both the erythrocyte sedimentation rate (ESR) and C-reactive protein (CRP) levels, aspects similar to giant cell arteritis (GCA) (1).

PMR predominantly affects the elderly, and the median age of disease onset is 73 years. The prevalence is estimated at 700/100,000 individuals aged >50 years. The incidence increases with age and varies depending on the geographical region, with an increased incidence observed in Scandinavian countries. The disease affects females 2-3-fold more frequently than males, as well as individuals of Caucasian ethnicity, as compared with Asian, Latin-American and African-American populations (2).

PMR is frequently associated with GCA, in ~30% of cases. From a clinical point of view, 40-60% of patients with GCA can present with symptoms of PMR at the time of diagnosis. PMR and GCA bear multiple similarities, including age at disease onset, an increased prevalence among females and geographical distribution, suggesting that these clinical entities may represent subtypes of the same pathology (3).
The aim of the present review was to depict the current pathogenic hypothesis, the diagnostic and treatment approaches for PMR patients, and novelties since the development of the currently used 2012 European League Against Rheumatism (EULAR) and American College of Rheumatology (ACR) provisional classification criteria.
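For orientation, the 2012 EULAR/ACR provisional criteria are commonly summarized as a point score applied to patients meeting three required criteria (age ≥50 years, bilateral shoulder aching, and abnormal CRP and/or ESR); the Python sketch below encodes that commonly cited summary as we understand it and should be verified against the original publication before any use.

# Sketch of the point score commonly used to summarize the 2012 EULAR/ACR
# provisional classification criteria; verify against the original paper.
# Applies only to patients meeting the required criteria (age >= 50 years,
# bilateral shoulder aching, abnormal CRP and/or ESR).
def pmr_score(stiffness_over_45min, hip_pain_or_limited_rom,
              rf_and_acpa_absent, no_other_joint_involvement,
              ultrasound_used=False, us_shoulder_plus_hip=False,
              us_both_shoulders=False):
    score = (2 * stiffness_over_45min + 1 * hip_pain_or_limited_rom
             + 2 * rf_and_acpa_absent + 1 * no_other_joint_involvement)
    if ultrasound_used:
        score += 1 * us_shoulder_plus_hip + 1 * us_both_shoulders
        return score, score >= 5  # cut-off when ultrasound is included
    return score, score >= 4      # cut-off without ultrasound

print(pmr_score(True, True, True, False))  # (5, True) in this hypothetical case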
Pathogenesis
To date, the etiology and pathogenesis of PMR are not clearly understood. This can be attributed to earlier studies, which were conducted in mixed cohorts presenting with both PMR and GCA, impeding the successful evaluation of the patterns involved in the pathogenesis of isolated PMR, as reviewed by Guggino et al (4).

The HLA-DRB1*04 allele is usually associated with PMR in conjunction with GCA. However, when assessing genotypes and susceptibility to PMR alone, the data presented in the literature are controversial. Salvarani et al (5) revealed a high incidence of HLA-DRB1*04 alleles in a cohort of patients of Italian descent with 'pure' PMR. Furthermore, Gonzalez-Gay et al (6) described an association of HLA-DRB1*04 with more severe disease activity and increased synovial inflammation in patients with PMR from a patient cohort of Spanish descent.

Since PMR is associated with inflammation of the bursae, the cytokines implicated in the inflammatory process may be responsible for some of the pathogenic traits of this disease. It has been suggested that PMR is associated with various TNF polymorphisms. Also, higher levels of interleukin (IL)-1, IL-6 and intercellular adhesion molecule-1 (ICAM-1) have been associated with an increased risk of disease development or increased disease severity in PMR (7).

The role of infectious and environmental factors has been postulated in PMR pathogenesis. Among the investigated infectious factors, Mycoplasma pneumoniae, parvovirus B19 and Chlamydia pneumoniae have been most frequently incriminated in the development of PMR (8)(9)(10). Additionally, Cimmino and Zaccaria (11) indicated that antibodies to adenovirus and respiratory syncytial virus may also trigger PMR, due to their high prevalence in the bloodstream of PMR patients.
Another factor involved in the development of PMR is the use of immune checkpoint inhibitors in cancer patients, due to their antagonizing effect on cytotoxic T lymphocyte-associated antigen-4 (CTLA-4) and programmed cell death protein 1 (PD-1). The therapeutics that have been implicated in the pathogenesis of PMR are ipilimumab, nivolumab and pembrolizumab (12).

The pathophysiology of PMR may entail an abnormal immune response, particularly one involving T cells. An increase in T-helper 17 (Th17) cells was observed in a group of individuals with PMR and/or GCA, and a concurrent decrease was also discovered in regulatory T cells. Also, an increase in memory-effector T cells was noted, revealing an alteration in T-cell subpopulations no longer expressing the co-stimulatory molecule (CD4+CD28- and CD8+CD28-). These subtypes of T cells are known to be increased in the elderly; however, as compared with sex- and age-matched controls, the observed levels of T cells were increased in those with PMR/GCA. Memory-effector T lymphocytes contribute to the pro-inflammatory cascade in PMR, due to their ability to produce interferon (IFN)-γ and tumor necrosis factor (TNF)-α in large quantities (13,14) (Fig. 1).
IL-17 has also been recently linked to PMR and GCA, due to the incited activation of Th17 responses. Additionally, there is a correlation between higher IL-6 levels and PMR disease activity. IL-6 inhibitors are presently being trialed for the treatment of PMR after demonstrating effectiveness in GCA (15).

van der Geest et al (16) also demonstrated a decrease in the numbers of B lymphocytes, which presented an inverse association with ESR, CRP and B-cell activating factor (BAFF) levels.

Pro-inflammatory cytokines could be markedly implicated in PMR pathogenesis as well. When comparing the symptomatic vastus lateralis and trapezius muscles of PMR patients with those of healthy individuals included as controls, higher interstitial concentrations of IL-1α, IL-1β, IL-1 receptor antagonist, IL-6, IL-8, TNF-α, and monocyte chemoattractant protein-1 have been detected in the PMR population. The etiology of the illness may thus be influenced by the elevated interstitial concentrations of pro-inflammatory cytokines in the affected muscles (17,18). The JAK/STAT signaling pathway has been studied in GCA and PMR. The inhibition of JAK1 and JAK2 may lead to the downregulation of the Th1 and Th17 pathways and also of IL-6 (19).
The clinical symptoms of PMR may be attributed to immune cell infiltration in the muscles and periarticular areas. Patients with PMR and GCA were demonstrated to present immune complexes in their muscles. PMR has also been associated with synovitis. In comparison with healthy individual controls, deltoid muscle biopsies from patients with PMR exhibited increased microvascularization. In the evaluation of arthroscopic samples used to study synovitis of the shoulder, it was revealed that only macrophages and T cells infiltrate the extracted fragments, whereas B cells, NK cells, or γ/δ T cells were not detected (20,21). A strong adhesion molecule expression, including vascular cell adhesion molecule (VCAM)-1 and ICAM-1, has been observed in PMR subjects with synovitis and may be important for the recruitment of several immune system components in PMR synovial infiltration (22).
Also, the process of endocrine senescence, which produces decreased levels of dehydroepiandrosterone and alterations of the hypothalamic-pituitary-gonadal axis with adrenal cortex insufficiency and decreased cortisol secretion in response to the inflammatory status, has been incriminated as an etiopathogenically important mechanism (23).

Deregulation of the immune system may result in a vicious cycle, with the immune system remaining activated and in a permanent state of inflammation, as is frequently observed in inflammatory autoimmune diseases. However, it should also be considered that chronic inflammation gradually causes dysregulation of the immune system (24).

Immune system dysregulation may lead to a higher risk of cancer occurrence in PMR patients. Nevertheless, the data in the literature regarding the development of cancer in PMR patients are controversial. A follow-up study in Sweden discovered a link between malignant diseases and PMR. Furthermore, some specific types of cancer, such as skin cancers and hematologic malignancies such as acute myeloid leukemia, multiple myeloma or myeloproliferative diseases, have been associated with PMR (25). A more recent study on 80 patients diagnosed with PMR, who were observed for >40 weeks and screened using positron emission tomography/computed tomography (PET/CT), revealed a higher prevalence of cancer in PMR patients in comparison with the general population (26). Also, cancer treatments such as immune checkpoint inhibitors may trigger PMR (27).
Clinical manifestations
PMR manifests in patients >50 years of age, leading to discomfort, a reduced range of motion and stiffness of the shoulder girdle, which is a fundamental clinical hallmark of PMR. Furthermore, neck, hip girdle and thigh symptoms may also occur. Patients also frequently complain about difficulties in movement, with the symptoms being bilateral in most cases (28).

In total, up to 40-50% of patients may exhibit constitutional symptoms, including low-grade fever, lethargy, asthenia, anorexia and weight loss. In some cases, the first sign of isolated PMR is a fever >38˚C (3).

The onset of symptoms is frequently unforeseen, typically occurring within a few days; however, in rare cases, symptoms may develop suddenly overnight. Aching and early morning stiffness lasting for >30 min are two of the most common symptoms occurring in the musculoskeletal areas that are involved in the inflammatory process. The symptoms of inflammatory pain and stiffness are often most aggravated in the morning, gradually improving during the course of the day, and then relapsing to their baseline level after the patient has rested or has been inactive (also known as 'gelling') for an extended period of time.
Hip girdle symptoms are described as pain in the groin area and lateral sides of the hip and often radiate to the posterior thigh region (29).
Performing tasks necessary for the activities of daily living, including dressing, combing hair, getting out of bed, or getting up from a chair, becomes challenging and is often coupled with debilitating pain. Nocturnal pain is also common, and patients frequently face difficulties in falling or staying asleep (30).

At disease onset, symptoms may be unilateral, rapidly becoming symmetrical and bilateral. During a physical examination, the active mobility of a patient, particularly concerning the abduction of the shoulders, may be restricted due to tenderness. In addition, there is no clinically apparent joint swelling. A passive range of motion facilitated by the examiner may, in certain cases, approximate a healthy phenotype. The discomfort in the shoulder is widespread, and it is not localized in specific shoulder structures (31). Typically, a painful limitation in the active range of motion of the neck and hips also occurs. Although muscular pain is present, it is not typical for the muscles to exhibit any weakness (32).

Other joint symptoms may also be present. Clinical signs of peripheral synovial inflammation can be observed in approximately 23-39% of patients. The arthritis is characterized by an asymmetrical presentation and a non-erosive character, mainly affecting the knees and wrists. Following the initially administered therapy with glucocorticoids (GCs), the symptoms appear to subside in the majority of patients. Inflammation of the periarticular structures, including tendons and bursae, may also be present in patients with peripheral synovitis. Tenosynovitis and bursitis can be evidenced by musculoskeletal ultrasound (MSUS) and other imaging techniques, including magnetic resonance imaging (MRI). It has been reported that ~15% of patients with PMR exhibit ultrasonographic evidence of carpal tunnel syndrome, and 3% of patients have been reported to exhibit distal tenosynovitis (33).

PMR may, in certain instances, manifest clinically as distal swelling and edema, which may be analogous to the symptoms experienced by individuals diagnosed with remitting seronegative symmetrical synovitis with pitting edema (RS3PE) syndrome (34).
Laboratory features
Laboratory analyses are non-specific. The increase in acute-phase reactants is dominant from a paraclinical point of view, with ESR values that vary from moderate to high, often >100 mm/h, and with <20% of patients presenting with values below 40 mm/h. By contrast, CRP levels are constantly increased, representing a reliable inflammatory monitoring marker, normal values being incompatible with the diagnosis of PMR (35,36). The study by Cantini et al (37), which evaluated 177 patients with PMR, revealed that 6% of the patients presented with normal ESR values at the time of diagnosis, while CRP levels were normal in only 1% of cases. Even in cases of relapse, ESR levels were normal in 68% of cases, whereas CRP levels were elevated in 62% of cases (37).
Blood count changes indicate an inflammatory biological profile with the presence of mild or moderate normocytic normochromic anemia, reactive leukocytosis or thrombocytosis (38).
Rheumatoid factors (RFs), anti-citrullinated protein antibodies and antinuclear antibodies are usually absent. However, a weak positivity of RFs must be considered in ~10% of the elderly population, without any clinical significance (39).
On occasion, anticardiolipin antibodies may be detected in increased titers as an independent predictive marker for the risk of vascular complications (40).
Indications of hepatic damage are often present, with increases in the levels of alkaline phosphatase, γ-glutamyl transpeptidase and 5'-nucleotidase, and occasionally, moderate increases in transaminase levels. Serum levels of creatine kinase and lactate dehydrogenase are within a normal range and exclude myositis-type involvement. IL-6 and von Willebrand factor levels are increased, with significant decreases following treatment administration (36,41).
The examination of the synovial fluid may reveal mild inflammation, including an increase in the total number of leukocytes to levels ≤20,000/mm³, 40-50% of which are polymorphonuclear (42). In addition, neuropeptides such as vasoactive intestinal peptide have been found in the synovial fluid of patients with PMR; these may contribute to the immunomodulation of synovial fluid inflammation, as well as to extra-articular manifestations such as cardiac rhythm dysregulation (43,44).
Imaging
Probably the most abundant data and evidence in the literature over the past decade have been acquired through musculoskeletal imaging, with applications in diagnosis, disease monitoring and relapse, prognosis and change with treatment. Currently, none of the various sets of classification criteria for PMR are fully validated in clinical practice. The simultaneous presence of inflammation in articular and periarticular structures of both shoulders, or in one shoulder and the hips, as identified by ultrasound, aided in improving the sensitivity and specificity of the clinical criteria in 2012, with the introduction of the ultrasound criteria. Multiple imaging techniques, each with advantages and disadvantages, from conventional radiology and scintigraphy to MSUS, MRI and 18F-FDG PET/CT, have improved the diagnosis of PMR and have made it possible to differentiate this particular pathology from other similar diseases, such as elderly-onset rheumatoid arthritis (RA; EORA), and to provide prompt therapeutic intervention (1,33).
Conventional radiology. The use of conventional radiology is considered outdated for the diagnosis of PMR. Due to the inflammatory features of the joint and periarticular structures characteristic of the disease, and to the non-erosive aspect of the arthritis, this method does not provide useful information. In this setting, it could be used only for differentiating PMR from other inflammatory, erosive or degenerative joint diseases or concomitant diseases. The latest guidelines of the British Society for Rheumatology and the British Health Professionals in Rheumatology for the management of PMR include a chest X-ray as the bare minimum for the establishment of the diagnosis, being useful for the exclusion of alternative conditions that may mimic the disease (45).
Scintigraphy. Advances in nuclear medicine imaging techniques over the past decade have surpassed the capabilities of conventional scintigraphy. The lack of high specificity of the method and the use of new nuclear medicine imaging modalities justify the absence of recent publications on this topic over the past decade. Gallium-67 scintigraphy reports in PMR demonstrate intense uptake in both shoulders (46). The high sensitivity of technetium pertechnetate scintigraphy was reported by O'Duffy et al (47) as early as 1976. That study reported that 24 out of 25 patients exhibited positive PMR characteristics with abnormal uptake in both shoulders, compared with the lack of PMR characteristics in 26 controls. Nevertheless, the lack of discriminative power currently justifies the absence of recent data regarding the use of the method.
Ultrasound. MSUS has recently become a preferred technique, mainly due to its capacity to visualize both articular and extra-articular synovial structures in a multi-planar and dynamic manner, with a relatively low cost and wide availability. Using standardized scanning techniques and defined ultrasound pathology, together with the addition of power- and colour-Doppler, MSUS has improved the ability to detect and assess inflammatory activity in PMR with excellent reliability. In addition, MSUS has been demonstrated to have high intraobserver (k=0.96) and interobserver (k=0.99) reproducibility (48).
Diagnostic accuracy. Several ultrasound studies performed in Europe on the detection of inflammatory lesions in PMR, mostly using B-mode and, to a lesser extent, power Doppler examination, have described bursitis of the subacromial/subdeltoid (SASD) bursae and tenosynovitis of the long head of the biceps tendon (LHBT) as the most frequent ultrasound abnormalities, ranging from 6.2 to 100% at the shoulder level, with a higher prevalence of SASD bursitis, and, less frequently, trochanteric bursitis and synovitis at the hip level (49,50). These data led to the inclusion of ultrasound criteria, for the first time in rheumatology, in the 2012 EULAR/ACR Provisional Classification Criteria for PMR, increasing the specificity of the clinical diagnosis to 81% (51). Subsequently, Macchioni et al (52) revealed that the addition of ultrasound to the clinical criteria increased the diagnostic performance from 81.5 to 91.3% in patients with PMR, when comparing PMR to other types of inflammatory arthritis, including RA. The diagnostic specificity in this case increased from 79.9 to 89.9% (Figs. 2 and 3). The images were obtained by examining a patient with PMR at the Emergency Clinical County Hospital of Craiova.
A recent study by Kobayashi et al (53) demonstrated that ultrasound of the shoulder and knee improves the accuracy of the 2012 EULAR/ACR Provisional Classification Criteria for PMR; however, this does not apply to the hip. The assessment of the hip joint by ultrasound is not a patient- or physician-friendly procedure and has limited sensitivity in the detection of abnormalities compared with MRI, whereas inflammatory knee lesions, particularly in tendons and ligaments besides bursae and synovia, are frequently detected in PMR using MRI and PET/CT. It was therefore concluded that bilateral involvement of the shoulder (LHBT, supraspinatus or subscapularis tendon) together with bilateral involvement of the knee [popliteus tendon (PopT) or medial or lateral collateral ligament] provided numerically increased sensitivity (90 vs. 87%), specificity (83 vs. 68%), positive predictive value (79 vs. 67%) and negative predictive value (92 vs. 87%) compared with the 2012 EULAR/ACR criteria without ultrasound. In the PMR-definite group, the dominant ultrasound lesions were tenosynovitis of the LHBT and of the PopT, with 85% of patients exhibiting both abnormalities (53).
In a 2015 systematic review by Mackie et al (54) regarding the accuracy of musculoskeletal imaging for the diagnosis of PMR, the use of ultrasound was associated with several strengths. It is worth mentioning that, in that review, only the ultrasound-related studies included control patients with other inflammatory diseases, allowing diagnostic accuracy to be estimated, in contrast to the MRI and PET/CT studies. Bilateral SASD bursitis had the most discriminative value for PMR diagnosis, with a specificity of 89% and a sensitivity of 66%, superior to glenohumeral synovitis, according to data from four ultrasound-related studies. The ultrasound detection of trochanteric bursitis demonstrated a sensitivity ranging from 21 to 100% (54).
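Since several of the studies cited above report sensitivity, specificity and predictive values, a brief illustration of how these diagnostic accuracy measures are derived from a 2x2 contingency table may be helpful. The sketch below is for illustration only; the counts are hypothetical, chosen merely to echo the figures quoted above.

```python
# Diagnostic accuracy measures derived from a 2x2 contingency table.
# The counts used in the example are hypothetical, for illustration only.

def diagnostic_accuracy(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Compute sensitivity, specificity, PPV and NPV as percentages."""
    return {
        "sensitivity": 100 * tp / (tp + fn),  # diseased patients correctly detected
        "specificity": 100 * tn / (tn + fp),  # non-diseased correctly excluded
        "ppv": 100 * tp / (tp + fp),          # positive predictive value
        "npv": 100 * tn / (tn + fn),          # negative predictive value
    }

# Hypothetical example: 66 of 100 patients with PMR show bilateral SASD bursitis
# (sensitivity 66%) and 89 of 100 controls do not (specificity 89%), echoing the
# values reported by Mackie et al (54).
print(diagnostic_accuracy(tp=66, fp=11, fn=34, tn=89))
```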
Ultrasound in PMR may be of particular assistance in establishing a positive diagnosis in cases with a normal ESR, as recorded in 7-22% of patients at the time of diagnosis (3). Manzo et al (55) suggested a 4-point guide on how to investigate a suspicion of PMR, with ultrasound being of real positive value when bilateral SASD bursitis, LHBT tenosynovitis or trochanteric bursitis are present.
Differential diagnosis. The role of ultrasound in the differential diagnosis of PMR is supported by several studies. When analyzing the diagnostic outcome in patients with polymyalgic symptoms, Falsetti et al (56) identified the most predictive ultrasound model for PMR. This model is represented by the presence of bilateral SASD bursitis, a low frequency of wrist, metacarpophalangeal and metatarsophalangeal effusion/synovitis, a low frequency of knee menisci chondrocalcinosis, tendinous calcaneal calcifications and Achilles enthesitis, and low power Doppler ultrasound (PDUS) scores at the wrist level.
Ruta et al (49) compared shoulder ultrasound abnormalities in patients with PMR and RA and detected bilateral SASD bursitis in 36% of patients with PMR and only in 3% of patients with RA, with a similar difference noted for LHBT tenosynovitis, which was observed in 30% of patients with PMR and was not observed in the RA control group.
Furthermore, the presence of moderate to severe proliferative synovitis of the shoulder bursae, particularly the subacromial bursa, is a key ultrasound feature for discriminating PMR-like onset EORA (pm-EORA) from PMR. In 2017, Suzuki et al (57) obtained higher gray-scale and power Doppler synovitis scores in patients with pm-EORA compared with those with PMR. The same authors further extended the comparison between pm-EORA and PMR by proposing a semi-quantitative PD scoring system for hyperemia of the subscapularis tendon, with good intraobserver and interobserver reproducibility, demonstrating that inflammation in PMR is predominantly localized in extrasynovial soft tissue or the shoulder bursa, as compared with pm-EORA (58).
In a recent study by Ottaviani et al (59), which analyzed 94 patients with polymyalgic syndrome, it was concluded that screening of the acromioclavicular (AC) joint may help distinguish PMR from calcium pyrophosphate deposition disease (CPPD), as patients with CPPD demonstrated humeral bone erosions, synovitis and CPPD of the AC joint more frequently, with a sensitivity of 85.2% and a specificity of 97.1%. By contrast, despite a low specificity, the most sensitive ultrasound features for PMR diagnosis were SASD bursitis (96.3%) and biceps tenosynovitis (85.2%).
Treatment efficacy. Consistent information to support the role of ultrasound in monitoring the response to treatment in PMR is still lacking. Jiménez-Palop et al (48) performed a prospective study in a cohort of 53 patients with PMR treated with corticosteroids, assessing as the main objective the ultrasound inflammatory changes at the shoulder and hip level. Their study concluded that ultrasound may be a useful additional tool for monitoring the response to corticosteroid treatment, as a significant decrease in the ultrasound inflammatory parameters was detected at week 4, and these parameters were more prone to change after 4 and 12 weeks of treatment than clinical and laboratory markers of disease activity (48). However, according to another study by Miceli et al (60) in 2017 on 66 patients with PMR who underwent ultrasound evaluation at baseline and after 12 months of GC therapy, the presence of subdeltoid bursitis and/or biceps tenosynovitis at baseline was not a predictive marker either for the GC response or for the requirement for an increased GC dose to maintain remission at 12 months. Nevertheless, in the prospective open-label outcomes and treatment regimens (TENOR) study, which included 18 patients with PMR treated with tocilizumab infusions without corticosteroids, ultrasound and MRI demonstrated notable improvements in inflammatory lesions. At week 12, ultrasound examinations showed that bursitis improved significantly in all four joints (P=0.029), although intra-articular effusions/synovitis exhibited less improvement (P=0.001). By the end of week 12, 37% of ultrasound-detected abnormalities had improved (61).
MRI. MRI has extensive applications in rheumatology, and its use in PMR is not an exception. Due to the accurate visualization of deep structures, including the spine, peripheral joints, tendons, bursae and periarticular tissue, several studies over the past decade have provided novel insight into the anatomical origin of inflammation in PMR, with emphasis on the extra-articular involvement of entheses, bursae or periarticular tissues.
Diagnostic accuracy. Several MRI studies have facilitated the diagnosis of PMR. In 2018, Fruth et al (62) investigated the presence of disease-specific patterns in 40 patients with PMR using contrast-enhanced MRI (ceMRI) of the pelvis. The predominant characteristic for all patients with PMR was the peritendinous enhancement of pelvic girdle tendons. All cases exhibited bilateral involvement of the common ischiocrural tendon, the gluteus medius and minimus tendons and the proximal rectus femoris origin, and 90% of cases exhibited enhancement of the adductor muscles at the inferior pubic bone. Therefore, the bilateral involvement of at least four extracapsular sites in the pelvic region, detected in patients with PMR using ceMRI, may be relevant for diagnostic purposes (62). In 2020, the same authors performed pelvic ceMRI in 40 patients with a confirmed diagnosis of PMR and in 80 healthy controls. That study confirmed a distinct pattern of extracapsular inflammation, including bilateral peritendinitis and pericapsulitis of the proximal origins of the rectus femoris and adductor longus muscles, characteristic of PMR, with significant diagnostic capability of the method, an excellent sensitivity of 95.8% and a specificity of 97.1% (63).
MRI has been proven to be useful for the diagnosis and identification of inflammatory sites that are difficult to evaluate, including the lumbar interspinous bursae in patients with PMR, as demonstrated by Salvarani et al (64). The authors of that study reported evidence of interspinous lumbar bursitis in 9/10 patients with PMR, and that lumbar pain may be supportive of predominantly extra-articular synovial involvement (64). Although the use of MRI aids in identifying additional areas of inflammation in the spine and pelvis, the number of controls with inflammatory disease was insufficient for precise specificity estimates, as demonstrated by Mackie et al (65) in a systematic review of the literature regarding the accuracy of musculoskeletal imaging for the diagnosis of PMR. Although MRI appears to be of particular interest in identifying deep structures with a limited acoustic window for ultrasound examination, including the spine and pelvis, its use may be limited by increased costs and limited availability, particularly for repeated evaluations in patients with symptom resolution following GC treatment, a limited area of imaging and a longer examination time, as well as limited access to whole-body MRI.
According to Mackie et al (65), whole-body MRI in PMR can instead identify a distinct subset of patients who are more likely to respond to GC therapy, according to the MRI pattern of extracapsular inflammation and high IL-6 and CRP levels. The same study was designed to distinguish PMR from RA according to the patterns of inflammation. In patients with PMR, extracapsular features of inflammation, including periacetabular inflammation without involvement of the hip joint, extending from the anterior hip capsule, medial to the gluteus muscle and lateral to the iliac bone, distinct from iliopectineal bursitis, may help distinguish between PMR and RA. Additionally, this is considered a predictor of the response to glucocorticoid therapy. In this particular subset of patients with PMR, the entheseal involvement resembled a seronegative spondyloarthropathy (65).
An MRI study by Cimmino et al (66) regarding hand involvement in PMR also demonstrated the prevalent inflammation of extra-articular structures, presenting with extensor and flexor tendon tenosynovitis rather than joint synovitis. Of note, the authors of that study did not identify an association between the clinical presentation and MRI, supporting the presence of extensor tenosynovitis as an epiphenomenon suggestive of subclinical disease (66).
Differential diagnosis. In support of the use of MRI in differential diagnosis, Ochi et al (67) evaluated the shoulder and hip joints in patients with PMR and RA. The MRI parameters analyzed were the thickness and abnormalities of the supraspinatus tendon, effusion around the glenohumeral joint, the subacromial-subdeltoid bursa and the biceps tendon in the shoulder, and effusion around the acetabulofemoral joint, iliopsoas bursa and trochanteric bursa in the hip (67). The supraspinatus tendon was significantly thicker in patients with PMR than in RA and control patients (P<0.05). Patients with PMR exhibited increased scores for effusions (joint, bursa and tendon sheath in the shoulder, and bursa in the hip), as well as more frequent periarticular soft tissue edema (P<0.05), compared with RA cases.
A recent article by Nakamura et al (68) analyzed whether gadolinium-enhanced MRI of the shoulders of patients with PMR could increase the diagnostic value and predict recurrence. Supporting the findings of extra-synovial involvement detected at the hip level by Ochi et al (67), MRI abnormalities, including capsulitis, rotator cuff tendinitis and focal bone edema in the shoulder, improved diagnostic accuracy in PMR with 76% sensitivity and 85% specificity. In addition, in patients with recurrence of the disease, rotator cuff tendinitis and synovial hypertrophy were predictive signs (68).
Treatment efficacy. In a previous study, the response to treatment with tocilizumab was evaluated in a post hoc MRI analysis of the data from the TENOR study, at baseline and following 2 and 12 weeks of treatment. Myofascial lesions of the shoulder and hip were characteristic of recent-onset PMR. Resolution of inflammatory lesions was observed at week 12 in 41.7% of the 103 muscle groups studied, while improvements were observed in 64.1% of the examined muscle groups (69).
PET/CT. PET/CT scans using an analogue of glucose known as 2-[fluorine-18]-fluoro-2-deoxy-D-glucose (18F-FDG PET/CT) are an imaging technique that uses a radioactive isotope, often implemented in the diagnosis and monitoring of oncological patients. However, other clinical applications beyond cancer diagnosis are currently being used in clinical practice, since FDG does not accumulate exclusively in malignant tissues. FDG also accumulates in inflamed areas of tissues, due to the elevated activity levels of cells involved in inflammation, including lymphocytes, neutrophils and macrophages (70). In 2018, Slart et al (71) established recommendations for the application of PET/CT in improving the diagnosis and monitoring of individuals with large vessel vasculitis (LVV), as well as PMR.
PET/CT can be used for the detection of mural inflammation and/or luminal changes in extra-cranial arteries to support the diagnosis of large-vessel GCA, as stated in the EULAR recommendations for the use of imaging in LVV. Even though it is not routinely used in PMR, PET/CT can also reveal PMR lesions that remain elusive or are difficult to detect when other techniques are used (72).
Numerous studies have been conducted in an effort to define a particular pattern of 18F-FDG uptake that may aid in the diagnostic process. Yuge et al (73) conducted a study on 60 individuals who were initially diagnosed with PMR, enthesitis, arthritis or myopathy. However, after applying the criteria established by the ACR/EULAR in 2012, the total number of patients diagnosed with PMR was limited to 16 individuals. In the final PMR group, the highest incidence of 18F-FDG uptake was detected in the glenohumeral and sternoclavicular joints (88%), followed by the spinous processes and greater trochanters, the ischial tuberosities and, lastly, the acromioclavicular joints, wrists and elbows. An enhanced 'Y-shaped' uptake along the interspinous bursae was a characteristic pattern for patients with PMR (73,74). In the study by Kaneko et al (75), which enrolled 20 patients with PMR, isotope accumulation was detected specifically in the proximal joint structures (glenohumeral, coxofemoral and sternoclavicular joints) and in the extra-articular synovial structures (greater trochanter, ischial tuberosity and the area anterolateral to the rim of the acetabulum). Furthermore, another study conducted by Rehak et al (76) discovered an accumulation of the isotope in the prepubic region in specific individuals. This finding was most likely the result of pectineus and adductor longus enthesitis. In addition, the authors of that study demonstrated that the areas with a high accumulation of the tracer revealed no uptake after PMR therapy (76). This supports the utilization of 18F-FDG PET/CT in the management of PMR, not only for diagnosis but also for the monitoring of treatment.
Sondag et al (77) demonstrated that considerable uptake in three or more sites in the joints, bursae or entheses (acromioclavicular, sternoclavicular and glenohumeral; ischial, trochanteric, iliopectineal and interspinous; and pubic symphysis, respectively) was related to the diagnosis of PMR with a sensitivity of 74%. This method also assists in the differential diagnosis of PMR and RA, particularly EORA (77).
In a previous study by Takahashi et al (78), a typical pattern for PMR and EORA was established. In patients with PMR, a high sensitivity (92.6%) and a high specificity (90%) were observed when three out of five characteristic regions exhibited either an increased or an absent 18F-FDG accumulation. An increase in uptake was detected in the ischial tuberosities, vertebral spinous processes, glenohumeral joints and iliopectineal bursae, and was not observed in the wrists (78).
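The decision rules described by Sondag et al (77) and Takahashi et al (78) lend themselves to a simple illustration. The sketch below is a schematic rendering of the Takahashi rule only ('three out of five characteristic regions'); the region names and the matching logic are simplified assumptions for illustration, and the operational definitions remain those of the cited study.

```python
# Schematic sketch of the Takahashi et al (78) rule: PMR is suggested when at
# least 3 of 5 characteristic regions show the expected 18F-FDG pattern
# (increased uptake in four regions, absent uptake in the wrists). The region
# names and aggregation below are simplifying assumptions for illustration.

EXPECTED_PATTERN = {
    "ischial_tuberosities": "increased",
    "vertebral_spinous_processes": "increased",
    "glenohumeral_joints": "increased",
    "iliopectineal_bursae": "increased",
    "wrists": "absent",
}

def pmr_fdg_rule(findings: dict, threshold: int = 3) -> bool:
    """Return True when at least `threshold` regions match the typical pattern."""
    matches = sum(findings.get(region) == expected
                  for region, expected in EXPECTED_PATTERN.items())
    return matches >= threshold

# Hypothetical patient: increased uptake at three sites, uptake present in wrists.
patient = {
    "ischial_tuberosities": "increased",
    "vertebral_spinous_processes": "increased",
    "glenohumeral_joints": "increased",
    "iliopectineal_bursae": "normal",
    "wrists": "increased",
}
print(pmr_fdg_rule(patient))  # True: 3 of 5 regions match
```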
Moreover, a retrospective study was conducted by Wendling et al (79) at a single center on patients diagnosed with PMR according to the criteria established by the ACR and EULAR in 2012. A control group of individuals who did not present with rheumatological symptoms, but were examined as part of neoplastic investigations, or patients with neoplastic disorders who were followed up, was also analyzed. A total of 201 cases were investigated, including 101 patients with PMR and 100 control individuals. Overall, PET muscle involvement was observed in 34% of patients with PMR, compared with 10% of the individuals in the control group. In total, 19, 14, 13 and six afflicted muscle sites were detected in the spinal region, the scapular girdle, the pelvic girdle and the thighs, respectively. On three occasions, fasciitis was also observed. In individuals diagnosed with PMR, age, CRP levels and the overall PMR PET score were not linked to muscle involvement detected by PET (79).
In conclusion, although PET/CT is not a routine investigation as this imaging method exposes patients to increased levels of radiation, PET/CT may prove to be a useful diagnostic and monitoring tool for patients with PMR.
Role of imaging in PMR.
The use of modern imaging techniques provides novel information regarding the anatomical and pathophysiological basis of PMR. Novel sites of inflammation were discovered with the use of MRI and PET/CT, as compared with the use of MSUS alone. Thus, in addition to SASD bursitis and biceps tenosynovitis, inflammation of the peritendon of muscle insertions at the hip and the interspinous bursae are findings that may aid clinicians in differentiating PMR from other elderly-onset inflammatory diseases.
Additional studies on larger patient cohorts are required; however, these imaging techniques may be valuable for the diagnosis and the monitoring of the response to treatment in patients with PMR.
Diagnosis
When common signs and symptoms, as well as increased levels of inflammatory markers, occur, the diagnosis of PMR is not a difficult process for a clinician with extensive knowledge in this field. However, there is a certain risk for less experienced clinicians to over- or underdiagnose PMR, particularly in situations involving illnesses that mimic PMR or in patients with numerous comorbidities, due to the absence of a diagnostic gold standard and the lack of specificity of the signs, symptoms and laboratory data associated with PMR.
Over the years, several classification criteria have been proposed for PMR, the latest being the 2012 European League Against Rheumatism and American College of Rheumatology provisional classification criteria (Table I) (80).
The required inclusion criteria are the following: an age ≥50 years, bilateral shoulder pain, and abnormal CRP and/or ESR levels. A score ≥4 strongly indicates PMR when MSUS is not used, whereas a score ≥5 indicates the presence of PMR when MSUS is used.
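The scoring logic of these criteria can be summarized in a short sketch. The item weights below follow the published 2012 EULAR/ACR criteria as presented in Table I, but the item names are simplified labels and the exact operational definitions are those of the original publication (80); the sketch is illustrative, not a clinical tool.

```python
# Schematic scoring sketch of the 2012 EULAR/ACR provisional classification
# criteria. Required inclusion criteria (age >=50 years, bilateral shoulder pain,
# abnormal CRP and/or ESR) are assumed to be met. Item weights follow the
# published criteria (Table I); item names are simplified labels.

SCORING = {
    "morning_stiffness_over_45_min": 2,
    "hip_pain_or_limited_range_of_motion": 1,
    "absence_of_rf_and_acpa": 2,
    "absence_of_other_joint_involvement": 1,
    # Ultrasound items, counted only when MSUS is performed:
    "us_one_shoulder_and_one_hip_with_findings": 1,
    "us_both_shoulders_with_findings": 1,
}

def classify_pmr(positive_items: set, with_msus: bool) -> bool:
    """Score >=4 suggests PMR without MSUS; score >=5 suggests PMR with MSUS."""
    score = 0
    for item, points in SCORING.items():
        if item.startswith("us_") and not with_msus:
            continue  # ultrasound items do not count without MSUS
        if item in positive_items:
            score += points
    return score >= (5 if with_msus else 4)

# Example: stiffness >45 min, seronegative, no other joint involved -> score 5.
print(classify_pmr({"morning_stiffness_over_45_min", "absence_of_rf_and_acpa",
                    "absence_of_other_joint_involvement"}, with_msus=False))  # True
```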
Prior to the development of the aforementioned criteria, four other research groups developed classification criteria for PMR, as follows: i) the Bird criteria in 1979; ii) the Jones and Hazleman criteria in 1981; iii) the Chuang and Hunder criteria in 1982; and iv) the Healey criteria in 1984 (Table II) (80-83).
Differential diagnosis
Conditions that afflict adults aged >50 years and are linked with bilateral shoulder pain should be considered in the differential diagnosis of PMR, since PMR is also a condition that causes discomfort in the neck and shoulders. This is important, considering the fact that there are no specific diagnostic tests for PMR. The misinterpretation of another disease as PMR may lead to inappropriate exposure to GCs for extended periods of time. Both rheumatic and non-rheumatic diseases should be included in the differential diagnosis. With the emergence of new diagnostic criteria and the use of MSUS, PMR is easier to detect, making the differential diagnosis less complicated (84).
Treatment
The treatment of PMR is currently based on the 2015 EULAR/ACR recommendations. There is no validated definition of remission and/or relapse for patients with PMR. However, the majority of definitions encountered in the literature comprise a combination of the absence or improvement of clinical symptoms/myalgias with ESR levels <20-40 mm/h and CRP levels <0.5-1 mg/dl. Regarding therapy, the patients should have discontinued GCs, or these should be administered at a reduced dose (87).
Thus, the use of GCs is recommended instead of non-steroidal anti-inflammatory drugs (NSAIDs) in patients with PMR, with the exception of the short-term use of NSAIDs and/or analgesics for the improvement of the symptoms of other associated pathologies, including coexisting osteoarthritis. According to the guidelines, a minimum effective dose of a prednisone equivalent ranging from 12.5 to 25 mg/day is recommended. Dose tapering should be individualized, according to the clinical and biological profile of each patient. The following dose-tapering principles are recommended (a schematic sketch of principle iii is shown below): i) initial tapering to ≤10 mg/day prednisone equivalent within 4-8 weeks; ii) for relapse therapy, the GC dose is increased to the dose administered prior to the relapse, followed by its gradual tapering within 4-8 weeks to the dose at which the relapse occurred; and iii) when tapering the dose in the case of remission, the prednisone dose should be decreased by 1 mg every 4 weeks until the discontinuation of therapy, as long as remission is maintained.
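The sketch below generates a schedule for principle iii). The starting dose of 10 mg/day and the 4-week step length are taken from the principles above, but the sketch is purely illustrative; tapering must be individualized and supervised clinically.

```python
# Minimal sketch of a tapering schedule following principle iii) above: once
# remission is reached at ~10 mg/day prednisone equivalent, reduce by 1 mg every
# 4 weeks until discontinuation, as long as remission is maintained. Purely
# illustrative; tapering decisions must be individualized in practice.

def tapering_schedule(start_dose_mg: float = 10.0,
                      step_mg: float = 1.0,
                      step_weeks: int = 4) -> list:
    """Return a list of (week, dose) pairs down to discontinuation."""
    schedule, dose, week = [], start_dose_mg, 0
    while dose > 0:
        schedule.append((week, dose))
        dose = max(0.0, dose - step_mg)
        week += step_weeks
    schedule.append((week, 0.0))  # discontinuation
    return schedule

for week, dose in tapering_schedule():
    print(f"week {week:3d}: {dose:.0f} mg/day")
# With these assumptions, discontinuation is reached at week 40.
```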
The administration of intramuscular methylprednisolone should be considered as an alternative to administering GCs orally; however, this decision remains at the discretion of the attending physician (88). Concerning recently diagnosed patients, Dejaco et al (88) compared the efficacy of the oral administration of prednisolone (initial dose of 15 mg/day, gradually reduced to 10 mg/day) with the administration of intramuscular methylprednisolone acetate (120 mg every 2 weeks for 12 weeks, followed by injections every month, with dose tapering by 20 mg every 3 months). The prednisolone dosage was gradually decreased to levels <10 mg per day at a rate of 1 mg every 8 weeks. Both courses of treatment successfully induced and maintained remission in the patients with PMR. However, oral prednisolone administration trended towards managing symptoms more rapidly and effectively than intramuscular injections of methylprednisolone (88).
Patients to whom oral prednisolone was administered received a larger cumulative GC dose and were prone to more GC-related adverse events than those who were administered injectable methylprednisolone; however, higher rates of stopping the medication were also observed (88).
A single dose of prednisone per day is recommended, except for cases in which nocturnal pain is severe following the reduction of the administered GC dose to <5 mg/day prednisone equivalent (88).
The early introduction of synthetic disease-modifying therapy with methotrexate (MTX) at doses of 7.5-10 mg/week is conditionally recommended, particularly in patients with an increased risk of relapse, as well as in cases with risk factors, comorbidities and/or concomitant treatments that predispose to adverse reactions in combination with GCs (78). In a 2020 study by Ruediger et al (89) conducted on 70 patients with PMR, of whom 31% were prescribed MTX in combination with GCs, MTX was associated with a reduction in the use of steroidal anti-inflammatory drugs and an improvement in the inflammatory biological profile. A multicenter randomized, double-blind, placebo-controlled trial performed by Caporali et al (90) on 72 patients newly diagnosed with PMR proved that the administration of 10 mg/week of MTX in combination with GCs, compared with GCs alone, was associated with the earlier cessation of prednisone therapy, rendering it useful in patients at a high risk from steroid use. Furthermore, an ongoing multicenter double-blind placebo-controlled clinical trial is currently being conducted by Marsman et al (91), aiming to evaluate the efficacy of the administration of 25 mg/week MTX in patients with PMR in an early disease phase.
Studies on other conventional synthetic immunosuppressive drugs are limited, usually based on small study groups or case series. Hydroxychloroquine and azathioprine have been tested in patients with PMR. The study by de Silva et al (92), involving 31 patients with PMR and/or GCA, tested the efficacy of azathioprine, suggesting that patients who received azathioprine required a reduced GC dosage. However, the majority of the patients fulfilled the criteria for GCA and the number of patients was limited (92); thus, further extensive studies are required in order to confirm the efficacy of azathioprine. Hydroxychloroquine was also tested in a retrospective study performed by Lee et al (93), demonstrating no benefits for patients with PMR.
The use of anti-TNFα biological therapy is not recommended for the treatment of PMR, as it has not proven to be beneficial to patients. The administration of tocilizumab, an antagonist of the IL-6 receptor, has been demonstrated to improve symptoms and attenuate the inflammatory syndrome in patients with PMR in several case series and retrospective studies. In the study performed by Lally et al (94) on 10 patients with PMR, with only 9 patients having been assessed at the time of the primary endpoint, it was concluded that tocilizumab may be an efficient, well-tolerated drug, with a good safety profile and a marked steroid-sparing effect. None of the patients presented with relapse without GC therapy at the primary endpoint (94). Overall, 20 patients with active PMR of recent onset were included in a prospective open-label study performed by Devauchelle-Pensec et al (95). These patients received three tocilizumab infusions at 4-week intervals, without receiving GC therapy, followed by the administration of oral prednisone. At the end of the 12th week, all of the patients reported a clinical improvement in their PMR symptoms (95). Furthermore, in a more recent randomized, double-blind, placebo-controlled trial on 101 patients with PMR, steroid-dependent patients were treated with tocilizumab. GC therapy was terminated by week 24 in 49% of patients in the tocilizumab group, compared with only 9% in the placebo group (96). In a case series presented by Mori and Koga (97), three patients presenting with GC-resistant PMR were administered tocilizumab in addition to GCs, with all of the patients achieving remission. A phase 2/3 randomized controlled trial on 36 patients with new-onset PMR conducted by Bonelli et al (98) proved that tocilizumab was superior to the placebo with respect to sustained GC-free remission, time to relapse and the cumulative GC dose. Of the 36 patients enrolled in that study, 19 received subcutaneous tocilizumab at doses of 162 mg per week, while 17 were administered the placebo. All the patients received prednisone doses tapered from 20 mg to 0 mg over the course of 11 weeks (98).
Limited research has been conducted on the administration of other biological therapeutics in individuals diagnosed with PMR. In a proof-of-concept, single-blind, three-arm study, 16 patients with PMR were administered either secukinumab or canakinumab, as a single dose of 3 mg/kg body weight, or oral prednisone at a dose of 20 mg per day (99). Patients were randomly assigned 1:1:1 to receive either secukinumab, canakinumab or GCs. Patients who were administered GCs demonstrated significant reductions in their levels of pain, whereas those who were treated with secukinumab and canakinumab only exhibited a slight improvement in their range of motion. On day 15, none of the patients receiving biological treatment and only one of the patients receiving GCs obtained a full response. In the secukinumab group, the treatment was replaced by GCs in 4 patients; a monthly GC dose that was 40% lower was then required, compared with individuals who had not been treated with biological therapeutics. The same also applied to 3 patients who were treated with canakinumab prior to changing the treatment to GCs. Overall, it was suggested that the application of these biological therapeutics in patients with PMR requires further investigation (99).
A prospective open-label 52-week pilot study investigated the efficacy of baricitinib, a JAK1 and JAK2 inhibitor, in treating relapsing forms of GCA (100). Baricitinib was well tolerated and, as a consequence, the majority of patients were able to terminate GC administration. It is probable that JAK inhibition may also be important for the treatment of PMR (100).
The BRIDGE-PMR trial, a double-blind, randomized, placebo-controlled, proof-of-concept trial, included 47 patients with PMR randomized 1:1 to a single intravenous infusion of 1,000 mg rituximab or the placebo (101). All the patients received a 17-week GC tapering scheme. That study revealed that rituximab in combination with GCs was more efficient than the placebo and GCs (101). In an extension of that study, the 47 patients included in the original study were followed up from 2019 to 2021, and it was shown that the patients treated with rituximab were in GC-free remission at 1 year after the infusion. Thus, rituximab may be considered a valid treatment option for PMR, although studies on larger groups of patients are required (102).
Sarilumab, a recently approved drug for the treatment of PMR, was studied in the SAPHYR trial, which compared sarilumab with a 14-week GC tapering scheme against the placebo with a 52-week GC tapering scheme. The arm treated with sarilumab demonstrated an improved clinical status compared with the GC arm (103).
There are several ongoing studies evaluating the efficacy of certain conventional synthetic and/or biological agents for the treatment of PMR (Table III). The website https://clinicaltrials.gov/ was used to search for the ongoing studies evaluating treatment options in PMR.
The optimization of the benefit-to-risk ratio of GCs, in order to achieve durable remission while minimizing the occurrence of side-effects, is an ongoing issue. Subsequently, the creation of novel GC preparations and/or GC receptor ligands may be able to improve the benefit-to-risk ratio of GCs. Accordingly, selective GC receptor agonists and modulators may be potential therapeutics targeted at selectively enhancing anti-inflammatory cellular pathways. As a consequence, the pathways responsible for the undesirable effects associated with these medications would not be activated (104).
Conclusions and future perspectives
Although the present review was a narrative one, which could be considered a limitation, it provides important insight into the new diagnostic techniques and treatment options for PMR. In conclusion, PMR is a prevalent disease that can occasionally pose marked diagnostic and therapeutic difficulties. Further research into its pathophysiology is required in order to further elucidate the underlying processes, which will serve as the foundation for future tailored treatments. In addition, there is a demand for improved techniques of diagnosis, which should include the further improvement of various imaging modalities, in order to assist in accurate diagnosis and appropriate therapy. Other potential therapeutic agents, including JAK inhibitors, have to be further evaluated in PMR. The outcome measures followed in the ongoing studies listed in Table III include: adverse events (safety and tolerability); cumulative dosages of glucocorticoids; ultrasound assessment of synovitis and tenosynovitis; levels of biological markers (interleukins, cytokines, immune cells); and quality of life according to the SF-36, HAD and EuroQol 5 dimensions scales.
Figure 1. Schematic diagram of the pathogenesis of polymyalgia rheumatica. APC, antigen presenting cell; Th cell, T helper cell; HLA, human leukocyte antigen.
Figure 2. Transverse (A) and longitudinal (B) scan in gray scale of the long head of the biceps tendon, demonstrating an anechoic moderate collection in the subacromial/subdeltoid bursa, with the presence of villonodular synovial proliferation, in a 79-year-old male patient (performed on a MyLabSix Ultrasound machine; Esaote SpA). mT, small tuberosity of the humerus; MT, big tuberosity of the humerus; CLTB, long head of the biceps tendon; CB, bicipital groove; *, collection; d, deltoid muscle; ↓, synovial proliferation.
Figure 3. Transverse (A) and longitudinal (B) scan in gray scale of the long head of the biceps tendon, illustrating a hypo/anechoic collection at the level of the long head of the biceps tendon in a 79-year-old male patient (performed on a MyLabSix Ultrasound machine; Esaote SpA). 1, humerus; 2, long head of the biceps brachialis tendon; 3, transverse humeral ligament; 4, deltoid muscle; *, hypo/anechoic collection.
Table I. EULAR/ACR 2012 provisional classification criteria for polymyalgia rheumatica. Adapted from Dasgupta et al, 2012 (51): provisional classification criteria for polymyalgia rheumatica, a European League Against Rheumatism/American College of Rheumatology collaborative initiative. EULAR, European League Against Rheumatism; ACR, American College of Rheumatology; RF, rheumatoid factor; ACPA, anti-citrullinated protein antibodies; MSUS, musculoskeletal ultrasound.
Table II. Classification criteria for PMR.
"year": 2023,
"sha1": "58da54881c2ad91982314a7df7d885991a9fcbe6",
"oa_license": "CCBYNCND",
"oa_url": "https://www.spandidos-publications.com/10.3892/etm.2023.12242/download",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "bd93191f92555db830be437b41ed4451869df814",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Perspective Chapter: Using Feed Additives to Eliminate Harmful Effects of Heat Stress in Broiler Nutrition
Global warming is one of the major challenges for mankind, with animal breeding one of the most affected sectors in the agricultural industry. High ambient temperatures negatively affect all domestic animals. While it is true that pork and dairy production suffer the consequences of heat waves, it is actually the poultry industry that is hit the hardest by the heat stress birds must endure in hotter weather. Consequently, we have a fundamental interest in reducing and/or eliminating the negative effects of climate change, i.e. prolonged high ambient temperatures. The aim of this chapter is to present the adverse effects of heat stress on energy metabolism, anti- and pro-oxidant capacity and production in birds. A further goal is to show how various feed additives (e.g. vitamins A, C and E, selenium, zinc, betaine, plant extracts, and probiotics) can reduce the negative effects of heat stress. Based on the large number of recent scientific findings, the following conclusions were drawn: Using fat in the diet (up to 5%) can reduce heat production in livestock. Vitamins (e.g. A, E and C) are capable of reacting with free radicals. Vitamin E, vitamin C, Zn, and Se supplementation improved antioxidant parameters. The antioxidant potential of vitamins and micro minerals is more effective when they are used in combination under heat stress in poultry nutrition. Plant extracts (e.g. oregano) can decrease the negative effects of heat stress on antioxidant enzyme activity due to their antioxidant constituents. Betaine reduces heat production in animals at high ambient temperatures. While acute heat stress induces a drop in feed intake, with the resulting increased nutrient demand leading to weight loss, if heat stress is prolonged, adaptation will occur. Probiotics and vitamins (C and E) seem to be the most effective means of reducing the negative effects of heat stress.
Introduction
Global warming is one of the major challenges for mankind, with animal breeding one of the most affected sectors in the agricultural industry. The impacts of increasing environmental temperatures on livestock will most likely differ from place to place, depending on latitude, geographical features and local farming systems [1][2][3].
High ambient temperatures negatively affect all domestic animals, but in addition to pork and dairy production, the poultry industry is perhaps hit the hardest. In 2020, the world's broiler meat production amounted to about 100.81 million metric tons, and was forecast to increase to about 101.02 million metric tons by 2021 [4]. According to FAO data [5], total egg production in the world was 1.528 billion units in 2018. In 2019, this figure reached 1.577 billion.
These statistics clearly show that broiler meat and egg production play a crucial role in the global supply of animal origin foodstuffs.
Thus, we have a fundamental interest in reducing and/or eliminating the negative effects of climate change, i.e. prolonged high ambient temperatures. The main question is, what tools do we have to reduce the harmful effects of high environmental temperatures, especially in the case of heat stress? Solutions for the prevention of heat stress in animals include biological tools (e.g. genetics, thermal conditioning, nutrition) [6,7] and housing technology devices (e.g. air conditioning, intensive ventilation, humidification) [8]. However, these housing methods are expensive and their service costs are high. Therefore, reducing the negative biochemical and physiological effects of heat stress with different nutritional tools is one of the primary interests for the economical production of food of animal origin.
According to Babinszky et al. [9], basically the following nutritional possibilities are available to eliminate the harmful effects of heat stress: • reduce the animal's own heat production (e.g. feeding more dietary fat); • compensate for the lower nutrient supply (e.g. feeding more concentrated diets); and • mitigate heat stress induced metabolic changes (e.g. using different feed additives: vitamins, micro minerals).
It should, however, be noted that during severe heat stress, these methods should be used in combination in order to maintain the production performance of the farm animals and the quality of their products [9]. While this chapter focuses on the third option, i.e. the use of feed additives, we would like to emphasize that whatever feeding method we use, we need to be aware of the changes in the intermediate metabolism of farm animals caused by heat stress, because without this knowledge, there is no effective defense against high ambient temperatures.
Therefore, the aim of this chapter is to summarize the adverse effects of heat stress on energy metabolism, anti- and pro-oxidant capacity, and production in birds. A further goal is to show how various feed additives (vitamins A, C and E, selenium, zinc, betaine, plant extracts, and probiotics) can reduce the negative effects of heat stress.
Methodology of the literature review
The methodology of the literature review was basically the same as the internationally applied methodology used in animal science. First, the relevant literature was searched. This was followed by an evaluation of the sources. The third step was identifying the database and the gaps in the published scientific findings, and then the outline structure was set up. Finally, the literature review was written.
The literature search was based on keywords, using the university database, our own departmental data collection in the research field of heat stress, and different international scientific databases of the life sciences and animal science, as well as Google Scholar.
For each of the studied papers or book chapters, we asked the same questions, for example: What was the aim and methodology of the particular publication (in this case: what kind of heat stress was applied, how many animals were included in the experiment per treatment, whether there were replicates, what dietary treatments (type of feed additives and their concentration in the diet) were used, what parameters were measured, what statistical analysis was applied, etc.)? Furthermore, were the experimental data correctly evaluated, what results were presented by the authors, and what main conclusions were drawn from the data?
To obtain clearer information on the effectiveness of various feed supplements with regard to production parameters: daily gain (g/d), average daily gain (g/d) and feed conversion ratio (kg diet/kg gain), the so-called mitigation capacity was calculated using formula (1), where HS = heat stress and TN = thermoneutral. All collected information (data) was placed in a large working database. This information formed the basis of the subchapter titles of our review chapter and of the chapter outline. Based on this information, the evaluation of research data from more than 90 publications started. The writing of the review chapter then began, including the drawing of the main conclusions. The investigated and systematized research findings are summarized in tables.
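Since the body of formula (1) is given in the original chapter rather than reproduced above, the sketch below assumes one plausible reading: the relative change of a production parameter measured under heat stress (HS) against its thermoneutral (TN) baseline. The formula in the code is an assumption for illustration, not necessarily the authors' exact Eq. (1), and the input values are hypothetical.

```python
# A minimal sketch of a mitigation-capacity style calculation. The formula below
# (relative change of a production parameter under heat stress vs. the
# thermoneutral baseline) is an assumed reading of Eq. (1), for illustration
# only; the input values are hypothetical.

def relative_change(hs_value: float, tn_value: float) -> float:
    """Relative change (%) of a parameter under HS compared with TN conditions."""
    return 100.0 * (hs_value - tn_value) / tn_value

# Hypothetical average daily gain (g/d): 52 g/d under HS vs. 60 g/d under TN.
print(f"{relative_change(52.0, 60.0):.1f}%")  # -13.3%: gain depressed by heat
```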
Heat production of animals and heat stress
It is well known that the heat production of animals is the sum total of the non-productive energy utilized by the animal and the energy lost in the course of the transformation of dietary nutrients [10]. Animals use this so-called non-productive energy for maintenance (i.e. to satisfy the energy requirement for the maintenance of body temperature, the functioning of the nervous system and the organs, minimal activity, etc.) [10]. The extra heat produced in the course of the digestion, excretion and metabolism of nutrients is called the heat increment. It is also well known that, within a certain range of ambient temperatures, and with unvarying feed and nutrient intake, the total heat production of the animal remains constant. This temperature range is called the thermoneutral zone. The general scheme of the relationship between ambient temperature and the heat production of livestock can be seen in Figure 1 [10].
In a thermoneutral environment, the heat production of the animal is at its minimum, and thus the dietary energy can be used efficiently for production (growth, egg and milk production) [9,10]. Therefore, whenever the daily amount of energy intake changes, the temperature range of the thermoneutral zone changes, too. So, if for some reason the animal leaves the thermoneutral zone, this results in increased heat production by the animal. This means that there is more loss of energy; in consequence, less energy remains for production and, moreover, the efficiency of energy utilization deteriorates, too. The upper and lower critical temperatures for poultry are summarized in Table 1 [11].
The general scheme of the relationship between broiler behavior and the increasing ambient temperature is shown in Figure 2 [12].
Figure 1. Relationship between ambient temperature and heat production of livestock [10].
Table 1. Recommended thermal conditions for poultry [11].
As can be seen in Figure 2, in the thermoneutral zone, birds can lose heat at a controlled rate using normal behavior [12]. Between the lower and upper critical temperatures, there is no heat stress and body temperature remains constant. If the environmental temperature exceeds the upper critical temperature, birds must lose heat actively by panting. However, it should be noted that panting is a normal response to heat and is not initially considered a welfare problem [12]. However, as temperatures increase, the rate of panting increases. If heat production is greater than the maximum heat loss, birds may die due to heat stress. In other words, heat stress occurs when the body cannot get rid of excess heat. It is well established that heat stress increases the energy cost of maintenance and adversely affects productive and reproductive performance. In a hot environment, the respiration rate in birds can increase 10-20 times, causing increased CO2 loss through the lungs [13]. This loss results in an increase in blood pH and can upset the acid-base balance, which can impair the health and performance of birds [14-16].
There are usually two types of heat stress: acute and chronic. Acute heat stress refers to a short and rapid increase in environmental temperature (a few hours), whereas under chronic heat stress, high temperatures persist for more extended periods (several days) [17]. Animals exposed to heat stress can use different ways to maintain thermoregulation and homeostasis. They can increase radiant, convective and evaporative heat loss by vasodilatation and perspiration [18]. However, birds have an extra mechanism which promotes heat exchange between their bodies and the environment: the air sacs. Air sacs are very useful especially during panting, as they promote air circulation over surfaces and, consequently, the evaporative loss of heat [19,20].
Unfortunately, there are only a few scientific papers that report on the heat production and heat loss of heat-exposed birds. Consequently, there is only a limited number of scientific publications that report on nutritional possibilities for reducing the heat production of birds under heat stress.
Syafwan et al. [21] concluded in their excellent review that the heat production of broilers is particularly high due to their high growth rate and high daily feed intake. Developments in the genetic selection of meat-type birds have led to rapid growth and a high metabolic rate, which is accompanied by a higher level of heat production due to increased feed intake [22]. Therefore, it can be stated that high genetic capacity hybrid broilers (so-called "improved chickens") are much more sensitive to a hot environment than their unimproved counterparts.
Summarizing the relevant scientific findings, it can be stated that in practical animal agriculture, and especially in factory farming, it is particularly difficult to keep animals in a thermoneutral zone. Therefore, in order to reduce the negative effect of heat stress, it is important to use nutritional tools in addition to technical devices.
Using fat in the diets
It is well known that if more fat is used in pig diets at high ambient temperatures, the total heat production of the animals is reduced significantly. Babinszky et al. [23] concluded from their study that lactating sows fed a high level of dietary fat (125 g fat/kg diet) produce significantly less heat than those fed a carbohydrate-rich (low-fat) diet. Babinszky [10] also concluded that the energetic efficiency of milk production was improved when sows received a high-fat diet (125 g/kg diet). This phenomenon can be explained by the fact that synthesizing milk fat from dietary fat is more efficient than synthesizing it from dietary carbohydrates.
In poultry nutrition, relatively limited literature data are available on fat feeding against heat stress and its effect on the heat production of birds. Das et al. [24], in an excellent review, stated that heat stress may be combated by adding fat to and reducing crude protein in poultry diets. Higher-energy diets were effective in partially mitigating the effects of heat stress in poultry. This can be explained by the fact that, during metabolism, fat produces a lower heat increment than protein and carbohydrates [25].
In other studies, it was concluded that the supplementation of fat in the poultry diet increases nutrient utilization in the gastrointestinal tract by lowering the rate of feed passage [26] and also helps increase the energy value of the other feed constituents [27,28]. Feeding a high-fat diet (up to 5%) to heat-exposed broilers reduces heat production. This occurs because the heat increment of fat is lower than that of either proteins or carbohydrates [21,25,29,30].
Using vitamin C in chicken diet to change energy metabolism
Because the animal body derives all its energy from oxidation, the magnitude of energy metabolism can be determined from the amount of carbon dioxide produced and oxygen consumed. The ratio of the volume of carbon dioxide produced to the volume of oxygen consumed is called the respiratory quotient (RQ) [31]. The respiratory quotients are: for protein, 0.809; for fat, 0.711; for starch, 1.000; for sugar, 1.000; and for glucose, 1.000 [32]. If the RQ value is equal to 1.00, this means that, e.g., burning 1 g of starch produces as much carbon dioxide as the oxygen needed to burn it (0.829 liter CO2/0.829 liter O2).
However, it should also be noted that RQ values significantly higher than 1 can be achieved if the animals convert carbohydrate to fat, since in this case oxygen-poor fat is formed from oxygen-rich glucose. During starvation, the RQ value is less than 0.7 [31].
As can be seen above, the RQ may provide valuable information about the metabolic processes in the body. Therefore, RQ values are very often determined in respiratory studies.
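As a brief illustration of the calculation, the sketch below computes the RQ from measured gas volumes and maps it to the nearest reference substrate using the values quoted above; the mapping is a rough heuristic and the gas volumes are example data.

```python
# Respiratory quotient: RQ = volume of CO2 produced / volume of O2 consumed.
# Reference values quoted above: fat ~0.711, protein ~0.809, carbohydrate ~1.000.
# The gas volumes used below are example data; the substrate mapping is a rough
# heuristic (mixed substrate oxidation yields intermediate RQ values).

def respiratory_quotient(co2_l: float, o2_l: float) -> float:
    return co2_l / o2_l

def nearest_substrate(rq: float) -> str:
    references = {"fat": 0.711, "protein": 0.809, "carbohydrate": 1.000}
    return min(references, key=lambda s: abs(references[s] - rq))

rq = respiratory_quotient(co2_l=0.829, o2_l=0.829)  # the starch example above
print(rq, nearest_substrate(rq))  # 1.0 carbohydrate
```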
McKee et al. [33] investigated the effect of vitamin C on different variables of the energy metabolism of young heat-exposed chickens in indirect calorimeters. The experiment started on day 9 and lasted until day 17 posthatch. In this study, CO2 production and O2 consumption were measured in the thermoneutral zone (27.7°C) and in a hot environment (34°C). On the basis of these values, RQ and heat production were calculated daily through day 17 of the experiment. The basal diet was supplemented with 150 mg/kg diet of ascorbic acid (vitamin C). They found that heat exposure lowered (P < 0.001) the respiratory quotient. Heat-exposed birds consuming the ascorbic acid supplemented diet expressed lower respiratory quotients than their unsupplemented counterparts. The authors concluded that this effect resulted from a nonsignificant increase in O2 consumption and decrease in CO2 production. They also concluded that further investigations are needed to determine whether the ascorbic acid-induced change in the RQ value towards 0.70 reflects an increase in protein or lipid catabolism, or both. In further studies, the effect of ascorbic acid on the energy metabolism and heat production of domestic animals should also be elucidated.
Despite the many open questions, based on the findings made by McKee et al. in their study, it seems that supplemental ascorbic acid may influence the body energy stores during periods of reduced energy intake (during heat stress).
Using betaine as a feed additive in the diet
The chemical structure of betaine (C5H11NO2) contains three methyl groups, which play a role in transmethylation reactions [34]. Betaine (trimethylglycine) is an intermediate metabolite in the catabolism of choline, which can modify osmolarity, act as a methyl donor, and has potential lipotropic effects [9]. As a by-product of sugar beet processing, betaine is commercially available as a feed additive [35]. Currently, betaine is available in several purified forms (anhydrous, monophosphate and hydrochloride betaine) [36].
Betaine mainly functions as an osmolyte and a methyl-group donor [37]. Under heat stress, betaine plays an important role in cellular osmotic regulation, preventing dehydration by increasing the water-holding capacity of cells, and it helps maintain protective osmolytic activity in birds. Betaine may also protect intestinal microbes against osmotic variations, which results in improved microbial fermentation activity [38]. Furthermore, betaine has been found to have anti-inflammatory properties and to improve intestinal function [39].
Because betaine influences fat and protein deposition, it can also be used to improve carcass quality and reduce fatty livers. Schrama et al. [40] showed that energy retention in pigs improves over time following the supplementation of betaine to the diet. They also found that under thermoneutral conditions, dietary betaine supplementation (1.23 g/kg diet) reduced the total heat production of pigs. They suggested the same concentration of betaine for poultry diets as well.
These scientific findings suggest that betaine may be suitable for reducing heat production in livestock (e.g. in poultry) at high ambient temperatures. However, only a few research results have been published in this area to date. Therefore, further studies are needed to determine the impact of betaine on heat production in animals under high ambient temperature.
Another problem is that many of the published papers are not clear on the source of the betaine used (natural, extracted betaine anhydrous or synthetic betaine hydrochloride). The source is important, as the efficacies of the different betaine sources likely differ.
Effects of heat stress on anti- and prooxidant status in birds.
Mitigation using different feed additives
Impacts of heat stress
Increased environmental temperature causes increased lipid peroxidation and induces the formation of malondialdehyde (MDA), an indicator of lipid peroxidation. As a result, the antioxidant defense system is altered [41][42][43].
According to the latest research, the elimination of free radicals is carried out by a three-level antioxidant system (Figure 3, based on Babinszky et al. [10]).
The first level (direct enzymatic pathway) includes the neutralization of oxygen- and nitrogen-centred free radicals by enzymes; it functions at the same time as the detoxification and regeneration pathways of the second level. The second level includes the detoxification and regeneration reactions of the small-molecule antioxidants. The third level is activated after damage has been done, when damaged systems (proteins, DNA) have to be repaired and/or removed from the cells by chaperones and DNA-repair enzymes.
In general, it can be concluded that a large amount of reactive oxygen species (ROS) causes disruption of mitochondrial function, increased lipid peroxidation, and a decreased concentration of the so-called antioxidant vitamins; furthermore, it induces stress gene expression, and finally it leads to dysfunction of antioxidant enzymes and causes DNA damage.
According to Yang et al. [44], in heat-stressed broilers (35°C for 3 h/day) the activity of the mitochondrial respiratory chain is reduced, which leads to overproduction of ROS. This situation results in lipid peroxidation and oxidative stress in the birds.
In another study [7], lipid peroxidation and superoxide dismutase (SOD) activity were measured in broilers under heat stress (32°C for 6 h/day). The results showed that high temperature disturbed the equilibrium between the synthesis and catabolism of ROS. Glutathione peroxidase (GPx) and SOD activity increased and catalase (CAT) activity decreased under heat stress (34°C for 5 h/day from d28 to d38) [45].
ROS production reduced vitamin A and E levels, and vitamin C concentration decreased under heat stress in poultry [46]. It has been reported that heat stress increases zinc (Zn) mobilization from tissues, and thus may cause marginal Zn deficiency and increase requirements [47]. According to Zeng et al. [48,49], SOD, MDA and CAT activity and the total antioxidant capacity (T-AOC) in Muscovy duck liver increased under short-term heat stress (39°C for 1 hour, then 3-hour recovery at 20°C). The same results were found in broilers [50]. During heat stress in broilers, the serum concentrations of vitamins C, E and A, iron (Fe), and Zn decreased, while the copper (Cu) concentration increased [51].
Vitamin supplementation
High environmental temperature decreases the concentrations of vitamins and micro minerals in serum and increases their excretion [52]; therefore, supplementation of direct or indirect antioxidant compounds (e.g. vitamins and micro nutrients) at higher levels is commonly recommended. These additives support defense mechanisms against lipid peroxidation and improve immune status and performance.
Vitamin E
Vitamin E functions as a fat-soluble antioxidant which protects cellular and membrane lipids from the peroxidation catalyzed by free radicals generated under heat stress. In cell membranes and lipoproteins, the essential antioxidant function of vitamin E is to trap peroxyl radicals (ROO•) and to break the chain reaction of lipid peroxidation. While it cannot prevent the formation of free radicals, it can reduce the formation of secondary radicals [53]. Vitamin E is known as the first line of defense against lipid peroxidation caused by heat stress: it has free-radical quenching activity and attacks free radicals at an early stage. When feed was supplemented with vitamin E (200-250 mg/kg feed), the serum concentrations of vitamins E and A increased and the MDA concentration decreased under long-term heat stress [51]. Maini et al. [54] reported that CAT, GR, GSH, MDA and SOD levels decreased under heat stress due to vitamin E supplementation. Short-term heat stress increased the concentration of Zn in serum when the diet was supplemented with vitamin E [55].
Vitamin C
Vitamin C protects against oxidative stress-induced cellular damage by scavenging ROS, and it is itself capable of inhibiting lipid peroxidation in plasma. Ascorbic acid can directly scavenge radicals in the aqueous compartment: ascorbate can scavenge O2•-, H2O2, •OH, hypochlorous acid, aqueous ROO•, and singlet oxygen. In its antioxidant activity, ascorbate undergoes a two-electron reduction [53]. Although chickens are known to synthesize ascorbic acid in the kidney, increased supplementation has proved beneficial in broilers reared under heat stress [59]. Ascorbic acid is actively absorbed; this active transport is supported by the sodium electrochemical gradient. However, the vitamin C requirement increases under heat stress. According to different studies, ascorbic acid supplementation (200 mg/kg feed) caused a significant increase in plasma ascorbic acid levels in broilers under heat stress [59,60]. This indicates that higher vitamin C concentrations in the broiler diet could be used successfully against heat stress.
Zinc
Zn is a "member" of the antioxidant network because it is a cofactor of a very important antioxidant enzyme, Cu/Zn-SOD. Zinc plays a role in suppressing free radicals and inhibiting lipid peroxidation and GSH depletion. Zn is necessary for the prevention of free radical formation; however, it does not act directly against free radicals [53]. Zinc supplementation has positive effects on the antioxidant status of birds [61][62][63]. Zinc may play an important role in suppressing free radicals because it works as a cofactor (Cu/Zn-SOD) and inhibits NADPH-dependent lipid peroxidation [64], thus improving antioxidant status: serum vitamin C and E concentrations increased [65] and MDA levels decreased [57,66] (Table 3, [56]).
Selenium
Organoselenium compounds are essential micronutrients and are required for cellular defense against oxidative stress and optimal immune function. Selenium is necessary for cellular function and is a component of antioxidant enzymes: it is an important part (cofactor) of GPx, which works as a key antioxidant enzyme protecting cells against free radical damage and oxidative stress [53]. Selenium supplementation improved antioxidant status in poultry under heat stress [51,55,58]. It is suggested that the metabolic role of Se is to protect cells against oxidation and tissue damage. Rapid oxidation of GSH to GSSG is necessary to compensate for the ROS production caused by heat stress, and Se supplementation increases the level of available NADPH to promote the activation of GR, leading to increased reduction of GSSG back to GSH [67]. Therefore, Se supplementation affected GPx activity and the GPx/GSH ratio (Table 3).
Results of studies with separate supplementation of vitamin A (9000-15000 IU/kg diet), vitamin E (150-500 mg/kg diet), vitamin C (150-500 mg/kg diet), Zn (30 or 60 mg/kg diet) and Se (0.1-1 mg/kg diet) show that antioxidant status improved in poultry under heat stress. Antioxidant potential has been reported to be more efficient when antioxidant nutrients are used in combination rather than singly [68]. The latest research shows that vitamin-vitamin and vitamin-mineral interactions in combined supplementation improve the antioxidant status and performance of poultry under heat stress more than the single nutrients do separately. Literature data on combinations of vitamin and mineral supplementation can be seen in Table 4 [56].
Plant extracts
Oregano (Origanum vulgare L.) is an herb used as an additive in poultry nutrition; dried oregano powder (0.5% and 1%) can be supplemented for ducks. It is an aromatic plant containing more than 30 phenolic antioxidant constituents and also shows anti-inflammatory and anti-microbial activity. It can also have beneficial effects on production, mortality, microflora, and the immune system [69]. Antioxidant enzyme activity (SOD, GPx) was improved in poultry [69]. These results suggest that the addition of dried oregano powder could decrease the changes in antioxidant enzymes under heat stress.
Probiotics
Probiotics are alternative growth promoters used in poultry nutrition. They can improve animal performance and help establish and maintain beneficial microflora in the gut. Several studies prove that probiotic supplementation of feed improves production parameters in poultry [70,71]. Supplementation of a probiotic (Bacillus subtilis: 1×10^8 CFU/kg feed) decreased MDA activity and uric acid concentration, and also improved the antioxidant response in ducks [72,73].
The effect of heat stress on the performance of broilers
After reviewing the relevant research, we identified three different types of heat stress (HS) that have been applied in experiments: acute, cyclic and chronic. In acute HS, the elevated temperature lasts from several hours up to 24 hours, after which sample and data collection occurs; this arrangement is suitable for studying the immediate effect of heat stress. In temperate countries, even in cooled stables, the actual barn temperature shows a daily cycle, which can be mimicked by a cyclic HS environment (4-10 hours per day, applied from three days per week to daily, for up to 10 days). In tropical countries, this kind of fluctuation is much less pronounced; there, the environmental conditions are best modeled with a chronic HS model (a continuous HS environment, usually during the second half of fattening) [74].
The most often claimed effect of heat stress is a reduction of feed intake. As an immediate effect, acute heat stress reduces feed intake by about 25% (Table 5).
Applying heat stress repeatedly but allowing regeneration at thermoneutral (TN) temperature (cyclic HS, simulating the day-night temperature fluctuation) results in adaptation, as during this period the smallest decline (between 5 and 15%) in performance data (feed intake, daily gain and feed conversion ratio) can be observed (Table 5). Chronic heat stress approximately doubles the negative effect compared to cyclic heat stress (7-11 percentage points), but some adaptation can still be seen compared to acute heat stress. Acute HS has a dramatic effect on daily gain, as even negative values (weight loss) can occur, which makes it impracticable to calculate the feed conversion ratio; therefore, researchers did not publish such data. The nutrient content of the unconsumed feed itself does not explain the negative energy balance; one can therefore assume that the energy and nutrient needs of the HS response are high. When adaptation can occur, cyclic HS has a less adverse effect than chronic HS on both daily gain and the feed conversion ratio (Table 5).
Mitigation capacity of various feed additives on HS in broilers
One long-term aim of researchers is to be able to mitigate the negative effects of heat stress. Supplementation with effective feed additives could be useful for improving intestinal absorption and minimizing the adverse effects of HS [92]. To obtain clearer information on the effectiveness of various feed supplements, Ortega and Szabó [93] suggested the calculation of mitigation capacity (see subsection 2 in the present chapter). However, these researchers [93] also point out that, contrary to the numerous publications in the field, only a limited number of studies are suitable for this calculation, as it requires at least three treatment groups. HS-mitigating supplements are usually vitamins with antioxidant capacity, probiotics and plant extracts (Tables 6-8).
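Since the mitigation-capacity calculation of Ortega and Szabó [93] is only referenced here, not reproduced, the following is a hedged Python sketch of one plausible form of it: the share of the heat-stress-induced performance loss recovered by an additive, computed from the three required treatment groups. Both the exact formula and the numbers are assumptions for illustration.

```python
def mitigation_pct(tn: float, hs: float, hs_plus_additive: float) -> float:
    """Share of the heat-stress-induced loss recovered by the additive.

    tn               -- performance at thermoneutral temperature
    hs               -- performance under heat stress, no additive
    hs_plus_additive -- performance under heat stress with the additive
    """
    loss = tn - hs                      # damage caused by heat stress
    recovered = hs_plus_additive - hs   # part of the damage regained
    return 100.0 * recovered / loss

# Illustrative daily-gain values (g/day); three treatment groups are
# required, as the authors point out.
print(f"{mitigation_pct(tn=60.0, hs=40.0, hs_plus_additive=53.0):.0f}% mitigation")
```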
It can be seen that, overall, the most difficult parameter to improve is feed intake (Table 7), as the feed additives studied have quite variable mitigation capacities; the overall mitigation percentage is only about 5-15%. Better values were obtained in the case of chronic heat stress compared to cyclic heat stress, but different feed additives were used. The highest improvement (above 85%) was achieved with a probiotic [88]. However, other probiotics were much less effective.
The adversely affected daily gain can be improved by about 30-35%, and the feed conversion ratio by up to 60-70%. Observing the mitigation capacity of different feed additives in this regard, it seems that probiotics and vitamins can be the most effective mitigators, especially when they are applied in combination [84]. However, further research is needed to determine the most effective microbe combination(s) and the most effective levels of vitamins, as well as their interactive effects. Only one study reported results with fumaric acid supplementation, which also seems promising, but more research is still needed [86]. Quite a few authors have tested feed supplements in combination, which is in line with feed industry trends. Therefore, we calculated the average mitigation percentage for combined and single applications. The data show that combined applications are more effective under cyclic heat stress conditions, while that benefit cannot be observed under chronic heat stress.
Conclusions
Based on the scientific findings presented in this chapter, the following important conclusions can be drawn:
• Using fat in the diet (up to 5%) can reduce heat production in livestock.
• Vitamins (e.g. A, E and C) are capable of reacting with free radicals, thereby reducing their amounts and lipid peroxidation in poultry. Micro minerals (e.g. Zn, Se), by contrast, are not directly capable of preventing or reducing ROS formation, but they are essential cofactors for the enzymes that react with free radicals.
• Vitamin E and vitamin C supplementation improved antioxidant parameters (CAT, GR, GSH, MDA, SOD) due to their essential antioxidant functions. Both Zn and Se also improve antioxidant parameters (GR, GSH, GPx, and MDA).
• Antioxidant potential of vitamins and micro minerals is more efficient in combination under heat stress in poultry nutrition.
• Plant extracts (e.g. oregano) could decrease the negative effects of heat stress on antioxidant enzyme activity due to their antioxidant constituents.
• Betaine reduces heat production in animals at high ambient temperatures.
• Acute heat stress induces a drop in feed intake, and the increased nutrient demand can even result in weight loss. However, if heat stress is prolonged, adaptation will occur.
• Probiotics and vitamins (C and E) seem to be the most effective means of reducing the negative effects of heat stress.
• Main conclusion for practice: different feed additives and supplementation strategies (single vs. combined) can be more or less effective in temperate and tropical countries. Therefore, in order to decide which feed additive to use, and in which form (single or combined) it is most effective, it is recommended that farmers carry out a pre-study under the given climatic and feeding conditions.
heat production and energy retention (Chapter 11.3 | 2021-11-26T16:46:45.450Z | 2021-11-15T00:00:00.000 | {
"year": 2021,
"sha1": "2af9e3e08964320460a4cb0678bb5903e69631e0",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.5772/intechopen.101030",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "9c6f1091e6e01f2fbed926de5063c2849d3aa7c3",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
2749648 | pes2o/s2orc | v3-fos-license | Accuracy of CT cerebral perfusion in predicting infarct in the emergency department: lesion characterization on CT perfusion based on commercially available software
This study aims to assess the diagnostic accuracy of a single vendor commercially available CT perfusion (CTP) software in predicting stroke. A retrospective analysis on patients presenting with stroke-like symptoms within 6 h with CTP and diffusion-weighted imaging (DWI) was performed. Lesion maps, which overlays areas of computer-detected abnormally elevated mean transit time (MTT) and decreased cerebral blood volume (CBV), were assessed from a commercially available software package and compared to qualitative interpretation of color maps. Using DWI as the gold standard, parameters of diagnostic accuracy were calculated. Point biserial correlation was performed to assess for relationship of lesion size to a true positive result. Sixty-five patients (41 females and 24 males, age range 22–92 years, mean 57) were included in the study. Twenty-two (34 %) had infarcts on DWI. Sensitivity (83 vs. 70 %), specificity (21 vs. 69 %), negative predictive value (77 vs. 84 %), and positive predictive value (29 vs. 50 %) for lesion maps were contrasted to qualitative interpretation of perfusion color maps, respectively. By using the lesion maps to exclude lesions detected qualitatively on color maps, specificity improved (80 %). Point biserial correlation for computer-generated lesions (R pb = 0.46, p < 0.0001) and lesions detected qualitatively (R pb = 0.32, p = 0.0016) demonstrated positive correlation between size and infarction. Seventy-three percent (p = 0.018) of lesions which demonstrated an increasing size from CBV, cerebral blood flow, to MTT/time to peak were true positive. Used in isolation, computer-generated lesion maps in CTP provide limited diagnostic utility in predicting infarct, due to their inherently low specificity. However, when used in conjunction with qualitative perfusion color map assessment, the lesion maps can help improve specificity.
revascularization in the setting of acute stroke [1,2] and are becoming more widely available in emergency settings [3,4]. Previous studies focused primarily on the use of CTP for thrombolytic treatment assessment by evaluating for the existence of an ischemic penumbra [5,6]. Only a few studies are available in the literature assessing the accuracy of CTP in diagnosing an acute infarct [7,8], and none with quantitative lesion analysis for the purposes of diagnostic accuracy. Previous papers comparing CTP to diffusion-weighted imaging (DWI) only included evaluation of the core infarct and surrounding potentially salvageable ischemic penumbra in patients with known middle cerebral artery (MCA) infarcts or dense hemispheric stroke symptoms, leading to study populations including only acute stroke patients [9,10]. To our knowledge, only one recent study in the literature assesses the practical use of CTP as a diagnostic study among patients who presented to the ED with stroke-like symptoms [4].
It has been shown that addition of CTP and CTA to NCCT does not adversely increase the time to tPA treatment for acute stroke patients in the ED [11]. However, before widespread utilization of this technique is feasible, further studies are needed in assessing the practical use of CTP with commercially available software as a diagnostic tool for patients presenting with stroke-like symptoms. Rapid assessment provided by simple postprocessing of automated lesion maps generated from the software will help in timely management for potential thrombolytic therapy.
Our study assesses the diagnostic accuracy of CTP, using a single vendor's commercially available software, in predicting infarct by correlation with follow-up DWI. Individual lesion characteristics such as ischemia, infarct, or mixed (containing both infarct and ischemia), as well as lesion size, will be assessed for correlation with DWI abnormality. The diagnostic accuracy of the lesion maps generated by the software will be compared to a qualitative evaluation of the four major perfusion parameters: cerebral blood volume (CBV), cerebral blood flow (CBF), mean transit time (MTT), and time to peak (TTP).
Materials and methods
This HIPAA-compliant study was approved by the Indiana University-Purdue University at Indianapolis Institutional Review Board with waiver of informed consent.
Patient selection
At our institution, a primary stroke center, an acute stroke workup for patients presenting within 6 h of symptom onset consists of a NCCT followed by CTP and CTA of the head and neck. A retrospective analysis was performed to identify patients who had received an acute stroke workup and a follow-up MRI/DWI within 6 h of the initial CTP from August 2008 to August 2010. Because of a lack of a standardized protocol for MR imaging, the time between the initial CTP and follow-up MRI varied widely among patients. A 6-h interval was arbitrarily chosen for this study to decrease the chance of an interval development of acute infarction between exams while ensuring adequate sample size. Our exclusion criteria included patients who received intravascular thrombolysis between CTP and DWI, presence of intracranial arteriovenous shunting, and inadequate CTP due to technical difficulties (e.g. excessive motion, suboptimal bolus timing, and insufficient post-processing).
CTP parameters
All studies were performed on a 64-slice CT scanner (Philips Brilliance 64, Philips Healthcare, Andover, MA). Eight contiguous slices at 5 mm thickness, for a total of 40 mm of coverage, were obtained with cine scanning over a total of 40 s. Iodinated non-ionic contrast material of 40 ml (Isovue-370; Bracco Diagnostics, Princeton, NJ) was injected intravenously, and signal-to-time intensity curves were used to calculate CBV. Determination of ischemic lesions (MTT > 7 s or 145% of the normal contralateral side) and infarct (CBV < 2.0 ml/100 g) has been previously described in the literature [12]. Post-processing parameters and technical factors, including region of interest placement, the resulting signal-to-time intensity curves, midline selection, and excessive patient motion, were reviewed by a neuroradiologist to assess for a satisfactory CTP study. Computer-generated lesion maps were reconstructed at 5 mm slice thickness.
[Fig. 2 caption: a Infarct lesions (red, small arrow) representing areas of elevated MTT and reduced CBV. b Mixed lesions contained both areas of ischemia and infarct, the so-called "infarct core and ischemic penumbra" (large arrow).]
[Fig. 3 caption: Four contiguous axial images demonstrating a large region of infarct core with ischemic penumbra. The areas of these contiguous lesions are summated into one larger lesion.]
CTP analysis
Computer-generated lesion maps were evaluated for consensus by two neuroradiologists [7 years (CH) and 2 years (SH) of experience] blinded to color maps and DWI results. Analysis was performed on anatomic images with computer-generated superimposed colored lesions which met criteria for ischemia (elevated MTT with normal CBV, green) and infarct (elevated MTT with decreased CBV, red). Inclusion criteria for the computer-generated lesions were as follows: greater than 10 mm² in area, lesions corresponding to brain parenchyma, and lesions not within areas of beam hardening adjacent to bone or chronic infarct on NCCT. All lesions not meeting these criteria were considered artifact and excluded (Fig. 1). Lesions were divided into three categories by perfusion parameters: ischemia, infarct, and mixed (containing both ischemic and infarct lesions) (Fig. 2). Lesion size was also measured by cross-sectional area. Contiguous lesions which persisted on the axial slice above or below another lesion and within the same vascular territory were summated into one larger lesion (Fig. 3). Volume was calculated by multiplying cross-sectional area by slice thickness to obtain milliliter volume units. CTP studies without any computer-generated lesions that met inclusion criteria were considered a negative study.
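As a minimal sketch of the lesion definitions used here (ischemia = elevated MTT with normal CBV; infarct = elevated MTT with decreased CBV) and of the area-times-slice-thickness volume rule, the following Python fragment applies the thresholds quoted in the Methods to synthetic parameter maps. The arrays and pixel size are placeholders, not vendor output.

```python
import numpy as np

def classify_voxels(mtt, cbv, mtt_contra):
    """Return 0 = normal, 1 = ischemia (green), 2 = infarct (red)."""
    # Thresholds from the Methods: MTT > 7 s or > 145 % of the
    # contralateral side; infarct additionally has CBV < 2.0 ml/100 g.
    elevated_mtt = (mtt > 7.0) | (mtt > 1.45 * mtt_contra)
    low_cbv = cbv < 2.0
    labels = np.zeros(mtt.shape, dtype=np.uint8)
    labels[elevated_mtt & ~low_cbv] = 1   # ischemic penumbra
    labels[elevated_mtt & low_cbv] = 2    # infarct core
    return labels

def lesion_volume_ml(labels, pixel_area_mm2, slice_thickness_mm=5.0):
    """Cross-sectional area x slice thickness, converted mm^3 -> ml."""
    area_mm2 = np.count_nonzero(labels) * pixel_area_mm2
    return area_mm2 * slice_thickness_mm / 1000.0

# Tiny synthetic example (MTT in seconds, CBV in ml/100 g).
mtt = np.array([[6.0, 9.0], [12.0, 5.0]])
cbv = np.array([[3.0, 3.2], [1.5, 2.8]])
labels = classify_voxels(mtt, cbv, mtt_contra=5.5)
print(labels, lesion_volume_ml(labels, pixel_area_mm2=0.25))
```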
Separate qualitative color map analysis of the four different parameters was performed at a later time, blinded to both the computer-generated lesion maps and the DWI results. The four perfusion maps were displayed in a 16-scale color map. Observers relied on symmetry as well as a perceived change from the surrounding cortex (Fig. 4). When a perfusion deficit was suspected, correlation with source images was performed to exclude deficits caused by beam-hardening artifact or chronic infarct. If the perfusion deficit was included, the cross-sectional area of the lesion was measured and summated with contiguous perfusion deficits in the same vascular territory, using the same method as described for computer-generated lesions. Volume conversion was also calculated.
DWI and statistical analysis
DWI and apparent diffusion coefficient (ADC) maps were assessed for areas of restricted diffusion, defined as areas hyperintense on DWI and hypointense on the ADC map relative to adjacent normal brain parenchyma. Areas of restricted diffusion were then compared with the initial CTP study for anatomic correlation. To be considered a true positive (TP) lesion for acute infarct on the CTP, a lesion had to have an anatomically corresponding area of restricted diffusion, regardless of the size of the lesion on DWI. A false positive (FP) lesion had no corresponding restricted diffusion. A false negative (FN) lesion was a lesion with restricted diffusion on the DWI that was not identified on the initial CTP.
A true negative (TN) study was one that had no CTP lesions by inclusion criteria and no areas of restricted diffusion on DWI. Studies with areas of restricted diffusion outside of the corresponding imaged anatomy on CTP were included in the statistical calculations as FN studies. A TP study was any CTP study with a TP lesion, even if there were other coexisting FP lesions on the same study. CTP studies with both FP and FN lesions were counted as FP studies. This methodology was selected due to the increased likelihood of treatment with a positive result on CTP whether true or false.
Sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were calculated for each CTP study as a whole. PPV for each category of lesion was also calculated. For each lesion, an x value of 1 was assigned for TP lesions and 0 for FP lesions. The point biserial correlation coefficient was calculated to determine the relationship between the size of the lesion and agreement with DWI.
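The study-level metrics and the point biserial statistic can be reproduced in a few lines. The sketch below uses the computer-generated lesion-map counts reported in the Results (TP = 15, TN = 10, FP = 33, FN = 3); the lesion outcome and size vectors passed to pointbiserialr are placeholders, not study data.

```python
from scipy.stats import pointbiserialr

def diagnostic_accuracy(tp, tn, fp, fn):
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Computer-generated lesion-map counts from this study's Results.
print(diagnostic_accuracy(tp=15, tn=10, fp=33, fn=3))
# sensitivity ~0.83 and specificity ~0.23 follow from these raw counts

# Point biserial: x = 1 for TP lesions, 0 for FP lesions, against size.
x = [1, 1, 0, 0, 1, 0]                        # placeholder outcomes
size_ml = [40.0, 120.0, 2.1, 5.5, 86.0, 1.3]  # placeholder volumes
r_pb, p = pointbiserialr(x, size_ml)
print(f"R_pb = {r_pb:.2f}, p = {p:.3f}")
```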
For the qualitative color map analysis, the study as a whole was counted as TP if any of the four color maps demonstrated a perfusion deficit correlating with DWI, even if other FP or FN lesions were present on the same study. Again, infarcts on DWI not included in the scanned anatomy on CTP were counted as FN. For studies with both FP and FN areas, the study was counted as a FP. TP, FP, TN, and FN results were also assessed for each individual parameter of CBV, CBF, MTT, and TTP for each lesion, as a few studies were determined to have more than one lesion which were not in the same vascular territory. Point biserial correlation was also performed for lesion size for all color maps and separately for each of the four perfusion parameters.
Finally, the lesions perceived on the qualitative analysis of the color maps were compared to the computer-generated maps to assess whether a computer-generated lesion was present in the same anatomical region. Qualitative lesions without a corresponding computer-generated lesion were considered negative areas, and adjusted diagnostic accuracy was calculated for the study and for each of the four perfusion parameters.
Results
A total of 73 patients were identified meeting inclusion criteria. Seven patients were excluded (three patients for
Computer-generated lesion maps
There were 15 TP studies, 10 TN, 33 FP, and 3 FN. Four studies had both FP and FN lesions and were counted as FP studies for statistical analysis. Only one study had an area of restricted diffusion that was not anatomically included on the initial CTP study; this study also had FP lesions and was therefore counted as a FP study. Parameters of diagnostic accuracy are summarized in Table 2.
Of the abnormal CTPs, there were 215 separate lesions (0.7-861 ml, mean 25 ml). Lesion categorization and PPV results are summarized below (Table 3). The point biserial coefficient was 0.46 (p < 0.0001), demonstrating a positive correlation of increasing size with the likelihood of a true positive lesion (Fig. 5, Table 5). Diagnostic accuracy for the qualitative color map analysis is summarized in Table 2. There were five studies with two noncontiguous lesions, resulting in n = 70 for the calculation of diagnostic accuracy for the four perfusion parameters; the results are summarized below (Table 4). Point biserial correlation demonstrated a positive correlation of increasing size with increasing likelihood of a true positive lesion for all detected lesions as well as for the four individual perfusion parameters; however, this was only statistically significant when calculated for all lesions regardless of the individual perfusion parameter (Table 5, Fig. 6). Interestingly, all detected lesions had at least an MTT and TTP abnormality, and these two perfusion parameters demonstrated similar cross-sectional areas. In those lesions with at least a CBF abnormality, the pattern of increasing lesion sizes from CBV, CBF, to MTT/TTP was associated with a PPV of 73% (Fisher exact test p = 0.018) (N = 15, TP = 11, FP = 4). By contrast, all four lesions that did not have this increasing pattern were FP lesions. Lesions with only MTT/TTP abnormalities had a 27% PPV (N = 11, TP = 3, FP = 8). Five lesions that were detected qualitatively on perfusion analysis were not detected by the computer-generated lesion map; none of these were TP results. By weighing the negative results of the automated lesion maps over the positive qualitative interpretation and considering these studies as negative results, four became TN and one FN, because that study failed to detect an area which was positive on the follow-up DWI. Recalculation of the diagnostic accuracy of the study as a whole, as well as for each of the four perfusion parameters, after correcting FP to TN for these five lesions resulted in an increase in specificity and PPV (Tables 2 and 4).
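The 2 × 2 test behind the 73% PPV figure can be reconstructed directly from the counts given above (pattern present: TP = 11, FP = 4; pattern absent: TP = 0, FP = 4). A minimal check with scipy reproduces the reported p = 0.018.

```python
from scipy.stats import fisher_exact

# Rows: lesions with vs. without the increasing CBV < CBF < MTT/TTP
# size pattern; columns: TP, FP (counts from the text above).
table = [[11, 4],   # pattern present
         [0, 4]]    # pattern absent
odds_ratio, p = fisher_exact(table)
print(f"p = {p:.3f}")  # ~0.018, matching the reported value
```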
Qualitative analysis had two FN results which were detected as TP lesions by the computer-generated lesion maps, while none of the seven FN areas on the computer-generated lesion maps could be detected by qualitative evaluation. Four of these had other FP lesions. By contrast, of the nine FN areas on qualitative analysis, only three had positive lesions elsewhere. Only one FN was outside of the covered anatomy. Of the eight FN areas which were covered by CTP, all were small lacunar or cortical infarcts with a mean volume of 7.7 ml (range 0.85-34.4 ml).
Discussion
Based on our study, the evaluation of computer-generated lesion maps from a single vendor for CTP for predicting acute infarct is a sensitive diagnostic tool but is limited by a lack of specificity and positive predictive value. Categorizing lesions based on perfusion parameters does not seem to increase the positive predictive value of the lesions.
[Table 4: Diagnostic accuracy for perfusion parameters for qualitative interpretation before and after excluding lesions not found on the computer-generated lesion maps; columns: qualitative evaluation only (%) vs. qualitative analysis and computer lesion maps (%).]
However, there is a significant positive correlation between large lesion size and infarct outcome. This corroborates the use of CTP as a tool for decision-making in the treatment of infarcts with large ischemic penumbras. By contrast, qualitative interpretation of the four different perfusion parameters (CBV, CBF, MTT, and TTP) improves diagnostic accuracy with respect to specificity and negative and positive predictive values but decreases the overall sensitivity. When computer-generated lesion maps are used to exclude perceived lesions from qualitative color map analysis, specificity and positive predictive value are further improved.
As previously described in the literature, CBV has the highest specificity of the perfusion parameters for infarct core but suffers from lower sensitivity, while MTT and TTP have higher sensitivities, but lower specificity [6,7]. CBF is consistently in between CBV and MTT/TTP in diagnostic accuracy for both sensitivity and specificity [18]. Interestingly, 73 % of our lesions with increasing size in order from CBV, CBF, to MTT/TTP were true positive, suggesting that this pattern is helpful in diagnosing acute infarcts. It is also consistent with the hypothesis described previously that most acute infarcts have some region of ischemic penumbra larger than the infarcted tissue [13,14].
Given the NPV of both the qualitative evaluation and the computer-generated lesion maps, perhaps the most useful aspect of CTP as a diagnostic tool is as a screening exam to exclude stroke when the CTP is considered a negative study based on our criteria. DWI, although highly accurate, is not 100% sensitive in diagnosing acute strokes [15]. Our use of DWI as the gold standard reflects the common practice of using this modality as the imaging gold standard for confirming acute stroke. The overwhelming majority of the false positive lesions in our study were ischemic lesions, specifically lesions that demonstrated elevated MTT and normal CBV. These lesions may not be truly false positive, as areas of chronic hypoperfusion and reversible ischemia may not always progress to infarct and, therefore, may not be detectable on DWI. This may explain our low specificity. A more accurate comparison of the ability of CTP to evaluate potentially reversible ischemic lesions would be to compare the technique to other perfusion studies such as dynamic susceptibility contrast or arterial spin labeling MR perfusion [16]; however, this is not within the scope of our study. Furthermore, comparing CTP to DWI represents a more accurate measure of the risk of progression to acute infarct [17].
Our results contrast significantly with previous studies which reported high sensitivity and specificity for CTP in the diagnosis of acute stroke [4]. Study population may account for some of the discrepancy, as previous studies included a higher percentage of stroke patients with larger volume strokes due to their inclusion criteria [10]. Our study included a cross-sectional representation of patients with stroke-like symptoms and stroke subtypes presenting to the ED, including small lacunar infarcts, which contributed the majority of our false negative results and for which treatment with thrombolytics remains controversial [18].
Another factor contributing to discrepancies is possible technical differences between imaging parameters and post-processing algorithms. Recent literature demonstrates significant differences in CTP maps when different commercially available software packages with different post-processing algorithms are used [19]. This underscores the necessity of standardizing CTP imaging parameters and software prior to practical mainstream utilization within the ED for both diagnosis and treatment of stroke [20].
Study limitations include the use of 6 h as the DWI time-limit cutoff. It is possible that an interval infarct may occur within the 6 h of hospitalization, leading to a greater number of false negatives. However, given the false negative rate in our study (17% for computer-generated maps and 33% for qualitative analysis), this would only lead to modest gains in sensitivity and specificity. Newer 64- and 128-slice scanners have the capability of scanning the entire brain for CT perfusion, which may lead to fewer false negative studies [21]; however, only one infarct was outside the CTP coverage in our study.
Conclusion
Used in isolation, computer-generated lesion maps from a single vendor for CTP provide limited diagnostic utility in predicting infarct, due to their inherently low specificity. However, when used in conjunction with qualitative perfusion color map assessment, the lesion maps can help improve specificity. CTP characteristics which best correlate with true infarcts are a large lesion size and a pattern of increasing size from CBV, CBF, to MTT/TTP, confirming that most acute infarcts possess a larger ischemic penumbra.
Conflict of interest
The authors declare that they have no conflict of interest.
Open Access This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited. | 2017-04-19T00:35:37.729Z | 2013-01-16T00:00:00.000 | {
"year": 2013,
"sha1": "beb90e8ee662295b9d55df63984d309eeac10e17",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10140-012-1102-8.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "dc10aed53b61b81ae50e5d66614819cb18085fed",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
257279874 | pes2o/s2orc | v3-fos-license | The Production of 1,8-Cineole, a Potential Biofuel, from an Endophytic Strain of Annulohypoxylon sp. FPYF3050 When Grown on Agricultural Residues
An endophytic fungus producing 1,8-cineole, isolated from Neolitsea pulchella (Meissn.) Merr., was identified as Annulohypoxylon sp. by phylogenetic analyses of sequence alignments of ITS rDNA, β-tubulin, Actin and EF1-α. This isolate produces an attractive spectrum of volatile organic compounds (VOCs) with only one dominant component, 1,8-cineole, as identified by gas chromatography-mass spectrometry (GC-MS). The fungus was able to grow on seven media with different carbon sources and on five raw agro-forest residues. The content of 1,8-cineole in the mixed VOCs produced by the fungus reached up to 94.95% and 91.25% relative area on PDA and raw poplar sawdust, respectively. Under optimum test conditions, the fungus produced 1,8-cineole at 0.764 ppmv in a 50 mL head space on PDA. Interestingly, 1,8-cineole is an ideal fuel additive for both diesel and gasoline engines. This is also the first isolate in this group of fungi that produces cineole as its primary VOC product, which makes it an ideal organism for strain improvement. Such a step will be critical for its ultimate use in biofuel production.
Introduction
It is estimated that less than 5% of the fungal species on the earth have been found and described [1]. Endophytic fungi account for a significant proportion of the untold numbers of potentially novel fungal genera [2]. When morphological data are missing, internal transcribed spacer (ITS) sequence data can aid in identifying fungi [3] [4] [5], although for some strains the reliability of ITS identification alone is questionable [4] [5]. In recent years, multi-locus and even whole-genome approaches have been developed, which greatly help in organismal identification [3] [4] [5]. Endophytic fungi are well suited to the discovery of new chemical entities, including enzymes and useful volatile organic compounds [6] [7] [8] [9] [10]. There are many methods to identify these volatile organic compounds, such as stainless steel column carbotrap technology [11], proton transfer reaction mass spectrometry (PTR-MS) [12], nuclear magnetic resonance spectroscopy [13] and headspace solid-phase microextraction combined with gas chromatography-mass spectrometry (HS-SPME-GC-MS) [14]. Among them, HS-SPME-GC-MS is advantageous for the analysis of volatile compounds from gas-producing fungi because of its simplicity and speed.
The applications of endophytes producing novel bioactive products are gradually expanding in industry, medicine, food and other sectors [7] [15] [16].
As an example, in recent years some endophytic fungi grown on waste wood fibers have been shown to produce volatile organic compounds (VOCs) identical to compounds found in fossil fuels [16] [17]. These kinds of VOCs could represent the next generation of biomass-derived energy compounds for the entire world. 1,8-Cineole, usually derived from plants [18] [19], has potential value as a fuel additive or even as a fuel. It improves the octane value of ethanol-gasoline blended fuel [20] [21], which addresses the poor energy density of ethanol, itself a fermentation product. Thus, 1,8-cineole has great advantages for use in internal combustion and diesel engines [17]. 1,8-Cineole is a monoterpene; it is a colorless liquid with an odor similar to that of camphor and is usually extracted from the leaves and branches of Eucalyptus tree species by distillation. Recently, 1,8-cineole has been discovered in a number of endophytic fungi, including Nodulisporium sp. [1], Hypoxylon sp. [9], Annulohypoxylon sp. [22] and Acremonium sp. [23]. Compared with the previously reported fungi, the new Annulohypoxylon strain FPYF3050 described in this report produces 1,8-cineole as the single dominant volatile on experimental media and on several agro-forest residue substrates. It holds promise as the most potent microbial resource for 1,8-cineole production.
Endophytic Fungal Isolation
The endophytic fungus was isolated from branches of Neolitsea pulchella (Meissn.) Merr. (Lauraceae) growing in the Jianfengling tropical rain forest of Hainan province at E108˚83'; N18˚70'. Fungal isolation procedures followed the methods described by Ezra [24] and Arnold [25]. Briefly, external tissues were thoroughly exposed to 95% ethanol for 10 seconds prior to excision of internal tissues, then agitated in 10% Clorox for 2 minutes. The drained tissues were then agitated in 70% ethanol for 2 minutes. The excised small internal tissues were agitated in sterile water for 15 seconds, cultured on water agar in standard Petri dishes, and further purified on potato dextrose agar media. The pure isolate was stored on barley seeds, which supported mycelial growth, in sterile water at 4°C. The fungus of interest was labeled FPYF3050 and deposited at Yan's laboratory and at the China General Microbiological Culture Collection (CGMCC) under number 12771.
Phylogenetic Analysis of Endophytic Fungus
The FPYF3050 strain is in a sterile (non-sporulating) stage under laboratory conditions, so its taxonomic placement was determined by phylogenetic inference using molecular techniques. Mycelium from a 6-day-old FPYF3050 colony on PDA was harvested, and the genomic DNA was extracted using a modified CTAB method [26]. The fungal universal primers ITS1/4 [27], EF1-983F/EF1-2218R [28], β-tubulin [29] and Actin-512F/783R [30] are listed in Supplementary Table S1. PCR amplification was performed in a 25 μL reaction system containing 0.5 μL of each primer (10 μM), 3.0 μL of DNA template, 12.5 μL of 2× Taq PCR MasterMix (TIANGEN BIOTECH) and 8.5 μL of double-distilled water. The ITS thermal cycling program was as follows: 94°C for 3 min, followed by 35 amplification cycles of 94°C for 30 s, 52°C for 45 s and 72°C for 1 min, and a final extension step of 72°C for 10 min. Only the annealing temperature was changed in this program for Actin, β-tubulin and EF1-α amplification, at 61°C, 55°C and 63°C, respectively.
PCR products were sequenced by the Sunbiotech Company in Beijing, and sequences were submitted to GenBank. Sequences obtained in this study were compared to the National Center for Biotechnology Information (NCBI) database using the BLASTn software. According to the BLASTn results, DNA reference sequence data were chosen for phylogenetic analysis from three genera: Annulohypoxylon, Hypoxylon and Daldinia. Bayesian inference [31] was used for the phylogenetic analyses of the DNA sequence data. The sequences were aligned with MAFFT 7.304 [32], and phylogenetic analyses of the aligned sequences were performed with MrBayes 3.2.2. For the Bayesian analyses, the settings were "invgamma shape", "one substitution" and "NY98". The number of generations was set to 1,000,000, and one tree was saved per 100 generations. The first 20% of the trees were excluded from construction of the consensus tree; the cladogram and posterior credibility values for the clades were based on the outcome of the last 0.8 million generations. All phylogenetic trees were rooted using Biscogniauxia atropunctata as the outgroup. Evidence from the trees was combined and visualized with TreeGraph 2 [33].
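For readers who want to retrace the pipeline, a hedged Python sketch follows. The file names are hypothetical, and since the exact MrBayes batch the authors ran is not given, the command block below is an assumption that mirrors the quoted settings (invgamma rates, 1,000,000 generations, sampling every 100, 20% burn-in, Biscogniauxia atropunctata as outgroup).

```python
import subprocess

# Align raw sequences (MAFFT's basic CLI usage).
subprocess.run("mafft --auto its_raw.fasta > its_aligned.fasta",
               shell=True, check=True)

# Assumes the alignment has been converted to Nexus as its_aligned.nex.
batch = """begin mrbayes;
  execute its_aligned.nex;
  outgroup Biscogniauxia_atropunctata;
  lset rates=invgamma;  [the quoted NY98 omega variation would additionally need nucmodel=codon]
  mcmc ngen=1000000 samplefreq=100;
  sumt relburnin=yes burninfrac=0.20;
end;
"""
with open("run_its.nex", "w") as fh:
    fh.write(batch)

subprocess.run(["mb", "run_its.nex"], check=True)
```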
Qualitative Analysis of FPYF3050 VOCs Grown on Selected Substrates
A variety of selected media were used to determine which substrates facilitate 1,8-cineole production by FPYF3050. The media used for testing were divided into three types. The first comprised common laboratory media: potato dextrose agar (PDA), Czapek's agar (CA) [34], oatmeal agar (OA) [35] and malt extract agar (MEA) [36]. The second comprised the synthetic media described by Mallette [37], with cellulose (CM), carboxymethyl cellulose (CMC) and glucose (GM) as carbon sources. The third type comprised agricultural and forestry residue media: poplar sawdust, pine sawdust, corn straw, rice straw and wheat straw. The five kinds of raw agro-forest residues were rinsed with tap water, washed clean with ddH2O, and finally soaked in sterile water for 2 hours to fully absorb water. The wet sawdust and straw, drained of free water, were then autoclaved to serve as media. The culture condition for strain FPYF3050 was a constant temperature of 25°C for 6 days. All samples were examined in triplicate.
Analyses of gases in the air space above FPYF3050 colonies in Petri plates were conducted according to the protocol described in previous papers [9] [37]. A baked fiber syringe of 50/30 divinylbenzene/carboxen on polydimethylsiloxane (Supelco) was exposed to the vapor phase inside the Petri plate for 40 min through a small hole (0.6 mm in diameter) drilled in the side of the plate. The syringe was then inserted into the splitless injection port of a Thermo Finnigan gas chromatograph fitted with a 30.0 m × 0.25 mm HP-5MS capillary column with a film thickness of 0.25 μm. A 30 s injection time was used to introduce the sample fiber into the GC. The column was temperature programmed as follows: 33°C for 2 min, then increased to 220°C at 5°C·min⁻¹. The MS (TRACE DSQ) was scanned at a rate of 5 scans per second over a mass range of 41-560 amu. Control PDA Petri plates not inoculated with the strain were used to subtract compounds contributed by the medium. All treatments and checks were done in triplicate. Tentative identification of the compounds produced by FPYF3050 was made against the NIST compound library, and all chemical compounds in this report follow NIST database terminology.
Quantification of 1,8-Cineole by GC-FID
Quantification of 1,8-cineole was done in the air space above cultures of FPYF3050 grown for 1-6 days at 25°C on PDA (Petri dish diameter 9 cm; head space volume 50 ml). The GC analysis was executed using an Agilent 7980 equipped with an FID detector and an HP-5MS column (30.0 m × 0.25 mm, film thickness 0.25 μm). The operating conditions were: oven temperature 33°C (2 min), 33-220°C (7°C/min), 220°C (7 min); injector temperature 240°C, with N2 tail-blowing gas (39 ml/min); detector temperature 250°C; H2:O2 at 40:400 ml/min. Parameters and conditions during the HS-SPME process and final quantification by GC-FID were the same as for HS-SPME-GC-MS described previously. Triplicates were run for each sample. The standard 1,8-cineole (Fluka) was diluted 10-fold into 5 concentrations with hexane, ranging from 0.005 to 50 μL/ml. Each standard 1,8-cineole concentration contained linalool (internal standard, Fluka) at the same concentration of 0.5 μL/ml. The calibration curve was constructed with As/Ai (standard peak area/internal standard peak area) as the abscissa and Vs/Vi (standard volume/internal standard volume) as the ordinate, and the relative mass calibration factor F was calculated as F = (Ai·Vs)/(As·Vi). The calibration curve was linear and passed through the origin. The contents of 1,8-cineole in the samples were calculated according to the calibration curve. The mycelium in the culture Petri dish was then dried at 60°C to obtain the dry weight of mycelium; the deviations among the three individual weighings of one colony were less than 0.0001 g.
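The internal-standard arithmetic can be sketched as follows. The calibration factor F = (Ai·Vs)/(As·Vi) is taken from the text; the working equation used to convert a sample peak-area ratio into a concentration is an assumed rearrangement, and all peak areas below are placeholders.

```python
def calibration_factor(a_std, a_istd, v_std, v_istd):
    """Relative mass calibration factor F = (Ai * Vs) / (As * Vi),
    from a calibration run with known standard/internal-standard volumes."""
    return (a_istd * v_std) / (a_std * v_istd)

def cineole_conc(a_sample, a_istd, c_istd, f):
    """Analyte concentration via the internal-standard relation
    Cs = F * (As/Ai) * Ci (assumed form of the working equation)."""
    return f * (a_sample / a_istd) * c_istd

# Placeholder peak areas; concentrations in uL/mL as in the text.
f = calibration_factor(a_std=1.95e6, a_istd=1.0e6, v_std=1.0, v_istd=0.5)
print(cineole_conc(a_sample=8.2e5, a_istd=1.0e6, c_istd=0.5, f=f))
```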
Isolation and Identification of Endophytic Fungus
The isolate FPYF3050 was recovered from healthy branches of a Lauraceae tree, Neolitsea pulchella (Meissn.) Merr. The strain produces a white flocculent mycelium and forms round colonies in the early stage on PDA medium (Figure 1). Small pieces of black thin flakes appeared on the colonies at 4 days old (Figure 1(A)). As the colonies matured, the flakes aggregated and covered the colony with increasingly blackish-green pigments, ultimately yielding a black coloration at the bottom of the colony. The pigment secretions gradually dispersed through the whole medium, so the medium was stained black with increasing culture time (Figure 1(B)). Because of the few EF1-α reference sequences in the NCBI database, a phylogenetic tree for EF1-α was not constructed. However, the best hits for the EF1-α of the strain queried in NCBI were Annulohypoxylon nitens and Annulohypoxylon sp.
isolate PK09007, with identities of 93% and 92% by the BLAST program. The result of this molecular methodology placed the isolate FPYF3050 squarely in the perfect-stage genus Annulohypoxylon. It should further be noted that the imperfect stage of this fungus is Nodulisporium, which can occur among the perfect stages of Annulohypoxylon, Hypoxylon, Xylaria and Daldinia. Thus, in this report we refer to it as Annulohypoxylon sp. FPYF3050.
Annulohypoxylon was erected as a new genus segregated from Hypoxylon in 2005 on the basis of ostiole and ascospore morphology and molecular phylogeny [5] [38]. Annulohypoxylon is divided into two subclades based on the presence or absence of ostiolar disks [5]. In this study, the strain FPYF3050, isolated as an endophytic fungus without any spores or other fruiting structures to characterize it, was characterized as an operational taxonomic unit (OTU) using four phylogenetic loci. Although ITS sequencing has become an effective and important marker for fungal molecular evolution and phylogeny [3] [4], ITS sequences do not exclusively separate Annulohypoxylon and Hypoxylon [5]. Therefore, the protein-encoding sequences β-tubulin, Actin and EF1-α were also involved in the identification of the strain. Bayesian inference on β-tubulin and Actin placed the strain in the genus Annulohypoxylon, closely clustered with A. strugium and A. atroroseum (Figures 2-4) with the support of 100% posterior probability. The fungi in the Daldinia and Hypoxylon clusters have greater genetic distances from the strain. Phylogenetic inference was not developed with EF1-α for the strain due to the scarcity of reference strain sequences in the NCBI databases for Annulohypoxylon fungi; however, queries with the EF1-α sequence of FPYF3050 also hit the most homologous sequences from Annulohypoxylon species in the BLASTn program. Therefore, based on all of the available data, the strain was assigned as an Annulohypoxylon sp.
VOCs Qualitative Analyses of Annulohypoxylon sp. FPYF3050
The diverse volatiles produced by the fungus on different media are shown in Table 1 and Table 2, and their respective GC/MS profiles are shown in Supplementary Figures S1 and S2. These volatiles mainly comprised alkenes, alcohols and several unknown components. The products were relatively consistent when the fungus was grown on CA, CMC, CM and OA media (Table 1).
GM medium induced the strain to produce more VOCs (Table 1), whereas on PDA and MEA media the fungus produced only 4 and 6 compounds, respectively (Table 1). On the five raw agro-forest residue media, poplar, pine, corn, wheat and rice (Figure 5), the endophytic fungus Annulohypoxylon sp. produced 5, 11, 17, 14 and 7 kinds of volatile compounds, respectively (Table 2). The significant difference among these five kinds of agro-forest residue media lies in their carbon sources.
The main carbon sources of poplar and pine are lignocelluloses.
[Table 1: A GC/MS air-space analysis of the volatile compounds produced by FPYF3050 on 7 media using a SPME fiber. The diameter of the initial inoculum plug is 5 mm. The strain FPYF3050 was kept at a constant temperature of 25°C for 6 days. Compounds found in the control Petri plate are not included in this table.]
The highest yield was achieved on the sixth day of the incubation period of this fungus under the outlined test conditions, and the yield began to decrease on the seventh day. Therefore, we acquired data on colony diameter, dry weight of mycelium and quantity of 1,8-cineole from day 1 to day 6 on PDA (Supplementary Table S2). The highest production of 1,8-cineole was from a 6-day-old culture of Annulohypoxylon sp. making 0.764 ppmv in the 50 ml air space of a Petri plate of 9 cm diameter. The yield of 1,8-cineole was analyzed in relation to two factors, dry weight of mycelium and colony diameter, and Supplementary Table S2 showed linear relationships for both. The R value was 0.9974 for the linear relationship between dry weight of mycelium and 1,8-cineole yield (Figure 6).
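A minimal sketch of the linear fit behind Figure 6 follows; the day-by-day values are placeholders standing in for Supplementary Table S2, with only the day-6 yield (0.764 ppmv) taken from the text.

```python
from scipy.stats import linregress

# Placeholder day-1 to day-6 values; only the final yield is from the paper.
dry_weight_g = [0.010, 0.025, 0.048, 0.080, 0.118, 0.160]
cineole_ppmv = [0.05, 0.12, 0.23, 0.38, 0.56, 0.764]

fit = linregress(dry_weight_g, cineole_ppmv)
print(f"slope = {fit.slope:.2f} ppmv/g, R = {fit.rvalue:.4f}")
```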
In the PDA medium, the fungus produced 4-5 detectable volatile compounds, compared to 29 VOCs for Hypoxylon sp. CI4A [9], 32 VOCs for Nodulisporium sp. [1], 15 VOCs for Annulohypoxylon sp. [22], 20 VOCs for Acremonium sp. R1 and 16 VOCs for Acremonium sp. R2 [23]. Thus, this Annulohypoxylon is unique among all of these strains in its ability to make mostly 1,8-cineole within its mixture of VOCs. The content of 1,8-cineole in the VOCs produced by FPYF3050 growing on PDA was up to 94.95% of the relative area (RA) according to the GC-MS results (Table 1), whereas the abundance of 1,8-cineole in the VOCs produced by other endophytic fungi was lower than 30% RA [1] [9] [22] [23]; in the case of Hypoxylon sp. CI4A, 1,8-cineole represented only 0.5% of the total VOCs [9]. The capability of the fungus to produce highly abundant 1,8-cineole is maintained on different substrates, such as MEA (Table 1 and Table 2). However, starch, glucose and cellobiose as carbohydrate sources were reported to facilitate higher concentrations of 1,8-cineole from Hypoxylon sp. CI4A [9]. Therefore, Annulohypoxylon sp. FPYF3050 would be a feasible organism to maximize 1,8-cineole production in industry. Interestingly, 1,8-cineole remained abundant in the VOC mixture on lignocellulosic biomass such as raw poplar and pine residues, which are among the most abundant agricultural and forestry wastes in the world. The content of 1,8-cineole in the fungal VOCs was quantified with the internal standard method by HS-SPME-GC-FID [39] [40]. In this study, Annulohypoxylon sp. FPYF3050 on PDA in a Petri dish of 9 cm diameter yielded 1,8-cineole at 0.764 ppmv. An alternative industrial-scale process for 1,8-cineole would attenuate the dependence on Eucalyptus plants, the raw materials for producing natural 1,8-cineole by distillation methods with a best quality of 80% in fine oil [18] [41]. Fungal 1,8-cineole production is a more economical and environmentally friendly method of acquiring 1,8-cineole than plant extraction. This study also has the meaningful implication that the substrate of poplar sawdust can be used as a medium to produce 1,8-cineole at optimal yield. Moreover, the strain FPYF3050 seems to possess stable production of 1,8-cineole through repeated transfers to different media and different cultural conditions over the course of time (data not shown). Furthermore, it would be an ideal organism for studying 1,8-cineole and monoterpene metabolic pathways in fungi [40] [42]. The genome of the strain FPYF3050 has just been sequenced, with a size of ca. 43 Mb, and twelve terpene synthases have been found in the genome (data not shown). This organism could serve nicely for subsequent enzyme and bioengineering analyses. Fungal monoterpenes serve important functions in the ecology of fungi, but little knowledge exists about their biosynthesis and biological interactions.
[Figure 6 caption: Relationship between quantity of 1,8-cineole and dry weight of mycelium and colony diameter.]
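The relative area (RA) values quoted throughout (e.g. 94.95% on PDA) are simply each compound's GC-MS peak area divided by the summed area of all detected peaks. A small sketch with placeholder peak areas:

```python
# Relative area (RA, %) of each compound, as used in Tables 1 and 2.
peak_areas = {"1,8-cineole": 9.50e7, "compound_2": 2.1e6,
              "compound_3": 1.6e6, "compound_4": 1.3e6}  # placeholder areas

total = sum(peak_areas.values())
ra = {name: 100.0 * area / total for name, area in peak_areas.items()}
print(f"1,8-cineole RA = {ra['1,8-cineole']:.2f}%")  # ~95 %, cf. Table 1
```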
Conclusion
This study illustrates that the endophytic fungus FPYF3050, identified as an Annulohypoxylon species by phylogenetic analysis, can grow on different substrates and produce 1,8-cineole. The fungus grows quickly, especially on poplar sawdust, and produces only five VOCs there, among which 1,8-cineole accounts for 91.25%. 1,8-Cineole, an octane derivative, has been tested as a fuel additive or even as a biofuel [20] [21]. Compared with sourcing plants as a means of acquiring 1,8-cineole, FPYF3050 can not only reduce costs but also take full advantage of agro-forest residues to produce 1,8-cineole quickly. Annulohypoxylon sp. FPYF3050 thus appears to have interesting potential applications in industry and biofuels, since it is so capable of making 1,8-cineole from biomass residues.
Figure 1. Morphological characteristics of FPYF3050 on PDA. A: 4-day colony morphology on PDA; A1 is the obverse and A2 is the reverse. The mycelium is villiform. B: 30-day colony morphology on PDA; B1 is the obverse and B2 is the reverse. Small black thin flakes occur on the colony, and the medium is stained black by pigment.
Figure 2. A phylogenetic tree generated by Bayesian analysis from the ITS dataset. The tree was rooted using Biscogniauxia atropunctata as outgroup. Bayesian posterior probability values greater than 50% are shown above branches, and species clustering is noted.
Figure 3. A phylogenetic tree generated by Bayesian analysis from the β-tubulin dataset. The tree was rooted using Biscogniauxia atropunctata as outgroup. Bayesian posterior probability values greater than 50% are shown above branches, and species clustering is noted.
Figure 4. A phylogenetic tree generated by Bayesian analysis from the Actin dataset. The tree was rooted using Biscogniauxia atropunctata as outgroup. Bayesian posterior probability values greater than 50% are shown above branches, and species clustering is noted.
Interestingly, 1,8-cineole remained abundant in the VOC mixture on lignocellulosic biomass, such as raw poplar and pine residues, which are among the most abundant agricultural and forestry wastes in the world. The content of 1,8-cineole in fungal VOCs was quantified with the internal standard method by HS-SPME-GC-FID [39] [40]. In this study, Annulohypoxylon sp. FPYF3050 on PDA in a Petri plate of 9 cm diameter yielded 1,8-cineole at 0.764 ppmv. An alternative process for 1,8-cineole at industrial scale would attenuate the dependence on Eucalyptus plants, which are the raw material for producing natural 1,8-cineole by distillation, with a best quality of 80% in fine oil [18] [41]. Fungal 1,8-cineole production is a more economical and environmentally friendly method of acquiring 1,8-cineole than plant extraction.
Table 2. GC/MS air-space analysis of the volatile compounds produced by FPYF3050 on 5 agro-forest residue media using an SPME fiber. The diameter of the initial inoculum plug is 5 mm. The strain FPYF3050 was kept at 25°C constant temperature for 6 days. Compounds found in the control Petri plate are not included.
*Unknown denotes content with a quality match lower than 60. | 2019-02-10T20:20:17.428Z | 2017-06-15T00:00:00.000 | {
"year": 2017,
"sha1": "1ded564ded35a81c82293eaa71917ed89f0ff65f",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=77041",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "1ded564ded35a81c82293eaa71917ed89f0ff65f",
"s2fieldsofstudy": [
"Environmental Science",
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
257279874 | pes2o/s2orc | v3-fos-license | Two-color soliton meta-atoms and molecules
We present a detailed overview of the physics of two-color soliton molecules in nonlinear waveguides, i.e. bound states of localized optical pulses which are held together due to an incoherent interaction mechanism. The mutual confinement, or trapping, of the subpulses, which leads to a stable propagation of the pulse compound, is enabled by the nonlinear Kerr effect. Special attention is paid to the description of the binding mechanism in terms of attractive potential wells, induced by the refractive index changes of the subpulses, exerted on one another through cross-phase modulation. Specifically, we discuss nonlinear-photonics meta-atoms, given by pulse compounds consisting of a strong trapping pulse and a weak trapped pulse, for which trapped states of low intensity are determined by a Schrödinger-type eigenproblem. We discuss the rich dynamical behavior of such meta-atoms, demonstrating that an increase of the group-velocity mismatch of both subpulses leads to an ionization-like trapping-to-escape transition. We further demonstrate that if both constituent pulses are of similar amplitude, molecule-like bound states are formed. We show that z-periodic amplitude variations permit a coupling of these pulse compounds to dispersive waves, resulting in the resonant emission of Kushi-comb-like multi-frequency radiation.
Introduction
The confinement of two -and possibly more -quasi co-propagating optical pulses has been discussed in terms of various propagation settings since the 80's of the last century, with early accounts discussing the self-confinement of multimode optical pulses in glass fibers [1], nonlinear pairing of light and dark optical solitons [2,3], and stability of solitons with different polarization components in birefringent fibers [4]. A very paradigmatic instance of self-confinement is supported by the standard nonlinear Schrödinger equation (NSE) [5,6]. In the integrable case, it features localized field pulses given by solitary waves [7]. When considering two or more quasi group-velocity matched pulses, their incoherent, cross-phase modulation (XPM) induced mutual interaction co-determines their dynamics [1,2,3,4,8,9,10,11]. For instance, in nonlinear waveguides with a single zero-dispersion point, a soliton induces a strong refractive index barrier that cannot be surpassed by quasi group-velocity matched waves located in a domain of normal dispersion [12], resulting in their mutual repulsion. The underlying interaction process is enabled by a general wave reflection mechanism originally reported in fluid dynamics [13]. In optics this process is referred to as push-broom effect [14], optical event horizon [15,16], or temporal reflection [17]. This interaction mechanism allows for a strong and efficient control of light pulses [18,19,20], and has been shown to appear naturally during the supercontinuum generation process [21,22,23]. When considering waveguides that support group-velocity matched propagation of pulses in separate domains of anomalous dispersion, their mutual interaction is expressed in a different way: the aforementioned XPM induces attractive potentials that hold the pulses together, enabling two-color soliton molecules through an incoherent binding mechanism [24]; the resulting pulse compound consists of two subpulses at vastly different center frequencies. Putting emphasis on the frequency-domain representation of these pulse compounds led to the observation that a soliton can in fact act as a localized trapping potential with a discrete level spectrum [24]. Let us emphasize that in order to achieve a strong attractive interaction between the subpulses of such pulse compounds, group-velocity matching is crucial [25]. In terms of a modified NSE with added fourth-order dispersion, these objects were identified as parts of a large family of generalized dispersion Kerr solitons that can be characterized using the concept of a meta-envelope [26]. Such pulses were recently verified experimentally in mode-locked laser cavities [27,28,29]. In a complementary approach to the multi-scales analysis presented in Ref. [26], modeling both subpulses in terms of coupled NSEs allowed a special class of two-color soliton pairs and their meta-envelopes to be derived in closed form [30]. Let us note that the concept of soliton molecules has meanwhile been extended to pulse compounds with three frequency centers [31], and recently also to a number of J equally spaced frequency components [27,32]. Further, two-color soliton microcomb states with similar structure were also observed in the framework of the Lugiato-Lefever equation [33,34]. The underlying scheme is much more general and requires quasi group-velocity matching between different optical pulses.
This can be achieved in different settings and can, e.g., already be found in an early work of Hasegawa [1], where a strong incoherent XPM interaction between different components of a multimode optical pulse has been considered. At this point, we would also like to emphasize that these pulse compounds are different from usual soliton molecules, which can be realized by dispersion engineering in the framework of a standard NSE [35], characterized by two pulses separated by a fixed temporal delay and stabilized by a phase relation between both pulses [36].
Here, we review the rich dynamical behavior of two-color pulse compounds, which consist of two group-velocity matched subpulses in distinct domains of anomalous dispersion, with frequency loci separated by a vast frequency gap. First, we demonstrate paradigmatic propagation scenarios featuring photonic meta-atoms, arising in the limiting case where the pulse compound consists of an intense trapping pulse, given by a soliton, and a weak trapped pulse. Then, we address the case where both subpulses have similar amplitudes, so that their mutual XPM induced confining action results in the formation of a narrow two-color soliton molecule. Finally, we show that non-stationary dynamics of the subpulses result in the emission of resonant radiation, and we show how the location of the newly generated frequencies depends on the z-periodic amplitude and width variations of the oscillating soliton molecule.
The article is organized as follows. In Sec. 2 we discuss the propagation model used for our theoretical investigations of two-color meta-atoms and soliton molecules, and detail the numerical methods employed for their simulation and analysis. In Sec. 3 we demonstrate the ability of solitons to act as attractive potential wells that can host trapped states, and probe the stability of the resulting photonic meta-atoms with respect to a group-velocity mismatch between the trapping soliton and the trapped state. In Sec. 4 we derive a simplified model that yields simultaneous solutions for the subpulses that make up a two-color soliton molecule and show that these solutions entail the two-color soliton pairs derived in Ref. [30]. We perturb these pulse compounds by increasing their initial amplitude, which results in periodic amplitude and width oscillations, and triggers the generation of resonant multi-frequency radiation with a complex structure that can be precisely predicted theoretically. Section 5 concludes with a summary.
Model and methods
Propagation model. In order to study the propagation dynamics of nonlinear photonic meta-atoms and two-color soliton molecules, we consider a modified nonlinear Schrödinger equation (NSE) of the form

i ∂_z A = (β2/2) ∂_t² A − (β4/24) ∂_t⁴ A − γ |A|² A, (1)

describing the single-mode propagation of a complex-valued field A ≡ A(z, t) on a periodic temporal domain of extent T with the boundary condition A(z, −T/2) = A(z, T/2). The linear part of Eq. (1) includes higher orders of dispersion, with β2 > 0 (in units of fs²/µm) a positive-valued group-velocity dispersion coefficient, and β4 < 0 (fs⁴/µm) a negative-valued fourth-order dispersion coefficient. The nonlinear part of Eq. (1) includes a positive-valued scalar nonlinear coefficient γ (W⁻¹/µm). Considering the discrete set of angular frequency detunings Ω ∈ (2π/T)Z, the field is given by the transform pair

A(z, t) = Σ_Ω A_Ω(z) e^(−iΩt), A_Ω(z) = (1/T) ∫_{−T/2}^{T/2} A(z, t) e^(iΩt) dt. (2)

Propagation constant. Using the identity ∂_t^n e^(−iΩt) = (−iΩ)^n e^(−iΩt) of the spectral derivative, the frequency-domain representation of the propagation constant is given by the polynomial expression

β(Ω) = (β2/2) Ω² + (β4/24) Ω⁴. (3a)

The frequency-dependent inverse group-velocity of a mode at detuning Ω reads

β1(Ω) = ∂_Ω β(Ω) = β2 Ω + (β4/6) Ω³, (3b)

with group-velocity (GV) v_g(Ω) = 1/β1(Ω), and the group-velocity dispersion (GVD) is given by

β2(Ω) = ∂_Ω² β(Ω) = β2 + (β4/2) Ω². (3c)

Subsequently, we use the parameter values β2 = 1 fs²/µm and β4 = −1 fs⁴/µm, resulting in the model dispersion characteristics shown in Fig. 1. For the nonlinear coefficient in Eq. (1) we use γ = 1 W⁻¹/µm.

Figure 1. Model dispersion characteristics. In (a-c), the domain of normal dispersion is shaded gray; zero-dispersion points are labeled Ω_Z1 and Ω_Z2. In (b), the frequency range shaded in red allows for group-velocity matching of two modes with loci in AD1 and AD2; the open circle (labeled Ω1) and filled circle (labeled Ω2) indicate such a pair of group-velocity matched frequencies.

As evident from Fig. 1(c), the GVD profile Eq. (3c) has a concave downward shape with two zero-dispersion points, defined by the condition β2(Ω) = 0, located at Ω_{Z1,Z2} = ∓√(2β2/|β4|) = ∓√2 rad/fs. It exhibits anomalous dispersion for Ω < Ω_Z1 as well as for Ω > Ω_Z2. The interjacent frequency range Ω_Z1 < Ω < Ω_Z2 exhibits normal dispersion. Inspecting the inverse group velocity shown in Fig. 1(b), it can be seen that two frequencies are GV matched to Ω = 0. Due to the symmetry of the propagation constant, these are given by the pair Ω1 = −Ω2 = −√(6β2/|β4|) ≈ −2.828 rad/fs, uniquely characterized by β(Ω1) = β(Ω2) and indicated by the open and filled circles in Fig. 1. In fact, for the considered propagation constant, GV matching of three distinct modes can be realized as long as the frequency loci in AD1 and AD2 lie within the range of frequencies shaded in red in Fig. 1(b). Let us note that the type of GV matching for two optical pulses at vastly different center frequencies, supported by the propagation constant Eq. (3a), is methodologically different from the type of GV matching that supports quasi co-propagation of different modes with similar frequencies [1]. Nevertheless, both allow for quasi co-propagation of optical pulses under different circumstances, supporting similar XPM induced propagation effects. In our case, quasi group-velocity matched propagation of optical pulses across a vast frequency gap is possible, enabled by a tailored propagation constant with multiple zero-dispersion points. Further, the considered mechanism of GV matching differs from that in Ref. [39], wherein two pulses at the same central frequency but different polarization states were assumed to be launched in the anomalous dispersion regime of a hollow-core photonic crystal fiber filled with a noble gas. The mathematical structure of Eq. (1) and the above choice of parameters yields a very basic setting supporting the stable propagation of nonlinear
photonic meta-atoms and two-color soliton molecules. In fact, the two-parameter GVD curve shown in Fig. 1(c) is a simplified model of the dispersion considered earlier in Ref. [24], wherein two-color soliton molecules were first demonstrated, and is similar to the setting considered in Ref. [26], wherein generalized dispersion Kerr solitons were described comprehensively. However, let us note that the phenomena reported below are not limited to the particular choice of the above parameters and persist even in the presence of perturbations such as pulse self-steepening [25,30], which can be accounted for by replacing γ → γ(Ω) in the nonlinear part of Eq. (1), and -with some reservation -a self-frequency shift caused by the Raman effect [31].
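As a numerical cross-check of Eqs. (3a)-(3c), the following minimal Python sketch evaluates the dispersion curves for the parameters above; it is an illustrative reimplementation, not code from the original study.

```python
import numpy as np

b2, b4 = 1.0, -1.0  # fs^2/um and fs^4/um, the model parameters used above

def beta(W):   return 0.5 * b2 * W**2 + b4 * W**4 / 24.0   # Eq. (3a), 1/um
def beta1(W):  return b2 * W + b4 * W**3 / 6.0              # Eq. (3b), fs/um
def beta2(W):  return b2 + 0.5 * b4 * W**2                  # Eq. (3c), fs^2/um

# Zero-dispersion points, beta2(W) = 0:
Wz = np.sqrt(2.0 * b2 / abs(b4))
print(f"zero-dispersion points at -/+{Wz:.3f} rad/fs")      # -/+sqrt(2)

# Group-velocity matched pair used for the non-degenerate case of Sec. 4:
for W in (-2.674, 2.134):
    print(f"W = {W:+.3f}: beta1 = {beta1(W):+.3f} fs/um, "
          f"beta2 = {beta2(W):+.3f} fs^2/um")
# both beta1 values come out close to 0.514 fs/um, confirming GV matching
```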
Propagation algorithm. For our pulse propagation simulations in terms of Eq. (1), we employ the "Conservation quantity error" method (CQE) [40,41]. It maintains an adaptive z-propagation stepsize h, and uses a conservation law of the underlying propagation equation to guide stepsize selection. Specifically, we here use the relative error

δ_E = |E(z + h) − E(z)| / E(z), (4)

where E is the total energy, conserved by Eq. (1). Employing Parseval's identity for Eqs. (2) [42,43], the total energy in the time and frequency domains is given by

E(z) = ∫_{−T/2}^{T/2} |A(z, t)|² dt = T Σ_Ω |A_Ω(z)|², (5)

with instantaneous power |A(z, t)|² (W = J/s), and power spectrum |A_Ω(z)|² (W). The CQE method is designed to keep the relative error δ_E within the goal error range (0.1 δ_G, δ_G), for a preset local goal error δ_G (throughout our numerical experiments we set δ_G = 10⁻¹⁰). This is accomplished by decreasing the stepsize h when necessary while increasing h when possible. To advance the field from position z to z + h, the CQE uses the "Fourth-order Runge-Kutta in the interaction picture" (RK4IP) method [44]. The ability of the algorithm to increase or decrease the stepsize is most valuable when the propagation of an initial condition results in a rapid change of the pulse intensities over short propagation distances. Nevertheless, if one is willing to accept an increased running time resulting from an integration scheme with fixed stepsize, usual split-step Fourier methods [45,43,46] will work similarly well.
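To illustrate the control flow of such a conservation-quantity-guided stepper, the sketch below implements the shrink/grow logic around a generic single-step routine. It is a simplified stand-in: the RK4IP step is replaced by an exact linear-dispersion step purely to exercise the stepsize logic, so the energy error is at machine precision and the stepsize only grows.

```python
import numpy as np

def cqe_step(A, h, step, energy, d_goal=1e-10):
    """One adaptive step: retry with smaller h until the relative energy
    error is below d_goal; enlarge h when the error is comfortably small."""
    while True:
        A_new = step(A, h)
        dE = abs(energy(A_new) - energy(A)) / energy(A)
        if dE <= d_goal:
            if dE < 0.1 * d_goal:       # error well below goal -> grow step
                h *= 2.0
            return A_new, h
        h *= 0.5                         # error too large -> shrink and retry

# Toy demonstration: linear part of Eq. (1), applied exactly in Fourier space.
T, N = 200.0, 1024
t = np.linspace(-T/2, T/2, N, endpoint=False)
W = 2*np.pi*np.fft.fftfreq(N, d=T/N)
b = 0.5*1.0*W**2 - 1.0*W**4/24.0                     # beta(Omega) of Eq. (3a)
step = lambda A, h: np.fft.ifft(np.exp(1j*b*h)*np.fft.fft(A))
energy = lambda A: np.sum(np.abs(A)**2) * (T/N)      # Eq. (5), time domain

A, h = 1.0/np.cosh(t/8.0), 0.1
for _ in range(20):
    A, h = cqe_step(A, h, step, energy)
print("final stepsize:", h, "energy:", energy(A))
```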
Spectrograms. To assess the time-frequency interrelations within the field A(z, t) at a selected propagation distance z, we use the spectrogram [47,48,49]

P_S(t, Ω; z) = (1/2π) |∫ A(z, t′) h(t′ − t) e^(iΩt′) dt′|². (6)

To localize the field in time, we use a hyperbolic-secant window function h(x) = sech(x/σ) with width parameter σ.
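A minimal numerical version of Eq. (6) is sketched below; the two-color test field and the window width are illustrative assumptions, and the FFT sign convention merely mirrors the frequency axis relative to the definition above.

```python
import numpy as np

def spectrogram(t, A, t_centers, sigma=10.0):
    """Discretized Eq. (6): for each window center, apply the sech window
    h(x) = sech(x/sigma) and take the squared magnitude of the FFT."""
    dt = t[1] - t[0]
    W = 2*np.pi*np.fft.fftshift(np.fft.fftfreq(len(t), d=dt))
    P = []
    for tc in t_centers:
        windowed = A / np.cosh((t - tc) / sigma)
        spec = np.fft.fftshift(np.fft.fft(windowed)) * dt / (2*np.pi)
        P.append(np.abs(spec)**2)
    return W, np.array(P)

# Example: two subpulses at detunings -/+2.5 rad/fs under a common envelope.
t = np.linspace(-500, 500, 4096)
A = (np.exp(-2.5j*t) + np.exp(2.5j*t)) / np.cosh(t/20.0)
W, P = spectrogram(t, A, t_centers=np.linspace(-100, 100, 41))
print(P.shape)  # (41, 4096): a time-frequency map
```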
Incoherently coupled pulse pairs. To facilitate a simplified description of two-color pulse compounds in the form

A(z, t) = A1(z, t) e^(−iΩ1 t) + A2(z, t) e^(−iΩ2 t), (7)

in which two quasi group-velocity-matched subpulses A1 ≡ A1(z, t) and A2 ≡ A2(z, t) exist at the frequency gap Ω_gap = |Ω2 − Ω1|, it is convenient to consider the two coupled nonlinear Schrödinger equations (CNSEs) [6,10,11]

i ∂_z A1 + β0′ A1 + i β1′ ∂_t A1 − (β2′/2) ∂_t² A1 + γ′ (|A1|² + 2|A2|²) A1 = 0, (8a)
i ∂_z A2 + β0″ A2 + i β1″ ∂_t A2 − (β2″/2) ∂_t² A2 + γ″ (|A2|² + 2|A1|²) A2 = 0. (8b)

The parameters in Eqs. (8) are related to Eqs. (1)-(3) via β0′ = β(Ω1), β0″ = β(Ω2), β1′ = β1(Ω1), β1″ = β1(Ω2), β2′ = β2(Ω1), β2″ = β2(Ω2), and γ′ = γ″ = γ. The mismatch of inverse GV for both subpulses is given by ∆β1 = β1(Ω1) − β1(Ω2). For specific choices of the detunings Ω1 and Ω2, exact GV matching, signaled by ∆β1 = 0, can be achieved. In contrast to Eq. (1), the incoherently coupled Eqs. (8) neglect higher orders of dispersion within their linear parts, as well as rapidly varying four-wave-mixing terms within their nonlinear parts. The mutual interaction of both subpulses is taken into account via XPM. As evident from Eq. (8a), pulse A1 can be viewed as being exposed to a total potential field of the form V1 ≡ γ′(|A1|² + 2|A2|²), entailing the effects of SPM and XPM. Likewise, A2 is exposed to the potential field V2 ≡ γ″(|A2|² + 2|A1|²). As we will show in Sects. 3 and 4, the potential fields V1 and V2 yield attractive potentials that enable the mutual trapping of both subpulses. Subsequently we take Ω1 and Ω2 as indicated in Fig. 1, so that the above parameters are given by β0′ = β0″ = 1.33 µm⁻¹, β1′ = β1″ = 0, β2′ = β2″ = −2 fs²/µm, and γ′ = γ″ = 1 W⁻¹/µm. For a more general description of simultaneous solutions in the form of Eq. (7), we will continue to refer to the nonlinear coefficients in Eqs. (8) as γ′ [Eq. (8a)] and γ″ [Eq. (8b)]. In addition, the scalar factors β0′ = β0″ ≡ β0 can be removed by a common linear transformation A1,2 → A1,2 e^(iβ0 z), which does not affect the z-propagation dynamics of the interacting pulses. Let us note that, in general, higher orders of dispersion within a modified NSE can cause a solitary wave to shed resonant radiation [50], and can result in a modification of its group-velocity [50,51]. These types of perturbations are neglected by Eqs. (8), which can be justified in the limit where the subpulse separation Ω_gap is large and their spectra are sufficiently narrow. Moreover, in case of a frequency dependent coefficient function γ(Ω), γ′ = γ(Ω1) and γ″ = γ(Ω2) in Eqs. (8). Let us point out that, in the presence of a linear variation of γ, a solitary wave exhibits a further modification of its group-velocity [52], an effect neglected by Eqs. (8). It is important to bear these perturbation effects in mind when comparing results based on Eqs. (8) to numerical simulations in terms of the full model Eq. (1).
We can relate the above trapping mechanism for two-color pulse compounds to the mechanism enabling the self-confinement of multimode optical pulses in a multimode fiber, discussed by Hasegawa as early as 1980 [1]. Therein, Hasegawa considered a propagation equation of the nonlinear Schrödinger type for a multimodal pulse, where the nonlinear change of the refractive index, felt by an individual mode, depends on the total intensity of the multimodal pulse. This results in coupled equations for the different modes, wherein an individual mode perceives the intensity of the total pulse as a potential field. If the considered mode is subject to anomalous dispersion, the potential is attractive. Based on the expectation that if the velocity mismatch between a given mode and the potential is smaller than the escape velocity, the potential has the ability to trap the mode, he derived a condition for self-confinement of the multimode pulse. While the results in Ref. [1] are valid for multimodal optical pulses composed of possibly many modes, the simplified modeling approach given by Eqs. (8) considers only two subpulses. Meanwhile, an extension of the above approach to pulse compounds with three and more subpulses has been accomplished [31,53].
Given the ansatz for two-color pulse compounds in the form of Eq. (7), initial conditions A0(t) ≡ A(z = 0, t) that specify nonlinear photonic meta-atoms and two-color soliton molecules in terms of the subpulses A1 and A2 are different in some respects and are discussed separately in Sect. 3 and Sect. 4. Subsequently, we demonstrate the self-consistent z-propagation dynamics of these pulse compounds, originally reported in Refs. [24,26,30,54,55], as well as their breakup in response to sufficiently large GV mismatches between both subpulses, originally reported in Ref. [25], in terms of numerical simulations governed by the full model Eq. (1). These numerical results demonstrate several theoretical findings reported by Hasegawa [1], applied to the concept of two-color pulse compounds.
Nonlinear-photonics meta-atoms
Description of stationary trapped states. Subsequently we look for stationary solutions in the form of Eq. (7) under the additional constraint max(|A2|) ≪ max(|A1|). This allows to decouple Eqs. (8) and enables direct optical analogues of quantum mechanical bound states [63,24,54]. Therefore, we assume the resulting two-color pulse compounds to consist of a strong trapping pulse, given by a solitary wave (S) at detuning Ω_S ≡ Ω1, and a weak trapped pulse (TR) at detuning Ω_TR ≡ Ω2. For the solitary wave part of the total pulse we neglect the XPM contribution in the nonlinear part of Eq. (8a) and assume

A1(z, t) = √P0 sech(t/t0) e^(iκz), (9)

wherein P0 = |β2′|/(γ′ t0²), and κ = β0 + γ′P0/2. Neglecting the SPM contribution in the nonlinear part of Eq. (8b) and making the ansatz

A2(z, t) = φ(t) e^(iκ̄z), (10)

the envelope φ(t) of a weak stationary trapped state is determined by the Schrödinger type eigenvalue problem

[−(|β2″|/2) ∂_t² + V_S(t)] φ(t) = κ_n φ(t). (11)

Therein, the solitary wave enters as a stationary attractive potential well V_S(t) = −2γ″P0 sech²(t/t0). Hence, as pointed out above and discussed in the context of multimode optical pulses in glass fibers in Ref. [1], a weak pulse can be attracted by the intensity of the entire pulse if it exists in a domain of anomalous dispersion. Due to β2″ < 0, this condition is met in the considered case. In analogy to the sech²-potential in one-dimensional quantum scattering theory we may equivalently write the solitary-wave induced potential as [63]

V_S(t) = −(|β2″|/(2t0²)) ν(ν + 1) sech²(t/t0), with ν(ν + 1) = 4γ″P0 t0²/|β2″|. (12)

Moreover, due to the particular shape of the trapping potential, the eigenvalue problem Eq. (11) can even be solved exactly [63,64]. The number of trapped states of the potential in Eq. (12) is given by N_TR = ⌊ν⌋ + 1, where ⌊ν⌋ is the integer part of the strength parameter ν. From the analogy to the quantum mechanical scattering problem [64], the real-valued wavenumber eigenvalues can directly be stated as

κ_n = −(|β2″|/(2t0²)) (ν − n)², n = 0, . . . , ⌊ν⌋. (13)

For a given value of n, they are related to Eq. (10) through κ̄ = β0 − κ_n. To each eigenvalue corresponds an eigenfunction φ_n with n zeros, specifying the (n + 1)-th fundamental solution of the eigenvalue problem Eq. (11). These solutions constitute the weak trapped states of the potential V_S. Referring to the Gaussian hypergeometric function as 2F1 [65], and abbreviating a_n = (1 + n)/2 and b_n = (2ν + 1 − n)/2, they can be stated in closed form as [64]

φ_n(t) = cosh^(ν+1)(t/t0) 2F1(a_n, b_n; 1/2; −sinh²(t/t0)), for even n,
φ_n(t) = cosh^(ν+1)(t/t0) sinh(t/t0) 2F1(a_n + 1/2, b_n + 1/2; 3/2; −sinh²(t/t0)), for odd n. (14)
Let us note that, as evident from the potential strength parameter ν in Eq. (12), the number N_TR of trapped states is uniquely defined by the four parameters β2′, β2″, γ′, and γ″. It is not affected by the duration t0 of the trapping potential, which, according to Eq. (13), codetermines the value of the wavenumber eigenvalue of a fundamental solution.
Analogy to quantum mechanics. The eigenvalue problem Eq. (11) suggests an analogy to quantum mechanics, wherein a fundamental solution φ_n represents the wavefunction of a fictitious particle of mass m = |β2″|⁻¹, confined to a localized, sech²-shaped trapping potential V_S. The discrete variable n = 0, . . . , ⌊ν⌋ resembles a principal quantum number that labels solutions with distinct wavenumbers, and the number of trapped states N_TR is similar to an atomic number. Consequently, a bare soliton, with none of its trapped states occupied, resembles the nucleus of a one-dimensional atom. By this analogy, a soliton along with its trapped states represents a nonlinear-photonics meta-atom.
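The following Python sketch evaluates Eqs. (12) and (13) for the symmetric parameter set used below; it is a minimal illustration assuming |β2| = 2 fs²/µm at both subpulse loci and γ′ = γ″ = 1 W⁻¹/µm.

```python
import numpy as np

def trapped_state_spectrum(b2_S, b2_TR, g_S, g_TR, t0):
    """Strength parameter nu, number of trapped states N_TR, and wavenumber
    eigenvalues kappa_n of the sech^2 well [Eqs. (12), (13)]."""
    P0 = abs(b2_S) / (g_S * t0**2)                # soliton peak power
    s = 4.0 * g_TR * P0 * t0**2 / abs(b2_TR)      # s = nu*(nu + 1), Eq. (12)
    nu = 0.5 * (-1.0 + np.sqrt(1.0 + 4.0 * s))    # positive root of nu(nu+1) = s
    n = np.arange(int(nu) + 1)                    # n = 0, ..., floor(nu)
    kappa = -abs(b2_TR) / (2.0 * t0**2) * (nu - n)**2
    return nu, len(n), kappa

# Symmetric example: |b2| = 2 fs^2/um at both loci, gamma = 1, t0 = 8 fs
nu, N, kappa = trapped_state_spectrum(2.0, 2.0, 1.0, 1.0, 8.0)
print(nu, N, kappa)  # nu ~ 1.56, two trapped states, kappa ~ (-0.038, -0.005) 1/um
```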
Stable propagation of trapped states
Subsequently, we discuss the propagation dynamics of a nonlinear-photonics meta-atom with the ability to host two trapped states. More precisely, we consider an example for Ω_S = −2.828 rad/fs and t0 = 8 fs, with Ω_TR = 2.828 rad/fs and ν ≈ 1.566. The resulting trapping potential and both its trapped states are shown in Fig. 2(a). In this case, the wavenumber eigenvalues are (κ0, κ1) = (−0.0382, −0.0050) µm⁻¹, and the corresponding fundamental solutions take the simple form

φ0(t) = sech^ν(t/t0), (15a)
φ1(t) = sech^(ν−1)(t/t0) tanh(t/t0). (15b)

As evident in Fig. 2(b), the trapped states lie in the vicinity of Ω_TR and have κ̄ > 0 [Eq. (10)]. We performed pulse propagation simulations in terms of Eq. (1), using an initial condition of the form of Eq. (7) with A1 as in Eq. (9), and A2 as in Eq. (10) with φ(t) = √(10⁻⁷ P0) φ0(t). In the time-domain propagation dynamics, shown in Fig. 2(c), a small drift of the soliton, caused by higher orders of dispersion at Ω_S [see Fig. 1], is accounted for by shifting to a moving frame of reference with time coordinate τ = t − β̃1 z and β̃1 = 0.00637 fs/µm. In Fig. 2(d), the vast frequency gap between the soliton and the trapped state is clearly visible. By means of an inverse Fourier transform of the frequency components belonging to the trapped state [box labeled A in Fig. 2(d)], an unhindered "filtered view" of the time-domain propagation dynamics of the trapped state is possible [box labeled A in Fig. 2(c)]. A spectrogram, providing a time-frequency view of the field at z/z0 = 45, is shown in Fig. 2(e). The stable propagation of a trapped state with n = 1, for φ(t) = √(10⁻⁷ P0) φ1(t), is detailed in Figs. 2(f-h). Finally, the simultaneous propagation of a superposition of both trapped states in the form φ(t) = √(10⁻⁷ P0) [φ0(t) + 5φ1(t)] is shown in Figs. 2(i-k). The z-periodicity of the beating pattern visible in the time-domain propagation dynamics in Fig. 2(i) is a result of the different wavenumber eigenvalues of the trapped states, and is determined by z_p = 2π/|κ1 − κ0| ≈ 189 µm (z_p/z0 ≈ 3.8). Thus, the coherent superposition of trapped states exhibits Rabi-type oscillations, similar to bound state dependent revival times in the quantum recurrence of wave packets [67,68]. Let us note that, bearing in mind that the number of bound states N_TR is determined by the potential strength parameter ν in Eq. (12), a setup with a different number of bound states can be obtained as well. This is possible by fixing Ω_S at some other feasible value, resulting in a different group-velocity matched detuning Ω_TR, implying different values of the parameters β2′, β2″, γ′, and γ″. For example, keeping t0 = 8 fs but choosing Ω_S = −2.75 rad/fs yields ν ≈ 3.1, resulting in a potential well with the ability to host N_TR = 4 trapped states. In such a case, however, phase-matched transfer of energy from the trapped states to dispersive waves within the domain of normal dispersion can be efficient [69].
Trapping-to-escape transition caused by a group-velocity mismatch
In the context of multimodal pulses in glass fibers in Ref. [1], the attraction of a wave packet by a potential well, created by the total pulse, was illustrated in terms of the kinetic equations of a fictitious particle associated with the wave packet. From a classical mechanics point of view, in order to ensure trapping of the wave packet by the total pulse, the velocity mismatch between the particle and the potential needs to be smaller than the escape velocity of the potential. Based on this view, and for a given velocity mismatch, the critical value of the total pulse intensity, required to achieve self-confinement, was determined [1]. In the presented work, pulse propagation simulations, such as those reported in Fig. 2, comprise a complementary approach to study the considered XPM induced attraction effect. Specifically, by keeping the detuning of the soliton fixed at Ω_S = Ω1, but shifting the detuning of the trapped pulse to Ω_TR = Ω2 + ∆Ω, we can enforce a group-velocity mismatch between both pulses and probe the stability of the meta-atom. For ∆Ω > 0 it is β1(Ω_S) > β1(Ω_TR), see Fig. 1(b). Thus, in a reference frame in which the soliton is stationary, the trapped state will initially have the propensity to move towards smaller times. This is demonstrated in Figs. 3(a-d). As evident from Fig. 3(e), at ∆Ω = 0.05 rad/fs, the trapped state is kept almost entirely within the well, i.e. e_TR ≈ 1.
In contrast, at ∆Ω = 0.25 rad/fs, a major share of the trapped pulse escapes the well during the initial propagation stage [Figs. 3(c,d)], indicated by the small value e_TR ≈ 0.3 [Fig. 3(e)]. Let us note that, when viewing the considered pulse compounds as meta-atoms, the quantity 1 − e_TR(z) specifies the fraction of trapped energy that is radiated away, resembling an ionization probability for quantum mechanical atoms. A parameter study, detailing the dependence of e_TR as function of the center frequency shift ∆Ω, is summarized in Fig. 3(f). The transition from trapping to escape can be supplemented by an entirely classical picture similar as in Ref. [1]: from a classical point of view we might expect that a particle, initially located at the center of the well, remains confined to the well if its "classical" kinetic energy T_kin = (1/2) m ∆β1² = (1/2)|β2″|⁻¹ [β1(Ω_S) − β1(Ω_TR)]² does not exceed the well depth V0 = 2γ″P0. As evident from Fig. 3, the findings based on this classical picture complement the results obtained in terms of direct simulations of the modified NSE (1) very well. The above results clearly demonstrate the limits of stability of nonlinear photonics meta-atoms with respect to a group-velocity mismatch between the trapping soliton and the trapped state. These findings are consistent with our previous results on the break-up dynamics of two-color pulse compounds [25].
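The classical escape criterion just described can be evaluated directly; the sketch below does so for the model parameters of Sec. 2, taking the soliton locus at the exactly group-velocity matched frequency computed from Eq. (3b) and t0 = 8 fs. It is an illustrative estimate, not the full simulation behind Fig. 3.

```python
import numpy as np

b2, b4, t0, g = 1.0, -1.0, 8.0, 1.0
beta1 = lambda W: b2*W + b4*W**3/6.0          # Eq. (3b)
beta2 = lambda W: b2 + 0.5*b4*W**2            # Eq. (3c)

W_S = -np.sqrt(6.0*b2/abs(b4))                # soliton locus, GV matched to W = 0
P0 = abs(beta2(W_S))/(g*t0**2)                # soliton peak power
V0 = 2.0*g*P0                                 # well depth

def escapes(dW):
    """Classical criterion: escape once T_kin = (1/2)|beta2(W_TR)|^-1 dbeta1^2 > V0."""
    W_TR = -W_S + dW
    T_kin = 0.5*(beta1(W_S) - beta1(W_TR))**2/abs(beta2(W_TR))
    return T_kin, T_kin > V0

for dW in (0.05, 0.15, 0.25):
    T, esc = escapes(dW)
    print(f"dW = {dW:.2f} rad/fs: T_kin = {T:.4f}, escapes: {esc}")
# small detuning shifts stay trapped; dW = 0.25 rad/fs exceeds the well depth
```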
Two-color soliton molecules
Seeding of tightly bound two-color pulse compounds. When considering initial conditions of the form of Eq. (7), with the amplitude of the trapped pulse scaled as φ(t) = r √P0 φ0(t), tightly bound pulse compounds can be seeded by increasing the parameter r. This is demonstrated in Figs. 4(a-c), where pulse propagation simulations in terms of the modified NSE (1) are shown for different values of r, significantly larger than those considered in the preceding section. Especially for larger values of r [Figs. 4(b,c)], the intensity exhibits the following dynamics: the mutual confining action of XPM results in a contraction of both subpulses, prompting the formation of a narrow localized pulse compound. A similar effect has previously been suggested by Hasegawa for multimode optical pulses in glass fibers in Ref. [1], where he writes "[. . . ] as many modes are trapped, the peak intensity of the packet increases quite analogously to a gravitational instability, resulting in a further contraction of the packet." (Ref. [1], p. 417). The results shown in Figs. 4(a-c) demonstrate this effect in the context of two-color pulse compounds in nonlinear fibers or waveguides with two zero-dispersion points. Let us note that, for r ≈ 1, initial conditions as pointed out above directly generate tightly bound, mutually confined two-color pulse compounds. They are accompanied by radiation, emanating from the localized state upon propagation, and can exhibit internal dynamics reminiscent of molecular vibrations [24,25,70,31]. However, such a seeding procedure generates two-color pulse compounds in a largely uncontrolled manner. For completeness, we have observed the formation of similar localized pulse compounds when taking trapped state initial conditions of the form φ(t) = r √P0 φ1(t) for large enough r.
Simultaneous solutions of the coupled equations. We can surpass the above seeding approach by directly searching for simultaneous solitary-wave solutions of the coupled nonlinear Eqs. (8) beyond the linear limit discussed in Sect. 3. Substituting an ansatz for two subpulses, labeled m = 1, 2, in the form of

A_m(z, t) = U_m(t) e^(iκ_m z), m = 1, 2, (17)

into Eqs. (8), with the scalar factors β0 removed as discussed in Sect. 2, yields two coupled ordinary differential equations (ODEs) of second order for two real-valued envelopes U_m ≡ U_m(t),

(β2′/2) Ü1 + κ1 U1 − γ′ (U1² + 2U2²) U1 = 0, (18a)
(β2″/2) Ü2 + κ2 U2 − γ″ (U2² + 2U1²) U2 = 0, (18b)

with dots denoting derivatives with respect to time. Under suitable conditions, solitary-wave solutions of the coupled nonlinear Eqs. (18) can be specified analytically [71,72,60,73,30]. Approximate solutions based on parameterized trial functions can be found, e.g., in terms of a variational approach [74]. In order to obtain simultaneous solutions U1(t) and U2(t) under more general conditions, Eqs. (18) need to be solved numerically. This can be achieved, e.g., by spectral renormalization methods [75,76,77,78], shooting methods [8,9], squared operator methods [79], conjugate gradient methods [80,81], z-propagation adapted imaginary-time evolution methods [82,83], or Newton-type methods [84]. Here, in order to solve for simultaneous solutions of the ODEs (18), we employ a Newton method that is based on a boundary value Runge-Kutta algorithm [85] (a minimal numerical sketch is given after this paragraph). So as to systematically obtain solutions U1(t) and U2(t), we keep five of the six parameters that enter Eqs. (18) fixed. Therefore we set β2′, β2″, γ′, and γ″ to the values considered throughout the preceding section, and preset the wavenumber κ1 = |β2′|(2t0²)⁻¹ ≈ 0.0156 µm⁻¹ of a fundamental nonlinear Schrödinger soliton with t0 = 8 fs in Eq. (18a). We then sweep the remaining parameter κ2 over the wavenumber range (0.002, 0.05) µm⁻¹, enclosing the value of κ1. We start the parameter sweep at κ2 = 0.05 µm⁻¹, which vastly exceeds the wavenumber eigenvalue of the lowest lying trapped state solution at 0.0382 µm⁻¹. Above this value, we expect U2 to vanish, and U1 to yield a fundamental soliton U1(t) = √P0 sech(t/t0) with P0 = |β2′|(γ′t0²)⁻¹. We set initial trial functions for U1 and U2 with parity similar to the soliton and the lowest lying trapped state, and continue the obtained solutions to smaller values of κ2. In agreement with the results reported in Sect. 3.1, we find that a weak nonzero solution U2 with t2 = 8 fs and ν2 ≈ 1.55 originates at κ2 ≈ 0.038 µm⁻¹. For κ2 < 0.038 µm⁻¹, the peak amplitude of the subpulse m = 1 continuously decreases while that for m = 2 increases. Below κ2 ≈ 0.007 µm⁻¹, subpulse U1 vanishes and U2 describes a fundamental soliton with pulse shape parameter ν2 = 1 and wavenumber κ2. To facilitate intuition, we included the amplitude of a free soliton with wavenumber κ2, i.e. peak amplitude Ũ0,2 = √(2κ2/γ″), in Fig. 4(d). Let us note that the intermediate parameter range 0.007 µm⁻¹ < κ2 < 0.038 µm⁻¹ bears tightly coupled pulse compounds, characterized by subpulse amplitudes with similar peak heights, see Fig. 4(d).
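A simplified stand-in for the Newton boundary-value approach of Ref. [85] is sketched below using SciPy's collocation solver. It assumes the symmetric parameter set of Sec. 2 and κ1 = κ2; note that the trivial zero solution also satisfies the boundary conditions, so convergence to a nontrivial pair depends on a reasonable initial guess.

```python
import numpy as np
from scipy.integrate import solve_bvp

# Coupled ODEs (18) as a first-order system y = (U1, U1', U2, U2').
b2a, b2b, ga, gb = -2.0, -2.0, 1.0, 1.0       # beta2', beta2'', gamma', gamma''
k1, k2 = 0.015625, 0.015625                   # kappa_1 = kappa_2: soliton-pair case

def rhs(t, y):
    U1, dU1, U2, dU2 = y
    ddU1 = (2.0/b2a) * (-k1*U1 + ga*(U1**2 + 2*U2**2)*U1)   # from Eq. (18a)
    ddU2 = (2.0/b2b) * (-k2*U2 + gb*(U2**2 + 2*U1**2)*U2)   # from Eq. (18b)
    return np.vstack((dU1, ddU1, dU2, ddU2))

def bc(ya, yb):   # decaying envelopes: U -> 0 at both domain edges
    return np.array([ya[0], yb[0], ya[2], yb[2]])

t = np.linspace(-60, 60, 201)
guess = np.zeros((4, t.size))
guess[0] = guess[2] = 0.1/np.cosh(t/8.0)      # sech-shaped trial envelopes
sol = solve_bvp(rhs, bc, t, guess)
print(sol.status, sol.y[0].max(), sol.y[2].max())  # peaks near sqrt(Ptilde_0) ~ 0.102
```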
Two-color soliton pairs
Upon closely assessing the results shown in Figs. 4(d-f), we find that at κ2 = 0.0156 µm⁻¹ a pair of matching solutions with plain hyperbolic-secant shape U_m(t) = U_{0,m} sech(t/t0), m = 1, 2, is attained. This can be traced back to the uniformity of Eqs. (18a) and (18b) for the considered set of parameters. Formally, by assuming κ ≡ κ1 = κ2 and U ≡ U1 = U2, both equations take the form of a standard NSE with modified parameters,

(β2′/2) Ü + κ U − 3γ′ U³ = 0, (19)

where, for convenience only, we used the parameters of Eq. (18a). The real-valued pulse envelope U is therefore characterized by the peak intensity P̃0 = |β2′|(3γ′t0²)⁻¹, and thus u1 = u2 = √(1/3) ≈ 0.57 in Fig. 4(d). Hence, at κ2 = 0.0156 µm⁻¹, both subpulses resemble true two-color soliton pairs: the pulse envelopes U1 and U2 both specify a fundamental NSE soliton; for each pulse, its binding partner modifies the nonlinear coefficient of the underlying NSE through XPM, helping the pulse sustain its shape. Consequently, both pulses can only persist conjointly as a bonding unit. This special case is consistent with a description of two-color pulse compounds in terms of incoherently coupled pulses [30]. By considering the ansatz Eq. (7), we can plug in the obtained pulse envelopes for U1 and U2 and resubstitute the parameters that define the propagation constant in Sect. 2 to obtain

F(z, t) = 2√P̃0 sech(t/t0) cos(Ω2 t) e^(i(β0+κ)z). (20)

Let us note that F is equivalent to the fundamental meta-soliton obtained in Ref. [26], which becomes evident when substituting ε = t0⁻¹ [3β2/(2|β4|)]^(−1/2) and µ0² = β2/t0². This fundamental meta-soliton was first formulated by Tam et al., when studying stationary solutions of the modified NSE (1) by putting emphasis on the time-domain representation of the field in terms of a multi-scales analysis [26]. This unveiled a large superfamily of solitons, now referred to as generalized dispersion Kerr solitons. We would like to point out that within the presented approach, i.e. by putting emphasis on the frequency-domain representation of two-color pulse compounds, the fundamental meta-soliton is derived with great ease. Furthermore, both approaches complement each other very well. We should note that the above two-color soliton pairs resemble vector solitons studied in the context of birefringent optical fibers [86,87,88,61,89,90]. The stationary propagation of the two-color soliton pair defined by Eq. (20) in terms of the modified NSE (1) is demonstrated in Figs. 5(a,b). The inset in Fig. 5(a) provides a close-up view onto the localized pulse, indicating interference fringes with period ∆t = 2π/Ω_gap = π√(|β4|/(6β2)) ≈ 1.3 fs that are due to the cosine in Eq. (20). These interference fringes appear stationary since the propagation scenario exhibits the symmetry β(Ω1) = β(Ω2) and κ1 = κ2. A spectrogram of the propagation scenario at z/z0 = 29.17 is shown in Fig. 6(a). A small amount of residual radiation can be seen to lie right on the curve β1(Ω)z, given by the short-dashed line in Fig. 6(a). It was emitted by the pulse compound during the initial propagation stage and is caused by the presence of higher orders of dispersion at the individual subpulse loci, which were neglected in the simplified description leading to Eq. (20).
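The closed-form pair of Eq. (20) is straightforward to construct numerically, e.g., as an initial condition for Eq. (1). The sketch below assumes the symmetric subpulse loci computed from Eq. (3b) and the Sec. 2 parameters; it is an illustration, not the original simulation code.

```python
import numpy as np

b2, b4, g, t0 = 1.0, -1.0, 1.0, 8.0
W2 = np.sqrt(6.0*b2/abs(b4))                 # subpulse locus Omega_2 (GV-matched pair)
b2_loc = b2 + 0.5*b4*W2**2                   # beta2 at the subpulse loci, Eq. (3c)
P0t = abs(b2_loc)/(3.0*g*t0**2)              # Ptilde_0 of the soliton pair

t = np.linspace(-40.0, 40.0, 4001)
F0 = 2.0*np.sqrt(P0t)*np.cos(W2*t)/np.cosh(t/t0)   # F(0, t) of Eq. (20)

dt_fringe = np.pi*np.sqrt(abs(b4)/(6.0*b2))        # intensity fringe period
print(f"peak |F| = {np.abs(F0).max():.3f} sqrt(W), fringe period = {dt_fringe:.2f} fs")
```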
Kushi-comb-like multi-frequency radiation
Previously, it was shown that z-periodic amplitude and width oscillations of two-color soliton molecules can be excited in a systematic manner by increasing their initial peak amplitude by some factor N according to F(z, t) ← NF(z, t) [26,55]. In analogy to usual nonlinear Schrödinger solitons, values N > 1 define higher order metasolitons. Recently, we have performed a comprehensive analysis of the amplitude oscillations of such higher order metasolitons, indicating that with increasing N, the number of spatial Fourier-modes needed to characterize their periodic peak-intensity variation, increases [55]. In other words, with increasing strength of perturbation of a soliton molecule, its dynamics changes from harmonic to nonlinear oscillations.
Degenerate multi-frequency radiation. To demonstrate amplitude and width oscillations, we show the propagation dynamics of a symmetric soliton molecule of order N = 1.8, based on the two-color soliton pair (20), in Figs. 5(c,d).
As can be seen from the time-domain dynamics in Fig. 5(c), the localized pulse exhibits periodic amplitude and width variations [close-up view in Fig. 5(c)], and emits radiation in either direction along the coordinate t in a symmetric fashion. Quite similar dynamics were obtained using the seeding approach in Figs. 4(b,c). The oscillation of the soliton molecule is also clearly visible in the spectrum shown in Fig. 5(d). As evident from Fig. 5(f), at z/z0 ≈ 29.17 it exhibits comb-like bands of frequencies in the vicinity of the subpulse loci Ω1 and Ω2. The location of these newly generated frequencies can be understood by extending existing approaches for the derivation of resonance conditions [91,92,93,94,95] to two-color pulse compounds [70,55]. Below, we summarize these resonance conditions, which were obtained by assuming a dynamically evolving pulse compound with z-periodic amplitude variations, as detailed in Ref. [70].
Non-degenerate multi-frequency radiation. Let us note that, due to the wide variety of two-color pulse compounds with different substructure, their emission spectra manifest in various forms. For example, considering a pair of group-velocity matched detunings different from the one considered above, the degeneracy among Eqs. (22) can be lifted. Subsequently we take Ω1 = −2.674 rad/fs and Ω2 = 2.134 rad/fs, for which β1′ = β1″ = 0.514 fs/µm, β2′ = −2.576 fs²/µm, and β2″ = −1.278 fs²/µm. In terms of the coupled ODEs (18) we then determine a pair of simultaneous solutions, which specify the initial condition via Eq. (7) with envelopes of the form U_m(t) = U_{0,m} sech^(ν_m)(t/t_m) and parameters U_{0,1} = 0.050 √W, U_{0,2} = 0.141 √W, t1 = 7.207 fs, t2 = 7.271 fs, ν1 = 0.901, and ν2 = 1.022. The stationary propagation of this soliton molecule with non-identical subpulses is shown in Figs. 5(g,h). As a consequence of the broken subpulse symmetry, the interference fringes that characterize the pulse compound are not stationary any more [close-up view in Fig. 5(g)]. The fact that the pulse compound remains localized, despite its envelope exhibiting a non-stationary profile, might be the reason why no such objects could be found using a time-domain based Newton conjugate-gradient method [26]. Next, we increase the order of this soliton molecule to N = 1.6, resulting in the propagation dynamics with z-oscillation period Λ ≈ 106 µm ≈ 3.3 z0 shown in Figs. 5(i,j). In this case, a pronounced multi-peaked spectral band of frequencies within the domain of normal dispersion is excited [see Figs. 5(j,l)]. These newly generated frequencies can be linked to multi-frequency Cherenkov radiation emitted by the subpulse at Ω2, as can be seen from the graphical solution of the resonance conditions (22a), shown in Fig. 5(k). Let us note that similar coupling phenomena of localized states to the continuum have earlier been observed for solitons in periodic dispersion profiles [93], oscillating bound solitons in twin-core fibers [94], and dissipative solitons in nonlinear microring resonators [95]. A further band of frequencies, excited in the vicinity of Ω ≈ 3.5 rad/fs, can be attributed to FWM resonances described by Eq. (22b). A spectrogram of the propagation scenario at z/z0 = 28.6 is shown in Fig. 6(c), unveiling that the resonant radiation emanates from the oscillating soliton molecule in a pulse-wise fashion.
Summary and conclusions
In summary, we have discussed several aspects of the z-propagation of two-color pulse compounds in a modified NSE with positive group-velocity dispersion coefficient and negative fourth-order dispersion coefficient. Therefore, we considered the interaction dynamics of two pulses in distinct domains of anomalous dispersion, group-velocity matched despite a large frequency gap.
We have demonstrated that their mutual confining action can manifest itself in different forms, depending on the relative strength of SPM and XPM felt by each pulse. In the limiting case where the resulting bound states consist of a strong trapping pulse, given by a soliton, and a weak trapped pulse, we have shown that optical analogues of quantum mechanical bound states can be realized that are determined by a Schrödinger-type eigenvalue problem [24]. The resulting photonic meta-atoms even support Rabi-type oscillations of their trapped states, similar to the recurrence dynamics of wave packets in quantum wells [67]. We further probed the limits of stability of these meta-atoms by imposing a group-velocity mismatch between the trapping soliton and the trapped pulse. With increasing strength of perturbation, parts of the trapped state escape the soliton, similar in effect to the ionization of quantum mechanical atoms. These findings complement our earlier results on the break-up dynamics of two-color pulse compounds [25].
For the more general case where the mutual confining action between the pulses is dominated by XPM, we have discussed a simplified modeling approach, allowing to determine simultaneous solutions for the bound pair of pulses. The resulting solutions feature the above meta-atoms as limiting cases when the disparity of the subpulse amplitudes is large. Further, by exploiting symmetries of the underlying propagation model, a special class of solutions, forming true two-color soliton pairs [30], was characterized in closed form. This special class of solutions, referred to as generalized dispersion Kerr solitons, has also been derived in Ref. [26]. We have presented numerical results demonstrating the complex propagation dynamics of such pulse compounds, which we here referred to as two-color soliton molecules. Specifically, we have shown that soliton molecules exhibit highly robust vibrational characteristics, a behavior that is difficult to achieve in a conservative NSE system. These non-stationary, z-periodic dynamics of the subpulses trigger the emission of resonant radiation. The location of the resulting multi-peaked spectral bands can be precisely predicted by means of phase-matching conditions [70,55]. Due to the manifold of soliton molecules with different substructure, their emission spectra manifest in various complex forms. Most notably, if the oscillating soliton molecule consists of a pair of identical subpulses, inherent symmetries lead to degeneracies in the resonance spectrum, causing their spectrogram trace to resemble the shape of Japanese Kushi combs. Additional perturbations lift existing degeneracies and result in more complex emission spectra which are characterized by distinct spectral bands that can be separately linked to resonant Cherenkov radiation and additional four-wave mixing processes. The occurrence of such multifrequency radiation, especially in the degenerate form, comprises a fundamental phenomenon in nonlinear waveguides with multiple zero-dispersion points and sheds light onto the puzzling propagation dynamics of two-frequency pulse compounds, resembling the generation of radiation by vibrating molecules.
Finally, let us note that we recently extended the range of systems in which such two-color pulse compounds are expected to exist. To this end, we considered waveguides with a single zero-dispersion point and a frequency dependent nonlinearity with a zero-nonlinearity point [96,97]. In such waveguides, soliton dynamics in a domain of normal dispersion can be achieved by a negative nonlinearity [98,99]. In the corresponding description of pulse compounds in terms of the simplified model (8), having β2′ < 0 and β2″ > 0 then requires γ′ > 0 and γ″ < 0, and the potential well in the eigenproblem corresponding to Eq. (11) is ensured by γ″ < 0 [54]. We studied the above binding mechanism for incoherently coupled two-color pulse compounds in such waveguides, demonstrating meta-atoms and molecule-like bound states of pulses that persist in the presence of the Raman effect [31,54], allowing us to understand the complex propagation dynamics observed in a recent study on higher-order soliton evolution in a photonic crystal fiber with one zero-dispersion point and frequency dependent nonlinearity [100]. | 2023-03-03T02:16:18.393Z | 2023-03-01T00:00:00.000 | {
"year": 2023,
"sha1": "4ec2907cbf543531e2128a330d553bb062a09ce2",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "4ec2907cbf543531e2128a330d553bb062a09ce2",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
265180539 | pes2o/s2orc | v3-fos-license | Machine Learning-Based Self-Interference Cancellation for Full-Duplex Radio: Approaches, Open Challenges, and Future Research Directions
In contrast to the long-held belief that wireless systems can only work in half-duplex mode, full-duplex (FD) systems are able to concurrently transmit and receive information over the same frequency bands to theoretically enable a twofold increase in spectral efficiency. Despite their significant potential, FD systems suffer from an inherent self-interference (SI) due to a coupling of the transmit signal to its own FD receive chain. Self-interference cancellation (SIC) techniques are the key enablers for realizing the FD operation, and they could be implemented in the propagation, analog, and/or digital domains. Particularly, digital domain cancellation is typically performed using model-driven approaches, which have proven to be insufficient to seize the growing complexity of forthcoming communication systems. For the time being, machine learning (ML) data-driven approaches have been introduced for digital SIC to overcome the complexity hurdles of traditional methods. This article reviews and summarizes the recent advances in applying ML to SIC in FD systems. Further, it analyzes the performance of various ML approaches using different performance metrics, such as the achieved SIC, training overhead, memory storage, and computational complexity. Finally, this article discusses the challenges of applying ML-based techniques to SIC, highlights their potential solutions, and provides a guide for future research directions.
In the past few decades, researchers have drawn attention to canceling the SI in IBFD systems. Generally, the SIC can be performed in the analog and/or digital domains. Analog domain cancellation can be performed passively at the radio frequency (RF), i.e., propagation level, using antenna isolation [21], beamforming [28], polarized antennas [44], circulators [45], and/or hybrid junction networks [46]. Alternatively, analog domain cancellation can be carried out actively by generating a pre-processed copy of the SI signal, which is exploited to cancel the original SI signal at the Rx chain. Analog domain cancellation is often incapable of suppressing the SI signal to the Rx noise floor level. As a consequence, additional focus has been directed to canceling the SI at the baseband level using digital domain cancellation [47], [48], [49], [50], [51], [52], [53], [54], [55], [56]. At low or moderate transmit power levels, the digital domain cancellation is typically performed using linear cancelers, which reconstruct an estimated copy of the SI signal based on techniques such as least-squares (LS) channel estimation [47], [49], [53]. However, at high transmit power levels, such cancellation becomes insufficient to entirely suppress the SI to the Rx noise floor due to the stringent non-linear behavior of the FD transceiver's components, such as the power and low-noise amplifiers (PA and LNA) [47], [49], [52]. Thus, non-linear digital cancellation is applied along with the linear cancellation to bring the SI to the Rx noise floor level. The non-linear SIC is conventionally performed using model-driven approaches, e.g., polynomial models, which are shown to fit well in practice; however, they need many trainable parameters that, in turn, translate to higher computational requirements [57].
The major contributions of this work are as follows:
• We have highlighted the main challenges and potential research directions for the successful adoption of ML approaches for canceling the SI in FD transceivers.
The rest of this article is organized as follows. Section II introduces the ML-based FD system model. Section III summarizes the traditional approaches for SIC in FD transceivers. Section IV reviews the up-to-date contributions that apply ML approaches for SIC. Simulation results are presented in Section V, challenges and future directions are summarized in Section VI, and finally, concluding remarks are drawn in Section VII. The detailed organization of this article is depicted in Fig. 1.
II. ML-BASED FD SYSTEM MODEL
The system model, consisting of an FD transceiver with a single transmit and a single receive antenna, RF, and digital cancellation stages, is illustrated in Fig. 2. At the Tx chain, the digital baseband samples, denoted by x(n), with n as the sample index, are first distorted by the in-phase and quadrature-phase (IQ) imbalance of the mixer and then by the non-linearities of the PA. The digital equivalent of the baseband transmitted signal at the output of the Tx chain can be expressed as [99], [100], [101]

x_t(n) = Σ_{p=1, p odd}^{P} Σ_{m=0}^{M_PA} h_{m,p} x_IQ(n − m)^((p+1)/2) (x_IQ*(n − m))^((p−1)/2), (1)

with x_IQ(n) as the IQ mixer's output signal and (.)* as the complex conjugate operator, whereas M_PA, h_{m,p}, and P are the memory depth, impulse response, and non-linearity order of the PA, respectively. In (1), p is an odd number, i.e., only the odd-order non-linearities are taken into account, e.g., p ∈ {3, 5, ..., 9}, as the even-order non-linearities are out-of-band and are filtered by the Rx's analog and digital filters [100]. The transmitted signal x_t propagates through an SI channel, forming an inevitable SI at the Rx chain. As a consequence, the received signal at the output of the Rx chain, i.e., at the output of the analog-to-digital converter (ADC), can be written as [127]

y(n) = y_SoI(n) + y_SI(n) + w(n), (2)

where w(n) ∼ CN(0, σ²) denotes the thermal noise, which is complex-valued Gaussian distributed with zero mean and variance σ², y_SoI(n) indicates the received signal of interest (SoI), and y_SI(n) represents the SI signal, which can be expressed as [99], [100], [101]

y_SI(n) = Σ_{p=1, p odd}^{P} Σ_{q=0}^{p} Σ_{m=0}^{M_i} h_{m,q,p} x(n − m)^q (x*(n − m))^(p−q), (3)

with h_{m,q,p} as the impulse response of an overall channel containing the total effect of all transceiver impairments, e.g., PA non-linearities, IQ imbalance, and SI channel, and M_i as the memory effect introduced by the PA, SI channel delay spread at the Rx, etc.
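For illustration, the following Python sketch generates a toy baseband signal and passes it through an IQ-imbalance stage and a memory-polynomial PA of the form of Eq. (1). The imbalance coefficients and PA impulse responses are assumed toy values, and the delays are realized as circular shifts for brevity; none of this reproduces measured hardware.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M_PA, P = 4096, 2, 7            # samples, PA memory depth, non-linearity order

x = (rng.standard_normal(N) + 1j*rng.standard_normal(N))/np.sqrt(2)

# IQ imbalance: a common widely-linear model x_iq = K1*x + K2*conj(x);
# K1, K2 are illustrative values, not measured mixer parameters.
K1, K2 = 1.0, 0.05*np.exp(1j*0.3)
x_iq = K1*x + K2*np.conj(x)

# Eq. (1): memory-polynomial PA with odd-order terms
#   x_t(n) = sum_{p odd} sum_m h_{m,p} x_iq(n-m)^((p+1)/2) conj(x_iq(n-m))^((p-1)/2)
x_t = np.zeros(N, complex)
for m in range(M_PA + 1):
    xd = np.roll(x_iq, m)                       # delayed input x_iq(n - m)
    for p in range(1, P + 1, 2):
        h_mp = (0.5**m)*(0.05**((p - 1)//2))    # toy, decaying coefficients
        x_t += h_mp * xd**((p + 1)//2) * np.conj(xd)**((p - 1)//2)
print(x_t[:2])
```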
To better evaluate the capabilities of the SI cancelers to suppress the SI signal properly, we assume, for simplicity, that there is no SoI from any other FD transmit receive points (TRPs) and no mutual interference from any base station transmitting at the same frequency [90], [96], [97], [99], [100], [101]; hence, the received signal at the Rx chain's output ends up being the SI signal plus noise. The objective of the digital SI canceler is thus to suppress the SI to the Rx noise floor level. To that end, we first estimate the linear SI channel (i.e., causing the linear SI component) using traditional LS channel estimation, which is performed for the case of a single transmit and a single receive antenna as follows [99], [100], [101]:

ĥ = (X_tr^H X_tr)^(−1) X_tr^H y_tr, (4)

with (.)^(−1) and (.)^H as the inverse and conjugate transpose operators, respectively. The channel estimate ĥ ∈ C^(M_i×1), while X_tr ∈ C^((N_tr−M_i)×M_i) and y_tr ∈ C^((N_tr−M_i)×1) are respectively formed as the Toeplitz data matrix whose rows collect the M_i most recent training samples, [x(n), x(n − 1), ..., x(n − M_i + 1)] for n = M_i, ..., N_tr − 1, and the corresponding vector of received samples, y_tr = [y(M_i), ..., y(N_tr − 1)]^T, with N_tr as the number of training samples and (.)^T as the transpose operator. Upon estimating the SI channel ĥ, the linear SI component can be respectively reconstructed in the training and testing phases as follows:

ŷ_SI,lin^tr = ĥ ⊗ x_tr, ŷ_SI,lin^ts = ĥ ⊗ x_ts,

where ⊗ indicates the convolution operator.
Here, x_tr is formed from the training samples, and x_ts is constructed similarly to x_tr from the testing samples (not from the training samples) by replacing N_tr with N_ts, where N_ts represents the number of testing samples. Note that, upon performing the convolution, the sequences ŷ_SI,lin^tr are resized to be aligned with the dimension of y_tr.
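The LS channel estimate of Eq. (4) and the resulting linear cancellation can be sketched in a few lines of Python; the SI channel taps and noise level below are assumed toy values used only to exercise the procedure.

```python
import numpy as np

def ls_linear_canceler(x, y, M):
    """LS estimate of the linear SI channel [Eq. (4)] from training data,
    using a Toeplitz data matrix of M delayed input samples."""
    X = np.array([x[n-M+1:n+1][::-1] for n in range(M-1, len(x))])  # rows [x(n)..x(n-M+1)]
    h = np.linalg.solve(X.conj().T @ X, X.conj().T @ y[M-1:len(x)])
    return h

# Toy linear SI channel plus noise ~40 dB below the SI power.
rng = np.random.default_rng(1)
x = (rng.standard_normal(2000) + 1j*rng.standard_normal(2000))/np.sqrt(2)
h_true = np.array([0.8, 0.3+0.2j, -0.1j])
noise = 0.01*(rng.standard_normal(2000) + 1j*rng.standard_normal(2000))/np.sqrt(2)
y = np.convolve(x, h_true)[:len(x)] + noise

h_hat = ls_linear_canceler(x, y, M=3)
y_lin = np.convolve(x, h_hat)[:len(x)]
sic_db = 10*np.log10(np.mean(np.abs(y)**2)/np.mean(np.abs(y - y_lin)**2))
print(np.round(h_hat, 3), f"linear SIC ~ {sic_db:.1f} dB")
```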
The non-linear SI component, employed to train the ML approaches, e.g., NNs and SVRs, is obtained by subtracting the linear component from the original SI signal as follows:

ỹ_SI,nl^tr = y_tr − ŷ_SI,lin^tr.

Since the ML approaches are typically trained using real-valued inputs, we separate the real and imaginary parts of X_tr and construct the input feature map Z_nl^tr from samples z(n), for the case of the ML algorithms trained using the input samples only. However, for those trained with the input and output samples, z(n) will also include a part of the output samples, as will be discussed later in Section IV. Upon constructing the input feature map Z_nl^tr, we separate the real and imaginary parts of ỹ_SI,nl^tr to serve as labels for training. Thus, during the training phase of the non-linear canceler, the input feature map Z_nl^tr is utilized with Re{ỹ_SI,nl^tr} and Im{ỹ_SI,nl^tr} to generate the modeling functions f_Re(.) and f_Im(.), associated with approximating the real and imaginary parts of the non-linear SI signal, respectively. The real and imaginary parts can then be respectively predicted in the testing phase as

Re{ŷ_SI,nl^ts} = f_Re(Z_nl^ts), Im{ŷ_SI,nl^ts} = f_Im(Z_nl^ts),

where Z_nl^ts is the non-linear canceler's testing matrix, which is formed similarly to Z_nl^tr, but with N_tr replaced by N_ts. Based on the aforementioned description, the non-linear SI signal is obtained by combining the real and imaginary parts as

ŷ_SI,nl^ts = Re{ŷ_SI,nl^ts} + j Im{ŷ_SI,nl^ts}.

The estimated SI signal, i.e., after applying the linear and non-linear cancellations, can be expressed as

ŷ_SI^ts = ŷ_SI,lin^ts + ŷ_SI,nl^ts,

and the residual SI signal can be written as

y_res^ts = y_ts − ŷ_SI^ts.

The total SIC achieved upon applying the linear and non-linear cancellations can be quantified in dB as

SIC = 10 log10( Σ_n |y(n)|² / Σ_n |y_res(n)|² ),

with y(n) and y_res(n) as the n-th samples of y_ts and y_res^ts, respectively.
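The real-valued feature construction and the SIC metric above translate directly into code; the regressors f_Re and f_Im are omitted here (they could be small NNs or SVRs), and the array sizes are illustrative.

```python
import numpy as np

def feature_map(x, M):
    """Real-valued input features: real/imag parts of M delayed complex samples."""
    Z = np.array([x[n-M+1:n+1][::-1] for n in range(M-1, len(x))])
    return np.hstack([Z.real, Z.imag])

def sic_db(y, y_res):
    """Total achieved SIC in dB: 10 log10( sum|y|^2 / sum|y_res|^2 )."""
    return 10*np.log10(np.sum(np.abs(y)**2)/np.sum(np.abs(y_res)**2))

# Sketch of the feature-map stage on placeholder data:
rng = np.random.default_rng(2)
x = (rng.standard_normal(1000) + 1j*rng.standard_normal(1000))/np.sqrt(2)
Z = feature_map(x, M=4)          # would feed f_Re, f_Im during training/testing
print(Z.shape)                   # (997, 8): 4 complex taps -> 8 real features
```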
III. TRADITIONAL APPROACHES FOR SIC IN FD TRANSCEIVERS
Canceling the SI in FD transceivers can be performed using various techniques that span the propagation, analog, and/or digital domains [28], [43], as summarized in Fig. 3. The following subsections briefly review such SIC approaches, discussing their advantages, disadvantages, and/or challenges.
A. PROPAGATION DOMAIN SELF-INTERFERENCE CANCELLATION
Canceling the SI within the propagation domain is typically performed at the early stage of the FD transceiver, i.e., it revolves around the Tx and Rx antennas. Propagation domain cancellation can be accomplished passively using techniques such as antenna separation, coupling networks, phase control, cross-polarization, and/or surface treatments [28], [43], as shown in Fig. 3. Alternatively, it can be done actively using techniques such as active coupling networks, active cross-polarization, and/or Tx beamforming [43]. Additionally, antenna interfaces, such as balanced duplexers and circulators, can also be employed, as shown in Fig. 3. Applying the SIC within the propagation domain has the advantage of preventing the SI signal from saturating the front end of the FD Rx; however, in some cases, it may lead to the suppression of the desired signal (i.e., the SoI) [28]. It can also come at the cost of adding hardware circuitry to the FD transceiver. Hence, the focus is directed to additionally canceling the SI in other signal domains, e.g., the analog and digital domains.
B. ANALOG DOMAIN SELF-INTERFERENCE CANCELLATION
Canceling the SI within the analog domain is performed in the analog circuits between the antennas and the digital conversion stages [28], [43]. Analog domain cancellation approaches have been classified based on their architecture, location, and tunability, as depicted in Fig. 3 [43]. One of the common architectures for analog domain cancellation is to use digitally-assisted techniques based on auxiliary transmit chains [43]. Digitally assisted analog domain cancellation has the advantage of preventing the SI signal from saturating the ADC, especially in mobility channel environments. However, it can result in an auxiliary-transmit noise floor desensitization problem at the Rx. In addition to the Rx desensitization, the processing in the analog domain can be very costly and challenging to scale up to a higher number of antennas (i.e., the multiple-input multiple-output (MIMO) scenario) [28]. The focus is thus directed to additionally canceling the SI in the digital domain, considering that the propagation and analog domain SIC have sufficient performance to provide the optimal dynamic range to the Rx's ADC.
C. DIGITAL DOMAIN SELF-INTERFERENCE CANCELLATION
Canceling the SI in the digital domain is performed after the ADC using techniques such as channel modeling and/or Rx beamforming, as shown in Fig. 3. Digital domain approaches applying channel modeling techniques use the fact that the Rx of any IBFD TRP has knowledge of its transmitted signal in order to model the transceiver's impairments. Specifically, in channel modeling-based SIC, linear, widely linear, and reference-based models are applied to approximate the SI channel effects. Additionally, non-linear models, such as Wiener, Hammerstein, Wiener-Hammerstein, and parallel Hammerstein, are employed to model the transceiver's non-linearities, as shown in Fig. 3. Digital domain cancellation has the advantage that the processing becomes relatively easy to perform and less hardware-expensive compared to analog domain cancellation [28]; however, it can come at the cost of increasing the computational complexity of the FD transceiver [57]. From the previous discussion, applying the traditional approaches for SIC in FD transceivers can come with challenges, such as imposing extra hardware, higher cost, and/or additional computational complexity. In contrast, applying ML approaches for SIC in FD communications can relax such requirements, as reported in [90], [95], [96], [97], [99], [100], [101]. Given these potentials, more research efforts have recently been spurred to cancel the SI in FD transceivers using ML approaches. This article provides an in-depth survey of using digital domain SIC based on ML non-linearity modeling techniques to tackle the SIC problem in FD transceivers.
IV. ML-BASED APPROACHES FOR SIC IN FD TRANSCEIVERS
Fig. 4 summarizes the up-to-date contributions for applying ML-based approaches for SIC in FD transceivers. As can be seen from the figure, the SIC in FD systems can be performed using traditional ML approaches, such as NNs and SVRs. Also, advanced ML techniques, such as TC, TensorFlow graphs, and random Fourier features (RFFs), integrated with online learning, have been investigated for modeling the SI in FD transceivers. Other ML approaches, such as dynamic regression (DR), GMMs, DU, LL, and the adaptive projected subgradient method (APSM), have also been studied for SIC. Among the different ML approaches applied for SIC, one can notice that NNs are the most popular due to their proven capabilities in modeling non-linearities with reduced complexity compared to other ML techniques. In this section, we aim to review and summarize the up-to-date research progress in applying ML-based approaches for SIC in FD transceivers.
A. NEURAL NETWORK (NN)-BASED SIC APPROACHES
Broadly speaking, canceling the SI in FD transceivers using ML mostly relies on NNs to make use of their potential compared to other ML approaches. As can be seen from the right-hand side of Fig. 4, a broad range of NN architectures, ranging from typical NNs to customized NN architectures, such as grid-based NNs, hybrid-layers NNs, and adaptive NNs, have been introduced for SIC in FD transceivers. The following subsections review and summarize the recent advances in applying NNs to SIC in FD transceivers.
1) TYPICAL STRUCTURES
The first attempt to apply NNs for SIC in FD transceivers was made in [90], where a shallow feed-forward NN (FFNN) is utilized to approximate the non-linear SI signal. The FFNN in [90] is constructed, similarly to the real-valued time delay NN (RV-TDNN) in [128], from an input layer fed by real-valued inputs consisting of current and past samples of the input signal to consider the FD system's memory effect. The current and past samples are then transferred to a hidden layer to detect the FD system's non-linearities and finally to an output layer to estimate the target non-linear SI signal, as can be observed from Fig. 5(a-i). Simulation results show that the RV-TDNN could be beneficial from memory storage and computational complexity perspectives when compared to the polynomial model, a general form of the widely utilized parallel Hammerstein model [122], at a similar SIC performance [90]. The hardware implementation of the NN-based cancelers is investigated in [95], [96], where the RV-TDNN-based canceler is proved to be efficient in terms of area and energy consumption when compared to the polynomial-based canceler at a similar performance.
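For orientation, the following Keras sketch shows the shape of such an RV-TDNN-style canceler; the memory length, the 17 hidden neurons, the ReLU activation, and the training settings are illustrative choices, not the tuned values from [90].

import tensorflow as tf

m_i = 13                                                      # memory length (illustrative)
model = tf.keras.Sequential([
    tf.keras.layers.Dense(17, activation="relu",
                          input_shape=(2 * m_i,)),            # Re/Im of current + past samples
    tf.keras.layers.Dense(2),                                 # [Re, Im] of the non-linear SI
])
model.compile(optimizer="adam", loss="mse")
# model.fit(Z_tr, Y_tr, batch_size=32, epochs=20)  # Z_tr: (N, 2*m_i), Y_tr: (N, 2)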
In [97], a typical recurrent NN (RNN) is introduced for canceling the interference in FD transceivers. The RNN [97] is trained similarly to the RV-TDNN using real-valued inputs consisting of current and past samples with memory. Contrary to the RV-TDNN [90], the RNN employs both forward and recurrent connections to enhance the learning capabilities [97], as can be seen from Fig. 5(a-ii). Applying a shallow RNN, with a single hidden layer, for canceling the SI in FD transceivers can be beneficial from memory and computational complexity perspectives when compared to the typical RV-TDNN at a similar cancellation performance [97].
In [97], [98], a complex-valued time delay NN (CV-TDNN) is investigated for canceling the FD system's SI. As can be observed from Fig. 5(a-iii), the CV-TDNN has a similar network architecture to that of the RV-TDNN [90], while employing only one neuron instead of two at the output layer. As its name implies, the CV-TDNN is trained using complex-valued inputs and labels instead of the real-valued ones utilized in the case of the RV-TDNN and RNN. Simulation results show that a shallow CV-TDNN-based canceler could be beneficial in terms of computational complexity when compared to its RV-TDNN and RNN counterparts at a similar SIC performance [90], [97].
2) GRID-BASED STRUCTURES
In [99], two grid-based NN structures, termed the ladder-wise grid structure (LWGS) and the moving-window grid structure (MWGS), are introduced for modeling the interference in FD transceivers. The LWGS and MWGS are trained using complex-valued data and built from a grid of connections, analogous to the nodes in fully-connected NNs, between the input, hidden, and output layers' neurons, as shown in Fig. 5(b). As their names imply, the LWGS constructs the connections between the layers' neurons based on a ladder-wise topology, while the MWGS employs a moving-window technique to arrange the connections, as can be seen from Fig. 5(b-i) and (b-ii), respectively. Using such a grid topology, the LWGS and MWGS exploit fewer connections between the input and hidden layers' neurons to reduce the number of required parameters and, as a consequence, relax the computational complexity compared to their fully-connected NN counterparts. Simulation results indicate that the LWGS and MWGS [99] could achieve a comparable performance to that of the CV-TDNN [97] while being beneficial in terms of memory storage and computational complexity.
3) HYBRID-LAYERS STRUCTURES
In [100], two hybrid-layers NN architectures, referred to as the hybrid convolutional recurrent NN (HCRNN) and the hybrid convolutional recurrent dense NN (HCRDNN), have been introduced for learning the FD system's SI. The HCRNN and HCRDNN are trained using real-valued inputs and built using a combination of different NN layers, such as convolutional, recurrent, and dense layers, as shown in Fig. 5(c). The HCRNN and HCRDNN exploit the advantages of each layer in their network design to make use of their combined characteristics and improve the learning capabilities compared to the typical and grid-based NN architectures [90], [97], [99]. In particular, the HCRNN relies on a convolutional layer to use the weight-sharing property to reduce the number of required parameters and, consequently, relax the computational complexity. Further, it depends on a recurrent hidden layer to use its ability to learn temporal behavior. On the other hand, the HCRDNN relies on an additional dense layer, added after the convolutional and recurrent layers, to build a highly predictive NN model with low computational complexity requirements. The HCRNN and HCRDNN [100] are shown to be beneficial from the computational complexity perspective while achieving a similar SIC performance compared to the typical and grid-based structures, albeit at the cost of increased memory requirements [90], [97], [99].
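A Keras sketch of this layer mix is shown below; the channel count, kernel size, hidden sizes, and activations are illustrative assumptions, not the tuned hyperparameters of [100].

import tensorflow as tf

m_i = 13
hcrnn = tf.keras.Sequential([
    tf.keras.layers.Conv1D(8, kernel_size=3, activation="relu",
                           input_shape=(m_i, 2)),   # weight sharing over Re/Im channels
    tf.keras.layers.SimpleRNN(8),                   # learns the temporal behavior
    tf.keras.layers.Dense(2),                       # [Re, Im] of the non-linear SI
])
# An HCRDNN-style variant would append an extra Dense layer before the output.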
4) OUTPUT FEEDBACK STRUCTURES
In [101], two output feedback (OF)-based NN structures, namely the two-hidden-layers NN (2HLNN) and the dual-neurons two-hidden-layers NN (DN-2HLNN), have been introduced for canceling the SI in FD transceivers. As their names imply, the OF-based NN structures exploit a part of the output samples, fed back through a buffer to the input layer, as features for training. In other words, the OF-based NN structures are trained using an input feature map that considers not only the input samples as features for training but also the output samples, as shown in Fig. 5(d). Feeding part of the output samples for training helps to consider the effect of over-the-air SI propagation delay spread, which in turn enhances the learning capabilities and, as a consequence, improves the SIC performance compared to the NN structures trained only on the input samples. In the 2HLNN, a full connection is established between the input features, including both input and output samples, and the first hidden layer's neurons, as shown in Fig. 5(d-i). However, in the DN-2HLNN, the input features are not fully connected in the traditional manner to the first hidden layer's neurons. The features related to the input samples are connected to one neuron to recognize the input signal's memory effect, while those related to the output samples are connected to another neuron to recognize the output signal's memory effect, as shown in Fig. 5(d-ii). Simulation results [101] reveal that the DN-2HLNN could be beneficial from memory storage and computational complexity perspectives while achieving a similar SIC performance to that of the LWGS, MWGS, HCRNN, HCRDNN, and 2HLNN [99], [100], [101].
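The feature map that these OF structures consume can be assembled as sketched below; the exact buffering convention (how many past outputs, and their ordering) is an assumption for illustration.

import numpy as np

def of_feature_map(x, y, m_i, m_o):
    # z(n) = [x(n), ..., x(n-m_i+1), y(n-1), ..., y(n-m_o)], real/imag parts stacked
    rows = []
    for n in range(max(m_i - 1, m_o), len(x)):
        z = np.concatenate([x[n - m_i + 1:n + 1][::-1],   # current and past inputs
                            y[n - m_o:n][::-1]])          # buffered past outputs
        rows.append(np.concatenate([z.real, z.imag]))
    return np.array(rows)

# Each row has 2*(m_i + m_o) real features; the output part lets the model
# capture the over-the-air SI propagation delay spread.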
5) ADAPTIVE STRUCTURES
In [102], a channel-adaptive NN structure, referred to as the channel-robust NN (CHRNN), has been integrated with an LS-based linear canceler to model the SIC in FD transceivers over time-varying SI channels. In more detail, in [102], a linear canceler is trained continuously in each frame to estimate the channel coefficients, and a pre-trained NN is then utilized to construct the non-linear SI signal based on either raw or processed channel coefficients, as shown in Fig. 5(e-i) and (e-ii), respectively. For the former, the pre-trained NN is fed directly with the estimated channel coefficients obtained by the linear canceler, whereas for the latter, the pre-trained NN is fed with a processed version of the estimated channel coefficients [102, eq. (7)]. Simulation results indicate that the CHRNN learns well when it is fed with processed channel coefficients rather than the raw ones. Further, the results reveal that the CHRNN-based canceler could lead to reductions in computational complexity while attaining a similar performance to that of the polynomial-based canceler adapted to handle time-varying SI channels [102].
6) DEEP STRUCTURES
The concept of DL has also been introduced for modeling the interference in FD transceivers. In [97], deep versions of the typical RV-TDNN, RNN, and CV-TDNN, as shown in Fig. 5(f), have been introduced to model the SIC with lower memory and complexity. Using deep rather than shallow NNs is motivated by the fact that a deep NN with a small number of neurons in each layer, i.e., lower memory storage and computational complexity, can typically generalize better than a shallow NN with a large number of neurons in one layer [89]. Simulation results show that a deep CV-TDNN could be beneficial from memory storage and computational complexity perspectives while achieving a similar performance to that of a shallow CV-TDNN [97]. However, this is not applicable in all cases, as using a deep RNN increases the memory storage and computational complexity compared to the shallow RNN due to the use of many recurrent connections. Finally, adapting a deep RV-TDNN for SIC decreases the complexity while increasing the memory storage compared to its shallow counterpart [97]. The concept of DL has also been studied for SIC in FD systems in other contexts, such as [103], [104], [105].
B. SUPPORT VECTOR REGRESSOR (SVR)-BASED SIC APPROACHES
Despite NNs being extensively used for SIC in FD transceivers, NN-based cancelers are prone to some inherent drawbacks of NN models, such as high training complexity and weak generalization when few examples are available for the training process. To overcome such bottlenecks, SVRs, variants of support vector machines, have recently been introduced as alternatives to NNs for modeling the SI. The initial attempt at applying SVRs for SIC is presented in [106] for frequency division duplex (FDD) transceivers, not for FD transceivers, where an SVR model is employed to generate a replica of the undesired transmit-leakage-based second-order intermodulation distortion (IMD2) signal. Applying SVRs for SIC in FD systems followed in a few works [107], [108]. The following subsections review and summarize the few attempts to apply SVRs to cancel the SI in FD transceivers.
1) NESTED-BASED APPROACHES
The first attempt to apply SVRs for SIC in FD transceivers is made in [107], where a non-linear time-delay SVR (TDSVR)-based canceler is integrated with a linear canceler, in a nested scenario, to suppress the SI signal down to the Rx noise floor level. The nested TDSVR (NTDSVR), shown in Fig. 6(i), is trained using an input feature map that considers the real and imaginary parts of the current and past input samples. Besides, the odd higher-order terms of the input samples (with memory) are also considered for training. The output labels for training the NTDSVR are created by first estimating the SI channel; thereafter, inverse filtering is applied to remove the effect of the linear SI channel [107]. Upon eliminating the channel effect, the output samples, denoted by ỹ_SI,nes in Fig. 6(i) and including the impact of the non-linearity only, serve as labels to train the NTDSVR. After the non-linear SI component is reconstructed, the linear channel is then applied for linear component reconstruction. The estimated SI signal, including the linear and non-linear components, is then subtracted from the original SI signal to perform the SIC. Simulation results show that the NTDSVR-based canceler is beneficial in terms of SIC performance enhancement compared to the conventional non-linear polynomial-based cancelers [107].
2) RESIDUAL-BASED APPROACHES a) RTDSVR: The second attempt to apply SVRs for SIC in FD transceivers is investigated in [108], where a residual-based TDSVR (RTDSVR) is introduced. The input feature map to train the RTDSVR is constructed similarly to the nested approach [107]. However, the output labels are created differently, based on the residual output signal after applying the linear SIC, as can be seen from Fig. 6(ii). Particularly, in the residual scheme, the linear SI channel is first estimated, and then the linear SI signal's component is fully reconstructed. The estimated linear SI signal is then subtracted from the original SI signal, and the residual SI signal, denoted by ỹ_SI,nl and involving the non-linear component only, is utilized for training the RTDSVR. Upon reconstructing the non-linear SI, it is combined with the linear one before being subtracted from the original SI to perform the SIC. Simulation results reveal a superiority of the RTDSVR in improving the SIC compared to the NTDSVR, especially for low or moderate transmit power levels [108]. b) OF-TDSVR: Investigating the effect of feeding back part of the output samples to be exploited as features for training the SVR-based cancelers has not previously been considered in the literature and is examined for the first time in this article, in which an SVR model, referred to as the output-feedback time-delay SVR (OF-TDSVR), is integrated with a linear canceler in a residual scheme to suppress the SI signal. Similar to the OF-based NN structures, the OF-TDSVR is trained using an input feature map that considers both input and output samples as features for training, as shown in Fig. 6(iii). As proved for NNs, feeding part of the output samples for training helps to consider the effect of over-the-air SI propagation delay spread, which in turn can enhance the learning capabilities and, subsequently, improve the SIC performance compared to the existing SVR-based cancelers trained only on the input samples. Also, it may be beneficial for reducing the training overhead compared to the existing SVR literature benchmarks.
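In scikit-learn terms, the residual scheme boils down to fitting two regressors on the real and imaginary parts of the residual labels, as sketched below; the hyperparameter values are placeholders, not the tuned settings of [108].

from sklearn.svm import SVR

# One regressor per signal component; C, epsilon, and gamma are assumed values
svr_re = SVR(kernel="rbf", C=10.0, epsilon=1e-4, gamma=0.1)
svr_im = SVR(kernel="rbf", C=10.0, epsilon=1e-4, gamma=0.1)
# svr_re.fit(Z_tr, y_nl_tr.real)
# svr_im.fit(Z_tr, y_nl_tr.imag)
# y_nl_hat = svr_re.predict(Z_ts) + 1j * svr_im.predict(Z_ts)   # non-linear SI estimate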
C. ADVANCED ML-BASED SIC APPROACHES
Advanced ML approaches, such as TC, TensorFlow graphs, and RFFs, integrated with online learning, have recently been introduced for SIC in FD transceivers. The details of such advances are provided in the following subsections.
1) TENSOR COMPLETION (TC)
In [109], a canonical system identification (CSID) approach, based on a low-rank tensor-constrained optimization problem, is utilized to approximate the non-linear SI signal as in the case of NNs and residual-based SVRs. In more detail, the CSID approach formulates the SIC problem as a low-rank tensor decomposition problem to be solved using an alternating least squares optimization algorithm. Simulation results [109] indicate that the CSID-based cancelers could achieve similar performance to that of the polynomial- and NN-based cancelers [90], [96]. Meanwhile, they can be beneficial from the computational complexity perspective at the cost of higher memory storage requirements.
2) TENSORFLOW GRAPHS
In [110], TensorFlow graphs, a recent advance in ML, are introduced to cancel the SI in a real-time software-defined radio (SDR). Generally, graphs are exploited in ML to enable ML researchers/developers to write an abstracted version of their ML techniques in the form of data-flow graphs, which can then be utilized and applied to any of the ML algorithms [111]. Based on such graphs, in [110], the SIC is performed in a real-time SDR based on an NN that employs a Google TensorFlow graph. Simulation results reveal that the TensorFlow graph-based approach could achieve an SIC that reaches the hardware limit and surpasses existing digital non-ML-based SIC approaches in the literature [110].
3) RANDOM FOURIER FEATURES (RFFS)
In [112], RFFs and the least mean-squares (LMS) algorithm are integrated with online linear regression to perform the SIC in FD transceivers. Principally, RFFs are utilized to scale up kernel-based ML techniques by providing a non-linear transformation of the input data to a higher-dimensional feature space. Thus, non-linearities can be efficiently modeled using linear techniques in the original space, resulting in scalable, fast-converging, and computationally efficient solutions [113], [114]. Based on this, in [112], the input samples are first transformed using RFFs, then the residual SI signal, after applying the linear SIC, is used with the transformed input to approximate the non-linear SI signal using an LMS-based canceler. The estimated signal is then subtracted from the original SI to obtain the residual SI signal; thereafter, an estimation vector is updated online based on that residual and using an RFFs-based observation matrix. Simulation results show that an online RFFs-LMS-based canceler could be beneficial from SIC and complexity perspectives compared to batch learning algorithms involving NTDSVRs [112]. (Although the RFFs are integrated with online regression in [112], they are utilized with various ML algorithms in other disciplines, such as [115], [116], [117], [118].)
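The essence of this scheme fits in a few lines of numpy, sketched below; the feature dimension, the frequency distribution, and the step size are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(2)
d, D = 26, 64                     # input dimension (e.g., 2*m_i) and RFF dimension
W = rng.normal(size=(D, d))       # random frequencies for an RBF-like kernel
b = rng.uniform(0.0, 2.0 * np.pi, size=D)

def rff(z):
    # Random Fourier features: sqrt(2/D) cos(Wz + b) approximates an RBF kernel
    return np.sqrt(2.0 / D) * np.cos(W @ z + b)

def lms_step(w, z, target, mu=0.05):
    # One online LMS update on the RFF-transformed sample
    phi = rff(z)
    err = target - w @ phi        # residual after the current estimate
    return w + mu * err * phi, err

w = np.zeros(D)
# for z_n, t_n in stream: w, e = lms_step(w, z_n, t_n)   # online adaptation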
D. OTHER ML-BASED SIC APPROACHES
Seeking further advantages, other ML approaches investigated in other disciplines, namely DR, GMMs, DU, LL, and APSM, have been explored for SIC in FD transceivers. The details of such approaches are provided in the following subsections.
1) DYNAMIC REGRESSION (DR)
In [119], a classical DR model is introduced for canceling the interference in FD transceivers. Generally, DR models are exploited in ML problems to identify how related a certain output is to an input and to allow future output forecasting. Based on this, in [119], a classical DR model is utilized to represent the memory effect caused by the amplifiers in FD systems. Upon estimating the DR coefficients, the SI signal is jointly estimated in the time and frequency domains and is
subtracted from the original SI signal to perform the digital SIC. Simulation results reveal that the DR-based SIC approach could achieve a high digital SIC performance and effectively attenuate the SI signal close to the Rx noise floor level. Besides, the DR-based SIC approach is validated using a real-time SDR platform and is able to properly provide a demonstration via video streaming [119].
2) GAUSSIAN MIXTURE MODELS (GMMS)
In [120], an ML approach based on GMM clustering is introduced to design an FD transceiver that can detect the desired signal (i.e., the SoI) directly without using digital-domain cancellation or even channel estimation. As the name implies, GMM clustering uses a mixture, i.e., a superposition, of Gaussian distributions to fit the training data and assign the data points to a certain cluster based on their conditional probabilities [121]. In more detail, in [120], the received signal is firstly clustered, and a one-to-one mapping of the symbols, based on GMM clustering and an expectation-maximization (EM) algorithm, is utilized to perform the signal detection in each cluster. Simulation results reveal that an FD transceiver utilizing GMM clustering could achieve a bit error rate comparable to that of FD transceivers employing maximum likelihood detectors when perfect channel knowledge is considered, and a better one when LS/LMS channel estimation is used [120]. However, this transceiver is limited to operating scenarios in which low-order modulation techniques are employed.
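As a toy illustration of the clustering idea (not the exact pipeline of [120]), the following sketch EM-fits a four-component Gaussian mixture to synthetic noisy QPSK samples; the constellation, noise level, and cluster count are assumptions.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
# Synthetic noisy QPSK constellation as a stand-in for the received samples
bits = rng.integers(0, 2, size=(500, 2)) * 2 - 1
r = (bits @ np.array([1.0, 1.0j])) / np.sqrt(2)
r = r + 0.1 * (rng.standard_normal(500) + 1j * rng.standard_normal(500))

pts = np.column_stack([r.real, r.imag])                      # 2-D points (Re, Im)
gmm = GaussianMixture(n_components=4, covariance_type="full",
                      random_state=0).fit(pts)               # EM fitting
clusters = gmm.predict(pts)   # clusters would then be mapped one-to-one to symbols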
3) DEEP UNFOLDING (DU)
In [122], an ML approach based on DU is introduced for canceling the interference in FD transceivers. DU involves converting model-based methods, which require iterative optimization algorithms to solve, into layer-wise structures analogous to those of NNs [123], [124]. This enables fusing the iterative optimization methods with NN libraries/tools to cover a wide range of tasks and applications. The concept of DU is applied for SIC in [122], where a cascade of non-linear blocks, involving the impact of PA and IQ mixer non-linearities, is exploited with the traditional backpropagation algorithm to approximate the SI signal. Simulation results corroborate that the DU-based SIC approach could be beneficial from memory storage and computational complexity perspectives when compared to the literature benchmarks, e.g., polynomial- and CV-TDNN-based cancelers, at a similar cancellation performance [122].
4) LAZY LEARNING (LL)
In [125], an ML approach based on LL is introduced to perform the SIC in cellular wireless networks operating with FD transmission. As its name implies, an LL-based model postpones generalization over the training data until a system query is performed. Based on this concept, in [125], offline and online stages are exploited to generate the interference database and to transmit the data, respectively. In the offline phase, the FD system's output signal, excluding the SoI, is recorded in a database. However, in the online phase, in which the system is fully operated with the SoI, a suitable SI value is looked up in the offline-generated database with the help of a learning approach to perform the digital SIC. Simulation results show that the LL-based SIC approach could be effectively utilized for canceling the interference and enabling FD transmission in cellular wireless networks [125].
5) ADAPTIVE PROJECTED SUBGRADIENT METHOD (APSM)
In [126], an ML SIC approach based on the parallel APSM is introduced for canceling the interference in FD transceivers. Specifically, in [126], a hybrid kernel is first constructed by combining linear and non-linear Gaussian kernels. This kernel is then adapted to a parallel APSM approach, where a non-linear function approximating the SIC problem is extracted using projection. Simulation results show that the hybrid kernel-based APSM approach could properly model the SI compared to an SIC method employing normalized LMS filtering [126]. Moreover, it can also be parallelized, i.e., it can perform parallel processing to reduce the system latency.
Thus far, we have surveyed the up-to-date contributions that apply ML-based approaches for SIC in FD transceivers, as summarized in Table 1. The adoption of a particular ML-based approach for SIC depends on the system demands, such as the achieved SIC, training overhead, memory storage, and computational complexity. The following section will help to select a suitable ML-based approach for SIC in FD systems.
V. SIMULATION RESULTS AND DISCUSSIONS
In this section, we provide a case study to compare the performance of the prominent ML approaches, surveyed in Section IV, with that of the polynomial canceler for two test setups (i.e., two training datasets) and using various dataset sizes. Specifically, we evaluate the prominent ML approaches in terms of the achieved SIC, PSD performance, training overhead, memory storage, and computational complexity and compare them with those of the polynomial-based canceler.
A. SELECTED APPROACHES
First, from the NN-based approaches shown on the right-hand side of Fig. 4, we select the typical NN architectures, i.e., the RV-TDNN, RNN, and CV-TDNN [90], [97], as these are the first literature benchmarks to apply ML approaches for SIC in FD transceivers. Further, we select the OF-based NN architectures, i.e., the 2HLNN and DN-2HLNN, as they have proved efficient in terms of memory storage and computational complexity when compared to the other NNs [101]. Second, from the SVR-based approaches shown on the upper side of Fig. 4, we select the RTDSVR [108], as it is shown to outperform the NTDSVR [107], especially for low or moderate transmit power levels. Additionally, we consider the investigated OF-TDSVR to be compared against the existing NN and SVR benchmarks. Third and last, from the advanced and other ML approaches, shown on the lower and left-hand sides of Fig. 4, we select the TC [109] and DU [122] approaches, as they have proven efficient in terms of memory storage and/or computational complexity when compared to the RV-TDNN and CV-TDNN, respectively. In the following subsections, we will evaluate and compare the previously selected approaches based on two test setups and using various performance metrics, such as the achieved SIC, PSD performance, training overhead, memory storage, and computational complexity.
B. MEASUREMENT SETUP
The measurement setup utilized to capture the datasets employed for training the prominent ML-based approaches selected in Section V-A is described in Fig. 7. Herein, an FD testbed employing one transmit antenna and one receive antenna (1T1R) was set up in an indoor lab environment to generate two datasets [90], [96], [109]. The first dataset [90] applies an orthogonal frequency division multiplexing (OFDM) signal with quadrature phase-shift keying (QPSK) modulation and 10 MHz bandwidth, while the second [96], [109] uses a QPSK-modulated OFDM signal with 20 MHz bandwidth. The average transmit power is set to 10 dBm and 32 dBm in the first and second datasets, respectively. The transmitted and received data are captured at 20 MHz and 80 MHz sampling rates for the first and second datasets, respectively. It is worth noting that using a higher sampling frequency enables the ML approaches to model the higher-order intermodulation distortion terms to efficiently suppress the SI, especially when high transmit power levels are utilized.
At the Rx side of the FD testbed, total analog (i.e., passive and active) cancellations of 53 dB and 65 dB are applied in the first and second datasets, respectively, to prevent the SI signal from saturating the FD-sensitive Rx chain. The digital received data after the ADC is then captured and retrieved to a personal computer (PC) for offline post-processing. In order to post-process the captured data at the PC, version 3.7.5 of Python is installed in a Windows environment, using version 5.1.5 of Spyder as the integrated environment for the development, comparison, and evaluation of the different ML-based SIC approaches. Finally, for analyzing the performance of the various ML-based approaches at different dataset sizes, we have split each of the above-mentioned datasets into four separate datasets containing 2000, 3000, 4000, and 5000 samples, respectively. In all test cases, the first 90% of the samples are used for training (and validation, if any), while the last 10% are reserved for testing. The specifications of the measurement setup employed in this work are detailed in Table 2.
C. PARAMETER SETTINGS
The goal of this work is to find the peak performance of each SI canceler, e.g., polynomial, NN, SVR, TC, and DU. In other words, we aim to find the maximum SIC that each canceler can attain. Then, we compare the different cancelers in terms of the training overhead, memory storage, and computational complexity required to achieve their maximum SIC. To that end: 1) for the polynomial canceler [90], we have optimized the non-linearity order P and memory length M_i; 2) for the NN-based cancelers, e.g., the RV-TDNN, RNN, and CV-TDNN [90], [97], [101], we have optimized the memory length M_i along with the NN's hyperparameters, such as the number of hidden layers' neurons n_h, batch size (BS), learning rate (LR), activation function, and training optimizer; 3) for the SVR-based cancelers, i.e., the RTDSVR and OF-TDSVR [108], we have obtained the optimum values for the memory length M_i, regularization term C, and margin ε, along with the kernel hyperparameter γ; 4) for the TC approach [109], we have tuned the memory length M_i along with the optimization problem's hyperparameters, such as the tensor rank F, number of quantization levels I, regularization parameter ρ, and smoothness factor μ_n; 5) for the DU approach, we have optimized the memory length M_i, and the LR and BS of the follow-the-regularized-leader (FTRL) optimizer as in [122]. The ranges for hyperparameter tuning and the optimal hyperparameter values over the first and second datasets are summarized in Tables 3 and 4, respectively.
D. PERFORMANCE COMPARISON
In this subsection, we assess the performance of the prominent ML-based SIC approaches in terms of their SIC, PSD, training time, memory storage, and computational complexity and compare them with those of the polynomial model. Afterward, we evaluate the efficiency of each canceler according to the system demands. All the SIC approaches considered in this analysis are trained using the datasets described in Section V-B and with the parameter settings optimized in Section V-C.
1) SIC PERFORMANCE
The total SIC achieved by the different ML-based SIC approaches compared to the polynomial model, when tested using the first and second datasets with 2000, 3000, 4000, and 5000 samples, is shown in Fig. 8(a) and (b), respectively. From the figures, one can observe that in the first dataset, where a low average transmit power is employed, the polynomial-based canceler achieves the highest cancellation performance compared to the other cancelers for most of the dataset sizes. However, in the second dataset, where a high average transmit power is utilized, the RV-TDNN-based canceler provides the highest cancellation among the other cancelers for all dataset sizes. It can also be inferred from the figures that the RTDSVR achieves the lowest cancellation performance among the others, whether a low or high transmit power is utilized. Further, one can notice that employing a part of the output samples as features for training the SVR models can enhance the cancellation performance compared to the existing RTDSVR, i.e., the OF-TDSVR attains a significantly higher SIC than the RTDSVR benchmark. In sum, the polynomial canceler could be a good choice when a low transmit power is utilized, i.e., when the low transmit power generates weaker non-linear SI signals.
However, when a higher transmit power is employed, the RV-TDNN could be a better choice, i.e., high transmit power generates stronger non-linear SI signals. (In this work, we provide a case study to compare the performance of the different ML approaches with the polynomial canceler when achieving the maximum SIC, i.e., peak performance, at short dataset sizes, e.g., 2000, 3000, 4000, and 5000 samples. However, in our previous works [99], [100], [101], we have compared the different ML approaches with the polynomial canceler when attaining a similar SIC, i.e., equi-performance, at a large dataset size, e.g., 20,000 samples. Accordingly, some of the results obtained in this work may differ from those reported in [99], [100], [101]. Moreover, although all SI cancelers achieve a higher non-linear cancellation in the second dataset than in the first, as a result of the increased non-linearity, we interestingly note that the total SIC achieved in the former is lower than that in the latter, as can be seen from the sample results in Table 5. This is due to the degradation of the linear canceler's performance with increased non-linearity.)
2) PSD PERFORMANCE
The power spectra of the residual SI signal after applying the different ML-based SIC approaches, compared to that of the polynomial-based canceler, when tested using the first and second datasets with 5000 samples, as an example, are shown in Fig. 9(a) and (b), respectively. From Fig. 9(a), one can observe that the polynomial-based canceler is able to suppress the SI signal with the lowest gap to the Rx noise floor among the other cancelers in the first dataset; it provides a gap to the Rx noise floor of 90.8 - 88.7 = 2.1 dB, bringing the SI signal very close to the Rx noise floor level. It can also be inferred from Fig. 9(b) that the RV-TDNN-based canceler provides the lowest gap to the Rx noise floor compared to the others in the second dataset; it attains a gap to the Rx noise floor of 85.3 - 81.3 = 4 dB, bringing the SI signal close to the Rx noise floor level. The low gap to the Rx noise floor achieved by the RV-TDNN compared to the polynomial canceler comes from the fact that it can reduce the leakage of the carrier around the DC tone, as shown in Fig. 9(b) [96]. Finally, one can observe from the figures that the SIC values achieved by the polynomial, RV-TDNN, and RTDSVR cancelers, as an example, match those reported in Table 5.

TABLE 5. SIC of Different Approaches When Trained Using 5000 Samples of the First and Second Datasets.
3) TRAINING OVERHEAD
In this subsection, we assess the training time, i.e., the fitting time, required by each SI canceler to complete the training process. Specifically, for the polynomial-based canceler, we evaluate the training time needed to estimate the polynomial model's coefficients based on the LS algorithm. For the NN- and DU-based cancelers, we calculate the training time as the average training time required over different random seeds. For the SVR models, we approximate the training time as the maximum of the times needed to fit SVR_Re and SVR_Im, associated with estimating the real and imaginary parts of the non-linear SI signal, respectively, as shown in Fig. 6. Finally, for the TC-based canceler, we evaluate the training time required for fitting the low-rank tensor decomposition problem. Based on the aforementioned, in Fig. 10(a) and (b), we depict the training time of all the ML-based cancelers compared to the polynomial model when tested using the first and second datasets, respectively. From the figures, it can be observed that the polynomial-based canceler requires the lowest training time among the others, whether a low or high average transmit power is employed, i.e., whether it is trained using the first or second dataset. Further, one can notice that the RTDSVR shows a good training time, i.e., it requires a lower training time than all other cancelers except the polynomial-based canceler. One can also observe that the SIC enhancement provided by the OF-TDSVR comes at the cost of an increased training time compared to the RTDSVR benchmark. Additionally, it can be noticed that the TC- and DU-based cancelers require significantly higher training times than the others, making them unfavorable choices for SIC, especially in operating scenarios where the training time is of interest. Finally, it can be observed from the figures that, typically, as the dataset size increases, the training time of all SI cancelers increases as well.
4) MEMORY STORAGE
In this subsection, we assess the memory storage of the different ML approaches in terms of the total number of parameters required in the inference stage and compare it with that of the polynomial model. Specifically, the number of parameters of the polynomial-based canceler is calculated as 2M_i + 2M_i{((P + 1)/2)((P + 1)/2 + 1) - 1} [90]. Further, the numbers of parameters of the typical RV-TDNN, RNN, and CV-TDNN are respectively evaluated as 2M_i(n_h + 1) + 3n_h + 2, 2M_i + n_h(n_h + 5) + 2, and 2M_i + 2(M_i n_h + 2n_h + 1), with n_h as the number of hidden neurons [90], [97]. The numbers of parameters of the OF-based NN structures, i.e., the 2HLNN and DN-2HLNN, are respectively calculated as 2M_i + 2{n_h1(M_i + M_o + n_h2 + 1) + 2n_h2 + 1} and 2M_i + 2(M_i + M_o + 4n_h2 + 3), with n_h1 and n_h2 as the numbers of neurons in the first and second hidden layers, respectively [101]. The number of parameters of the SVR models, i.e., the RTDSVR and OF-TDSVR, employing a radial basis function (RBF) kernel, is evaluated as 2M_i + N_sv,Re + N_sv,Im + 8, with N_sv,Re and N_sv,Im as the numbers of support vectors required to approximate the unknown functions of SVR_Re and SVR_Im, respectively [108], [129]. Finally, the numbers of parameters of the TC- and DU-based cancelers are respectively given by 2{M_i(2FI + ...)} and 2{M_i((P + 1)/2) + 2}, with F and I indicating the tensor rank and the number of quantization levels in the TC approach, respectively [109]. A summary of the total parameters utilized to evaluate the memory storage of the various SI cancelers is shown in Table 6.
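These closed-form counts are simple to evaluate programmatically; the sketch below implements the polynomial and RV-TDNN formulas quoted above with illustrative values for M_i, P, and n_h.

def poly_params(m_i, p):
    # Polynomial canceler: 2*M_i + 2*M_i*{L*(L + 1) - 1}, with L = (P + 1)/2 [90]
    L = (p + 1) // 2
    return 2 * m_i + 2 * m_i * (L * (L + 1) - 1)

def rv_tdnn_params(m_i, n_h):
    # RV-TDNN: 2*M_i*(n_h + 1) + 3*n_h + 2 [90]
    return 2 * m_i * (n_h + 1) + 3 * n_h + 2

print(poly_params(13, 7), rv_tdnn_params(13, 17))   # e.g., 520 and 521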
Based on the aforementioned, we depict the number of parameters required by the various SI cancelers when tested using the first and second datasets in Fig. 11(a) and (b), respectively. From the figures, one can observe that the DU-based canceler requires the lowest number of parameters compared to the others for both datasets and for all dataset sizes. The SVR-based cancelers, i.e., the RTDSVR and OF-TDSVR, require the highest number of parameters among the others in the first dataset, as their parameters basically depend on the numbers of support vectors, i.e., N_sv,Re and N_sv,Im, which in turn depend on the number of training data [129]. Thus, one can notice from Fig. 11(a) and (b) that as the dataset size increases, the SVR models' parameters significantly increase as well. Finally, it can be inferred from the figures that the RNN-based canceler requires the highest number of parameters compared to the others in the second dataset as a result of using many recurrent connections.
5) COMPUTATIONAL COMPLEXITY
In this subsection, we evaluate the computational complexity of the various ML-based SIC approaches in terms of the total number of floating-point operations (FLOPs) required in the inference stage and compare it with that of the polynomial model. Particularly, the number of FLOPs of the polynomial-based canceler is calculated as 10M_i + 10M_i{((P + 1)/2)((P + 1)/2 + 1) - 1} - 2 [90]. Besides, the numbers of FLOPs of the typical RV-TDNN, RNN, and CV-TDNN are respectively evaluated as 10M_i + n_h(4M_i + 5), 10M_i + 2n_h(n_h + 9/2), and 10{M_i(n_h + 1) + (6/5)n_h} [90], [97]. Further, the numbers of FLOPs of the 2HLNN and DN-2HLNN are calculated as 10M_i + 10{n_h1(M_i + M_o) + n_h1 n_h2 + (6/5)n_h2} and 10M_i + 10(M_i + M_o + (16/5)n_h2), respectively [101]. On the other hand, the numbers of FLOPs of the SVR models, i.e., the RTDSVR and OF-TDSVR, employing an RBF kernel, are evaluated in the worst case as expressions of the form 10M_i + 4dM_i(N_sv,Re + N_sv,Im)Q, with d and Q as the degree (e.g., d = 3 for the RTDSVR and d = 1 for the OF-TDSVR) and the number of testing samples, respectively [108]. Finally, the numbers of FLOPs of the TC and DU approaches are respectively given by 8M_i(2F + 1) - 3F - 7 and 10{M_i((P + 1)/2) + 2} [109], [122]. A summary of the numbers of FLOPs utilized to assess the computational complexity of the various cancelers is shown in Table 6. Based on the aforementioned, in Fig. 12(a) and (b), we depict the FLOPs required by the various SI cancelers when tested using the first and second datasets, respectively. From the figures, one can observe that the DU- and TC-based cancelers require the lowest number of FLOPs for all dataset sizes in the first and second datasets, respectively. Further, the polynomial-, RV-TDNN-, and DN-2HLNN-based cancelers require a reasonable number of FLOPs when compared to the others for all dataset sizes. Finally, it can be inferred from the figures that the SVR-based cancelers, i.e., the RTDSVR and OF-TDSVR, require an intolerable computational complexity compared to the others, as their FLOPs depend on the numbers of support vectors, N_sv,Re and N_sv,Im, as well as the number of testing samples Q [108].
6) CANCELER EFFICIENCY
In the previous subsections, we evaluated the performance of each SI canceler in terms of its SIC (or PSD), training overhead, memory storage, and computational complexity. Based on this analysis, we have found that some of the cancelers excel in terms of SIC performance, while some are promising in terms of training time, memory storage, and/or computational complexity. So, the question is how to select a certain ML-based SIC approach to fit a target application, i.e., to meet the system criteria. This subsection will help to address the above question and to select a suitable SIC approach depending on the system requirements.
As the challenge in the SIC problem is to find an SI canceler that maximizes the achieved SIC while minimizing the training time, memory storage, and computational complexity requirements, we have devised an efficiency measure η to evaluate each canceler based on the aforementioned metrics as follows:

η = (w_C η_C + w_τ η_τ + w_Θ η_Θ + w_F η_F) / (w_C + w_τ + w_Θ + w_F),  (15)

where w_C ∈ {0, 1}, w_τ ∈ {0, 1}, w_Θ ∈ {0, 1}, and w_F ∈ {0, 1} represent the cancellation, training, storage, and complexity weighting factors, respectively, which take either 0 or 1 values depending on the system requirements. Moreover, η_C, η_τ, η_Θ, and η_F indicate the cancellation, training, storage, and complexity efficiencies of each canceler, which can be respectively expressed as

η_C = (C - C_min)/(C_max - C_min), η_τ = (τ_max - τ)/(τ_max - τ_min), η_Θ = (Θ_max - Θ)/(Θ_max - Θ_min), η_F = (F_max - F)/(F_max - F_min),  (16)

with C as the total SIC achieved by each canceler over a certain dataset, while C_max and C_min are the maximum and minimum SIC attained by any of the cancelers within this dataset, respectively. Similarly, τ is the training time needed by each canceler over a certain dataset, whereas τ_max and τ_min are the maximum and minimum training times required by any of the cancelers within this dataset, respectively. Likewise, Θ represents the number of parameters required by each of the cancelers over a certain dataset, while Θ_max and Θ_min indicate the maximum and minimum numbers of parameters needed by any of the cancelers within this dataset, respectively. Finally, F represents the number of FLOPs required by each of the cancelers over a certain dataset, whereas F_max and F_min denote the maximum and minimum numbers of FLOPs required by any of the cancelers within this dataset, respectively. Based on the above, we have assessed the efficiency η of the various SI cancelers over the first and second datasets in Table 7. It can be observed from the table that the polynomial model achieves the highest efficiency among the SI cancelers for most of the test cases in the first dataset, i.e., the polynomial-based canceler is efficient for the test cases where a low average transmit power is utilized and the non-linearity is not severe. However, in the second dataset, where a high transmit power is used, the RV-TDNN-based canceler achieves the highest efficiency among the others for most of the test cases. One can also notice from Table 7 that the polynomial-based canceler requires a large number of training examples to achieve the highest efficiency, e.g., the polynomial-based canceler is unable to attain the highest efficiency when trained using 2000 samples of the first dataset. In addition, one can infer from the table that the RV-TDNN works well in the test cases where the training overhead is not among the system demands, e.g., the RV-TDNN-based canceler is unable to attain the highest efficiency in the second dataset for all test cases where w_τ = 1, and the polynomial-based canceler becomes a better choice in such test cases.
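Read operationally, the measure can be computed as in the sketch below, assuming min-max normalized sub-efficiencies (higher SIC is better; lower time, storage, and FLOPs are better) combined as a weighted average; the dictionary layout and weight convention are illustrative assumptions.

def efficiency(C, tau, theta, F, bounds, w=(1, 1, 1, 1)):
    # bounds[k] = (min, max) of metric k across all cancelers on one dataset
    eta_C = (C - bounds["C"][0]) / (bounds["C"][1] - bounds["C"][0])          # higher is better
    eta_t = (bounds["tau"][1] - tau) / (bounds["tau"][1] - bounds["tau"][0])  # lower is better
    eta_s = (bounds["theta"][1] - theta) / (bounds["theta"][1] - bounds["theta"][0])
    eta_F = (bounds["F"][1] - F) / (bounds["F"][1] - bounds["F"][0])
    num = w[0] * eta_C + w[1] * eta_t + w[2] * eta_s + w[3] * eta_F
    return num / sum(w)   # weighted average over the metrics enabled by w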
In sum, upon testing several ML-based approaches for SIC in FD transceivers, using two test setups and over short dataset sizes, we can conclude that the model-driven approaches, i.e., the polynomial-based canceler, can be a good choice in operating scenarios where a low transmit power is employed; however, at high transmit power levels, the data-driven ML approaches, i.e., the RV-TDNN-based canceler, can be a better choice.
VI. CHALLENGES AND FUTURE RESEARCH DIRECTIONS
The previous sections provided a comprehensive overview of applying ML-based approaches for SIC in FD transceivers. Suitable approaches have also been selected for SIC, depending on the system criteria. Although the literature works surveyed in this manuscript play a significant role in empowering the application of ML techniques for SIC in FD transceivers, more efforts remain to be made to adopt such techniques in practical wireless systems employing FD transmission. The following subsections delve into the main challenges of applying ML-based approaches for SIC in FD transceivers and provide a guide for future research directions.
A. CONSIDERING THE EFFECT OF SOI WHILE PERFORMING THE SIC
The existing ML-based SIC approaches consider the cancellation of the SI signal only, i.e., no signal from any remote FD or half-duplex TRP is considered. However, in practical situations, i.e., real-time FD systems, the SIC in one FD node has to be done while an SoI from another TRP is received and demodulated. Initial works in [127], [130] investigated a joint detection of the SI and SoI and proved that an NN-based SI canceler is beneficial for enhancing the signal demodulation. Despite the potential of the works in [127], [130], there are still more issues remaining to be addressed, and the point of detecting the SoI while performing the SIC is open to improvements from both performance and complexity perspectives. For instance, one issue is that all ML-based approaches surveyed in this manuscript are trained and verified using time-domain samples, i.e., they work entirely in the time domain. However, if the SoI signal employs any of the frequency-domain modulation formats, e.g., OFDM modulation, performing the SIC could be done in the frequency domain; this would be similar to the fifth-generation new radio or future 6G demodulation pilots (uplink or downlink demodulation reference signals), which occupy specific time and frequency symbols [127]. Thus, adapting the ML-based SIC approaches to work with frequency- rather than time-domain samples can be a direction for future investigation.
B. TACKLING THE TIME-VARYING SI CHANNELS
The existing ML-based SIC approaches use offline-trained ML algorithms to estimate the SI signal over a static SI channel. However, in practical situations, the movements of user equipment TRPs and/or environmental changes can vary the SI channel over time, and the ML algorithms may need to be retrained in order to adapt to the time-varying SI channel. Nevertheless, as presented in Fig. 10, some ML algorithms require a higher training time, i.e., they are not fast enough to be retrained during the FD transmission, which can lead to significant performance degradation. Initial works in [102], [131] investigate the effect of canceling the SI signal under time-varying SI channels. However, these are incipient works, and the point is open to improvements from both performance and complexity perspectives. For instance, applying reinforcement and online learning to iteratively tackle the time-varying SI channel can be a future direction of investigation. The scaling of the performance and/or complexity that results from employing reinforcement and online learning can also be considered for future investigation.
C. APPLYING ML APPROACHES FOR SIC IN FD MIMO SYSTEMS
The ML-based SIC approaches surveyed in this work are trained and verified using a single-input single-output (SISO) FD testbed. However, in recent communication standards, MIMO technology has become a basic transmit/receive scheme. Hence, extending the above ML-based SIC techniques to MIMO rather than SISO FD transceivers is imperative. Typically, the complexity of the SIC approaches increases exponentially under MIMO operation, where M transmit antennas interfere with N receive antennas. A straightforward approach to processing several SI signals in the digital domain is to perform the SIC using separate SI cancelers, which consider the interfering signals from all transmit antennas; however, this results in excessive complexity. To address this issue, alternative approaches can be designed. For instance, exploiting the spatial correlation between the MIMO channels to develop a common SI canceler, i.e., not separate cancelers, can be a direction for future investigation in order to reduce the impractical computational complexity of the traditional MIMO SIC-based approaches [132].
D. TRAINING COMPLEXITY OF ML-BASED SIC APPROACHES
The computational complexity of the existing ML-based SIC approaches is typically evaluated and compared in terms of the FLOPs required in the inference stage, i.e., upon performing and finalizing the training process. However, estimating the training complexity (in terms of FLOPs) is crucial and should be considered, especially for ML algorithms targeted to be integrated with online learning as described in Section VI-B. For instance, calculating the number of FLOPs required for performing the backpropagation in NNs, approximating the unknown function using optimization in SVRs, and solving the low-rank tensor decomposition problem in TC-based cancelers should be explored to provide insights into the feasibility of applying ML-based approaches for SIC in real-time FD transceivers.
VII. CONCLUSIONS
In this paper, we have surveyed the up-to-date contributions in applying ML approaches for SIC in FD transceivers. Based on a comprehensive review, we have found that canceling the interference in FD transceivers using ML was initially performed by traditional approaches, such as NNs and SVRs. Advanced ML approaches, such as TC, TensorFlow graphs, and RFFs, integrated with online learning, have been employed for SIC as well. Further, other ML approaches proven in other disciplines, such as DR, GMMs, DU, LL, and APSM, have also been utilized for modeling the SI in FD transceivers. Upon surveying the literature, we have provided a case study to evaluate the performance of the prominent ML-based approaches over short dataset sizes and using two test setups employing different transmit power levels. Specifically, we have assessed the performance of the prominent data-driven ML-based approaches in terms of the SIC, PSD, training time, memory storage, and computational complexity and compared them with those of the model-driven approaches, e.g., the polynomial-based canceler. Afterward, we evaluated the efficiency of the different SIC approaches based on the aforementioned metrics to select a suitable approach for SIC, depending on the system requirements. Based on this study, we have found that the model-driven approaches, i.e., the polynomial-based canceler, could be a good choice when a low transmit power is utilized (i.e., low non-linearity exists). However, at high transmit power (i.e., high non-linearity exists), the data-driven ML-based approaches, i.e., the RV-TDNN-based canceler, could be a better choice. We have finally identified the research gaps in applying ML approaches for SIC in FD transceivers, paving the way for future research directions, such as considering the SoI effect, extending to MIMO FD transceivers, and tackling the time-varying SI channels.
FIGURE 1. Organization of this paper.

FIGURE 2. ML-based FD system model with linear and non-linear digital cancellation stages.

FIGURE 4. ML-based approaches for SIC in FD transceivers.

FIGURE 5. NN-based approaches for SIC in FD transceivers.

FIGURE 6. SVR-based approaches for SIC in FD transceivers. The NTDSVR is trained using ỹ_SI,nes, which is generated after estimating the SI channel and performing the inverse channel filtering. However, the RTDSVR and OF-TDSVR are trained using ỹ_SI,nl, which is generated after linear SI estimation and reconstruction [108].

FIGURE 8. SIC of different ML-based SI cancelers compared to the polynomial canceler over the first and second datasets.

FIGURE 10. Training time of different ML-based SI cancelers compared to the polynomial canceler over the first and second datasets.

FIGURE 12. FLOPs of different ML-based SI cancelers compared to the polynomial canceler over the first and second datasets.
TABLE 1. Summary of ML-Based Approaches Applied for SIC in FD Transceivers.
TABLE 2. Measurement Setup Specifications.
"year": 2024,
"sha1": "0b56805209433a0322ca5cc80910064fda427d42",
"oa_license": "CCBY",
"oa_url": "https://ieeexplore.ieee.org/ielx7/8782711/8889399/10314438.pdf",
"oa_status": "GOLD",
"pdf_src": "IEEE",
"pdf_hash": "32002204b4db7c84fc8c2401dbefc77005062de7",
"s2fieldsofstudy": [
"Engineering",
"Computer Science"
],
"extfieldsofstudy": []
} |
33777028 | pes2o/s2orc | v3-fos-license | Non-metallic, non-Fermi-liquid resistivity of FeCrAs from 0 to 17 GPa
An unusual, non-metallic resistivity of the 111 iron-pnictide compound FeCrAs is shown to be relatively unchanged under pressures of up to 17 GPa. Combined with our previous finding that this non-metallic behaviour persists from at least 80 mK to 800 K, this shows that the non-metallic phase is exceptionally robust. Antiferromagnetic order, with a Neel temperature T_N ~ 125 K at ambient pressure, is suppressed at a rate of 7.1 +/- 0.1 K/GPa, falling to below 50 K at 10 GPa. We conclude that formation of a spin-density wave gap at T_N does not play an important role in the non-metallic resistivity of FeCrAs at low temperatures.
Figure 1. ρ ab rises monotonically with decreasing temperature, while the rise in ρ c is interrupted by a peak at the Néel temperature T N ∼ 125 K. As T → 0 K the resistivity has a negative slope with a sub-linear, non-Fermi-liquid power law. The inset shows that, in contrast, the specific heat is Fermi-liquid-like at low temperature. The data are from reference [7].
This behaviour is somewhat reminiscent of systems that are on the border of Anderson localization, in which the Fermi energy is close to the mobility edge separating extended from localized states in disordered semiconductors [11]. Such materials can show a non-metallic resistivity combined at low temperature with a linear C(T), however the carrier density in FeCrAs seems much too high, and the level of disorder much too low, for it to be in this regime. Thus, for example, in well-known cases with a non-metallic resistivity such as phosphorus-doped silicon or non-stoichiometric Ce 3−x S 4 [12,13], the resistivity on the border of Anderson localization is over 100 times larger than we see in FeCrAs, while in Si:P, which has a linear specific heat at low temperature, C(T)/T is 300 times smaller than we see in FeCrAs [14]. An obstacle to understanding the physics of FeCrAs is the antiferromagnetic transition near 125 K. As samples are cooled through this transition, the c-axis resistivity falls abruptly in the highest quality crystals, before continuing to rise again at lower temperature, as shown in figure 1. This behaviour suggests that there is some spin-fluctuation scattering, but it might also be compatible with an orbital mechanism. The ab-plane resistivity does not fall upon cooling below T N ; indeed, if anything the slope of dρ ab /dT is steepest just below T N (see figure 1), producing a weak 'bump', or concave downwards region, below T N .

It would be interesting to know the low temperature limiting behaviour of the resistivity in the absence of antiferromagnetic order. A possible scenario is that the resistivity would saturate, or even begin to fall, as T → 0 K in the absence of magnetic order that opens a spin-density-wave gap over part or all of the Fermi surface. Or, at the other extreme, perhaps a full gap would open over the entire Fermi surface, leading to diverging resistivity as T → 0 K, but antiferromagnetism prevents the opening of this gap and leaves a semi-metallic state with a small Fermi surface. The antiferromagnetic order in FeCrAs is itself unusual, and indicative of some level of frustration due to the P62m crystal structure, which can be viewed as a triangular lattice of iron 'trimers', plus a distorted Kagome sublattice of Cr ions. Magnetic order is found only on the Cr sublattice, in the form of a commensurate spin-density wave that triples the unit cell in the hexagonal plane, while along the c-axis the moments are parallel [7,15]. The Néel temperature is low compared with the comparable tetragonal systems Fe 2 As and Cr 2 As, which have T N ∼ 350 K [16] and T N ∼ 393 K [17] respectively. In FeCrAs, the iron site is tetrahedrally coordinated by As, as in the iron-pnictide superconductors, and even below the antiferromagnetic transition it does not display a measurable magnetic moment in neutron scattering or Mössbauer spectroscopy. This is in agreement with band-structure calculations [8] that found that the partial density of states on the iron sites is too low to meet the Stoner criterion for magnetic moment formation, and indeed in a recent paper the iron Kβ x-ray emission spectrum from FeCrAs was used to provide a non-magnetic reference [18]. It should be noted, however, that in the related tetragonal compound Fe 2 As the iron moment on the tetrahedrally coordinated site is 1.28 µ B [16], suggesting that this site is close to the magnetic/nonmagnetic moment-formation boundary, and that frustration may also play a role in moment suppression on the iron site.
The physics of frustrated metallic magnets still has many open questions [19,20]. Based on the frustrated magnetic sublattices and the absence of a magnetic moment on the iron sites, Rau et al. have put forward a theory that the anomalous behaviour of FeCrAs arises from a 'hidden spin liquid' on the iron sublattice [21]. In this picture, the conduction electrons fractionalize, and anomalous transport is due to scattering of bosonic charge degrees of freedom off of strong gauge fluctuations.
In this paper we try to determine whether the antiferromagnetism is playing an important role, particularly in the T → 0 K limit, by using pressure to adjust T N . We find that, despite suppressing T N by more than a factor of two, and possibly all the way to 0 K in our highest pressure measurements, the general behaviour of the anomalous resistivity is not dramatically modified, suggesting that the opening of a spin-density-wave gap does not play an important role in the non-metallic resistivity of FeCrAs.
Experiment:
We have carried out four-terminal resistivity measurements on single crystal samples of FeCrAs at high pressure. Crystals were grown from a stoichiometric melt in an alumina crucible within a sealed quartz tube. The material was melted twice, and then annealed at 900 °C for 150 hours. Sample quality in FeCrAs is revealed by the sharpness of the resistive transition at T N , the value of T N , and the temperature at which glassy behaviour in the magnetic susceptibility sets in. The crystals used in these measurements were from our highest quality batch [7], in which T N ∼ 125 K in susceptibility measurements, T N ∼ 133 K according to the cusp in the c-axis resistivity, and in which glassy behaviour is very weak and only sets in below 10 K. Details of crystal growth and characterization can be found in reference [10].

Electrical contacts to the samples were made with Dupont 6838 epoxy. These had high resistances at ambient pressure, but under pressure they fell to the range of a few ohms.

We pressurized two single crystals, one with I ∥ c, which measures ρ c , and the other with I ⊥ c, measuring ρ ab . The ρ c sample had dimensions 250 × 200 × 30 µm 3 . It was pressurized in a Moissanite anvil cell with 800 µm culets. The gasket was beryllium-copper, with a 400 µm hole, insulated with a mixture of alumina powder and Stycast 1266 epoxy. The ρ ab sample had dimensions 250 × 100 × 25 µm 3 and was pressurized in a diamond anvil cell with 600 µm culets, using a fully hardened T301 stainless-steel gasket. Daphne oil 7373 was used as the pressure medium. The pressure was determined at room temperature using ruby fluorescence; the pressure may shift by up to ∼ 0.4 GPa while the cell is cooled. The ρ c sample survived up to 10 GPa before an anvil broke, while the ρ ab sample survived up to 17 GPa.
In order to track the pressure dependence of T N we made use of the peak in ρ c at T N . Unfortunately, ρ ab does not have a well-defined anomaly at T N , as can be seen in figure 1, so we could only follow T N vs. pressure with confidence up to 10 GPa.
A possible concern with all high pressure measurements is pressure-induced structural phase transitions. Among the 111 pnictides however, FeCrAs should be relatively immune to such transitions. The 111 pnictides come in three main crystal structures: tetragonal, hexagonal and orthorhombic, in order of decreasing unit cell volume [22]. Both Fe 2 As and Cr 2 As have the tetragonal structure, and there is only a narrow range of stability of the hexagonal phase around the FeCrAs stoichiometry, thus FeCrAs must be just barely below the volume criterion of stability for the hexagonal phase. We thus expect it to be able to withstand quite a lot of compression before it transforms to the orthorhombic phase, and indeed in our measurements we did not see any abrupt changes in resistivity that would indicate a change of structure.
Resistivity measurements were carried out at many pressures, as shown in figures 2 and 4. At each pressure the temperature was varied between room temperature and 2 K, using a dipping probe to control the temperature. Three features stand out: 1) although the sample becomes more conducting with increasing pressure, the overall effect is small; 2) T N is suppressed by pressure, as shown by the shift to lower temperature of the peak in ρ c , which is known from our previous measurements to coincide with T N [7]; and 3) the overall shape of ρ c vs. T does not change markedly. Point (3) is our key result. Despite the fact that T N is suppressed by more than a factor of two, for T > T N the resistivity remains non-metallic with little change in slope. The T → 0 K slope is also roughly independent of pressure, remaining non-Fermi-liquid like at all pressures, with the same power law behaviour within the error, ρ c ∼ ρ c,0 − AT^(0.7±0.1), as was observed at ambient pressure.
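For reference, the quoted low-temperature power law ρ c ∼ ρ c,0 − AT^x can be extracted with a standard nonlinear least-squares fit, as in the minimal Python sketch below; the data here are synthetic stand-ins for the measured curves, and the parameter values are illustrative only.

```python
import numpy as np
from scipy.optimize import curve_fit

def rho_low_T(T, rho0, A, x):
    """Sublinear low-temperature form rho(T) = rho0 - A * T**x."""
    return rho0 - A * T**x

# synthetic stand-in for the measured low-temperature resistivity (T < 15 K)
T = np.linspace(0.5, 15, 60)                     # K
rho = 400.0 - 3.0 * T**0.7 + np.random.default_rng(1).normal(0, 0.2, T.size)

popt, pcov = curve_fit(rho_low_T, T, rho, p0=(400.0, 3.0, 0.7))
print(f"fitted exponent x = {popt[2]:.2f} +/- {np.sqrt(pcov[2, 2]):.2f}")
```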
In figure 3 we elaborate on points (2) and (3) above. The main figure shows a plot of T N vs. P . T N has been extracted from the curves in figure 2 by finding the maximum in the second derivative. Note that the maximum weakens with increasing pressure, and ultimately becomes a bump at 9.7 GPa, making it more difficult to extract T N . Nevertheless, it is clear from figure 3 that T N falls roughly linearly with pressure. Fitting a straight line to the points in figure 3 gives dT N /dP = −7.3 ± 0.1 K/GPa. If we extrapolate this linear behaviour to estimate the pressure at which the quantum critical point, T N = 0 K, would be reached, then we find P c ∼ 15.5 ± 1 GPa. It should be noted, however, that such extrapolations are not always reliable: T N vs. P curves can turn downwards [23] or saturate [24], so that P c may be significantly smaller or larger than this value.
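The straight-line fit and extrapolation described above amount to a one-line polynomial fit; in the sketch below the (P, T N ) pairs are invented for illustration and merely stand in for the values extracted from the resistivity curves.

```python
import numpy as np

# hypothetical (pressure, Neel temperature) pairs standing in for figure 3
P = np.array([0.0, 2.0, 4.1, 6.0, 8.2, 9.7])           # GPa
TN = np.array([125.0, 110.0, 94.0, 79.0, 63.0, 52.0])  # K

slope, intercept = np.polyfit(P, TN, 1)                # linear fit T_N(P)
Pc = -intercept / slope                                # extrapolated T_N -> 0 K
print(f"dTN/dP = {slope:.1f} K/GPa, extrapolated Pc ~ {Pc:.1f} GPa")
```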
The inset of figure 3 shows that the high temperature slope of ρ c vs. T is unaffected by pressure, within the error, emphasizing that pressure has little effect on the non-metallic resistivity at T > T N . Figure 4 shows ρ ab vs. T at four pressures between 4.3 and 17.3 GPa. Unfortunately, ρ ab does not have a sharp feature at T N (see figure 1), so we cannot continue our T N vs. P curve using this data. As with ρ c , ρ ab decreases with increasing pressure, the overall effect is small, and the non-metallic temperature dependence is relatively unaffected by pressure up to 17.3 GPa. The high-temperature slope, shown in the inset, is unchanged within the error.
The T → 0 K non-metallic behaviour of ρ ab vs. T also persists to high pressure; however, the slope is smaller in the two highest-pressure curves, and the unusual sublinear temperature dependence crosses over to become more linear in T. At intermediate temperatures the concave region, seen at ambient pressure below T N , seems to have been suppressed in these highest pressure curves. This would be consistent with the extrapolation of T N in figure 3, so antiferromagnetism may indeed have been suppressed by 17.3 GPa. As a result of the suppression of this concave downwards section, ρ ab at 17.3 GPa looks quasilinear over the whole temperature range.
We have seen no indication of superconductivity in any of our data.
Discussion:
In these measurements we have suppressed T N by at least a factor of two, and if the extrapolation of T N vs. P beyond 10 GPa can be trusted, then in our measurements up to 17.3 GPa on the ρ ab sample T N may have been suppressed to 0 K.

Figure 4. Resistivity of the ρ ab sample as a function of temperature, from 300 to 2 K. The low temperature data (T < 15 K) fit to a sub-linear power law with nearly the same exponent (x = 0.7 ± 0.1) for the two lowest-pressure curves, but become nearly linear at 13.7 and 17.3 GPa. The inset shows the pressure dependence of the T > T N slope dρ/dT as a function of pressure, obtained by fitting a line to the data between 150 and 300 K. No significant pressure dependence is observed.

Despite this
suppression of antiferromagnetic order, the overall shape of the resistivity curves is not dramatically affected: the high temperature resistivity remains non-metallic with minimal change of its large, negative slope, while the T → 0 K resistivity also remains non-metallic, with non-Fermi-liquid power laws persisting to all but the two highest pressures, and with remarkably little change of slope. The most notable changes that we do observe are the gradual disappearance of the concave downward regions in ρ vs. T that are associated with antiferromagnetic order, and an indication in the ρ ab sample that for P > 10 GPa the T → 0 K resistivity crosses over from sublinear to linear dependence on T . This latter change may have to do with the antiferromagnetic quantum critical point being approached. Linear resistivities are typical of antiferromagnetic quantum critical points, although in all of the cases that we know of, the slope of ρ vs. T is positive, not negative as in FeCrAs.
There are few materials to which we can compare these results. The robustness of the non-metallic resistivity of FeCrAs under pressure is in sharp contrast to that of CeCuAs 2 [25], whose strongly non-metallic resistivity between room temperature and 1.8 K is completely suppressed at 10 GPa to produce a metallic resistivity over the entire temperature range. LaFeAsO is the iron-pnictide superconductor whose resistivity most closely resembles FeCrAs: for T > 200 K its ambient-pressure resistivity is nearly flat, although unlike FeCrAs its resistivity becomes metallic for T < 200 K. As pressure is increased up to 12 GPa, the magnitude of the high-temperature resistivity falls, but the non-metallic slope remains flat [26]. Thus, the behaviour is like FeCrAs in that the resistivity vs. temperature curves are displaced downwards by pressure but their slope is not changed. Another 1111 pnictide, CaFeAsO, has a quasi-linear, weakly metallic, slope at high temperature, and again this slope is nearly independent of pressure while the resistivity curves displace downwards [26]. The 122 pnictides are even more metallic at high temperature, and in these systems pressure increases the metallic slope somewhat, so pressure makes these materials more metallic, e.g. [27]. One major difference between FeCrAs and all of these systems, however, is that pressure suppresses the total resistivity much more slowly in FeCrAs. For example, in LaFeAsO [26], the non-metallic resistivity at room temperature falls by 55% between 0 and 12 GPa, from 3.8 to 1.7 mΩcm, while in FeCrAs ρ ab at room temperature falls by only 10% between 4 and 17 GPa, from 380 to 340 µΩcm.

While our results rule out magnetic long-range order as an important factor in the non-metallic resistivity of FeCrAs at low temperature, they do not necessarily exclude spin fluctuations as playing a role. The iron-pnictide superconductors are believed to be incipient Mott insulators. In a theoretical study based on this picture, Dai et al. decomposed the electronic excitations into a coherent part near the Fermi energy and an incoherent part further away [6]. The latter comprises incipient lower and upper Hubbard bands, which accommodate localized Fe moments. This model supports a magnetic quantum critical point as a result of the competition between magnetic ordering of the local moments and the mixing of the local moments with the coherent electrons. The spectral weight of the coherent quasiparticle peak changes as a result of mixing and once it exceeds a critical value, magnetism disappears. In analogy with this model, the non-metallic behaviour of FeCrAs could be a manifestation of incipient Mott insulating behaviour.
However, there are good reasons for thinking that the physics of FeCrAs may be different. In FeCrAs, local moments reside on the Cr and not the Fe sites, so the incoherent carriers would have to be released from Cr d-orbitals while the coherent part of the spectrum most likely would stem from Fe d-orbitals hybridized with As p-orbitals. Non-Fermi-liquid behaviour could then arise due to a coupling between the two species of carriers. Within this theoretical framework, we naively expect that, because pressure increases the mixing between the coherent and incoherent parts of the spectrum, at some critical pressure the magnetic order must vanish. This model has not been worked out for FeCrAs, and as far as we know it would not explain the Fermi liquid specific heat and magnetic susceptibility that are seen at ambient pressure.
Moreover, it is far from clear that FeCrAs is close to a Mott transition. As noted in the introduction, band-structure calculations predict three large Fermi surfaces, and would seem to place the system far from a Mott state. Alternative models, for example involving orbital effects [5], a hidden spin liquid [21], microscopic phase separation [2], or even some exotic form of Kondo effect, may more accurately capture the physics.
It would be of interest to carry out optical conductivity and NMR measurements to see if a pseudogap is forming as the resistivity rises with decreasing temperature, and to investigate the spin dynamics. In terms of possible spin-liquid states [21], thermal conductivity measurements at low temperature may be enlightening.
Conclusions
We have found that suppression of antiferromagnetic long-range order does not strongly affect the non-metallic resistivity seen across several decades of temperature in FeCrAs, while the non-Fermi-liquid power-law behaviour of the resistivity as T → 0 K at most crosses over from sublinear below 10 GPa to linear above 10 GPa. These results rule out the formation of spin-density-wave gaps on the Fermi surface as playing an important role in the anomalous non-metallic resistivity of this material.
"year": 2013,
"sha1": "1994bcd8f5e6e5f133ba1a5a49a91dfd00bd6c18",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1302.4791",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "1994bcd8f5e6e5f133ba1a5a49a91dfd00bd6c18",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Materials Science",
"Chemistry",
"Medicine"
]
} |
270590524 | pes2o/s2orc | v3-fos-license | Immunotoxin-mediated depletion of Gag-specific CD8+ T cells undermines natural control of SIV
Antibody-mediated depletion studies have demonstrated that CD8+ T cells are required for effective immune control of SIV. However, this approach is potentially confounded by several factors, including reactive CD4+ T cell proliferation, and provides no information on epitope specificity, a likely determinant of CD8+ T cell efficacy. We circumvented these limitations by selectively depleting CD8+ T cells specific for the Gag epitope CTPYDINQM (CM9) via the administration of immunotoxin-conjugated tetrameric complexes of CM9/Mamu-A*01. Immunotoxin administration effectively depleted circulating but not tissue-localized CM9-specific CD8+ T cells, akin to the bulk depletion pattern observed with antibodies directed against CD8. However, we found no evidence to indicate that circulating CM9-specific CD8+ T cells suppressed viral replication in Mamu-A*01+ rhesus macaques during acute or chronic progressive infection with a pathogenic strain of SIV. This observation extended to macaques with established infection during and after continuous antiretroviral therapy. In contrast, natural controller macaques experienced dramatic increases in plasma viremia after immunotoxin administration, highlighting the importance of CD8+ T cell–mediated immunity against CM9. Collectively, these data showed that CM9-specific CD8+ T cells were necessary but not sufficient for robust immune control of SIV in a nonhuman primate model and, more generally, validated an approach that could inform the design of next-generation vaccines against HIV-1.
Introduction
CD8 + T cells are thought to be critical for immune control of HIV-1 and, in nonhuman primates, SIV (1)(2)(3). Infected cells display an array of viral peptide epitopes presented in the context of surface-expressed MHC class I molecules that act as antigenic targets for recognition via uniquely encoded T cell receptors (TCRs) (4). In response to signals received during this process of molecular recognition, antigen-specific CD8 + T cells undergo clonal expansion (5) and acquire effector functions, including cytotoxicity (6) and the ability to release proinflammatory cytokines, such as IFN-γ and TNF (6)(7)(8)(9). The emergence of antigen-specific CD8 + T cells with effector functionality has been associated temporally with the initial decline in viremia that occurs during acute infection with HIV-1 (10). Escape mutations in targeted epitopes also occur rapidly after infection (11)(12)(13)(14)(15)(16), and disease progression is tightly linked with the expression of individual MHC class I molecules (17)(18)(19)(20), which dictate the landscape of presentable antigens derived from HIV-1 and SIV. Effective immune control of viral replication is nonetheless rare, and most untreated individuals progress inexorably to AIDS (21).
Antibody-mediated depletion studies have provided direct evidence that CD8 + T cells suppress viral replication during the acute and chronic phases of infection with SIV, the latter either in the absence or presence of continuous treatment with antiretroviral drugs (ARVs) (22)(23)(24). However, such bulk depletion
of an entire lineage based on the expression of CD8, which can also include NK cells, introduces caveats to interpretation, feasibly extending to reactive CD4 + T cell proliferation, which could potentially increase the number of target cells available for infection by SIV (25). To circumvent these issues, we selectively depleted CD8 + T cells specific for the Gag epitope CTPYDINQM (CM9), which is typically immunodominant in SIV-infected rhesus macaques expressing the appropriate restriction element, namely Mamu-A*01. This approach was enabled in vivo by conjugating saporin (26) with ultrapure recombinant tetrameric complexes of CM9/Mamu-A*01. Similar immunotoxin complexes have been used previously in mouse models (27) and, more recently, to deplete HIV-1-specific CD8 + T cells in vitro (28).

Immunotoxin administration effectively depleted circulating CM9-specific CD8 + T cells but less effectively depleted tissue-localized CM9-specific CD8 + T cells, mirroring the pattern of wholesale depletion observed with antibodies directed against CD8. No measurable effects on viral replication were observed after immunotoxin administration during acute or chronic progressive infection, the latter irrespective of continuous treatment with ARVs. However, elevated plasma viral loads (VLs) were observed in natural controller macaques after immunotoxin administration, indicating that CM9-specific CD8 + T cells can proactively suppress the replication of SIV.
Results
CM9-specific CD8 + T cells contribute to natural control of viremia during chronic infection with SIV. CD8 + T cell responses directed against the immunodominant Mamu-A*01-restricted CM9 epitope are thought to play a key role in the containment of viral replication (29) as a consequence of biological and structural constraints that limit the options for mutational immune escape in this region of SIV Gag (30,31). To test this notion, we administered a targeted immunotoxin to 5 Mamu-A*01 + rhesus macaques with established SIVmac239 infection, aiming to deplete CM9-specific CD8 + T cells, and measured the impact of this intervention among tissues and PBMCs (Figure 1A). Two of these macaques had spontaneously controlled plasma VL to <10,000 copies/mL. After 4 days, CM9-specific CD8 + T cell frequencies were substantially reduced among PBMCs (Figure 1, B and C), but lesser effects were observed in tissues, including bronchoalveolar lavage (BAL) samples from the respiratory tract, jejunum, and lymph nodes (LNs) (Figure 1D). Depletion of CM9-specific CD8 + T cells was associated with a 10-fold increase in plasma VL in natural controller macaque 08D030 and a 2.5-fold increase in plasma VL in natural controller macaque DGT4. No increases in plasma VL were observed in the 3 macaques with poor control of SIV (Figure 1E). After 17 days, set-point plasma VL was restored in macaques 08D030 and DGT4 (Figure 1E), paralleling the reconstitution of CM9-specific CD8 + T cells among PBMCs (Figure 1F). Although formal statistical analysis was not possible, reflecting the rarity of spontaneous immune control in this model, these data suggested that CM9-specific CD8 + T cells were largely redundant in macaques with unconstrained viral replication but nonetheless exerted potent antiviral activity in macaques with low set-point VLs.

Immunotoxin administration does not alter VL kinetics during acute infection with SIV. To evaluate the impact of CM9-specific CD8 + T cells on acute viral replication, we infected 5 Mamu-A*01 + rhesus macaques with SIVmac239 (day 0) and administered the immunotoxin on days 7, 10, and 13. Immunotoxin was withheld from 5 control Mamu-A*01 + rhesus macaques treated otherwise identically. The dosing and sampling schedules are depicted in Figure 2A. Immunotoxin administration transiently reduced the frequencies of CM9-specific and, to a lesser extent, total memory CD8 + T cells among PBMCs (Figure 2, B and C, and Supplemental Figure 1A; supplemental material available online with this article; https://doi.org/10.1172/jci.insight.174168DS1). In contrast, no significant depletion of CM9-specific CD8 + T cells was observed in BAL, colon, jejunum, or LNs (Supplemental Figure 1, B-E), and there was no evidence of reactive CD4 + T cell proliferation among PBMCs (Figure 2D). Moreover, immunotoxin administration did not significantly affect plasma VL trajectories measured out to day 30 (Figure 2E) or CD4 + T cell-associated VLs measured on days 10, 20, or 30 (Figure 2F). Despite the caveat of incomplete depletion, especially among tissues that typically sustain viral replication, these data suggested that CM9-specific CD8 + T cells did not critically affect the natural course of acute infection with SIV.

Immunotoxin administration does not modulate the clonotypic repertoire of CM9-specific CD8 + T cells during acute infection with SIV. The mobilization of distinct CM9-specific CD8 + T cell clonotypes, defined by the expression of unique TCRs, has been associated with differential outcomes after acute infection with SIV (32). To determine if particular clonotypes were preferentially depleted by the immunotoxin, we isolated ultrapure populations of CM9-specific CD8 + T cells directly ex vivo and sequenced the corresponding rearranged TCRβ chain (TRB) genes using a high-throughput approach (33). Repertoires were compared across anatomical sites sampled from control macaques (n = 4-5) or immunotoxin-treated macaques (n = 4-5) at comparable time points during the acute infection study. Logo analysis of the third complementarity-determining region (CDR3β), which canonically plays a key role in antigen recognition, revealed similar amino acid chemistries and sequence motifs among circulating CM9-specific CD8 + T cells from control and immunotoxin-treated macaques on day 20, representing the nadir of depletion (Figure 3, A and B). Moreover, we found no significant differences in repertoire diversity measured using any of 3 distinct metrics, namely the number of unique clonotypes, the Shannon-Weiner index (34), or the d50 index (35), among circulating CM9-specific CD8 + T cells from control versus immunotoxin-treated macaques after reconstitution (day 30) (Figure 3, C-E). A similar overall picture was observed in BAL and LNs (Figure 3, C-E). Using the TCR
Figure 2 (caption, continued): (C) Total memory CD8 + T cell frequencies among PBMCs (parent = memory T cells). (D) Total memory CD4 + T cell frequencies among PBMCs (parent = memory T cells). (E) Plasma viral loads. (F) CD4 + T cell-associated viral loads. Each symbol represents 1 macaque (B-F). Shaded areas depict mean values (B-E). Horizontal bars indicate median values (F). Significance was determined using AUC analysis (B-E) or a mixed-effects ANOVA with Šídák correction (F). DPI, days postinfection.
neighborhood enrichment test (TCRNET) (34), we identified enriched clonotypes in BAL (n = 9) and LNs (n = 3) from immunotoxin-treated versus control macaques on day 20 (Figure 3F), but no such differences were apparent on day 30 (Figure 3G). In similar comparisons of CM9-specific CD8 + T cells isolated from PBMCs, no enriched clonotypes were detected on day 20 (Figure 3F), and very few clonotypes (n = 2) reached the threshold for significance on day 30 (Figure 3G). Multidimensional scaling further revealed no obvious clustering by anatomical site, group, or time point (Figure 3H), and there was no obvious categorical segregation by TRBV or TRBJ segment use (Supplemental Figure 2, A and B). These collective analyses indicated minimal perturbation and rapid normalization of the CM9-specific CD8 + T cell repertoire after immunotoxin administration during acute infection with SIV.
Immunotoxin administration does not impact viral replication during or after treatment with ARVs. To evaluate the role of CM9-specific CD8 + T cells during and after treatment with ARVs, we administered the immunotoxin to 5 chronically infected Mamu-A*01 + rhesus macaques receiving a daily coformulated drug regimen comprising the nucleo(s/t)ide reverse transcriptase inhibitors emtricitabine and tenofovir disoproxil fumarate, a prodrug of tenofovir, and the integrase strand-transfer inhibitor dolutegravir. Immunotoxin was withheld from 5 control Mamu-A*01 + rhesus macaques treated otherwise identically. The dosing and sampling schedules are depicted in Figure 4A. Immunotoxin administration transiently reduced the frequencies of CM9-specific CD8 + T cells among PBMCs (Figure 4B), but no corresponding effects were observed in BAL, colon, jejunum, or LNs (Supplemental Figure 3, A-D). The frequencies of total memory CD8 + T cells also remained largely unchanged among PBMCs (Figure 4C). In contrast, the frequencies of total memory CD4 + T cells were higher in immunotoxin-treated versus control macaques across the observed time course (Figure 4D), but importantly, there were no corresponding differences in the frequencies of circulating Ki67 + CD4 + T cells before and after immunotoxin administration (Figure 4E). This latter observation suggested that depletion of CM9-specific CD8 + T cells did not lead to reactive CD4 + T cell proliferation in the presence of ARVs.

No significant differences in viremia were observed between control and immunotoxin-treated macaques during or after treatment with ARVs (Figure 4F). However, viral recrudescence was delayed in several macaques across both experimental groups after the cessation of ARVs, in some cases for as long as 6-12 months after study initiation, and generally occurred more rapidly after immunotoxin administration (Figure 4F). Power calculations nonetheless indicated that a much larger cohort (n = 15 macaques per group) would have been required to detect a 5-fold increase in plasma VL. CD4 + T cell-associated VLs were also largely unchanged before and after immunotoxin administration, indicating a lack of efficacy against tissue reservoirs of SIV (Figure 4G). These findings suggested that circulating CM9-specific CD8 + T cells minimally affected viral replication during and after treatment with ARVs.

To determine if this lack of efficacy could be explained by mutational immune escape, we sequenced the CM9 epitope in plasma samples obtained 14 days after the cessation of ARVs. Wild-type sequences were detected almost exclusively (Supplemental Figure 3E). A similar pattern was observed in the immunotoxin-treated macaques with acute infection and the immunotoxin-treated macaques with chronic infection described above, with one exception (DFH4) (Supplemental Figure 3E). Accordingly, epitope variation could not account for the biological inefficacy of the immunotoxin during acute or chronic progressive infection, either in the absence or presence of ARVs.

In a previous study, CD45RA + , panKIR + , and/or NKG2A + virtual memory CD8 + T (Tvm) cells were found to become more frequent in HIV-1-infected people during treatment with ARVs (36). Tvm cells also limited viral reactivation ex vivo, potentially explaining an associated diminution of the viral reservoir in vivo (36). In line with these findings, we detected elevated frequencies of CD45RA + panKIR + NKG2A − Tvm cells during versus after treatment with ARVs (Supplemental Figure 4A). Moreover, the frequencies of these cells were unaffected by immunotoxin administration (Supplemental Figure 4, B and C), further confirming the specificity of this intervention. These observations suggested that alternative modes of viral suppression, potentially including Tvm cell activity, were able to compensate for the lack of immune pressure exerted by CM9-specific CD8 + T cells during and after treatment with ARVs.

Immunotoxin administration restructures the clonotypic repertoire of CM9-specific CD8 + T cells during treatment with ARVs. In a final series of experiments, we used a high-throughput sequencing approach to characterize TRB gene rearrangements among CM9-specific CD8 + T cell populations isolated from 4 chronically infected Mamu-A*01 + rhesus macaques before and after immunotoxin administration in the continuous presence of ARVs. Scatter plot analysis spanning all anatomical sites revealed a shift in the
repertoire and the appearance of new clonotypes after immunotoxin administration (Figure 5A). Tissue-specific analyses confirmed these findings and identified new clonotypes that became dominant in BAL and PBMCs (Figure 5B). Public clonotypes were common, especially before immunotoxin administration, but private clonotypes tended to reconstitute CM9-specific CD8 + T cell populations after immunotoxin administration (Figure 5C). Logo analysis revealed similar CDR3β amino acid chemistries and sequence motifs before and after immunotoxin administration (Supplemental Figure 5, A and B). Repertoire diversity was also largely unchanged before and after immunotoxin administration (Supplemental Figure 5, C-E), and there were no obvious concomitant perturbations in TRBV or TRBJ segment use (Supplemental Figure 5, F and G). These observations suggested that immunotoxin administration enabled previously subdominant and often private clonotypes to reconstitute CM9-specific CD8 + T cell populations in the continuous presence of ARVs.
Discussion
In this study, we used a targeted immunotoxin to deplete CM9-specific CD8 + T cells in Mamu-A*01 + rhesus macaques during acute or chronic infection with SIVmac239, the latter either untreated or treated with ARVs. Our data revealed a key role for CM9-specific CD8 + T cells as mediators of the rare natural controller phenotype but failed to demonstrate significant antiviral activity during acute or chronic progressive infection. We also found no evidence to support the notion that CM9-specific CD8 + T cells help suppress viral replication during or after treatment with ARVs. More generally, our findings validated a promising approach to the depletion of antigen-specific CD8 + T cells in a nonhuman primate model that could aid the discovery of immune determinants of protection against SIV and, by extension, HIV-1.

The early development of highly cytotoxic virus-specific CD8 + T cells equipped with enhanced survival properties has been linked with spontaneous immune control of HIV-1 and SIV (37)(38)(39). In our model, even transient depletion of CM9-specific CD8 + T cells in two macaques with naturally suppressed viremia led to a reactive increase in plasma VLs, suggesting a causal association between antiviral functionality and natural control of SIV. A more comprehensive evaluation was precluded by the fact that very few rhesus macaques (<1%) express Mamu-A*01 and control viral replication in the absence of ARVs (40). Our study was therefore limited from a statistical perspective, with power calculations indicating that hundreds of macaques would have been required to confirm a 10-fold increase in plasma VL. It should also be noted that we did not formally evaluate the functional properties of CM9-specific CD8 + T cells before immunotoxin administration. The biological relevance of SIV-specific CD8 + T cells in vivo has been addressed previously via wholesale depletion using antibodies directed against CD8α, which also eliminate NK cells, or CD8β (24, 41-43). These reagents efficiently deplete circulating CD8 + T cells but have lesser effects in tissues (21,23,44), akin to our findings with saporin-conjugated tetrameric complexes of CM9/Mamu-A*01. However, the elimination of CD8 + T cells en masse results in the expansion of memory CD4 + T cells to fill the induced homeostatic hole in the immune system, likely reflecting increased availability of IL-15 (45)(46)(47). This phenomenon could potentially enhance viral propagation, at least in the absence of ARVs (48). In contrast, our targeted approach did not elicit reactive CD4 + T cell proliferation, eliminating this caveat to interpretation and enabling us to detect an effect confined to the natural controller phenotype, which appeared uniquely reliant on CM9-specific CD8 + T cells to suppress the replication of SIV.

CD8 + T cells target multiple epitopes during infection with SIV. In some cases, epitomized by the Tat epitope S/TL8, mutational escape occurs readily, but in other cases, epitomized by the Gag epitope CM9, mutational escape either requires compensatory amino acid substitutions and/or compromises viral replication (49,50). Accordingly, it is perhaps not surprising that we were unable to identify a clear antiviral role for CM9-specific CD8 + T cells during acute or chronic progressive infection, although it should be noted that immunotoxin-mediated depletion was not absolute using the protocol reported here, especially among tissue sites that sustain active replication of SIV. Likewise, we found no evidence to support a biologically relevant antiviral role for CM9-specific CD8 + T cells during or after treatment with ARVs, barring a few minor increases in viral replication following immunotoxin administration, which mimicked the natural breakthrough pattern that occurs commonly in the continuous presence of ARVs (51). These results could be explained similarly by the availability of other target epitopes restricted by MHC class I.
Antibody-mediated depletion studies have indeed shown that CD8 + T cells can maintain viral suppression during treatment with ARVs (23) but are nonetheless unable to delay viral recrudescence after treatment with ARVs (52). This latter observation aligns with our data and suggests that other immune cell types and/or tissue-localized SIV-specific CD8 + T cells could limit viral replication in the immediate aftermath of treatment cessation, at least in the absence of functional exhaustion (36).

Immunotoxin administration transiently perturbed the clonotypic repertoire of CM9-specific CD8 + T cells during acute infection and more profoundly altered the clonotypic repertoire of CM9-specific CD8 + T cells during chronic infection in the continuous presence of ARVs. Repertoire fluctuations during the active depletion phase could be explained by differences in the susceptibility of individual clonotypes to cell death, tissue redistribution, and/or the preferential expansion of distinct clonotypes
receiving optimal signals via the corresponding TCRs (1,53). The emergence of new tissue-specific clonotypes in particular defined the repertoire perturbations induced by immunotoxin administration during treatment with ARVs. It is notable here that such immune flexibility under conditions of minimal but persistent antigenic drive, which could potentially be exploited therapeutically, appears to be a consistent feature of infection with SIV (33,54).

The model presented here may prove useful in future studies designed to unravel the role of specificity as a determinant of CD8 + T cell efficacy. Our approach could also be extended in principle to antigen-specific CD4 + T cells and other infections beyond SIV. Ultimately, the ability to dissect the biological relevance of specific antigenic targets in nonhuman primates has the potential to inform the design of more effective recombinant vaccines against infectious agents of global concern, such as HIV-1 and SARS-CoV-2.
Methods
Sex as a biological variable. Male and female rhesus macaques (Macaca mulatta) were eligible for inclusion. Biological outcomes were evaluated collectively. Study enrollment was based on the expression of Mamu-A*01.

Experimental design. For the acute infection study, 5 Mamu-A*01 + rhesus macaques were infected intravenously with 3,000 TCID 50 of SIVmac239, indicated as day 0. An immunotoxin preparation comprising saporin-conjugated tetrameric complexes of CM9/Mamu-A*01 was then administered intravenously at a dose of 350 pmol/kg on days 7, 10, and 13. For the chronic infection study, 5 Mamu-A*01 + rhesus macaques with established SIVmac239 infection were injected intravenously with the immunotoxin preparation either once at a dose of 500 pmol/kg, 1 nmol/kg, or 2 nmol/kg or twice at a dose of 350 pmol/kg, separated by an interval of 4 days. For the ARV study, 5 Mamu-A*01 + rhesus macaques received a coformulated subcutaneous drug regimen comprising the nucleo(s/t)ide reverse transcriptase inhibitors emtricitabine and tenofovir disoproxil fumarate, a prodrug of tenofovir, and the integrase strand-transfer inhibitor dolutegravir once daily, starting approximately 3 months after infection with SIVmac239 (55). Once plasma VL was reduced to <50 copies/mL, which occurred approximately 3 months after the initiation of ARVs, the immunotoxin preparation was administered 3 times intravenously at a dose of 350 pmol/kg, separated by intervals of 3 days. BAL, PBMCs, and solid tissue biopsies were collected from each macaque before and after immunotoxin administration. An identical protocol was used to sample an equal number of control macaques in the acute infection study and the ARV study. Details for all participant macaques are listed in Supplemental Tables 1-3.

CM9 monomer production and immunotoxin tetramerization. Biotinylated monomeric complexes of CM9/Mamu-A*01 were generated as described previously (32,56). Cleanup was performed using Pierce High Capacity Endotoxin Removal Spin Columns (Thermo Fisher Scientific). Monomers were then passed through a polyethersulfone membrane filter (0.1 μm, Sartorius), and residual endotoxin levels were measured using a Pierce Chromogenic Endotoxin Quant Kit (Thermo Fisher Scientific). Saporin conjugation and tetramerization were achieved by adding streptavidin-ZAP (Advanced Targeting Systems) stepwise to the purified monomers at a final molar ratio of 1:4. The immunotoxin complex was then diluted in phosphate-buffered saline (HyClone) and passed through a sterile filter (0.2 μm, Thermo Fisher Scientific).

Flow cytometry and cell sorting. Single-cell suspensions were washed twice with phosphate-buffered saline (HyClone). SIV-specific CD8 + T cells were identified using fluorochrome-conjugated pentameric complexes of CM9/Mamu-A*01 (ProImmune). Antibodies against cell surface markers used to identify and phenotype lymphocyte populations are detailed in Supplemental Table 4. Dead cells were excluded using a LIVE/DEAD Fixable Aqua Dead Cell Stain Kit (Thermo Fisher Scientific). Samples were acquired using an LSRFortessa (BD Biosciences) or a Cytek Aurora (Cytek Biosciences). Bulk CD4 + T cells and SIV-specific CD8 + T cells were sorted using a FACSymphony S6 (BD Biosciences). The gating strategy is depicted in Supplemental Figure 6. All flow cytometry data were analyzed using FlowJo version 10.8.1 (FlowJo LLC).

Clonotype analysis. Clonotype analysis was performed as described previously (33,59). Briefly, SIV-specific CD8 + T cells (n = 100-10,000) were sorted into 100 μL of RNAlater (Sigma-Aldrich), and TRBV gene rearrangements were amplified without bias using a template-switch anchored RT-PCR. Unique barcodes and the P5 and P7 sequencing adaptors (Illumina) were added to all amplification products using sequential PCRs. Sequences were generated using a paired-end (150 bp) strategy in conjunction with MiSeq v2 Kits (Illumina). TRBV and TRBJ segments were identified using MiXCR (60). Diversity and similarity indices were calculated using VDJtools (34). Graphs showing TRBV and TRBJ segment use, heatmaps, and scatter plots were also generated using VDJtools. Multidimensional scaling plots were graphed using RStudio version 1.3.1056. Clonotypes were defined by CDR3β amino acid sequence and TRBV/TRBJ (61,62). Public clonotypes exhibited sequence identity across different macaques in this study and/or identity with previously reported sequences specific for CM9 (https://vdjdb.cdr3.net). Private clonotypes were identified in a single anatomical site in a single macaque. TCR repertoire plots were constructed to incorporate all sequences with a frequency of >2%.
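For orientation only, the diversity metrics cited above can be computed from clonotype read counts roughly as follows; this is a sketch under our own reading of the d50 index (definitions vary between tools), not the VDJtools code used in this study.

```python
import numpy as np

def shannon_wiener(counts):
    """Shannon-Wiener diversity H = -sum(p * ln p) over clonotype frequencies."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return -np.sum(p * np.log(p))

def d50(counts):
    """Number of top clonotypes whose cumulative frequency first reaches 50%."""
    p = np.sort(np.asarray(counts, dtype=float))[::-1]
    p = p / p.sum()
    return int(np.searchsorted(np.cumsum(p), 0.5) + 1)

counts = [120, 80, 40, 20, 10, 5, 5, 3, 2, 1]   # toy read counts, one sample
print(len(counts), round(shannon_wiener(counts), 3), d50(counts))
```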
Statistics. Experimental groups were compared using 2-tailed paired t tests, AUC analyses, or 2-way or mixed-effects ANOVAs with post hoc tests in Prism version 9.3.1 (GraphPad). Significance was assigned at P < 0.05.

Study approval. All experimental procedures were approved by the National Institute of Allergy and Infectious Diseases Animal Care and Use Committee as part of the Intramural Research Program of the NIH (protocol LVD 26E). In line with the Nonhuman Primate Management Plan set out by the Office of Animal Care and Use, macaques were housed at the NIH Animal Center under the supervision of the Division of Veterinary Resources, accredited by the Association for the Assessment and Accreditation of Laboratory Animal Care. Care at this facility met the advisory standards of the Animal Welfare Act and Animal Welfare Regulations, the US Fish and Wildlife Services, and the Guide for the Care and Use of Laboratory Animals (National Academies Press, 2011). The physical condition of each macaque was monitored daily. Participant macaques were exempt from contact social housing on scientific grounds aligned with the respective protocol of the National Institute of Allergy and Infectious Diseases Animal Care and Use Committee. All macaques were therefore housed under noncontact social conditions for the duration of the study. Access to water was provided continuously. Commercial monkey biscuits were offered twice daily, alongside fresh produce, bread and egg products, and a foraging mix consisting of raisins, nuts, and rice. Environmental enrichment to stimulate foraging and play activity was provided in the form of food puzzles, mirrors, toys, and cage furniture.
Figure 3. Immunotoxin administration does not modulate the clonotypic repertoire of CM9-specific CD8 + T cells during acute infection with SIV. Experimental details as in Figure 2. (A) Logo plots and chemical classification of amino acids spanning the CDR3β loops of the top 10 pooled clonotypes from control macaques on day 20. (B) Logo plots and chemical classification of amino acids spanning the CDR3β loops of the top 10 pooled clonotypes from immunotoxin-treated macaques on day 20. (C) Repertoire diversity measured using the number of unique clonotypes for CM9-specific CD8 + T cell populations from control and immunotoxin-treated macaques on day 30. (D) Repertoire diversity measured using the Shannon-Weiner index for CM9-specific CD8 + T cell populations from control and immunotoxin-treated macaques on day 30. (E) Repertoire diversity measured using the d50 index for CM9-specific CD8 + T cell populations from control and immunotoxin-treated macaques on day 30. (F) TCRNET analysis of CM9-specific CD8 + T cell repertoires from control and immunotoxin-treated macaques on day 20. (G) TCRNET analysis of CM9-specific CD8 + T cell repertoires from control and immunotoxin-treated macaques on day 30. (H) Multidimensional scaling (MDS) analysis of CM9-specific CD8 + T cell repertoires from control and immunotoxin-treated macaques on days 10, 20, and 30. Clonotypes that were enriched in CM9-specific CD8 + T cell populations from immunotoxin-treated macaques are shown in blue. Each symbol represents 1 macaque (C-E and H). Horizontal bars indicate median values (C-E). Significance was determined using a 2-way ANOVA with Šídák correction (C-E). *P < 0.05. DPI, days postinfection.

Figure 4. Immunotoxin administration does not impact viral replication during or after treatment with ARVs. (A) Schematic representation of the experiment. Immunotoxin was administered to 5 Mamu-A*01 + rhesus macaques chronically infected with SIVmac239 undergoing continuous treatment with ARVs. Immunotoxin was withheld from 5 control Mamu-A*01 + rhesus macaques treated otherwise identically. (B) CM9-specific CD8 + T cell frequencies among PBMCs (parent = CD8 + T cells). (C) Total memory CD8 + T cell frequencies among PBMCs (parent = memory T cells). (D) Total memory CD4 + T cell frequencies among PBMCs (parent = memory T cells). (E) Ki67 + CD4 + T cell frequencies among PBMCs (parent = CD4 + T cells). (F) Plasma viral loads. (G) CD4 + T cell-associated viral loads. Each symbol represents 1 macaque (B-G). Shaded areas depict mean values (B-D and F). Horizontal bars indicate median values (E and G). Significance was determined using AUC analysis (B-D and F) or a 2-tailed paired t test (E and G). Postimmunotoxin = day 7 (E and G). DPI, days postimmunotoxin.

Figure 5. Immunotoxin administration restructures the clonotypic repertoire of CM9-specific CD8 + T cells during treatment with ARVs. Experimental details as in Figure 4. (A) Scatter plot analysis of CM9-specific CD8 + T cell repertoires spanning all macaques and all anatomical sites before and after immunotoxin administration. (B) Heatmap analysis of CM9-specific CD8 + T cell repertoires spanning all macaques and individual anatomical sites before and after immunotoxin administration. (C) Public and private clonotype frequencies among CM9-specific CD8 + T cell populations from individual macaques (n = 4). Plots incorporate all sequences with a frequency of >2%. Postimmunotoxin = day 7 (A-C).
"year": 2024,
"sha1": "91d43526e392a615e23baae8600754cb6b63da9c",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1172/jci.insight.174168",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "33181d0639c641b6c5840c1a7c0dcb8fbdc1163d",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
4249871 | pes2o/s2orc | v3-fos-license | Laboratory Test Abnormalities are Common in Polymyositis and Dermatomyositis and Differ Among Clinical and Demographic Groups
Objective: Given the difficulties regarding the interpretation of common laboratory test results in polymyositis (PM) and dermatomyositis (DM) in clinical practice, we assessed their range of abnormalities, differences among phenotypes and interrelationships in a large referral population. Methods: We retrospectively assessed 20 commonly measured blood laboratory tests in 620 well-defined PM/DM patients at different stages of illness and treatment to determine the frequency, range of abnormalities and correlations among clinical, gender, racial and age phenotypes. Results: Myositis patients at various stages of their disease showed frequent elevations of the serum activities of creatine kinase (51%), alanine aminotransferase (43%), aspartate aminotransferase (51%), lactate dehydrogenase (60%), aldolase (65%) and myoglobin levels (48%) as expected. Other frequent abnormalities, however, included high white blood cell counts (36%), low lymphocyte counts (37%), low hematocrit levels (29%), low albumin levels (22%), high creatine kinase MB isoenzyme fractions (52%), high erythrocyte sedimentation rates (33%) and high IgM and IgG levels (16% and 18%, respectively). Many of these tests significantly differed among the clinical, gender, racial and age groups. Significant correlations were also found among a number of these laboratory tests, particularly in the serum activity levels of creatine kinase, the transaminases, lactate dehydrogenase and aldolase. Conclusion: Laboratory test abnormalities are common in PM/DM. Knowledge of the range of these expected abnormalities in different myositis phenotypes, gender and age groups, and of their correlations, should assist clinicians in better interpretation of these test results, allow for a clearer understanding of what level of abnormality warrants further evaluation for liver or other diseases, and may avoid unnecessary laboratory or other testing.
INTRODUCTION
The idiopathic inflammatory myopathies (IIM) are a group of autoimmune muscle diseases of which polymyositis (PM) and dermatomyositis (DM) are the most frequently recognized forms [1]. Although the IIM are the most common acquired muscle diseases in adults, they are still rare in the general population, with an estimated annual incidence of only 10 new cases per million persons. These diseases are difficult to diagnose and categorize into predictive groups [2]. Their diagnosis and treatment are often delayed because patients initially present with vague or nonspecific symptoms such as fatigue, myalgias, and arthralgias, which are also common in other types of illnesses [3].
The serum activity level of creatine kinase (CK) is the most commonly performed enzyme test for the diagnosis and monitoring of myositis [1,4,5]. This is due to the relative muscle specificity of the enzyme and the infrequent involvement of other organs that can generate CK (brain, cardiac muscle) in patients with IIM [6]. Although elevations of CK and other serum muscle enzyme activities are used as one criterion for diagnosis [7], few studies have assessed the other laboratory blood tests often ordered in myositis patients, leading to misconceptions and a lack of understanding of the meaning of the test results obtained [5]. Our anecdotal experience is that many unnecessary tests are performed on myositis patients to assess for other conditions in response to laboratory abnormalities that are actually the result of the myositis. For example, elevations of serum activities of the so-called "liver enzymes", the transaminases and lactate dehydrogenase (LD), in myositis patients due to myoblast activation, have resulted, and in our experience continue to result, in the misdiagnosis of liver disease and inappropriate liver biopsies [1]. Likewise, elevations in the serum activity of the CK-MB fraction isoenzyme (CK-MB) can result in the misdiagnosis of myocardial infarction [8]. The current study was conducted to address these issues by assessing the results of 20 routine blood laboratory tests from a large group of myositis subjects evaluated at the National Institutes of Health (NIH). This population would be representative of a referral population at different stages of disease progression and with various levels of disease activity undergoing different treatments. Our findings of frequent abnormalities in these tests and differences among phenotypes should aid clinicians in better understanding the utility of various laboratory tests in myositis patients seen in clinical practice.
MATERIALS AND METHODOLOGY
Patient population. The participants in the study were 620 patients (181 men and 439 women; 351 with PM and 269 with DM; 427 Caucasians, 130 blacks and 63 of other races) of the initial 800 consecutive individuals with myopathies who were evaluated in natural history and therapeutic research protocols in the National Institute of Arthritis and Musculoskeletal and Skin Diseases (NIAMS) between 1985 and 2005. All subjects signed informed consent for NIAMS institutional review board approved protocols. Most subjects were referred for recommendations for therapy, possible inclusion in therapeutic trials or for diagnostic purposes. Selection criteria for this study included adult patients with the diagnosis of PM or DM on the basis of the exclusion of other forms of myopathy and meeting probable or definite Bohan and Peter criteria [9], and who had results for at least 15 of the 20 laboratory tests commonly assessed in myositis patients at the NIH. All laboratory tests were performed in the NIH Department of Laboratory Medicine of the NIH Clinical Center, and data from the first time that at least 15 concurrent laboratory test values were available were used. Subjects with criteria for possible myositis (n=35), muscular dystrophies (n=29), inclusion body myositis (n=28), cancer-associated myositis (n=9), juvenile myositis (n=5), metabolic myopathies (n=4), PM or DM but fewer than 15 concurrent laboratory tests available (n=27), and undefined myopathies (n=43) were excluded from the study.
Data recording. Data collected in this retrospective cohort study included demographics (age, gender and race), clinical information (the presence of Gottron's papules or sign, or heliotrope rash, to define DM), and 20 laboratory tests commonly performed on myositis patients at the NIH. The laboratory tests included were serum levels of albumin, creatinine, aldolase, CK (total level and isoenzymes CK-BB, CK-MB, CK-MM), quantitative immunoglobulins (IgG, IgM and IgA levels), erythrocyte sedimentation rate (ESR), total white blood cell count (WBC) and differential (including percent basophils, lymphocytes and polymorphonuclear leukocytes), hematocrit, aspartate aminotransferase (AST), alanine aminotransferase (ALT), LD, and myoglobin. When the normal reference ranges changed during the study period, laboratory results were expressed in relation to the upper limits of the normal range and recalculated based upon the current normal range. Normal reference ranges were those used by the NIH Department of Laboratory Medicine (established to cover 95% of normal individuals).
Statistical analysis. Due to the non-normality of the distribution of measurements, the primary analyses to assess possible differences in laboratory tests in demographic groups (i.e., male and female gender; Caucasian, black, and other races; age tercile groups of <37, 37-50, and >50 years) and in clinical groups (i.e., PM and DM) were performed via the nonparametric Wilcoxon rank sum test (for two groups), the Kruskal-Wallis test (for the three unordered racial categories), or Cuzick's nonparametric trend test ("nptrend" test in Stata) for the three ordered age categories.
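As a minimal illustration only (not the authors' code), the two primary nonparametric comparisons can be reproduced in Python with SciPy; the data file and column names (gender, race, ck) are hypothetical, and Cuzick's trend test has no direct SciPy equivalent.

import pandas as pd
from scipy.stats import mannwhitneyu, kruskal

df = pd.read_csv("myositis_labs.csv")  # hypothetical file, one row per patient

# Two groups (e.g., male vs. female): Wilcoxon rank sum / Mann-Whitney U test
male = df.loc[df["gender"] == "M", "ck"].dropna()
female = df.loc[df["gender"] == "F", "ck"].dropna()
u_stat, p_two = mannwhitneyu(male, female, alternative="two-sided")
print(f"CK by gender: U={u_stat:.1f}, p={p_two:.4g}")

# Three unordered groups (race): Kruskal-Wallis test
race_groups = [g["ck"].dropna() for _, g in df.groupby("race")]
h_stat, p_three = kruskal(*race_groups)
print(f"CK by race: H={h_stat:.1f}, p={p_three:.4g}")

# Cuzick's trend test across the ordered age terciles is available in Stata
# ("nptrend") but not in SciPy; it would need a dedicated implementation.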
Correlations of selected laboratory values were assessed using Spearman's rank correlation to accommodate the non-normality of the values. For this study, we defined Rho correlations as falling into four categories: 0.7-1.0 = high correlation; 0.5-<0.7 = moderate correlation; 0.3-<0.5 = low correlation; and <0.3 = little correlation [10].
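The Rho categorization above translates directly into code; a sketch reusing the hypothetical data frame df from the previous example:

from scipy.stats import spearmanr

def rho_category(rho: float) -> str:
    """Map an absolute Spearman Rho to the categories defined in the text."""
    r = abs(rho)
    if r >= 0.7:
        return "high"
    elif r >= 0.5:
        return "moderate"
    elif r >= 0.3:
        return "low"
    return "little"

rho, p = spearmanr(df["ck"], df["aldolase"], nan_policy="omit")
print(f"CK vs. aldolase: rho={rho:.2f} ({rho_category(rho)} correlation), p={p:.4g}")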
Additional analyses used multiway ANOVA or prediction intervals based on linear regression models. For these analyses, either logarithms of the raw values were used (14 of the parameters) or inverse hyperbolic sine-transformed values were used (for six parameters: basophils, lymphocytes, Polys, ESR, and the three CK isoenzyme values; this transformation is very similar to a log transformation, but is more flexible in allowing 0, or even negative, values). As indicated by the Shapiro-Wilk test and other confirmatory tests, these transformations greatly reduced the skewness of the distributions and resulted in more Gaussian-like distributions. The multiway ANOVA analyses were used to confirm the primary univariate group comparisons by including gender, race, and diagnosis in each analysis. In assessing the effect of age, all three factors of gender, race, and diagnosis were taken into account.
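A sketch of these transformations and confirmatory tests with NumPy, SciPy, and statsmodels; the column names remain hypothetical, and the exact model specification used by the authors is an assumption here.

import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy.stats import shapiro

df["log_ck"] = np.log(df["ck"])          # log transform for strictly positive values
df["asinh_esr"] = np.arcsinh(df["esr"])  # inverse hyperbolic sine allows zero values

for col in ("log_ck", "asinh_esr"):
    w, p = shapiro(df[col].dropna())
    print(f"Shapiro-Wilk for {col}: W={w:.3f}, p={p:.3g}")

# Multiway ANOVA on the transformed values, including gender, race and diagnosis
model = smf.ols("log_ck ~ C(gender) + C(race) + C(diagnosis)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))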
Because this was an exploratory study, a two-sided p value of <0.05 was considered significant for these analyses, with no correction for multiple comparisons.
RESULTS
Laboratory test abnormalities in IIM patients.
As expected, many of the IIM patients frequently had various enzyme values above the normal limits ( Table 1). The abnormal laboratory results included skeletal muscle markers such as aldolase, CK, CK-MB, LD, myoglobin, and transaminases (ALT and AST), as well as acute and chronic inflammatory markers (WBC count, polymorphonuclear leukocytes, ESR, and immunoglobulins IgG and IgM).
In contrast, a number of other test results were frequently below the normal range. Of these, low CK-MM levels, lymphocyte counts, and creatinine levels have been previously reported in patients with IIM [5,11,12]. Low hematocrit and serum albumin levels, however, have not been systematically described in myositis.
Differences in laboratory test results among gender, race, age, and IIM clinical groups. CK levels were significantly higher in men compared to women, in blacks compared to Caucasians and in PM compared to DM (Fig. 1), as previously described [13]. CK-MB levels were also significantly higher in men compared to women with IIM. The median aldolase levels were also above normal in all groups and they were significantly higher in men, blacks and in patients with PM. Age also seemed to impact the aldolase levels. There was a significant trend for values to go down in older age groups primarily due to the oldest age tercile (>50 years) being significantly lower than the other two terciles. Similar age-related trends were seen in all three skeletal muscle markers (CK, CK-MB and aldolase; Fig. 1).
The median values tended to be borderline high for both AST and ALT in most groups, while above normal for LD in all groups (Fig. 2). There were statistically significant differences in gender, race and clinical diagnosis groups for AST and ALT. Men had higher levels of these enzymes than women, as has been described in healthy populations [14]. On the other hand, LD was significantly higher only in blacks and in patients with PM. Higher serum LD in blacks compared to Caucasians is consistent with the observation that enzymes catalyzing reactions in phosphagenic (CK) and glycolytic (e.g., LD) metabolic pathways have significantly higher activities (by 30-40%) in skeletal muscle biopsy tissue of Africans compared to Caucasians [15].
Although all subgroup medians fell within the respective normal reference intervals, there were several statistically significant differences for WBC count, percent lymphocytes, and ESR (Fig. 3). WBCs were higher in males than females and in Caucasians than blacks, while lymphocytes were lower in males than females and tended to decrease with age. This conflicts with data in other populations describing higher WBC counts in women [16]. ESR levels were higher in females compared to males and in blacks compared to Caucasians, as previously reported for healthy populations [17].
The median values were within the respective reference intervals for serum albumin and hematocrit, but they were at the low end of the reference interval for serum creatinine, consistent with the expected loss of muscle mass in myositis patients. On the other hand, serum albumin levels were significantly higher in Caucasians than blacks and significantly lower in PM compared to DM patients. Similar to a healthy population, serum creatinine levels were significantly higher in men than women with IIM, likely due to the larger muscle mass in men. Creatinine levels were higher in those older than 50 years, possibly due to declining renal function. The hematocrit was significantly higher in males compared to females with IIM, as described in healthy populations [18].
In order to assess the relative contribution of diagnosis, age, gender and race to the differences in certain laboratory measures, we compared P-values based on the log values of laboratory measures. These analyses showed that CK and aldolase were most different among the diagnostic groups (P = 1.2 × 10^-17 and P = 5.3 × 10^-9, respectively). Nonetheless, CK and aldolase also differed among races (P = 4.1 × 10^-9 and P = 0.0086, respectively), and by gender (P = 0.002 and P = 0.005, respectively), but both of these were less significant than the differences among diagnostic groups. Hematocrit differed most between males and females (P = 8.4 × 10^-13), but less so among races (P = 0.00008), age groups (P = 0.001) and diagnostic groups (P = 0.035).
The parametric multiway ANOVA analyses, always taking into account the factors of gender, race, age and diagnosis, confirmed all of the statistically significant univariable results, with confirmatory P-values <0.05, except in three cases where the multivariable P-value did not reach significance.
Correlations among laboratory test results. A number of significant correlations among the laboratory tests studied were identified in the IIM (see Table 2, where only correlations greater than 0.5 in absolute value are shown). Moderate to high correlations were observed among aldolase, ALT, AST, CK, LD, myoglobin, Polys, and WBC. Muscle enzymes and muscle breakdown products all correlated highly, as expected. Aldolase correlated significantly with 14 of the other laboratory tests and particularly strongly with ALT, AST, CK, LD, and myoglobin. A moderate correlation was found between Polys and WBCs. Lower correlations were found among the other laboratory tests. Fig. (4) displays scatter plots for CK levels with aldolase, ALT, AST and LD values for all patients. The plots show the 95% and 99% prediction intervals of CK with each of these four tests, thus providing guidance to clinicians on when a test result may be considered to be outside the expected range for patients with myositis and when further evaluation for liver disease may be warranted. Although only limited data were available in this retrospective study, the medical records of the 17 individuals who had ALT, AST or LD levels higher than the 99% prediction interval were examined to assess the possibility of liver disease. Of these 17 individuals, four had documented liver disease (one each with fibrotic liver disease of unknown origin, non-alcoholic steatohepatitis, hepatomegaly of unknown cause and hepatitis secondary to blood transfusion).
Fig. (3). Box plots showing the median, 25th and 75th percentiles and highest and lowest values for WBC count, lymphocytes and ESR in all myositis patients and differences among groups. The shaded area depicts the normal range. Abbreviations: per Table 1 and Fig. (1).
DISCUSSION
This study assessed routine laboratory test results in a comparatively large referral population of the IIM at various stages of illness to identify possible associations with disease not previously noted, to determine if laboratory test values vary among gender, racial, age and clinical groups, and to define possible correlations among these tests.
It is commonly accepted that CK, aldolase, ALT, AST, and LD are muscle-derived enzymes in myositis whose levels tend to indicate disease activity [3,19]. Thus, not surprisingly, patients with PM, who typically have more severe muscle disease, were found to have higher levels of these enzymes than DM patients [1]. Another possible explanation, however, is that patients with DM more frequently have circulating inhibitors or autoantibodies to CK or to other enzymes and these autoantibodies may result in lower serum enzyme levels [19,20].
In terms of the sex differences, males have higher elevations in enzymes (including CK, CK-MB, CK-MM, aldolase and transaminases) compared to females, reflecting their greater average muscle mass [21]. The reasons for the lower immunoglobulin levels and lymphocyte counts in men than in women and higher WBCs in men remain unclear. In the general population, men have higher hematocrit levels and lower ESR levels compared to women [22]. Leukocytosis has been associated with low-grade fever at the onset of myositis and in association with disease flares in certain groups [8]. Different tests were also found to vary with respect to race. It has been known that CK levels tend to be higher in blacks than Caucasians, likely as a result of greater muscle mass in the former [13,23].
Regarding the previously unreported abnormalities, although gender was an important determinant of hematocrit, we found that hematocrit is lower in DM than PM. Because PM tends to present with more severe disease activity than DM [1], a lower hematocrit as a result of the anemia of chronic disease might have been expected more often in PM patients. Perhaps because DM has more vasculopathy than PM [1], there might be less capacity to produce red blood cells or more gastrointestinal or other bleeding in DM. The more frequent vasculopathy seen in DM compared to PM [1], with resultant capillary leak in damaged capillaries in muscle and other target organs or abnormal gastrointestinal absorption, may be the etiology of lower serum albumin, which is a negative acute-phase protein, in patients with DM than in PM. Hematocrits were also higher in Caucasians compared to blacks, in patients >50 years of age, and in PM. Higher hematocrits were reported in healthy populations of Caucasians compared to blacks in both gender groups, but the same study failed to reveal a major age effect [18].
We also noted significant correlations among various laboratory findings in the IIM. Moderate to high correlations were observed among aldolase and ALT, AST, CK, LD, myoglobin, Polys, and WBC. Thus, muscle enzymes and muscle breakdown products all correlated highly, as expected. Muscle enzyme levels may be increased in patients with active disease not only due to skeletal muscle inflammation, but also due to release of these enzymes into the peripheral blood by regenerating myoblasts. CK-MB, aldolase, LD and other enzymes are known to be produced by myoblasts [24][25][26][27].
The significant but low correlations of WBC and Polys with muscle enzyme levels and muscle metabolites such as myoglobin were unexpected. However, CK is expressed in WBC, including macrophages [28,29], as well as in endothelial cells [30], and LD is also expressed by T and B lymphocytes [31]. Thus, while CK and LD may be increased due to active myositis and may correlate on that basis, there may also be several tissue sources outside the muscle for these enzymes, providing additional rationale for their correlation with WBC and other tests.
Although attempts were made to avoid possible confounding in all aspects of the study, there are limitations to the approaches used in this retrospective investigation. First, the data may not be reflective of a typical IIM population as subjects referred to the NIH may have more severe disease or were taking more medications that could have influenced the laboratory results. Second, missing data were inevitable in this retrospective approach, although adequate numbers existed for nearly all of the studies to allow for an appropriately powered analysis. Additionally, possible factors that could have also impacted the laboratory findings that could not be directly investigated in this retrospective study included: variations in clinical disease activity; the degree of muscle damage; disease duration; myositis autoantibodies; concomitant illnesses; lung or cardiac or other organ system involvement related to myositis; and medications or other treatment regimens. All of these important factors should be included in future prospective investigations in this area.
Nonetheless, this is the first study undertaken to describe the range of common laboratory findings from a comparatively large sample of myositis patients, in various myositis phenotypes, and involving a large number of variables. The results also serve as the foundation for future research in developing guidance to clinicians in diagnosing myositis via laboratory tests. The close correlation of some of these tests, particularly CK levels with the transaminases, lactate dehydrogenase and aldolase, allows for assessment of the expected ranges of abnormalities and better guidance as to when additional evaluation for possible liver or other disease is indicated.
Fig. (4). Scatterplots and the 99% and 95% prediction intervals for the relationships among CK and aldolase (A), ALT (B), AST (C) and LD (D) levels. Abbreviations as in Table 1 and Figs. (1-3).
CONCLUSION
This study highlights the importance of understanding the range of expected routine laboratory test abnormalities in myositis patients and the differences among subgroups in order to effectively evaluate possible other causes of these abnormalities. Knowledge of the expected range of abnormalities of common laboratory tests may aid in avoiding unnecessary expensive and invasive diagnostic techniques in myositis patients. | 2016-10-26T03:31:20.546Z | 2012-06-01T00:00:00.000 | {
"year": 2012,
"sha1": "3593bb2a96c24d26fa8bc3eec571ed0098f0d2ac",
"oa_license": "CCBYNC",
"oa_url": "http://benthamopen.com/contents/pdf/TORJ/TORJ-6-54.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "cafb3009a62dc2301228d04090382570b1fab6a3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
6470219 | pes2o/s2orc | v3-fos-license | Vascularized Free Lymph Node Flap Transfer in Advanced Lymphedema Patient after Axillary Lymph Node Dissection
Lymphedema is a condition characterized by tissue swelling caused by localized fluid retention. Advanced lymphedema is characterized by irreversible skin fibrosis (stage IIIb) and nonpitting edema, with leather-like skin, skin crypts, and ulcers with or without involvement of the toes (stage IVa and IVb, respectively). Recently, surgical treatment of advanced lymphedema has been a challenging reconstructive modality. Microvascular techniques such as lymphaticovenous anastomosis and vascularized lymph node flap transfer are effective for early stage lymphedema. In this study, we performed a two-stage operation in an advanced lymphedema patient. First, a debulking procedure was performed using liposuction. A vascularized free lymph node flap transfer was then conducted 10 weeks after the first operation. In this case, good results were obtained, with reduced circumferences in various parts of the upper extremity noted immediately postoperation.
INTRODUCTION
Lymphedema is tissue swelling caused by localized fluid retention. Its natural course is chronic, progressive, and often destructive. It can either be congenital or acquired following trauma or surgery for cancer treatment that includes removal of the lymph nodes and radiotherapy [1].
The International Society of Lymphology classifies the severity of lymphedema into three stages. Karri et al. [2] reported a modified staging system, classifying the severity of lymphedema into four stages. According to their system, advanced lymphedema is characterized by irreversible skin fibrosis (stage IIIb) and nonpitting edema, with leather-like skin, skin crypts, and ulcers with or without involvement of the toes (stage IVa and IVb, respectively) [3].
Microvascular techniques such as lymphaticovenous anastomosis and vascularized lymph node flap transfer (VLNT) are effective for early stage lymphedema. Interestingly, liposuction, first used to treat lymphedema in 1989, has been shown to be an effective method of reducing limb volume and has good long-term results both cosmetically and functionally. In addition, liposuction has a low rate of complications and does not further disrupt the lymphatic system [4].
In this study, we performed a combined two-stage operation in an advanced lymphedema patient to achieve better results. First, a debulking procedure was performed using liposuction. VLNT was conducted 10 weeks after the first operation because there were no changes in the circumferences of various parts of the upper extremity after 2 months.
CASE REPORT
A 52-year-old woman had lymphedema in the right upper extremity caused by partial mastectomy with axillary lymph node dissection; she had undergone surgery and radiation therapy 10 years earlier for breast cancer. There was no improvement in symptoms following conservative treatment with rehabilitation and medication therapies. There was significant fibrosis in the upper extremity and pitting edema was not observed, indicating a reversible state of lymphedema. We defined this patient's lymphedema stage as IIIa (Figure 1).
Presurgical evaluation was done with lymphoscintigraphy, which indicated that there was no uptake in the right axillary lymph node. In addition, marked dermal back flow was noted, which is specific to lymphedema ( Figure 2).
We planned a two-stage operation. First, a debulking operation using liposuction was arranged. Then, VLNT was planned for physiologic realignment of lymphatics. Liposuction was performed with a cannulated catheter. During liposuction, fibrous septation was disrupted and 800 mL of fluid was drained. Immediately after liposuction, a garment was applied to prevent swelling. After 10 weeks, we performed a lymph node transfer using vascularized free flaps from the cervical region. We elevated the flap, based on the transverse cervical artery, as a sigmoid shape 1.5 cm above the clavicle. We then elevated the subplatysmal flap and identified the external jugular vein and omohyoid muscle between the sternocleidomastoid muscle and trapezius muscle. Next, we identified the transverse cervical artery, which runs beneath the omohyoid muscle toward the anterior scalene muscle and arises from the thyrocervical trunk. After flap elevation, we anastomosed the flap pedicle to the radial artery with the end-to-side method (Figure 3). The severity of the lymphedema was evaluated by measuring the circumferences of various parts of the upper extremity. There were five points of measurement: the wrist area, 10 cm distal from the cubital fossa, the cubital fossa, 10 cm proximal from the cubital fossa, and the axillary area.
There was improvement in the lymphedema condition following the operations. Ten weeks after the first liposuction operation, decreased circumferences ranging from 35.7% to 100% compared to those of the contralateral side were noted. Of note, we defined "%" as the ratio between the decreased circumferences and the difference between the normal side and affected side at their preoperative circumferences. One year after the second-stage lymph node transfer operation, all measurement points on the right upper limb except the wrist point had a smaller circumference than their preoperative measurement (Table 1, Figure 1).
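Restated as a formula (notation ours, not the authors'), with C denoting the circumference at a given measurement point:

\[
\%\ \text{reduction} \;=\;
\frac{C_{\mathrm{affected}}^{\mathrm{pre}} - C_{\mathrm{affected}}^{\mathrm{post}}}
     {C_{\mathrm{affected}}^{\mathrm{pre}} - C_{\mathrm{normal}}^{\mathrm{pre}}}
\times 100
\]

Under this definition, a value of 100% means the affected side has returned to the preoperative circumference of the normal side, consistent with the reported range of 35.7% to 100%.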
DISCUSSION
The main mechanism of VLNT is re-establishment of lymphatic circulation in the transferred flap. In the postoperative period, healing of transplanted lymphatics to native lymphatics at the recipient site and removal of lymphatic fluid by a direct pumping mechanism may provide further fluid drainage [5]. In some patients, improvement in lymphedema swelling can be observed immediately in the hospital after VLNT, before the healing of the donor and native lymphatics. While the mechanism of improvement is unclear, the release of scar tissue in the previously operated and/or irradiated lymphatic bed has been postulated to account for this observation [6]. In addition, our patient showed immediate dramatic reduction of circumference length; 10 days after VLNT, the affected side arm had a smaller circumference than the contralateral normal side arm. This may have been caused by lymphatic absorption due to earlier scar tissue release caused by liposuction combined with strict arm elevation. We encouraged the patient to maintain a position with passive elevation of the arm 24 hours a day, using an elastic bandage sling attached to a standing pole.
Liposuction has been shown to remove circumferential subcutaneous fatty tissue in affected limbs. In addition, suction-assisted lipectomy has been demonstrated to maximally reduce excess solid volume remaining in lymphedema after the fluid component has been removed with conventional therapy. While effective in removing excess volume, debulking does not address the pathophysiology causing the lymphedema. Therefore, patients must continue postoperative compression to prevent reaccumulation of excess fluid, and additional postoperative therapy is required. Therefore, custom-fit compression garments are measured by the therapist and must be placed on the patient immediately postsurgery in the operating room.
It should be noted, however, that procedures that remove the fluid component of lymphedema, such as VLNT, are less likely to achieve the large reduction in excess volumes observed after liposuction. Rather, the procedures significantly decrease the postoperative need for compression garments and lymphedema therapy. Conversely, liposuction results in large volume reductions because it removes large amounts of solid fat and lymphatic material, but it does not address the ongoing lymphatic stasis and obstruction. Therefore, we have combined the liposuction and VLNT procedures in a two-staged approach to manage advanced staged lymphedema.
First, liposuction was performed to remove the solid components and reduce excess volume. After postoperative swelling stabilized, VLNT was used to improve lymphatic drainage and address subsequent fluid retention. Previously, this combined approach has resulted in volume reductions of over 83%, with compression garment use required only in the evenings and at night [7].
Furthermore, in this case report, we had to perform VLNT after liposuction to maintain superficial venous circulation. If we had performed VLNT before the liposuction, there would have been circulatory impairment caused by the liposuction procedure, which could have progressed to venous congestion. The compression garment applied immediately after liposuction also contributed to the prevention of venous retention. VLNT requires microanastomosis, and there are three recipient sites available. The axillary area is usually operated on and irradiated, resulting in fibrotic changes, which make the dissection of recipient vessels more tedious. The anterior recurrent ulnar artery is sometimes very small. Therefore, the ulnar artery may be used with an end-to-side technique. For recipient vessels, the radial artery's dorsal branch and the cephalic vein are superficial and therefore easily dissected [8].
There is a hypothesis that VLNT may act by means of an internal pump and suction mechanism using pathways for lymphatic clearance of the lymphedematous limb. The "pump" mechanism is driven by the high-pressure inflow of the arterial anastomosis from the radial artery, which provides a strong hydrostatic force into the vascularized groin lymph node flap. The "suction" is continued by the large-caliber, superficially located, low-pressure venous drainage provided by the cephalic vein [9].
In this case, good results were obtained; immediately postoperation, the patient with advanced staged lymphedema had reduced circumferences in various parts of the upper extremity. However, there were limitations to this study. One limitation is that an increase in diameter was seen at each point between the immediate postoperative evaluation and the evaluation 1 year postoperation. We hypothesize that the increased circumferences after 1 year were the result of the stabilization of the grafted flap, rather than recurrence of lymphedema. Long-term follow-up observations would be needed to confirm this hypothesis. In addition, we did not evaluate the patient's satisfaction regarding psychological aspects and physical activity; assessments of objective findings are necessary in a later study. | 2016-08-09T08:50:54.084Z | 2016-03-01T00:00:00.000 | {
"year": 2016,
"sha1": "f08d14df07bc3ece8ac0d4f6691d9b0ef91f1611",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.4048/jbc.2016.19.1.92",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f08d14df07bc3ece8ac0d4f6691d9b0ef91f1611",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
4003545 | pes2o/s2orc | v3-fos-license | Principal component analysis of physicochemical and sensory characteristics of beef rounds extended with gum arabic from Acacia senegal var. kerensis
Abstract Principal component analysis (PCA) was carried out to study the relationship between 24 meat quality measurements taken from beef round samples that were injected with curing brines containing gum arabic (1%, 1.5%, 2%, 2.5%, and 3%) and soy protein concentrate (SPC) (3.5%) at two injection levels (30% and 35%). The measurements used to describe beef round quality were expressible moisture, moisture content, cook yield, possible injection, achieved gum arabic level in beef round, and protein content, as well as descriptive sensory attributes for flavor, texture, basic tastes, feeling factors, color, and overall acceptability. Several significant correlations were found between beef round quality parameters. The highest significant negative and positive correlations were recorded between color intensity and gray color and between color intensity and brown color, respectively. The first seven principal components (PCs) were extracted, explaining over 95% of the total variance. The first PC was characterized by texture attributes (hardness and denseness), feeling factors (chemical taste and chemical burn), and two physicochemical properties (expressible moisture and achieved gum arabic level). Taste attribute (saltiness), physicochemical attributes (cook yield and possible injection), and overall acceptability were useful in defining the second PC, while the third PC was characterized by metallic taste, gray color, brown color, and physicochemical attributes (moisture and protein content). The correlation loading plot showed that the distribution of the samples on the axes of the first two PCs allowed for differentiation of samples injected to a 30% injection level, which were placed on the upper side of the biplot, from those injected to 35%, which were placed on the lower side. Similarly, beef samples extended with gum arabic and those containing SPC were also distinguishable when scores for the first and third PCs were plotted. Thus, PCA was efficient in analyzing the quality characteristics of beef rounds extended with gum arabic.
| INTRODUCTION
Meat extenders are substances that are added in meat products with the aim of improving the binding properties of such products. This has been achieved through the addition of functional ingredients in the form of inorganic salts (i.e., sodium chloride, phosphates, bicarbonate) and organic compounds from plant and animal origins, such as starch, hydrocolloids, and proteins, to meet wide sensory and technological requirements of processed meat producers and consumers (Petracci, Bianchi, Mudalal, & Cavani, 2013). Extenders such as soy protein concentrates (SPC), whole milk, egg proteins, and fillers such as starches are used for the manufacture of affordable but nutritious meat products (Heinz & Hautzinger, 2007). Studies have shown the effect of some of these nonmeat additives on the quality and physicochemical properties of meat products (Andrès, Zaritzky, & Califano, 2006;Soltanizadeh & Ghiasi-Esfahani, 2015;Youssef & Barbut, 2011).
Recently, for the first time, gum arabic from Acacia senegal was reported to enhance cook yield and juiciness when used in extended beef rounds (Mwove, Gogo, Chikamai, Omwamba, & Mahungu, 2016).
Conclusions on the quality of these meat and meat products are drawn based on measurements taken on many quality attributes which are related (Cañeque et al., 2004). According to Karlsson (1992), the large number of measures used to assess meat quality are usually correlated. Therefore, they could be replaced with a few measures while retaining the information. Conclusions based on analysis performed on single physicochemical and sensory characteristics do not provide any indication of the relationships among the various physicochemical and sensory characteristics, nor allow the grouping of samples with similar characteristics. Therefore, there is a need to have a few elements to synthesize the trends observed in beef rounds extended with gum arabic and thus draw more information from the large amount of heterogeneous data collected. Moreover, there are minimal reports that elucidate relationships between meat and meat product quality measurements. To achieve this, multivariate statistical methods such as principal component analysis (PCA) can be employed. PCA can be used to extract the important information and reduce a large set of correlated variables to uncorrelated measures, each of which is a particular linear combination of the original quality characteristics, without loss of information (Abdi & Williams, 2010; Karlsson, 1992; Šnirc, Kral, Ošťádalová, Golian, & Tremlová, 2017). The first few principal components account for most of the variation in the original data. This helps to identify the most important directions of variability in a large heterogeneous dataset (Destefanis, Barge, Brugiapaglia, & Tassone, 2000; Šnirc et al., 2017). The correlation between a component and a variable estimates the information they share. In PCA, this correlation is called a loading (Destefanis et al., 2000). A PC loading shows the relationship between the originally measured variables and the extracted PC (Abdi & Williams, 2010). Thus, PCA is a very effective procedure for obtaining a synthetic judgment of meat quality through the reduction in dimensionality, which permits visual interpretation of the data represented by two-dimensional scatter plots (Destefanis et al., 2000; Šnirc et al., 2017). These scatter plots are called PC loading plots. In the loading plots, variables close together are positively correlated, while those lying opposite to each other tend to have negative correlation. The more a variable is away from the axis origin, the more it loads onto that PC (Baardseth, Helgesen, & Isaksson, 1996). Various studies have employed PCA to assess meat and meat product quality characteristics (Boyacı et al., 2014; Cañeque et al., 2004; Destefanis et al., 2000; Karlsson, 1992; Liu, Lyon, Windham, Lyon, & Savage, 2004). It is therefore an ideal tool for studying quality characteristics in beef rounds extended with gum arabic, for which such analysis is yet to be reported.
[Table residue (sensory attribute definitions): Gray color (GC), color of meat on the surface and the inside (1 = no surface gray color, 0%; 6 = total surface gray color, 100%); Brown color (BC), color of meat on the surface and the inside (1 = no surface brown color, 0%; 6 = total surface brown color, 100%); Iridescence (I), the property of meat surfaces appearing to change color as the angle of view changes. Attributes assessed using a 16-point spectrum universal intensity scale, where 0 = absence of attribute and 15 = extremely intense.]
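A minimal sketch of this PCA workflow with scikit-learn; the data matrix X (one row per beef round sample, one column per quality measurement) is hypothetical, and standardization is our assumption, since PCA on variables with different units is usually performed on the correlation matrix.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X_std = StandardScaler().fit_transform(X)  # zero mean, unit variance per variable
pca = PCA(n_components=7).fit(X_std)
print("cumulative explained variance:", pca.explained_variance_ratio_.cumsum())

# Correlation loadings between the original variables and the PCs
loadings = pca.components_.T * np.sqrt(pca.explained_variance_)

# Sample scores, used as coordinates in score plots and biplots
scores = pca.transform(X_std)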
Therefore, the aim of this work was to study relationships among the various physicochemical and sensory characteristics and to allow the grouping of samples with similar characteristics in beef rounds injected with brines containing gum arabic from Acacia senegal var. kerensis at different levels. This article reports the results of the PCA for 24 beef round quality measurements (physicochemical and sensory attributes).
| Sample preparation
This research was carried out at the Castle Meat Products factory and the Egerton University, Nakuru, Kenya. The beef rounds were injected with brines containing gum arabic from A. senegal var. kerensis. Beef injection brine contained standard recommended amounts of sodium chloride (2%), sodium nitrate (0.02%), sodium tripolyphosphate (0.5%), and sodium ascorbate (0.0547%) with different levels of gum arabic (1%, 1.5%, 2%, 2.5%, and 3%). For comparison, curing brine solutions containing SPC (3.5%) were used. Meat cuts weighing 3.5 kg consisting of pieces from the beef round were trimmed of external fat, skin, membranes, and the silver skin, and injected with curing brines using a manual injector pump (Friedr. DICK Hand Brine Injector Pump) followed by massaging for 3 hr to evenly distribute the brine. Rounds were then kept for 18 hr at 4°C after which they were cooked and sliced uniformly. A total of 24 variables were analyzed on the cooked beef round samples.
| Cook yield and expressible moisture
The centrifugation method was followed in the determination of expressible moisture using a DSC-200A centrifuge (Aron Laboratory Instruments, Taiwan). This was achieved by centrifuging a 10 g sample at 860 g for 7.5 min at 20°C (Zhang, Mittal, & Barbut, 1995). Cook yield was determined as the percentage of the cooked sample weight over the raw weight of the noninjected sample.
| Possible extension level and gum level
Possible injection level was taken as the percent increase in weight of the injected beef after 18 hr storage at 4°C, while the actual gum arabic in injected beef rounds was calculated based on the possible injection level achieved for each extended beef cut.
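These yield quantities are simple ratios; the sketch below uses hypothetical weights, and the gum-level calculation is one plausible reading of "based on the possible injection level achieved", not the authors' exact formula.

def cook_yield(cooked_wt, raw_noninjected_wt):
    """Cooked sample weight as a percentage of the raw, noninjected weight."""
    return 100.0 * cooked_wt / raw_noninjected_wt

def possible_injection(injected_wt, raw_wt):
    """Percent weight increase of the injected cut after 18 hr at 4 deg C."""
    return 100.0 * (injected_wt - raw_wt) / raw_wt

def achieved_gum_level(brine_gum_pct, injection_pct):
    """Gum arabic in the beef round, scaled by the injection level achieved."""
    return brine_gum_pct * injection_pct / 100.0

print(cook_yield(3150.0, 3500.0))          # 90.0 (% cook yield)
print(possible_injection(4550.0, 3500.0))  # 30.0 (% injection level)
print(achieved_gum_level(2.0, 30.0))       # 0.6 (% gum arabic in the round)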
| Moisture content and protein content
The moisture content was determined by AOAC method 950.46, while the crude protein was determined by AOAC method 981.10 (AOAC, 2000).
| Statistical analysis
This study employed a completely randomized design in a factorial arrangement.
| RESULTS AND DISCUSSION
The means and standard deviations of the variables determined are shown in Table 2. The coefficients of variation were lower than 5% for moisture and cook yield at 2.27% and 2.33%, respectively. All other measurements had coefficients of variation lower than 30% other than livery, soured, gray color, chemical burn, expressible moisture, gum arabic in beef round, and chemical taste. The highest coefficient of variation was recorded for chemical taste at 60.61%. Table 3 shows the correlation coefficients between the quality attributes tested. Various attributes were found to be highly correlated with each other. The highest significant negative and positive correlations were recorded between color intensity and gray color and between color intensity and brown color, respectively.
This was expected since low cured color intensity usually results in loss of brown color in the cooked product. Similarly, samples with high cured color intensity would have a higher brown color rating. Color intensity was also negatively correlated to beefy, livery, and sourness attributes, while gray color was positively correlated to these attributes. This is unlike what was reported by Liu et al. (2004); in their study, none of the color attributes was correlated with any of the other sensory attributes. Protein content was positively correlated with juiciness as well as overall acceptability. Destefanis et al. (2000) found overall acceptability and juiciness to be negatively correlated to cooking losses. Expressible moisture was positively correlated to juiciness, but negatively correlated to the gum level, indicating that increasing gum level resulted in a decrease in expressible moisture, meaning that there was an improvement in water-holding capacity in the beef rounds with increased gum level, as shown in Figure 1a,b and as reported earlier (Mwove et al., 2016).
In PCA, the first seven PCs were extracted explaining over 95% of the total variance for the beef round quality parameters (Table 4).
The first three of these PCs accounted for 73.6% of the variance. Expressible moisture is located far from the origin on the first PC, with a positive loading. In addition, hardness and springiness are located close to one another, indicating that they are positively correlated with each other, but negatively correlated to juiciness, which is located in the opposite quadrant. Possible injection and juiciness were located close to overall acceptability, showing that these two may be very important in defining the acceptability of the products. In this work, samples that were highly desired were also rated highly for juiciness. In addition, samples that had higher possible injection also had higher juiciness (Mwove et al., 2016).
Overall acceptability, possible injection, and cook yield were highly positively correlated with the second PC. As reported earlier in Table 3, these three were positively correlated with each other. This makes possible injection and cook yield very crucial in defining the quality of beef rounds containing gum arabic.
In Figure 2b, color intensity and brown color are placed in quadrants opposite to gray color, showing that these were negatively correlated. They are, however, placed closer to the first PC, showing that they are less important in defining it. Nevertheless, with respect to the third PC, gray color and brown color, together with moisture and protein content, are located far away, and hence are useful in defining the third PC. Figure 3a,b shows the scores biplots for PC1 versus PC2 and PC1 versus PC3, respectively. The score plot shows the location of the objects in the multivariate space of two principal component score vectors (Destefanis et al., 2000). Two groups of samples were clearly separated, as seen in Figure 3a. Samples extended to the 35% level were placed on the upper side of the biplot, where most sensory as well as physicochemical attributes were placed, except hardness, springiness, color intensity, brown color, and expressible moisture.
Hardness and springiness were earlier reported to be lower in samples extended to 35% level as compared to those extended to 30% (Mwove et al., 2016). However, these samples were high in expressible moisture which is positively loading on PC1 (Table 5). In Figure 3b, samples extended to 30% are located on the left side, while those extended to 35% are located on the right side near juiciness, overall acceptability, and possible injection (Figure 2b). This shows that samples injected to 35% level were juicier and had the highest overall acceptability rating (Mwove et al., 2016).
Beef samples extended with gum arabic were also displayed on the left side, while those containing SPC were placed on the right side (Figure 3a). In Figure 3b, SPC-containing samples are located on their own in the bottom right-side quadrant where the sourness and gray color attributes are. This indicates that samples containing SPC were higher in these attributes as compared to the gum arabic extended beef round samples.
FIGURE 2 Biplot for (a) factor 1 and factor 2, and (b) factor 1 and factor 3. B, beefy; BF, beef fat; L, livery; H, hardness; DE, denseness; SP, springiness; J, juiciness; ST, saltiness; S, soured; A, astringent; CT, chemical taste; CB, chemical burn; MT, metallic taste; CI, color intensity; GC, gray color; BC, brown color; I, iridescence; OA, overall acceptability; EM, expressible moisture (%); MC, moisture content (%); CY, cook yield (%); PI, possible injection (%); GA, gum arabic in round (%); PC, protein content (%)
| CONCLUSION
The results of PCA showed that the physicochemical and sensory attributes of beef hams extended with gum arabic and SPC were highly correlated. Several significant correlations were found between beef round quality parameters. The highest significant negative and positive correlations were recorded between color intensity and gray color and between color intensity and brown color, respectively. PCA revealed that texture characteristics (hardness, denseness) as well as expressible moisture and achieved gum arabic in beef round were important in defining PC1. In addition, the distribution of the samples on the axes of the first two PCs allowed for differentiation of samples injected to a 30% injection level, which were placed on the upper side of the biplot, from those injected to 35%, which were placed on the lower side. Differentiation of beef samples extended with gum arabic from those containing SPC was also visible when scores for the first and third PCs were plotted. Thus, it was possible to discriminate groups of samples based on the types and levels of binders used as well as the levels of injection, indicating differences in beef round characteristics.
Thus, PCA was very efficient in analyzing the physicochemical and sensory characteristics of beef rounds extended with gum arabic. | 2018-04-03T01:50:02.287Z | 2018-01-16T00:00:00.000 | {
"year": 2018,
"sha1": "39e0bafbc17ab891bfc57175b09a523887de60f1",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1002/fsn3.576",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "39e0bafbc17ab891bfc57175b09a523887de60f1",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Mathematics",
"Medicine"
]
} |
250940434 | pes2o/s2orc | v3-fos-license | Evolution of Cohesion between USA Financial Sector Companies before, during, and Post-Economic Crisis: Complex Networks Approach
Various mathematical frameworks play an essential role in understanding the economic systems and the emergence of crises in them. Understanding the relation between the structure of connections between the system’s constituents and the emergence of a crisis is of great importance. In this paper, we propose a novel method for the inference of economic systems’ structures based on complex networks theory utilizing the time series of prices. Our network is obtained from the correlation matrix between the time series of companies’ prices by imposing a threshold on the values of the correlation coefficients. The optimal value of the threshold is determined by comparing the spectral properties of the threshold network and the correlation matrix. We analyze the community structure of the obtained networks and the relation between communities’ inter and intra-connectivity as indicators of systemic risk. Our results show how an economic system’s behavior is related to its structure and how the crisis is reflected in changes in the structure. We show how regulation and deregulation affect the structure of the system. We demonstrate that our method can identify high systemic risks and measure the impact of the actions taken to increase the system’s stability.
Introduction
Economic crises negatively impact people's lives. They influence every aspect of individual and social development. Therefore, it is essential to prevent a crisis or alleviate its impact by promptly taking appropriate action. Thus, it is necessary to understand the economic system's functioning and behavior before, after, and during the crisis. Different approaches have been applied towards that end, including economic [1,2] and quantitative approaches [3][4][5][6][7][8][9].
The economic system is a complex system consisting of many interacting units whose collective behavior cannot be inferred from individual units' behavior. The behavior of the complex system is determined by its structure [10,11]. To understand the behavior and function of a complex system, one needs to describe its structure and understand how this structure evolves. Complex networks theory provides tools for the inference of the structure of a wide range of systems, including biological [12], social [11], technological [13], and economic systems [14]. The construction of economic networks is mostly achieved by mapping the flow of funds between companies [15] or transforming time series into a correlation matrix [14]. These two networks are complementary, although they overlap to a certain extent. The former network requires more time-consuming data collection, while the advantage of obtaining a network from time series is in its simplicity and the availability of data. The appropriate method for efficiently extracting information from time series is essential since it provides insights into the system's structure at a relatively low data collection cost.
Time Series Processing
The time series of prices are not suitable for calculating the correlation matrix since they have strong trends and are non-stationary. Different methods were applied in order to overcome these problems. One of the methods is based on the simple transformation of prices into logarithmic returns [6,[21][22][23][24][25]], derived as r_t = ln(P_t / P_{t-1}), where P_t denotes the price at time t. These series fluctuate around the mean, which is constant and close to zero.
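A minimal sketch of this transformation for a single price series (toy values):

import numpy as np
import pandas as pd

prices = pd.Series([100.0, 101.5, 99.8, 102.3])  # toy adjusted closing prices
returns = np.log(prices / prices.shift(1)).dropna()
print(returns)  # r_t = ln(P_t / P_{t-1}), fluctuating around a mean close to zero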
The other methods apply detrending techniques to the time series of returns [14,[26][27][28]]. These detrending techniques differ according to the trend calculation. Zhao et al. [14] make a cumulative time series of returns and calculate the trend for each series separately based on the detrended fluctuation analysis technique [29]. Random matrix theory is used to calculate the market component, representing the trend, in [26,28]. Musmeci et al. [27] calculate the market component based on the average returns of all companies considered in the analysis.
Some works used the auto-correlation of time series of returns to derive residuals, which are then used to calculate the correlation matrix [17,30]. Dynamic conditional correlation multivariate generalized autoregressive conditionally heteroscedastic (GARCH) model, DCC-MVGARCH, is used in these works.
Obtaining Network from Correlation Matrix
Obtaining a network from a correlation matrix suitable for gaining insights into the system's structure is a complex problem. It involves the usage of an appropriate filtering method. The method should ensure that relevant information is present in the network and that redundant edges are removed. Not satisfying any of the two requirements can lead to false conclusions. Existing filtering methods include the minimum-spanning tree (MST) [17,[30][31][32], planar maximally filtered graph (PMFG) [6,14,33,34] and threshold method [18,[20][21][22][23]28].
The threshold method filters out information based on correlation strength, while MST and PMFG combine correlation strength with criteria such as the inclusion of all graph nodes and planarity. From the perspective of inter- and intra-connectivity between communities, the inclusion and planarity criteria result in a connected graph at the price of not including all relevant edges. Onnela et al. [20] compared the threshold method with MST and showed that a threshold network with the same number of edges as MST results in a disconnected graph. These results imply that intra-community edges are more robust than edges between the communities. Moreover, in [6], PMFG leads to the conclusion that in times of crisis, communities are less connected than they are out of crisis, which is in contrast to results obtained using random matrix theory [3,7].
The threshold method is more suitable for analyzing community structure in the network. However, the problem is finding the optimal threshold value. A lower threshold is desirable to include as much information as possible. On the other hand, a higher threshold is preferable since it provides a sparse network, which is easier for analysis. The optimal threshold is the one that filters out noise from the network structure and leaves the edges that carry relevant information about mutual relations between entities. Onnela et al. [20] proposed clustering coefficient as the criteria for determining the threshold value. However, there is no substantial evidence that the clustering coefficient is more relevant than other network measures.
X. Cao et al. [28] calculated the optimal threshold by comparing the clustering coefficient, the average shortest path length, and the size of the giant component between a random graph and the empirical network for different threshold values. They determine the optimal threshold as the one at which the structural difference between the empirical and random networks is at its highest level. While these network properties are among the most investigated ones, they are not inclusive of other topological properties [35]. The work of C. Orsini et al. [35] indicates that the degree sequence, the joint degree matrix, the average clustering coefficient, and its dependence on the node degree are sufficient to describe the topology of most networks. In contrast, the average shortest path length and the size of the giant component depend on these properties.
Xue Guo et al. [18] determine the threshold based on the community's correlation strength. This approach underestimates the inter-connectivity between communities as higher importance is given to intra-community edges. Inter-community edges impact the diffusion process in a network and should be recognized appropriately. Moreover, according to the max-flow min-cut theorem, edges between communities are essential since the information flow is maximal through them.
S. Kumar et al. [21] set different thresholds to show how network characteristics, such as the component number and maximum clique size, change with the threshold. Xia et al. [23] determine the threshold by using the probability distribution of correlation coefficients and setting the threshold at the expected value plus multiples of standard deviations.
The mentioned threshold methods do not provide quantitative insights into how much information is filtered from the network. The complete correlation matrix carries all information about the structure of the network. Once threshold filtering is applied, a certain amount of information is lost. Therefore, it is essential to have quantitative insight into how much information we included in the network. It is vital to see which edges carry the relevant information about the system's topology and which are redundant. Here, we propose a quantitative measure based on the network's spectral properties to determine the optimal value of the threshold.
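The criterion itself is developed later in the paper; as a minimal illustration of the underlying idea only, one could compare the (normalized) eigenvalue spectra of the full correlation matrix and the thresholded adjacency matrix, and pick the threshold that keeps them closest. The specific distance below is our assumption, not the authors' measure; C is an N x N correlation matrix.

import numpy as np

def spectral_distance(C, theta):
    """Distance between the unit-normalized spectra of the correlation
    matrix C and the adjacency matrix obtained by thresholding it at theta."""
    A = (np.abs(C) >= theta).astype(float)
    np.fill_diagonal(A, 0.0)
    ev_c = np.sort(np.linalg.eigvalsh(C))
    ev_a = np.sort(np.linalg.eigvalsh(A))
    ev_c = ev_c / np.linalg.norm(ev_c)
    ev_a = ev_a / np.linalg.norm(ev_a)
    return float(np.linalg.norm(ev_c - ev_a))

# Scan candidate thresholds and keep the one whose spectrum stays closest
thetas = np.linspace(0.05, 0.95, 91)
best_theta = min(thetas, key=lambda t: spectral_distance(C, t))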
Crisis Examined Using Quantitative Methodologies
Different quantitative methods have been applied to better understand the impact of the crisis on the economic system. V. Filimonov et al. [5] used the Poisson Hawkes model and developed a measure to determine whether price fluctuations are due to an endogenous feedback process as opposed to exogenous news. A. M. Petersen et al. [4] studied cascading dynamics and related the Omori, productivity, and Bath laws with financial shocks. G. Oh et al. [36] used entropy density function in return time series, while K. Yim et al. [37] used the Hurst exponent.
Complex networks theory is also used for the analysis of crisis impact. X. Cao et al. [28] have shown that the crisis impacts the average degree, size of the giant component, and clustering coefficient. S. Kumar et al. [21] presented how the crisis affects the formation of clusters and the structure of minimum spanning trees. A. Nobi [24] showed the impact of the crisis on degree distribution and cluster formation. M. Wilinski [38] showed that MST changes structure from a hierarchical scale-free MST to a superstar-like MST decorated by a scale-free hierarchy of trees. L. Zhao et al. [6] examined how the crisis affects the number of communities and inter-sector edges.
Existing methods that use complex networks to analyze the impact of a crisis primarily consider either mapping country indices [21] or the constituents of leading indices such as the S&P 500 [6]. The former network comprises nodes representing different countries, while the latter network nodes represent companies from different sectors. For the S&P 500, for example, these are the 500 largest companies in the USA. This work demonstrates our approach to studying the evolution of relations between companies in the USA financial sector. We show that laws and policies strongly influence the system's structure. The network's community structure reflects the pre-crisis, crisis, and postcrisis periods.
Data
Innovative solutions in the financial sector, such as derivatives and securitization, that were not matched by a corresponding development of the regulatory framework created a bubble in the housing and credit supply markets. The bubble burst in 2008 due to the subprime mortgage crisis, which led to a worldwide economic crisis. This work studies the long-term relations between companies in the USA financial sector and their evolution from 2002 until 2017. This period includes the time before the 2008 crisis, the period during the crisis, and the economic recovery period. The financial sector includes companies whose main economic activity is asset management, real estate investment trusts (REITs), banks, insurance, and municipal funds.
We obtained data from the publicly available Yahoo Finance database, https://finance.yahoo.com/ (accessed on 27 September 2018), which contains various information about companies' values and how they have changed with time. The database comprises different data types, for instance, opening, closing, intraday and adjusted closing prices, and trading volume. The information is given for different aggregation intervals: day, week, and month. For this study, we used adjusted daily closing prices. The closing price is the price taken at the end of the business day, after trading has closed; the price fluctuates between the opening and closing of a business day. Adjusted means that the price is corrected to exclude the effects of dividend pay-outs and stock splits, which would otherwise provide misleading information: a split or dividend pay-out can significantly change the price even though the company's real value did not change.
For each year T ∈ {2002, . . . , 2017}, we collected the time series x_i^T(t) of the adjusted closing price at the end of each trading day t for each company i. Each time series covers one year, i.e., 252 trading days. The number of companies N_c(T) varies across years, since some companies were founded after 2002 and some were closed before 2017. Table 1 shows the number of companies active in year T in the USA financial sector according to the Yahoo Finance database.

Table 1. Number of active companies N_c(T) in the USA financial sector per year T.

T      N_c(T)     T      N_c(T)
2002   518        2010   762
2003   558        2011   786
2004   609        2012   804
2005   653        2013   825
2006   695        2014   855
2007   711        2015   884
2008   740        2016   892
2009   748        2017   888

Table 1 shows that the number of companies in the USA financial sector grew by 7.5% per year on average before 2007. Growth was much slower during the crisis and the economic recovery period from 2007 until 2015, with an average relative increase in the number of companies of approximately 2.7%, and this growth stagnated in 2016 and 2017.
Methodology
This work proposes a method for determining the network of relations between companies based on their stock price time series. We use this method to study the evolution of cohesion of financial sector companies whose stocks are publicly traded on the USA's stock exchange. With this method, we explore the evolution of mutual influences and how this evolution is shaped by different critical events, such as the world economic crisis in 2008.
Our method consists of three steps. First, we detrend the time series of stock prices for each considered company. Second, we calculate the matrix of Pearson correlation coefficients from the detrended time series. Finally, we apply threshold filtering to the correlation coefficients to extract the network of relations between companies. We then analyse and compare the topology of the networks obtained for different years.
Time Series of Prices for Obtaining Network
In our approach, companies are represented by nodes, and edges represent relationships between companies. As input data, we use the time series of each company's adjusted daily closing prices. These time series are non-stationary and have strong trends, as can be seen in Figure 1 (green line), which are often the consequence of different external influences. Non-stationarity and trends can lead to spurious, highly positive or negative correlations between companies. To avoid this, we remove the trends by detrending the time series using the method proposed in [29]. The detrended time series is the time series of fluctuations.
The original time series x_i^T(t) consists of 252 values of the adjusted daily stock price of company i during year T. In [29], the authors considered the differential time series of fluctuations and then performed detrending on the cumulative time series. Our original time series are already cumulative, so we omit this step in our calculations. We divide each time series into k non-overlapping segments of equal size l, so that k = n/l. Within each segment, we determine the linear trend of the time series x_i by least-squares fitting of y(t) = a t + b and subtract it,

\tilde{x}_i^T(t) = x_i^T(t) - y(t).

The resulting time series is stationary, and its average value is approximately zero. By removing the trend typical for the period l, we retain only the fluctuations that result from mutual influence between companies. We apply this detrending to the time series of every company. The detrended time series are then used to calculate the Pearson correlation coefficient matrix for year T, where each element of the matrix is calculated using the following formula:

\tilde{\rho}_{i,j}^T = \frac{\sum_{t=1}^{n} \left(\tilde{x}_i^T(t) - \hat{\mu}_{x_i^T}\right)\left(\tilde{x}_j^T(t) - \hat{\mu}_{x_j^T}\right)}{\sqrt{\sum_{t=1}^{n} \left(\tilde{x}_i^T(t) - \hat{\mu}_{x_i^T}\right)^2}\sqrt{\sum_{t=1}^{n} \left(\tilde{x}_j^T(t) - \hat{\mu}_{x_j^T}\right)^2}},

where \tilde{x}_i^T(t) and \tilde{x}_j^T(t) are the detrended time series of companies i and j in year T, and \hat{\mu}_{x_i^T} and \hat{\mu}_{x_j^T} are the average values estimated over the period n = 252 for the detrended time series of companies i and j. The matrix with elements \tilde{\rho}_{i,j}^T is symmetric and takes values from -1 to 1.
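A minimal numpy sketch of these two preprocessing steps is given below. The segment length l = 21 and the array shapes follow the text; the function names and the handling of an incomplete final segment (dropped) are our own assumptions.

```python
import numpy as np

def detrend(series: np.ndarray, l: int = 21) -> np.ndarray:
    """Remove the piecewise-linear trend from one price series.

    The series is split into k = n // l non-overlapping segments of
    length l; a least-squares linear fit y = a*t + b is subtracted
    from each segment, leaving the fluctuation series."""
    n = (len(series) // l) * l          # drop the incomplete tail segment
    fluct = np.empty(n)
    t = np.arange(l)
    for start in range(0, n, l):
        seg = series[start:start + l]
        a, b = np.polyfit(t, seg, 1)    # linear trend of this segment
        fluct[start:start + l] = seg - (a * t + b)
    return fluct

def correlation_matrix(prices: np.ndarray, l: int = 21) -> np.ndarray:
    """Pearson correlations between detrended price series.

    `prices` has shape (N_companies, n_days), e.g. (518, 252) for 2002."""
    detrended = np.array([detrend(x, l) for x in prices])
    return np.corrcoef(detrended)       # symmetric, entries in [-1, 1]
```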
To obtain the network of mutual influences between the considered companies for year T, we only take into account correlation coefficients with a value above a certain threshold θ, i.e., the elements of the adjacency matrix are

a_{i,j}^T = \begin{cases} \tilde{\rho}_{i,j}^T, & \tilde{\rho}_{i,j}^T \geq \theta \\ 0, & \text{otherwise.} \end{cases}

Determining the threshold value θ is not a simple task. In their approach, Živković et al. [39] assumed that the optimal value of the threshold can be determined from the relation between the threshold value and the size of the largest component in the network obtained for that value. The giant component is the largest set of connected nodes in the network [10]. The dependence of the giant component size S on the threshold θ shows a characteristic steep decline at a particular value θ_c; this abrupt drop signals the detachment of a group of nodes forming separate components. θ_c is thus the threshold at which essential changes in the network structure occur, and the threshold is chosen slightly smaller than θ_c. We adopt a different approach and determine the optimal threshold for filtering the correlation matrix from the networks' spectral properties. The probability distribution of the eigenvalues of the adjacency matrix fundamentally describes a system and contains complete information about its topology [40][41][42]. Different networks, such as Erdos-Renyi and Barabasi-Albert graphs, have different probability distributions of eigenvalues, and the difference between two spectra is proportional to the structural difference between the networks.
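The thresholding step and the giant-component heuristic of [39] can be sketched with networkx as follows; the function names are ours and the scan over thresholds is illustrative.

```python
import networkx as nx
import numpy as np

def filter_network(corr: np.ndarray, theta: float) -> nx.Graph:
    """Keep an edge (i, j) only if its correlation weight is at least theta."""
    g = nx.Graph()
    g.add_nodes_from(range(len(corr)))
    ii, jj = np.triu_indices_from(corr, k=1)   # upper triangle, i < j
    for i, j in zip(ii, jj):
        if corr[i, j] >= theta:
            g.add_edge(i, j, weight=corr[i, j])
    return g

def giant_component_size(g: nx.Graph) -> int:
    """Size of the largest connected component, S(theta)."""
    return max(len(c) for c in nx.connected_components(g))

# S(theta) curve used by the largest-component heuristic of [39]:
# thetas = np.linspace(0.0, 1.0, 101)
# sizes = [giant_component_size(filter_network(corr, t)) for t in thetas]
```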
To demonstrate these claims, we compare the spectrum of the empirical economic network with the spectra of different random networks. C. Orsini et al. [35] proposed a method to create a series of random networks, each topologically more similar to the empirical network. We obtained the empirical network from the correlation matrix by applying the threshold method, and generated three random networks (RNs) based on its properties. RN1 has the same average degree as the empirical network, while its other topological properties are random. RN2 has the same degree sequence and, consequently, the same average degree as the original network. RN3 has the same joint degree matrix, degree sequence, and average degree, and is topologically the most similar to the empirical network. Figure 3 shows the spectra of the empirical network obtained for 2005 and of the three random networks. RN1 has the spectral properties most different from the empirical network, while RN3 has the most similar spectrum. Each random network contains only a fraction of the information about the relations between nodes in the empirical network, and the difference between spectra decreases as the number of matched properties increases. This analysis demonstrates that the comparison of spectra can be used to evaluate the optimal threshold. We apply the same approach to compare the full correlation matrix, represented as a weighted network, with the filtered network.
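For illustration, the first two null models can be generated with standard networkx constructors; RN3, which additionally preserves the joint degree matrix, requires the dk-series rewiring of [35] and is omitted here. Note that collapsing the configuration-model multigraph to a simple graph can slightly perturb the degree sequence.

```python
import networkx as nx

def random_baselines(g_emp: nx.Graph):
    """RN1 (same average degree) and RN2 (same degree sequence)."""
    n, m = g_emp.number_of_nodes(), g_emp.number_of_edges()
    rn1 = nx.gnm_random_graph(n, m)                      # Erdos-Renyi G(n, m)
    degree_seq = [d for _, d in g_emp.degree()]
    rn2 = nx.Graph(nx.configuration_model(degree_seq))   # drops parallel edges
    rn2.remove_edges_from(nx.selfloop_edges(rn2))        # and self-loops
    return rn1, rn2
```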
The correlation matrix contains complete information about the system and can be represented as a weighted graph. Once the threshold is applied to the correlation matrix, edges with weights less than the threshold are removed. A filtered network thus only has a fraction of information about companies' relations. By comparing the probability distributions of eigenvalues for original and filtered matrices, we understand how much information is lost due to filtering.
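This comparison can be sketched as below, assuming, as the next paragraph specifies, that the difference between the eigenvalue distributions is quantified by the two-sample Kolmogorov-Smirnov statistic; treating both matrices as weighted adjacency matrices with zeroed diagonals, and the threshold grid, are our assumptions.

```python
import numpy as np
from scipy.stats import ks_2samp

def ks_spectral_distance(corr: np.ndarray, theta: float) -> float:
    """KS distance between the eigenvalue distributions of the full
    (weighted) and threshold-filtered correlation matrices."""
    full = corr.copy()
    np.fill_diagonal(full, 0.0)                 # treat as adjacency matrix
    filt = np.where(corr >= theta, corr, 0.0)   # remove sub-threshold weights
    np.fill_diagonal(filt, 0.0)
    return ks_2samp(np.linalg.eigvalsh(full),
                    np.linalg.eigvalsh(filt)).statistic

# Scan thresholds and pick the local minimum theta_m > 0 for each year:
# thetas = np.linspace(0.0, 0.9, 91)
# ks_curve = [ks_spectral_distance(corr, t) for t in thetas]
```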
To quantify this difference, we use the Kolmogorov-Smirnov (KS) distance. We calculate the KS distance between the probability distributions of eigenvalues of the original and filtered correlation matrices for different threshold values. A lower KS distance implies better agreement between the spectra and higher similarity between the networks' topologies; we therefore want the KS distance to be as low as possible. Figure 4 shows the dependence of the KS distance on the threshold for 2008, 2009, 2014, and 2015. As expected, the KS distance grows with the threshold value: at the threshold -0.5 it equals zero, since the complete information is included in the network, and it reaches its maximum for thresholds close to 1. The dependence of the KS distance on the threshold has a local minimum at a value θ_m > 0, where it is similar to the KS distance at the threshold θ = 0. Fixing the threshold at either 0 or θ_m therefore preserves the same amount of information about the network structure; however, the network at θ = 0 is denser and thus harder to analyse. By setting the threshold to the local minimum θ_m, we obtain an optimal network that retains enough information about the relations between companies without being excessively dense. The probability distributions of correlation coefficients differ between years, as can be seen in Figure 5, so it is not surprising that the local minimum differs as well. We calculate the local minimum separately for each year and build the network with the corresponding threshold. Table 2 shows the local minima θ_m for the different years.

We are interested in the mesoscopic structure of the networks and how it changes with time. A community is a group of nodes more densely connected than the rest of the network [19]. Communities are an indicator of the system's collective behaviour, and the network's community structure provides essential information about its dynamics and function [19]. In this work, we apply the Louvain algorithm [43] to find communities in the weighted networks. The result of a single Louvain run may differ due to different initial conditions; for this reason, we run the Louvain algorithm 100 times for each year. For each community CM_i^{T,r}, where T denotes the year and r the run of the Louvain algorithm, we calculate the ratio between the edges inside the community and all edges formed by nodes belonging to that community,

P_{in,i}^{T,r} = \frac{L_{in,i}^{T,r}}{L_{tot,i}^{T,r}},

where L_{in,i}^{T,r} is the number of edges with both endpoints in community CM_i^{T,r} and L_{tot,i}^{T,r} is the number of edges with at least one endpoint in it. We then obtain the average <P_in^T> and the standard deviation σ_{P_in^T} over all communities and runs.
Results
This work focuses on how the network structure changed as the system went through the 2008 economic crisis. We selected the period between 2002 and 2017, which covers the time before, during, and after the crisis. The number of companies varies between 518 in 2002 and 888 in 2017, as can be seen in Table 1.
We detrended each segment separately, using an interval of l = 21 trading days, which corresponds to one average trading month, and calculated the correlation matrix {\tilde{\rho}_{i,j}} between the companies for each year T ∈ {2002, ..., 2017}. We then mapped the correlation matrix to an adjacency matrix using the threshold method and obtained an undirected weighted network for year T, determining the threshold with the approach described in Section 3.2. We performed community structure analysis and calculated <P_in^T> and σ_{P_in^T} for each year. The analysis of the community structure and the evolution of its cohesion shows how the network structure evolves.
Characteristics of the Correlation Matrix and Obtained Network
Detrending helps extract information about the economic system's internal behaviour and the relationships between companies. Figure 5 shows the probability distribution of correlation coefficients p(ρ_{i,j}) for the original and detrended time series for the years 2009 and 2015. The distribution in Figure 5a, calculated from the original time series for 2015, resembles a uniform distribution, while Figure 5b shows the distribution obtained from the detrended series, which is closer to a Gaussian whose centre varies between years. The distribution of correlation coefficients changes during the economic crisis, as can be seen in Figure 5c,d. When the correlation matrix is obtained from the original time series, most companies are highly correlated, with correlation coefficients between 0.9 and 1 (Figure 5c). The distribution obtained from the detrended time series during the economic crisis is a convolution of two Gaussians (Figure 5d).
After detrending the time series and calculating the correlation matrix, we used the method described in Section 3.2 to obtain an undirected weighted network, and ran the Louvain algorithm to find the community structure. Figure 6 shows the networks for the years 2004, 2006, 2008, and 2015. By examining the communities and comparing the characteristics of their constituents, we concluded that edges imply exposure to similar factors. The nodes belonging to one community, i.e., companies in the same sub-sector, have different owners, operate in different states, and have different clients; what they share is their economic activity, i.e., their functioning is similar. We therefore obtain a network whose edges reflect exposure to similar factors. Inter-community edges indicate that even companies belonging to different sub-sectors may operate under similar conditions. For example, a bank and a REIT company are exposed to similar external factors if they are linked to the same residential project, where the bank lends money to home buyers while the REIT invests in the project's development.
Robust intra-community connectivity indicates that companies in the same community operate under similar conditions and are susceptible to the same factors, while low correlation coefficients between companies belonging to different sub-sectors suggest that they are typically affected by sub-sector-specific factors. Strong connectivity between network nodes is an indicator of high vulnerability: a system with a distinct community structure and stronger connectivity within the communities than between them is more robust than one with similar connection strengths between and within the communities.
We are interested in the evolution of the ratio between intra-and inter-community connectivity and how this ratio changes when the system is in different states, such as during crisis and out-of-crisis periods.
Relation between Inter- and Intra-Community Connectivity and Its Evolution
We analyse the community structure of the networks for each year from 2002 to 2017 using the Louvain method. The results of the Louvain method, including the number and composition of communities, depend on the initial conditions; as a result, different runs of the Louvain algorithm on the same network may yield a different number of communities and different assignments of nodes to them. For these reasons, we ran the Louvain algorithm 100 times on each of the 16 networks and calculated the average number of communities and the average connectivity of these communities. Figure 7a shows the average number of communities for each year. We analysed the intra- and inter-community connectivity for the networks obtained for each year from 2002 to 2017. Figure 7b shows <P_in^T> for the years from 2002 to 2017. Higher values of <P_in^T> imply stronger intra-community connectivity, while lower values indicate stronger inter-community connectivity. The error bars shown in Figure 7b are standard deviations calculated over the sample of 100 runs; a low standard deviation implies similar intra-community connectivity across communities. The peak of intra-community connectivity is observed in 2004. It then drops to its minimal value in 2006, after which the connectivity within the communities grows again, reaching a local maximum in 2008 and slowly decreasing until 2014. In 2015, we observe another, smaller rise in connectivity.
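A sketch of this repeated-Louvain measurement is given below, using networkx's Louvain implementation (which requires a recent networkx; python-louvain behaves analogously). Pooling the per-community ratios over all runs before averaging is our reading of the text.

```python
import networkx as nx
import numpy as np

def p_in(g: nx.Graph, runs: int = 100):
    """Average intra-community edge ratio <P_in> and its standard
    deviation over repeated Louvain runs on a weighted network."""
    ratios = []
    for r in range(runs):
        communities = nx.community.louvain_communities(
            g, weight="weight", seed=r)
        for c in communities:
            inside = g.subgraph(c).number_of_edges()      # both ends in c
            touching = sum(1 for u, v in g.edges() if u in c or v in c)
            ratios.append(inside / touching if touching else 0.0)
    return float(np.mean(ratios)), float(np.std(ratios))
```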
The networks of the years 2004, 2008, and 2015 share two essential features: the number of communities is very stable, independent of the initial conditions of the Louvain method, and the intra-community connectivity has a local maximum in these years with a very low standard deviation. The intra-community connectivity P_in^T has its local minimum in 2006 (Figure 7b), indicating high connectivity between communities. In 2006, the system had the highest potential for diffusion between communities, meaning that a disturbance in one community could easily be transmitted to any other. If this disturbance is a failure, the system is at high risk of efficiently spreading the failure and breaking down. This result matches what happened to the USA financial sector, as 2006 preceded the crisis that broke out in 2008; other researchers have also predicted the beginning of the crisis [1]. The high and consistent inter-community connectivity in 2006 indicates that companies in different sub-sectors were susceptible to the influence of the same factor. This factor was real estate lending, which involved most of the financial industry: many financial sub-sectors were directly or indirectly engaged in real estate lending, giving the relationship network an almost homogeneous structure. The local minimum in 2006 followed the 2004 peak, when communities were well defined.
A crisis is followed by a period of recession, recognisable by lower values of economic indicators such as employment, gross domestic product, household net worth, and the federal surplus or deficit. Figure 8 shows the relative change of these four indicators for the USA economy between 2002 and 2017. The recession period lasted from 2009 until 2014. Our results indicate that the standard deviation of the intra-community connectivity is higher for the same period, while its values decrease between 2014 and 2017; we see from Figure 7b that σ_{P_in^T} was higher during the economic recovery than in the post-crisis period. We observe an increase in P_in^T in 2007 and 2008 (Figure 7b), after its minimum in 2006. In 2007, companies in the financial system understood that the economy was in bad condition and that interconnection was high. Communities tried to decouple from each other, leading to a high P_in^T and a low σ_{P_in^T} in 2008. However, the number of communities N_C(T) decreased in 2008, because two communities, regional banks and REITs, merged into one. We thus observe a different form of homogenisation of the system, in which the number of distinct sub-sectors decreased.
Effect of Regulation on Structure and Behavior
The USA financial system has to be controlled to prevent system breakdown and decrease systemic risk [1]. Control is realised through appropriate regulation. Highly restrictive regulation may prevent a crisis, but it can jeopardise economic growth, since it limits companies' profits [1]; less stringent regulation enables higher yields but increases systemic risk. An optimal level of regulation therefore has to be implemented, allowing a thriving economy while decreasing systemic risk. Regulation imposes restrictions on companies' behaviour, while deregulation gives companies a higher degree of freedom; both define the behaviour of the system's constituent elements, and the effect of regulation and deregulation on the system results from the collective behaviour of those elements. Tools to measure the impact of regulation on the system are needed to create optimal regulation. Our methodology provides insight into the influence of regulation and deregulation on a system's structure and behaviour.
Deregulation took place in 2004 [44], introducing a system of voluntary regulation under which investment banks could hold less capital in reserve. Holding less money in reserve makes companies more dependent on other companies and more vulnerable, and higher connectivity between companies increases systemic risk. Deregulation is considered one of the leading causes of the crisis [45]. Our results show that P_in^T sharply decreased in 2005, indicating higher inter-dependence between communities and higher systemic risk. P_in^T and the standard deviation σ_{P_in^T} decreased further in 2006, implying higher homogeneity within the system.
Regulations were implemented between 2011 and 2014 in response to the crisis. The Dodd-Frank Wall Street Reform and Consumer Protection Act of 2010 was designed to increase financial stability and prevent future crises [46]. As the most comprehensive overhaul of the financial system [47], it took time to implement: implementation started in 2011 and reached 50% of the planned regulations in 2014 [48]. Our results show a sharp increase in P_in^T in 2014, when the economy recovered. The standard deviation σ_{P_in^T} is higher during the crisis period than during the recovering economy of 2014-2017.
Discussion and Conclusions
In this work, we used a novel method to infer network structure from time series in order to study the cohesion between USA companies in the financial sector. Compared to existing methods, we used detrended prices instead of detrended returns. We introduced a technique for obtaining an optimal network from a correlation matrix and a measure based on the community structure that allows us to examine the evolution of cohesion. Our results show that the network structure of the USA financial system between 2002 and 2017 passed through several phases: deregulation, crisis, and post-crisis. Each of these periods is characterised by a different intra-community connectivity and standard deviation. The strength of connections between communities is directly related to the system's level of risk and stability.
Understanding the connections between the system's components is crucial for preventing crises. Our approach can identify points of high systemic risk, enabling timely actions to increase the system's stability. Moreover, our method can measure the effect of such actions, including regulation and deregulation. This is of great importance, as inadequate measures can further undermine financial stability. In 2008, the government's actions to increase financial stability and save the economy, in the form of capital injections into the financial system, were inadequate and pushed the economy further into recession [1]. The price of wrong recovery measures is especially high in times of crisis, when resources are even more limited. Our results show that the system's structure did not change as a result of these measures.
The economic system has to be regulated to prevent crises while preserving the unrestrained behaviour of individual companies that allows economic growth and prosperity. The economic system is dynamic and should be constantly monitored by policymakers to secure an optimal trade-off between economic growth and limiting the behaviour of its constituent elements. Policymakers must act on time, since delayed actions can have a negative impact; our method allows them to see whether their actions are adequate and to act promptly. According to our analysis, the deregulation that took place in 2004 to enable economic growth strongly increased systemic risk. This signal is visible in 2005, when P_in sharply decreased, while the standard deviation implied that the connectivity between a number of communities increased. In addition, in 2006, all communities were strongly interconnected, which represents high systemic risk and is visible in the low P_in, the low standard deviation, and the small number of communities. This led to the 2008 crisis, when some of the communities merged.
Existing techniques for constructing networks from a correlation matrix, MST and PMFG, impose strict constraints on the network structure. The MST forbids cycles between nodes and fixes the number of links to N - 1, where N is the number of nodes; the PMFG allows only short cycles and at most 3(N - 2) links. There is no economic reason behind these topological constraints, and the limit on the number of connections is too strict and may filter out critical information about the network's connectivity, precluding the study of the network's cohesion and its dynamics.
Our method can be used by researchers interested in studying collective behaviour in real systems, such as economic, social, biological, and technological systems. The only prerequisite is the availability of data in the form of time series. Our method enables the discovery of hidden relationships between the constituents of a system, leading to a better understanding of the system and to predicting and controlling its behaviour. | 2022-07-22T15:05:26.884Z | 2022-07-01T00:00:00.000 | {
"year": 2022,
"sha1": "fa619fbc8314914aeaf8f5a79a3e14d2101dbfe1",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "2372511008bb8fb8de14779a8cb3d857df8631a9",
"s2fieldsofstudy": [
"Computer Science",
"Economics"
],
"extfieldsofstudy": []
} |
247153603 | pes2o/s2orc | v3-fos-license | Evaluation of pharmacy value-added services in public health facilities: Staff perception and cost analysis
Background Pharmacy Value Added Services (VAS) were introduced in Malaysian public health facilities to facilitate the process of medicine collection. Examples include Drive-through pharmacy, Medicine by Post, SMS Take&Go, Appointment Card and medicine locker, commonly referred to as Medibox. Objectives To assess the perception of VAS among pharmacy staff, and to compare the time and cost needed to prepare medications for VAS and conventional counter service. Methods A cross-sectional study was conducted in 17 public health facilities across Kuala Lumpur and Putrajaya from May until September 2020. There were two parts of this study: 1) a survey on the perception of VAS among pharmacy staff, which assessed respondents' experience of handling VAS and their perception towards the services; and 2) a cost analysis to compare the direct cost of preparing refill medications for VAS and conventional counter service, estimated from average salary and direct non-medical cost. Results 290 respondents answered the survey. Most respondents had a positive opinion about VAS. Lack of storage and insufficient manpower were the top two barriers in VAS utilisation and implementation as perceived by pharmacy staff. The average time (in minutes) needed to prepare one prescription was highest for Medicine by Post service (10.31), followed by Medibox (10.25), Appointment Systems (6.24) and conventional counter service (3.99). Medibox had the highest average cost per prescription (RM5.49), followed by Medicine by Post (RM5.05), Appointment Systems (RM2.89) and conventional counter service (RM1.75). Conclusions The majority of the respondents involved in this study acknowledged the benefits of VAS to patients, but there were aspects of the services that could be improved. Preparation of patient medication for VAS requires significantly more time and cost than conventional counter service, indicating the need to review and streamline implementation of the services.
Introduction
Effective medicines supply management is a key component of accessible, sustainable, and equitable healthcare. Ensuring medication accessibility and an uninterrupted medicine supply is crucial for patients with chronic diseases to maintain their quality of life and reduce long-term healthcare costs. 1 To improve medication accessibility and continuous supply of medicine, various methods have been introduced worldwide, such as mail-order pharmacy and drive-through pharmacy. With the advent of the Coronavirus Disease 2019 (COVID-19) pandemic, the utilisation of these alternative services for medication collection is more important than ever. 2 Reducing congestion in public areas has become a worldwide priority to curb the spread of COVID-19, and pharmacists can contribute by limiting avoidable patient visits to the pharmacy. 3 In Malaysia, government-funded healthcare services provided by public hospitals and health clinics cater for a large proportion of the population due to their affordability and accessibility. The number of outpatient visits in public health facilities in 2018 was reported to be 66.9 million, compared to approximately 3.8 million outpatient visits in private health facilities in the same year. 4 This resulted in congestion and long waiting time for patients in public health facilities. With the aim of improving patient access to medicines and reducing congestion at public health facilities, pharmacy Value Added Services (VAS) were introduced in 2003 by the Pharmaceutical Services Division, Ministry of Health Malaysia (MOHM). 5 Types of VAS available in Malaysian public health facilities include Medicines by Post (better known as Ubat Melalui Pos or UMP), Drive-through pharmacy, medicine lockers (better known as Medibox), Appointment Card and other appointment-based systems where clients can directly contact the pharmacy to choose their preferred medicine collection time using short message service (SMS), telephone, WhatsApp, e-mail and other online-based methods. Medibox is a unique service that allows medicine supply through specialised medicine lockers located in public areas for easy access. 6 These services allow for a faster and more convenient experience for patients who need to refill their medicines.
In the year 2019, all facilities under MOHM were required to supply at least 20% of patients' refill medication through any of the available VAS as an effort to promote their use and improve uptake by patients. The pandemic increased the necessity of VAS, and there was a 27.3% increase in VAS uptake in 2020 compared to 2019 among patients of public health facilities in Kuala Lumpur and Putrajaya. 7 VAS are routinely offered to patients prescribed with long-term chronic medicines, and registration is required for UMP and Medibox services. By default, the first medicine supply after a doctor's appointment needs to be dispensed at the pharmacy counter to ensure that any changes in medication regime are verified and patients can be properly counselled. Proper documentation is important in VAS to ensure records can be traced efficiently. The workflows involved in medicine refills for conventional counter service and VAS are compared in Fig. 1.
Past studies have shown the benefits of these alternative services, including reduced patient waiting time at outpatient pharmacies, 8 increased patient satisfaction 9 and improved medication adherence. 10 Studies that explored patients' knowledge, attitude and perception towards VAS have shown that patients were overall satisfied with the service. 9,11,12 However, there were relatively few studies that look at delivery of these services from healthcare providers' perspective, and most of these focus on the cost analysis of mail-order pharmacy in the United States in terms of co-payment and reimbursement, which is markedly different from the system in Malaysia. 13,14 Due to the pandemic, MOHM increased promotion of VAS and even offered free postal fee for UMP service during the first nationwide movement control order, which resulted in over fourfold increase in uptake by patients. 15 The pandemic highlighted the importance of VAS, and it is imperative that the services are reviewed and evaluated to ensure continuous improvement. The objectives of this study were to assess the perception of VAS among pharmacy staff in public health facilities of Kuala Lumpur and Putrajaya, as well as to assess the impact of VAS implementation on staff workload by analysing the time and cost needed to prepare medications using VAS and conventional counter service.
Study design
A cross-sectional study was conducted in public health facilities of Kuala Lumpur and Putrajaya, comprising a government hospital and 16 health clinics actively involved in the provision of VAS. Data collection was conducted from May until September 2020 and consisted of two main parts: (i) a questionnaire on the perception of VAS among pharmacy staff, and (ii) a cost analysis comparing the different types of VAS with conventional counter service. The study was registered with the National Medical Research Register (NMRR-19-3881-51845), and permission to conduct the study was obtained from the Medical Research and Ethics Committee, MOHM.
Data collection - questionnaire
The target population for the questionnaire was pharmacists and assistant pharmacists (commonly referred to as pharmacy technicians) working in the outpatient pharmacy of a public hospital and health clinics in Kuala Lumpur and Putrajaya. Those who were exclusively involved with administrative work, logistic pharmacy, inpatient service, and other services unrelated to outpatient pharmacy service were excluded from the study. Using the Raosoft sample size calculator, a total of 176 respondents were required for this study based on the computed values of population size = 332, margin of error = 5%, confidence level = 95%, and response distribution = 50%. 16 A self-developed questionnaire was used for the study, developed based on input from group discussions among researchers who had been directly involved in VAS, as well as information from past studies that explored patient perception of VAS. [7][8][9] The questionnaire was divided into three main sections: (i) respondent's demographic information, (ii) general information on VAS in the respondent's facility, and (iii) general perception towards VAS. The third section evaluated the respondent's perception of VAS in terms of benefits to patients, impact on staff, service delivery and effect on pharmacy service using 5-point Likert items. To identify the most important barriers that prevented patients from using VAS, scores were calculated based on frequency distribution and weighted values of the answers (Strongly agree = 5, Agree = 4, Neutral = 3, Disagree = 2, and Strongly Disagree = 1). An open-ended question asking for suggestions for improvement of VAS was included at the end of the questionnaire. The responses were read by different investigators, who identified codes and extracted relevant information throughout the entire dataset to identify related themes from the suggestions. Through group discussion, codes with similar concepts were combined to create potential themes.
The questionnaire was validated for its content by three experts in pharmacy practice to assess the suitability of language and overall content. The content was further validated in a pilot study through cognitive debriefing involving 18 pharmacy staff from various health clinics to assess comprehension, retrieval, judgment, response, and response burden. The questionnaire was prepared using Google form and distributed to respondents through their facilities' e-mail. A pharmacist from each facility was appointed as data collectors, who distributed the questionnaire to all eligible staff in their facilities through convenience sampling.
Data collection - cost analysis
The cost analysis was conducted from healthcare provider's perspective, based on personnel cost and any additional transport cost needed to prepare repeat prescriptions. The time allocated for dispensing was not included as the duration of dispensing varies widely depending on various other factors. For this study, VAS were divided into three main categories based on the steps involved in medicine preparation: Appointment Systems, UMP, and Medibox. Most VAS were grouped into Appointment Systems, which include Drive-through pharmacy, Appointment Card, SMS Take&Go, and other VAS that involve setting up an appointment date for medicine collection and collection of medicines from pharmacy staff. UMP and Medibox were considered distinct as they required additional steps in drug preparation and did not require medicine collection from pharmacy staff.
New prescriptions and prescriptions for acute conditions were excluded from the cost analysis. The number of prescriptions needed for the analysis was estimated using the Raosoft sample size calculator 16 based on the total service uptake from January until December 2019. The minimum number of repeat prescriptions to be analysed was 337 for both conventional counter service and the Appointment Systems, while the UMP and Medibox services required 307 and 92 prescriptions, respectively. These minimum figures were divided among the facilities involved in data collection, based on the service uptake at each facility.
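The Raosoft calculator implements the standard finite-population sample-size formula; a minimal sketch follows. Applied to the staff survey population of 332 described earlier it yields approximately 178, close to the reported 176 (the small gap is presumably rounding); the prescription minima (337, 307, 92) follow from the corresponding 2019 uptake figures, which are not reported here.

```python
def raosoft_sample_size(N: int, e: float = 0.05,
                        z: float = 1.96, p: float = 0.5) -> int:
    """Finite-population sample size: n = N*x / ((N - 1)*e^2 + x),
    with x = z^2 * p * (1 - p)."""
    x = z * z * p * (1.0 - p)
    return round(N * x / ((N - 1) * e * e + x))

print(raosoft_sample_size(332))   # -> 178; the protocol reports 176
```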
Data collectors appointed from each facility were trained to standardise data recording. A standardised workflow for the steps involved in drug preparation was prepared for each service, and every facility was given a standard data collection form and time-motion sheet to be filled in during data collection. Only the direct medical cost (personnel cost) and the direct non-medical cost (the cost of delivery to medicine lockers located in separate public buildings) were included in the analysis; the cost incurred by patients to collect medicine was not. Personnel cost was calculated from minute wages, determined by pharmacists' and assistant pharmacists' basic annual salaries following the pay scale of the Federal Civil Service Officers of Malaysia: RM0.538/min for a pharmacist and RM0.329/min for an assistant pharmacist. The delivery cost for the Medibox service was calculated from the mileage claim for the journey between the health facility and the Medibox location, at a rate of RM0.85/km. A summary of the processes involved and the cost calculation for each service is given in Table 1.
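A minimal sketch of the per-prescription direct cost, using the wage and mileage rates given above. How the preparation minutes split between pharmacist and assistant pharmacist per step, and how a Medibox delivery trip is shared across the prescriptions it carries, are defined in Table 1 of the study and are treated here as input assumptions.

```python
# Rates from the text; the minute splits passed in are illustrative inputs.
PHARMACIST_WAGE = 0.538   # RM per minute
ASSISTANT_WAGE = 0.329    # RM per minute
MILEAGE_RATE = 0.85       # RM per km (Medibox delivery only)

def prescription_cost(pharmacist_min: float, assistant_min: float,
                      delivery_km: float = 0.0,
                      prescriptions_per_trip: int = 1) -> float:
    """Direct cost (RM) of preparing one prescription: personnel time
    plus, for Medibox, the mileage of the delivery trip shared across
    all prescriptions carried on that trip."""
    personnel = (pharmacist_min * PHARMACIST_WAGE
                 + assistant_min * ASSISTANT_WAGE)
    transport = delivery_km * MILEAGE_RATE / prescriptions_per_trip
    return personnel + transport
```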
Statistical analysis
Data were recorded in Microsoft Excel 2013 and analysed using IBM SPSS Statistics version 24. Normality of data was checked before analysis based on skewness, kurtosis and normality test in SPSS. Kruskal-Wallis and Dunnett T3 tests were used to compare the time and cost needed to prepare medicine through VAS and conventional counter service. Descriptive statistics such as frequency and percentage were used to represent the data collected from the survey (e.g.: type of facility, profession, perception of VAS), and mean ± standard deviation was used for continuous data (e.g.: duration of service, time spent doing VAS, score for VAS barriers).
Questionnaire
A total of 290 respondents answered the survey, giving a response rate of 87.3%. Of these, 174 (60%) were directly involved in the preparation of VAS and were directed to answer questions related to VAS involvement, such as the duration of VAS involvement and the time spent on VAS activities during and after office hours. Answers from all respondents were pooled to determine the methods of VAS promotion used in each facility; this is summarised in Table 2. UMP (N = 97, 33.4%), Drive-through pharmacy (N = 66, 22.8%) and SMS Take&Go (N = 33, 11.4%) were the top three preferred VAS, voted from the perspective of healthcare personnel. The types of VAS most frequently voted least preferred were the Integrated Drug Dispensing System (N = 81, 27.9%), UMP (N = 61, 21.0%) and Call&Collect (N = 33, 11.4%). Among the respondents directly involved with VAS preparation, 62 (35.6%) from 12 facilities had encountered a medication error among patients using the UMP and Medibox services.
Overall, the majority of the respondents were optimistic about the benefits of VAS to patients, and the respondents agreed or strongly agreed that VAS are beneficial to patients (N = 259, 89.3%), help to reduce waiting time at the counter (N = 256, 88.3%), create higher satisfaction among patients (N = 216, 74.5%) and simplify the process of medication collection for patients (N = 222, 76.6%). However, there was relatively less optimism regarding the impact on staff, with 60.7% (N = 176) of respondents feeling that drug preparation for VAS was more time-consuming than standard counter dispensing. Regarding service delivery, less than half of respondents agreed or strongly agreed that they have sufficient space to store prepacked VAS medication (N = 81, 27.9%) and manpower to promote and provide VAS (N = 130, 44.8%). Respondents were also asked about their perception of VAS impact on pharmacy service regarding medication wastage, medication error and adherence, with relatively mixed responses. A summary of respondents' perceptions towards VAS is shown in Table 3. Table 4 summarises the implementation issues and barriers in VAS utilisation. Lack of storage (Score = 4.01 ± 1.01) was considered the biggest barrier, followed by insufficient staff to handle VAS (Score = 3.76 ± 1.01), lack of interest among patients and clients to try new services (Score = 3.69 ± 1.02), high burden of repeat prescription (Score = 3.66 ± 1.01), and lack of standardisation of VAS between different MOH facilities (Score = 3.64 ± 1.01).
A total of 82 respondents provided suggestions to improve the implementation of VAS. Four main themes were identified: space, manpower, system and promotion. With the increasing number of patients using VAS, respondents expressed the need for more space to store prepacked VAS medication and more manpower to handle VAS activities. There were also suggestions to improve the system, such as standardisation of workflow and limiting available types of VAS by emphasising selected services that are more manageable. Respondents also suggested extending VAS promotion to mass media and social media and ensuring that other healthcare personnel are also aware of the types of available services. The suggestions are summarised in Table 5.
Cost analysis
The average time and cost needed to prepare one prescription are summarised in Table 6. Although not included in the final calculation, the time and cost needed to dispense medicine were also measured for conventional counter service (N = 346; Mean time per prescription ± SD = 1.14 ± 0.87 min; Mean cost per prescription ± SD = RM0.61 ± 0.47). We conducted a sensitivity analysis by varying the time needed to prepare each prescription using the 95% confidence interval minimum and maximum values. The change in cost was less than 10%, with the largest difference found in counter service (8.9%), followed by Medibox (8.4%), Appointment Systems (4.8%) and UMP (2.4%). The difference in total cost was also less than 10% when changes in salary were made. Even with the 8.9% change, there was still a substantial difference between the cost of preparing medication through conventional counter service and VAS. Therefore, the cost analysis model was considered to be robust.
The average time needed to prepare each step of the medication preparation process was compared and shown in Fig. 2. The time needed to screen VAS prescriptions was longer than conventional counter service, with the longest time recorded to screen Medibox prescriptions (2.68 min) compared to conventional counter service (1.42 min). The time needed for medication filling was similar across all types of services, but there was again a marked difference in the counter-checking process, with the longest time recorded for UMP (1.90 min) compared to conventional counter service (0.67 min).
Discussion
The results revealed that the time and cost needed to prepare medications through VAS were considerably higher than for conventional counter service. The cost was highest for Medibox, followed by UMP and the Appointment Systems. The elevated cost was attributed to the additional time needed to prepare the medications due to the extra steps involved, such as recording in the VAS database, medicine packing, sending reminders, and storing medications. UMP has a lengthy procedure, with the additional steps of preparing the consignment, packaging and posting the medicine. Medibox has a relatively shorter workflow than UMP, but the additional transport time and cost of sending prepacked medicines to Mediboxes located in separate facilities increased the average cost of preparing the medicines. Overall, each pharmacy staff member had to spend nine hours per week on VAS activities, roughly equivalent to one full work-day, indicating the need for dedicated staff for this service.
In contrast to our findings, a study by AbuBlan et al. in Jordan found that the average time required for overall processing of mail-order prescriptions was 3.17 min shorter than for pharmacy counter service. 17 Their mail-order service also involved additional steps, such as obtaining patient contact information, data entry and handing over medicines to the courier service, but they observed shorter times for billing, filling and checking medicine in the mail-order workflow. However, the reason for the reduced time for these steps compared to the conventional counter service was not discussed, and it was not clear whether the number of items per prescription was comparable. Their analysis also included the time saving obtained from the dispensing step, which was not incorporated in our study. However, based on our results, the time and cost for the dispensing step were only 1.14 min and RM0.61, respectively; therefore, the time and cost for processing UMP were still considerably higher than for conventional counter service.
Satisfaction among VAS users in the Malaysian population had been verified in several local studies. 9,12,18 Our study showed that most pharmacy staff involved with the provision of VAS also believed that the services were beneficial to patients. Similarly, a study assessing the perception of drive-through pharmacy among pharmacists in Jordan showed that the pharmacists involved acknowledged the benefits of the service, although there were concerns that the service may negatively affect the public image of the pharmacy profession and the provision of patient counselling. 18 Because of similar concerns, VAS in Malaysia is primarily reserved for repeat prescriptions and patients with stable conditions. 5 Respondents in this study were less optimistic regarding the impact of VAS on medication wastage and medication error, but more than 40% of respondents agreed that VAS could help to improve medication adherence. Several overseas studies have shown that patients using mail-order pharmacy had higher adherence than those utilising conventional counter service, with up to 21.4% higher adherence among users of mail-order pharmacy. [19][20][21] Improving access to medicines can positively impact patients, and the study by Schwab et al. even showed that mail-order pharmacy can help improve glucose control. 21

Table 5 Respondents' suggestions on how to improve VAS.
Theme: Space. Comment: More space is needed to store prepacked VAS medications to accommodate the increasing number of patients enrolled with the service. Designated space is needed to ensure medicines can be easily searched when needed. Quotes: "increase patient in VAS means we need more space to put medicines" (Respondent 64, Health clinic); "Having sufficient spaces for storing VAS medication properly, with an organized storage that will make the staff finding the medication easier for the patients" (Respondent 69, Health clinic).
Theme: Manpower. Comment: Sufficient staff is needed to manage VAS, with more time allocation for VAS activities and the appointment of a designated staff primarily focused on handling VAS-related issues. Quote: "Allocate more staff and device (computer and printer ..."

According to the respondents in this study, lack of storage was considered the biggest barrier in VAS implementation. Similarly, the issue of space for medication storage was also mentioned in the study by AbuBlan et al., whose authors highlighted that additional space was needed for processing, packing and storing the packages before posting the medicines to patients. 17 A qualitative study among Malaysian patients revealed several barriers preventing VAS uptake among potential users, such as indifference to new services and lack of information on the services due to insufficient promotion to the public. 22 Patients' lack of interest in trying new services was also considered an important barrier by the respondents in this study, but this is influenced by personal preferences. Some patients preferred to come to the pharmacy counter as they could directly speak with pharmacists regarding their medicine, as shown in the previous study. 22 However, insufficient promotion was not considered a main issue by the respondents in this study and was ranked the lowest among the barriers listed for consideration. Promotion may have been insufficient five years ago, but since the COVID-19 pandemic, promotion of VAS has increased substantially, especially for UMP, to ensure that patients could receive their medication refills amidst the nationwide lockdown. 23

The top three preferred VAS voted in this study were UMP, Drive-through pharmacy and SMS Take&Go, with various reasons cited for each choice. UMP was generally preferred due to the contactless nature of the service, and the preparation for postage could be done at a convenient time when the pharmacy was less busy. For Drive-through pharmacy, the respondents perceived that the ability to communicate with patients made it easier to handle any issues detected with their medication while also reducing congestion at the pharmacy. The respondents who preferred SMS Take&Go perceived that it required the least time to prepare and was more accessible to patients. In comparison, a previous local study that assessed patient preference found that Drive-through pharmacy, Call&Collect and UMP were in the top three, 9 while UMP, Appointment Card and Integrated Drug Dispensing System were preferred by patients in another study. 12 Preference in VAS is diverse, and the types of VAS offered by each facility should reflect their own patients' needs, which may explain the multiple types of VAS offered by the facilities. However, offering too many types of VAS may also confuse patients and create more workload.
Despite the extra documentation and procedures needed for VAS that result in increased cost, the potential benefits of reduced patient waiting time, 8 reduced congestion at health facilities 24 and improved adherence 10 may outweigh these extra costs. Reviewing and streamlining the processes involved may help to alleviate the problem. Ensuring adequate computers and providing specialised printers for consignment preparation may help to save time, and centralising the UMP service by region may increase efficiency and limit the cost of equipment and devices. In addition, staffing allocation may need to be re-evaluated for facilities with a large number of VAS clients to prevent burnout due to frequent overtime. It may also be beneficial to limit the types of services available and critically select suitable VAS to be implemented, as each facility may have different patient needs and different resources available to them.
This study has limitations. Due to the small number of clients using the Medibox service, fewer prescriptions were evaluated than for the other types of VAS. The average number of medicines per prescription was also lower than for the other services, but Medibox still required a longer time for drug preparation than conventional counter service. The comparison was made only for the medicine preparation process, and dispensing was not included in the analysis, although the workflow for conventional counter service and the Appointment Systems is only completed at medicine dispensing. However, as the time needed for drug preparation in Medibox and UMP was considerably higher, the overall results would remain the same if the dispensing step were included. Another limitation is that this study only examined the direct cost of medicine preparation and did not compare the full costs of service implementation. Future studies may need to conduct a comprehensive cost-benefit analysis to further understand the costs of these services.
Conclusion
The current study showed that the implementation of VAS comes with extra costs for healthcare providers, and some barriers need to be overcome to further improve the delivery of this service. The direct cost needed to prepare medications for VAS was significantly more than traditional counter service, with the highest cost recorded for Medibox, followed by UMP and Appointment Systems. In the era of the COVID-19 pandemic, VAS can potentially contribute to reducing congestion at health facilities, and its importance is now more prominent than ever. The majority of the respondents involved in this study acknowledged the benefits of VAS to patients, but there were aspects of the service that could be improved. Measures need to be taken to ensure a seamless and efficient process. There is a need to streamline and standardise the procedures and supply sufficient workforce capacity to ensure the service can continuously expand and cater for patient needs. | 2022-02-28T16:15:18.023Z | 2022-02-01T00:00:00.000 | {
"year": 2022,
"sha1": "8ff1b3d165d4d51cef5bb5c56f165f6c030e28b3",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.rcsop.2022.100120",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "21c62021498846d536db5fa339b442dcd64af065",
"s2fieldsofstudy": [
"Political Science"
],
"extfieldsofstudy": []
} |
58009572 | pes2o/s2orc | v3-fos-license | Protocol for systematic review and meta-analysis of treatment success rate among adult patients with tuberculosis in sub-Saharan Africa
Introduction Tuberculosis (TB) is a leading cause of mortality globally. Despite being curable, treatment success rates (TSRs) among adult patients with bacteriologically confirmed pulmonary TB (BC-PTB) in sub-Saharan Africa (SSA) differ considerably. This protocol documents and presents an explicit plan of a systematic review and meta-analysis to summarise TSR among adult patients with BC-PTB in SSA. Methods and analysis Two reviewers will search and extract data from MEDLINE, EMBASE, Ovid, Cumulative Index to Nursing and Allied Health Literature and Web of Science electronic databases. Observational and interventional studies published between 1 July 2008 and 30 June 2018, involving adult patients with BC-PTB will be eligible. Data abstraction disagreements will be resolved by consensus with a third reviewer, while percentage agreement computed with kappa statistics. TSR will be computed with Metaprop, a Stata command for pooling proportions using DerSimonian and Laird random effects model and presented in a forest plot with corresponding 95% CIs. Heterogeneity between included studies will be assessed with Cochran’s Q test and quantified with I-squared values. Publication bias will be evaluated with funnel plots and tested with Egger’s weighted regression. Time trends in TSR will be calculated with cumulative meta-analysis. Ethics and dissemination No ethical approval will be needed because data from previous published studies in which informed consent was obtained by primary investigators will be retrieved and analysed. We will prepare a manuscript for publication in a peer-reviewed journal and present the results at conferences. PROSPERO registration number CRD42018099151.
Introduction
Tuberculosis (TB) is the ninth leading cause of death globally, and presently the number one cause of death in HIV positive persons. 1 Estimates from WHO indicate that 1.3 million HIV negative, and another 374 000 HIV positive, persons died of TB in 2016. 1 Sub-Saharan Africa (SSA) has the highest burden of TB in addition to having the largest number of HIV negative TB cases. 2 Existing data indicate that 16 of the 30 high TB burden countries are in SSA. 3 TB is curable with standardised short course regimens of proven and known bioavailability. 4 WHO recommends 85% cure and 90% treatment success rates (TSRs) for well-performing TB programmes, 5 which are adequate to reduce TB transmission, morbidity and mortality. To achieve the needed cure and TSR, WHO introduced the directly observed therapy short course (DOTS) strategy requiring patients with TB to take medications under the direct supervision of a treatment supporter. Following the scale up of DOTS, millions of patients with TB have been successfully treated, and the strategy has proven effective in TB control in low/middle-income countries. 6 Additionally, coverage, access and treatment outcomes among patients with TB have dramatically improved. 7 One study in Nigeria showed an overall TSR of 84.1% among patients with TB treated under DOTS. 8 Another study showed that patients with TB who were not treated under DOTS were almost 17 times more likely to fail on TB treatment or to relapse with TB disease compared with those treated under DOTS. 9 Several epidemiological studies across TB programmes from the African continent show conflicting TSRs as low as 71% in Ethiopia, 10 and as high as 80% and 85.4% in South Africa 11 and Nigeria, 12 respectively. TSRs in SSA therefore differ substantially, and at present, there is a lack of summarised data, particularly for adult patients with bacteriologically confirmed pulmonary TB (BC-PTB). To close this gap, we propose to undertake a systematic review and meta-analysis to summarise and synthesise TSR among adult patients with BC-PTB in SSA. The results of the study will be useful in generating evidence to inform public health interventions and policy for improving TB programme performance.
Strengths and limitations of this study
► First systematic review and meta-analysis of tuberculosis treatment success rate (TSR) for sub-Saharan Africa.
► Methodological design and statistical analysis plan are very strong and robust.
► Results will inform public health interventions and policy for improving tuberculosis programmes.
► The absence of data on TSR for paediatric and multidrug-resistant tuberculosis is a limitation.
► Restricting the review to published articles between July 2008 and June 2018 is a pitfall.
Objective of systematic review and meta-analysis
The primary objective of this systematic review and meta-analysis will be to summarise TSR among adult patients with BC-PTB (≥15 years of age), both new and retreatment cases, in SSA over a decade.
Methods and analysis
Protocol design and registration
We will use a systematic review and meta-analysis study design to summarise observational and interventional studies published between 1 July 2008 and 30 June 2018. This study design is appropriate for summarising and synthesising research evidence to inform policy and practice by integrating results from several independent primary studies that are combinable. 13 The development of this study protocol, the conduct and design, and the reporting of results will be in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-analyses Protocol (PRISMA-P) 14 15 and the Meta-analysis of Observational Studies in Epidemiology 16 guidelines. This study protocol is registered with PROSPERO, the International Prospective Register of Systematic Reviews, 17 and assigned the registration number CRD42018099151 (available at: http://www.crd.york.ac.uk/PROSPERO/display_record.php?ID=CRD42018099151). 18 Registration reduces duplication of reviews and provides transparency in the review process, with the aim of minimising reporting bias. 19 Table 1 provides the WHO standard definitions for TB cases and treatment outcomes that have been adopted and used in this study.
Eligibility criteria
Retreatment BC-PTB cases treated with the 8-month anti-TB regimen containing streptomycin (S) (2RHZES/1RHZE/5RHE) will also be considered. Studies evaluating TB treatment outcomes in all patients with TB will be included, provided the reporting of results for new and retreatment adult patients with BC-PTB is clear.
We will consider articles published between 1 July 2008 and 30 June 2018. This time period is proposed for convenience because our aim is to review data spanning a decade, which we believe is a sufficient time frame to demonstrate a trend of events.
We will exclude systematic reviews and meta-analyses, as well as studies involving non-adult (children below 15 years of age) TB cases, extra-PTB, clinically diagnosed PTB and multidrug-resistant TB cases. Otherwise eligible studies with unclear reporting of TSR (or reporting contrary to the WHO standard definition of TSR), and studies conducted outside SSA, will also be excluded.
Search strategy and searching sources
A search strategy will be developed using key concepts in the research question: bacteriologically confirmed tuberculosis, adult, treatment success and sub-Saharan Africa. For each key concept, appropriate free-text words and Medical Subject Headings (MeSH) will be developed. To ensure a comprehensive search of appropriate electronic databases, certain text words will be truncated, while wildcards will be used for others. This will enable the retrieval of relevant articles that might have used different spellings of the same word. The free-text words (truncated or with wildcards) and MeSH terms will be combined using the Boolean logic operators AND, OR and NOT, as appropriate. A pretest of the search strategy will be performed in PubMed between 2 April 2018 and 29 June 2018 by coauthor JI and verified by FB and RS. This will ensure the determination of the appropriateness of the search strategy in retrieving relevant articles and allow its subsequent modification.
Subsequently, between 2 July 2018 and 30 November 2018, two independent reviewers (JI and RS) will implement the search strategy in the following electronic databases: MEDLINE through PubMed, EMBASE, Cochrane Library, Ovid, Cumulative Index to Nursing and Allied Health Literature and Web of Science. The search term will be as follows: (Tuberculosis) AND (Treatment AND outcome OR (Successful AND Unsuccessful AND outcome)). The full electronic search strategy for MEDLINE through PubMed is presented elsewhere (online supplementary material S1).
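For readers who want to reproduce the database query programmatically, the following sketch runs the protocol's stated PubMed search term through NCBI's E-utilities via Biopython. This is our illustration, not part of the protocol, which describes running the searches through the database interfaces directly; the contact e-mail address is a placeholder.

```python
from Bio import Entrez  # Biopython

Entrez.email = "reviewer@example.org"  # NCBI asks for a contact address

# The protocol's search term, combining free-text words with Boolean operators.
term = ("(Tuberculosis) AND (Treatment AND outcome "
        "OR (Successful AND Unsuccessful AND outcome))")

handle = Entrez.esearch(db="pubmed", term=term, retmax=100)
record = Entrez.read(handle)
handle.close()

print(record["Count"], "records found; first PMIDs:", record["IdList"][:5])
```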
Study selection
All citations identified by our search strategy will be exported to EndNote, a bibliographic management software package, and duplicates removed. The remaining citations will be screened by titles and abstracts by two independent reviewers (JI and RS), and ineligible studies will be excluded. The full texts of selected articles will be retrieved and read thoroughly to ascertain their suitability prior to data extraction. A hand search will be performed on the reference lists of selected articles in order to include studies not identified by the search strategy. In addition, a deliberate hand search of the International Journal of Tuberculosis and Lung Disease, WHO and World Bank websites will be conducted. Experts in TB care and research will be consulted for additional research papers as well. For grey literature, we will search LILACS, OpenGrey, dissertations/theses and reports. In each electronic database, RS will use an iterative process to refine the search strategy and incorporate new search terms. The search process will be presented in a PRISMA flow chart.
Data collection/extraction process and data items
Data will be extracted by two independent reviewers (JI and DS) using a standardised data abstraction form, developed according to the sequence of variables required from the primary studies. Disagreements in data abstraction between JI and DS will be resolved by a third independent reviewer, FB.
Data will be extracted on the following: first author's name, publication date, location (country in which the research was conducted), study design (cross-sectional, case-control, prospective and retrospective cohort, and interventional studies), sample size, HIV serostatus (HIV positive and HIV negative), TB treatment regimen (2RHZE/4RH, 2RHZE/6HE and 2RHZES/1RHZE/5RHE), TB treatment category (new or retreatment TB cases), and TB treatment outcomes (number of patients with TB who were cured, completed TB treatment or were successfully treated, died, defaulted or failed treatment).
Table 1. WHO standard definitions for TB cases and treatment outcomes
Bacteriologically confirmed TB: A patient with TB with a biological specimen that is positive on smear microscopy, culture or a molecular test such as GeneXpert.
Clinically diagnosed PTB: A patient who does not fulfil the criteria for bacteriological confirmation but has been diagnosed with active TB by a clinician or any other medical practitioner who has prescribed the patient a full course of anti-TB treatment. This also includes X-ray abnormalities or suggestive histology and EPTB cases without laboratory confirmation.
Cure: A patient with PTB with bacteriologically confirmed TB at the beginning of treatment, who is smear or culture negative in the last month of treatment and on at least one previous occasion.
Died: A patient with TB who dies for any reason before starting or during treatment.
Extra-PTB (EPTB): Any bacteriologically confirmed or clinically diagnosed TB case involving organs other than the lungs, such as pleura, lymph nodes, abdomen, genitourinary tract, skin, joints and bones, and meninges, among others.
HIV positive TB patient: A bacteriologically confirmed or clinically diagnosed TB case who is HIV positive at the time of TB diagnosis, or with any other evidence of enrolment into HIV care, such as enrolment into the pre-ART (antiretroviral therapy) register or in the ART register once ART has been started.
Lost to follow-up: Patients with TB who have previously been treated for TB and were declared lost to follow-up at the end of their most recent course of treatment (previously known as treatment-after-default patients).
New TB case: A patient who has never had treatment for TB, or who had been on anti-TB treatment for less than 4 weeks in the past.
PTB: Any bacteriologically confirmed or clinically diagnosed case of TB involving the lung parenchyma or the tracheobronchial tree. This also includes miliary TB. Patients with both PTB and EPTB are classified as PTB.
Retreatment TB case: Patients with TB who have relapsed after, defaulted during or failed on first-line treatment.
TB relapse: A patient who has previously been treated for TB, was declared cured or treatment completed at the end of their most recent course of treatment, and is now diagnosed with a recurrent episode of TB (either a true relapse or a new episode of TB caused by reinfection).
Treatment completed: A patient with TB who completed treatment without evidence of failure but with no record to show that sputum smear or culture results in the last month of treatment and on at least one previous occasion were negative, either because tests were not performed or results were unavailable.
Treatment failed: A patient whose sputum smear or culture is positive at month 5 or later during treatment.
Treatment success rate: Proportion of new smear-positive TB cases registered under directly observed therapy in a given year that successfully completed treatment, whether with bacteriological evidence of success (cured) or without (treatment completed).
In studies comparing TSR in two or more arms, each study arm will be considered as a single study.
Data will be extracted separately from each study arm on the outcome of interest and then added to obtain a single outcome measure. The degree of agreement between the two independent data extractors (JI and DS) will be computed using kappa statistics, which indicate how far the observed agreement between JI and DS exceeds the agreement expected by chance alone. Kappa values will be interpreted as follows: (1) less than 0: less than chance agreement; (2) 0.01-0.20: slight agreement; (3) 0.21-0.40: fair agreement; (4) 0.41-0.60: moderate agreement; (5) 0.61-0.80: substantial agreement; and (6) 0.81-0.99: almost perfect agreement. 20 A minimal computational sketch of this agreement statistic is given at the end of this subsection.
Dealing with missing outcome data
We will contact first authors through electronic mail to request missing outcome data, perform sensitivity analysis to assess the robustness of the meta-analytic results, and discuss the potential impact of missing data on the review findings. 21 We will not use any of the several statistical approaches for dealing with missing outcome data (available case analysis, analysis of worst and best case scenarios, last observation carried forward, and data imputation in sensitivity analysis to explore the impact of missing data) because none is effective. Besides, they cannot reliably compensate for missing data and are not recommended in meta-analysis. 22
Data processing
Extracted data will then be entered in EpiData V.3.1 (EpiData Association, Odense, Denmark), 23 with quality control measures (skipping, alerts, range and legal values) to ensure data quality.
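As a concrete illustration of the inter-rater agreement computation described above, the sketch below applies Cohen's kappa to hypothetical include/exclude decisions by the two extractors; scikit-learn is assumed to be available. The protocol itself does not name a software tool for this step.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical include/exclude decisions by the two data extractors
# on ten candidate records (1 = include, 0 = exclude).
decisions_ji = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
decisions_ds = [1, 0, 0, 1, 0, 1, 1, 1, 0, 1]

kappa = cohen_kappa_score(decisions_ji, decisions_ds)
print(f"Cohen's kappa = {kappa:.2f}")  # 0.41-0.60 reads as moderate agreement
```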
Quality assessment
Two reviewers (JI and DS) will assess the quality of data in included studies. We will use the National Institutes of Health (NIH) quality assessment tools. 24 25 The NIH tool is preferred because it is comprehensive and thus enables an exhaustive assessment of the quality of included studies. The overall quality of included studies will be rated as good, fair or poor, and these ratings will be incorporated in the meta-analytic results.
Primary outcome
The primary outcome will be TSR, defined as the proportion of new and retreatment smear-positive TB cases registered under DOT in a given year that successfully completed treatment, whether with bacteriological evidence of success (cured) or without (treatment completed). The numerator will be the number of adult new and retreatment patients with BC-PTB who were either cured or completed TB treatment, while the denominator will be the number of patients initiating TB treatment.
Statistical analysis
Data will be analysed in Stata V.15.1 (StataCorp). We will present data from eligible studies in an evidence table and summarise them using descriptive statistics. The effect measure, TSR, will be computed using the Metaprop command for the meta-analysis of proportions in Stata. Metaprop allows the inclusion of studies with proportions equal to 0 or 100% and avoids CIs exceeding the 0 to 1 range, where normal approximation procedures often break down. It achieves this by using the binomial distribution to model within-study variability or by applying the Freeman-Tukey double arcsine transformation to stabilise the variances. 26 In this study, TSR will be calculated together with the corresponding 95% CI using the score method, executed with the cimethod(score) option.
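Metaprop is a Stata command; for orientation, the following Python sketch (with hypothetical study counts, and numpy/scipy assumed available) reproduces its core computation: the Freeman-Tukey double-arcsine transform followed by DerSimonian and Laird random-effects pooling. The back-transformation here uses the simple inverse sin²(t/2); Metaprop itself applies Miller's exact inverse with a harmonic-mean sample size, so the two will differ slightly.

```python
import numpy as np
from scipy import stats

def pool_freeman_tukey(x, n, alpha=0.05):
    """DerSimonian-Laird random-effects pooling of proportions on the
    Freeman-Tukey double-arcsine scale. Returns (estimate, lo, hi)."""
    t = np.arcsin(np.sqrt(x / (n + 1))) + np.arcsin(np.sqrt((x + 1) / (n + 1)))
    v = 1.0 / (n + 0.5)                        # within-study variance on the FT scale
    w = 1.0 / v
    fixed = np.sum(w * t) / np.sum(w)
    q = np.sum(w * (t - fixed) ** 2)           # Cochran's Q
    df = len(t) - 1
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_star = 1.0 / (v + tau2)                  # random-effects weights
    tbar = np.sum(w_star * t) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    z = stats.norm.ppf(1 - alpha / 2)
    back = lambda y: np.sin(np.clip(y, 0.0, np.pi) / 2) ** 2  # simple FT inverse
    return back(tbar), back(tbar - z * se), back(tbar + z * se)

# Hypothetical per-study counts: treatment successes / patients starting treatment.
x = np.array([160, 210, 95, 300, 120])
n = np.array([200, 250, 130, 360, 150])
est, lo, hi = pool_freeman_tukey(x, n)
print(f"pooled TSR = {est:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```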
A forest plot will be generated to show the individual and pooled TSR, 95% CI, the author's name, publication year and study weights (both for primary studies and this systematic review/meta-analysis).
Prediction intervals
After performing meta-analysis, we will compute a prediction interval (PI) to reflect the variation of TSR in different settings, including the direction of evidence in future studies. 27 The PI shows the range in which the point estimate (TSR) of future studies will fall, assuming true effect sizes are normally distributed. Reporting the PI ensures informative inference in meta-analyses. However, the PI is only appropriate when the studies included in the meta-analysis have a low risk of bias. 28
Testing for heterogeneity
Heterogeneity between the results of the primary studies will be assessed using Cochran's Q test and quantified with the I-squared statistic. A probability value less than 0.1 (p<0.1) will be considered to suggest statistically significant heterogeneity. Heterogeneity will be considered low, moderate and high when I-squared values are below 25%, between 25% and 75%, and above 75%, respectively. 29 Statistical heterogeneity occurs when differences between study results are beyond those attributable to chance alone. Heterogeneity may arise from the study setting, the type of study participants and the implementation of the intervention, among other sources.
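Given Cochran's Q (computed inside the pooling sketch above) and its degrees of freedom, the heterogeneity quantities described here follow directly; the numbers below are hypothetical.

```python
from scipy import stats

# Hypothetical Cochran's Q and degrees of freedom (number of studies - 1).
q, df = 23.7, 9

i_squared = max(0.0, (q - df) / q) * 100  # Higgins & Thompson I-squared
p_value = stats.chi2.sf(q, df)            # p < 0.1 flags significant heterogeneity
print(f"I-squared = {i_squared:.1f}%, Q-test p = {p_value:.4f}")
```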
In statistical analysis, the random-effects model is frequently used to incorporate heterogeneity in meta-analyses. 30 Consequently, we will use the DerSimonian and Laird random effects model for pooling TSR, since the studies are anticipated to be heterogeneous. This model accounts for heterogeneity among study results beyond the variation associated with the fixed-effects model. 31 We will then investigate the sources of heterogeneity with random-effects meta-regression analysis based on the primary study characteristics: study design, publication year, study setting and TB regimen. The meta-regression analysis will be weighted to account for both the within-study variances of treatment effects and the residual between-study heterogeneity (ie, heterogeneity not explained by the covariates in the regression). 32
Assessment of publication bias
Publication bias, the tendency to publish studies with beneficial outcomes or studies that demonstrate statistically significant findings, 33 will be assessed using a funnel plot (a plot of effect estimates against sample sizes). Based on the shape of the graph, a symmetrical plot will be interpreted to suggest the absence of publication bias, whereas an asymmetrical plot will be interpreted to indicate its presence. 34 35 Egger's weighted regression will be used to test for publication bias, with p<0.1 considered indicative of statistically significant publication bias. 34 Where publication bias exists, we will perform Duval and Tweedie's non-parametric 'trim and fill' analysis to formalise the use of the funnel plot, estimate the number and outcomes of missing studies, and adjust for the theoretically missing studies.
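A minimal sketch of Egger's weighted regression test, assuming statsmodels is available and using hypothetical per-study effects and standard errors (e.g., on the transformed scale): the standardised effect is regressed on precision, and an intercept significantly different from zero suggests funnel-plot asymmetry.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical per-study effect estimates and their standard errors.
effect = np.array([1.75, 1.90, 1.62, 1.98, 1.70])
se = np.array([0.050, 0.045, 0.062, 0.038, 0.055])

y = effect / se                 # standardised effect
X = sm.add_constant(1.0 / se)   # intercept + precision
fit = sm.OLS(y, X).fit()

print("Egger intercept:", fit.params[0])
print("p-value:", fit.pvalues[0])  # compare against the protocol's 0.1 threshold
```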
Cumulative meta-analysis
To determine the 10-year time trends in TSR across SSA, a cumulative meta-analysis (defined as the performance of an updated meta-analysis every time a new trial appears), which is critical in evaluating the results of primary studies in a continuum, will be performed. In cumulative meta-analysis, one primary study is added at a time according to publication date and the results are summarised until all primary studies have been added. 36 Cumulative meta-analysis will therefore retrospectively identify the point in time at which the treatment effect, in this case TSR, first reached conventional levels of significance. In doing so, cumulative meta-analysis will represent in a compelling way the trends in the evolution of the summary effect size and will assess the impact of a specific study on the overall conclusion. 37
Sensitivity analysis
We will perform sensitivity analysis to reflect the extent to which the meta-analytical results and conclusions are altered as a result of changes in the analysis approach. 21 This helps in assessing the robustness of the study conclusions and the impact of methodological quality, sample size and analysis methods on the meta-analytical results. In particular, the leave-one-out jackknife sensitivity analysis, in which one primary study is excluded at a time, will be used. We will then compare each new pooled TSR with the original pooled TSR.
If the new pooled TSR lies outside the 95% CI of the original pooled TSR, we will conclude that the excluded study has a significant effect on the pooled estimate and should be excluded from the final analysis. A sketch of both the cumulative and the leave-one-out analyses follows below.
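Both procedures can be sketched in a few lines, reusing the pool_freeman_tukey helper defined in the statistical-analysis sketch above; the counts and publication years are hypothetical.

```python
import numpy as np

# Hypothetical study data: successes, totals and publication years.
x = np.array([160, 210, 95, 300, 120])
n = np.array([200, 250, 130, 360, 150])
year = np.array([2009, 2011, 2013, 2015, 2017])

est, lo, hi = pool_freeman_tukey(x, n)  # defined in the earlier sketch

# Leave-one-out jackknife: flag studies whose removal shifts the pooled
# TSR outside the original 95% CI.
for i in range(len(x)):
    keep = np.arange(len(x)) != i
    e, _, _ = pool_freeman_tukey(x[keep], n[keep])
    if not (lo <= e <= hi):
        print(f"study {i} is influential: pooled TSR without it = {e:.3f}")

# Cumulative meta-analysis: add one study at a time by publication date.
order = np.argsort(year)
for k in range(1, len(x) + 1):
    e, l, h = pool_freeman_tukey(x[order[:k]], n[order[:k]])
    print(f"through {year[order[k - 1]]}: TSR = {e:.3f} (95% CI {l:.3f}-{h:.3f})")
```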
Subgroup analysis
We will perform subgroup analysis on TSR based on several study characteristics: HIV serostatus (HIV positive, HIV negative or both HIV positive and negative TB patients), type of patient with BC-PTB (new, retreatment or both new and retreatment), SSA region (Northern, Southern, Eastern, Central and Western Africa), study designs (cross-sectional, case-control, cohort and RCT), interventional versus observational studies, study setting (rural, urban, and both rural and urban) and the recent United Nations Development Programme Human Development Index for included countries (very high, high, medium, and low human development index), where feasible.
Ethics and dissemination
No human subject participants will be involved. On completion of the analysis, we will prepare a manuscript for publication in a peer-reviewed journal and present the results at conferences.
Implications of the review
The aim of this systematic review and meta-analysis will be to summarise TSR among adult patients with BC-PTB in SSA, a region heavily burdened by TB and with the highest TB case fatality rate. The review results may influence practice, policy and research. First, healthcare providers, managers and policy-makers can use the findings to improve the performance of TB programmes by developing strategies and initiating deliberate steps for addressing gaps in TB care. Second, the review may provide a foundation for prospective research on TSR among patients with BC-PTB in SSA.
Patient and public involvement
Patients were not involved in the development of the research question, outcome measure and study design.
Contributors JI is the first and corresponding author; JI and FB conceived and designed the study; JI, DS and FB will acquire data; JI and FB will analyse and interpret data; JI, DS, RS, IKT and FB drafted the initial and final manuscripts; JI, DS, RS, IKT and FB performed critical revisions of the manuscript. All authors approved the final version of the manuscript.
Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Competing interests None declared.
Patient consent for publication Not required.
Ethics approval Ethical approval will not be required because this study will retrieve and synthesise data from already published studies.
Provenance and peer review Not commissioned; externally peer reviewed.
Open access This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http:// creativecommons. org/ licenses/ by-nc/ 4. 0/. | 2019-01-22T22:23:26.481Z | 2018-12-01T00:00:00.000 | {
"year": 2018,
"sha1": "9b270ca3918cf953d1cfa65837ecdb94d86a8e0b",
"oa_license": "CCBYNC",
"oa_url": "https://bmjopen.bmj.com/content/bmjopen/8/12/e024559.full.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9b270ca3918cf953d1cfa65837ecdb94d86a8e0b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
27193308 | pes2o/s2orc | v3-fos-license | Arabin cervical pessary for prevention of preterm birth in cases of twin-to-twin transfusion syndrome treated by fetoscopic LASER coagulation: the PECEP LASER randomised controlled trial
Background Fetoscopic LASER coagulation of the placental anastomoses has changed the prognosis of twin-twin transfusion syndrome. However, the prematurity rate in this cohort remains very high. To date, strategies proposed to decrease the prematurity rate have shown inconclusive, if not unfavourable, results. Methods This is a randomised controlled trial to investigate whether a prophylactic cervical pessary lowers the incidence of preterm delivery in cases of twin-twin transfusion syndrome requiring fetoscopic LASER coagulation. Women eligible for the study will be randomised after surgery and allocated to either pessary or expectant management. The pessary will be left in place until 37 completed weeks or earlier if delivery occurs. The primary outcome is delivery before 32 completed weeks. Secondary outcomes are a composite of adverse neonatal outcome, fetal and neonatal death, maternal complications, preterm rupture of membranes and hospitalisation for threatened preterm labour. 352 women will be included in order to detect a decrease in the rate of preterm delivery before 32 weeks' gestation from 40% to 26% with an alpha error of 0.05 and 80% power. Discussion The trial aims at clarifying whether the cervical pessary prolongs pregnancy in cases of twin-twin transfusion syndrome regardless of cervical length at the time of fetoscopy. Trial registration ClinicalTrials.gov Identifier: NCT01334489. Registered 04 December 2011.
Background
Fetoscopic LASER coagulation of placental anastomoses (FLC) with or without Solomon technique [1] is the treatment of choice for severe Twin-to-Twin Transfusion Syndrome (TTTS) [2]. This procedure completely changed the natural history of TTTS [3]. However, although this first and most important step was taken and survival rates and neurological outcomes improved, premature births remain a serious concern: 41% of patients will deliver before 32 completed weeks [4]. Therefore, an effort to reduce prematurity is required. Cervical pessary placement could constitute an option to reduce prematurity, regardless of cervical length at the time of fetoscopy.
Transvaginal ultrasound permits the identification of twin pregnancies at high risk of preterm delivery [5,6]. It is known that cervical length (CL) plays a role in the onset of preterm birth. The pathophysiology of premature ripening of the cervix varies between singleton and multiple pregnancies with twin pregnancies having a higher prevalence of short cervix at 23 weeks [7].
Regarding TTTS, Yamamoto et al. showed that short CL before treatment is an independent risk factor for preterm delivery [8].
The prematurity rate in cases of severe TTTS in patients with long cervix treated by FLC remains very high [9][10][11]. Although many cases are delivered prematurely for medical indications (mainly selective intrauterine growth restriction), a large proportion will also develop spontaneous preterm labour [12].
Information is lacking regarding the best therapeutic option in patients with short cervix who also require fetoscopy [9][10][11]. Salomon et al. suggested the use of an emergency cerclage in cases of TTTS with a short cervix at the time of surgery [13]. In this observational study, the emergency cerclage could prolong pregnancy and improve perinatal outcomes. It was the first study to demonstrate a potential benefit to prevent premature delivery in such cases. Nevertheless, emergency cerclage continues to be an invasive procedure requiring anaesthesia. It also bears the risk of infection, local inflammatory reactions and premature rupture of membranes, although these were not observed in the Salomon study [14][15][16]. In current practice an emergency cerclage is usually performed for a cervical length less or equal to 15 mm after fetoscopic surgery in cases of TTTS, although the benefit of the cerclage in these cases has been recently questioned [17].
Previous studies in singleton and twin pregnancies showed that the use of a cervical pessary significantly reduces the frequency of birth before 32 weeks and prolongs the pregnancy [18][19][20]. The advantages of the cervical pessary are that it is a non-invasive, operator-independent, easy-to-use device that does not require anaesthesia and can easily be removed when necessary.
Our group performed a pilot study in which we placed an Arabin cervical pessary in cases of TTTS with FLC and retrospectively compared the outcomes with a control group. There was a lower rate of preterm birth and less neonatal morbidity in the pessary group [21].
The ProTWIN Trial [22], despite concluding that in unselected women with multiple pregnancies prophylactic use of a cervical pessary does not reduce poor perinatal outcome, found a significant difference in perinatal outcome for the subgroup of monochorionic pregnancies (14% vs. 25% in the control group).
The aim of this trial is to investigate whether a prophylactic cervical pessary will lower the incidence of preterm delivery in cases of twin-twin transfusion syndrome requiring fetoscopic laser coagulation.
Methods/design
Aims
We will perform a randomised trial (the PECEP LASER Trial: Arabin Cervical Pessary for prevention of preterm birth in cases of Twin-to-Twin Transfusion Syndrome treated by Fetoscopic Laser coagulation) to assess the effect of the Arabin cervical pessary on the rate of preterm delivery in cases of monochorionic pregnancies with twin-twin transfusion syndrome requiring fetal therapy. This is a multicentre study to be conducted in the Hospital Universitari Vall d'Hebron in Barcelona (Catalunya, Spain), the Universitaire Ziekenhuizen in Leuven (Belgium), and the University Medical Center Hamburg-Eppendorf (UKE) in Hamburg (Germany). All centres have approval from the respective Medical Ethics Committees to conduct the trial.
The study is open for other centres who wish to participate.
Participants
All women presenting with a monochorionic-diamniotic pregnancy with severe twin-twin transfusion syndrome between 16 and 26 weeks are eligible for the study, with the exception of pregnancies complicated by major congenital abnormalities, uterine malformations, placenta praevia, active vaginal bleeding or spontaneous rupture of membranes at the time of randomisation. Patients with a contraindication to pessary placement, such as painful regular uterine contractions or a cervical cerclage in place, will also be excluded [20], as will cases of demise of both twins after surgery.
Gestational age will be determined through menstrual history and first trimester scan.
Procedures, recruitment, randomisation and collection of data
The Fetal Medicine Teams in the participating centres will counsel all women referred for suspected TTTS. When fetoscopic LASER is planned, one of the Fetal Medicine specialists will confirm that the patient fulfils the inclusion criteria, and the study will be proposed. The patients will be informed about the intended therapeutic effect and possible side effects. Each patient will be given an information sheet and, if possible, will have 24 h to reflect on participation. Those who agree will, after giving informed consent, be randomised into two groups: usual management or cervical pessary placement.
The randomisation will be computed on-line at the platform for electronic data management based at the Catalan Institute of Pharmacology (ICF) where all data will be collected in a worldwide accessible online database. Each centre will have its own randomisation list and randomisation will be 1:1 for pessary placement versus control. Only a participant identification number in the electronic database will identify the participants. All documents will be stored securely and only accessible by trial staff and authorized personnel.
The study is not blinded. Baseline characteristics of the women and their pregnancies, as well as details of the TTTS and the surgery, will be recorded. The cervical length will be recorded before and after surgery and every 4 weeks until delivery. We will also record complications of the surgery, complications caused by the pessary and perinatal results.
Intervention
For women allocated into cervical pessary group, the device will be inserted within 72 h after fetal surgery in the assessment room of the outpatients Department. This procedure does not require anaesthesia and does not need to be performed in an operating theatre.
All patients, in both the pessary and the expectant management groups, will be followed as in routine monochorionic follow-up, with clinical review and ultrasound every two weeks. We will perform a transvaginal ultrasound to measure cervical length every four weeks and will also assess the correct placement of the pessary.
Further surveillance of the pregnancy will not be influenced by the participation in the study. If the cervical length decreases later on during the pregnancy, the gestation will be managed according to the local protocol. However, cervical cerclage or progesterone will not be allowed in any case during the rest of the pregnancy.
The pessary will be removed at 37 weeks of gestation or before if any unexpected event or spontaneous delivery occurs. The indications to remove the pessary before this time will be: active bleeding, persistent contractions after tocolysis and premature rupture of the membranes after 34 weeks. After removing the pessary, the obstetrical management will be done as usual.
The study will close after the home discharge of the last newborn, as neonatal adverse outcomes will also be recorded.
Outcomes
The primary outcome will be delivery before 32 completed weeks.
Secondary outcomes will be related to the delivery and perinatal results: preterm delivery before 28, 30, 34 and 37 weeks; preterm rupture of membranes; hospitalisation for threatened preterm labour (including the need of tocolysis and fetal lung maturation); fetal or neonatal death; composite neonatal morbidity (any of intraventricular haemorrhage, necrotising enterocolitis, retinopathy of prematurity, proven or suspected sepsis, need of Neonatal Intensive Care, need of ventilation, phototherapy, antibiotics or blood transfusion). We will also record significant maternal adverse effects, mainly due to the pessary.
A long-term follow-up of the babies is not planned.
Expected sample size
We based our sample size calculations on the assumption that the pessary reduces the primary outcome (preterm birth <32 weeks) from 40% in the control group to 26% in the pessary group [18]. With a study power of 80% and an alpha error of 5% (assuming a moderate effect size of 0.5 and a potential drop-out rate of 10%), the required sample size has been estimated at 352 patients, 176 in each arm of the study. We will perform 2 interim analyses, so the p-value will be set at 0.045 (O'Brien-Fleming rule) [23] instead of 0.05, and we will therefore need to recruit 182 patients in each arm.
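The stated per-arm number can be checked with standard two-proportion power machinery; a sketch using statsmodels (assumed available) follows. It yields roughly 175 evaluable patients per arm for 40% vs 26% at 80% power and a two-sided alpha of 0.05, before the drop-out and interim-analysis adjustments described above.

```python
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

h = proportion_effectsize(0.40, 0.26)  # Cohen's h for the two event rates
n = NormalIndPower().solve_power(effect_size=h, alpha=0.05, power=0.80,
                                 ratio=1.0, alternative="two-sided")

n_per_arm = int(round(n))                 # ~175 evaluable patients per arm
n_inflated = int(round(n_per_arm / 0.9))  # allowing for 10% drop-out
print(n_per_arm, n_inflated)
```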
Data analysis
A descriptive analysis by preterm birth will be carried out, calculating means and medians for quantitative variables and proportions for categorical variables. The t-test or Mann-Whitney test and the Pearson chi-squared or Fisher test will be used for comparisons between exposure and outcome groups per pregnancy. A multivariate logistic regression will be fitted to control for possible confounders. Relative risks and 95% confidence intervals will be calculated for the outcomes. For fetal and neonatal outcomes, multilevel models will be used to analyse continuous outcomes, and generalised estimating equations will be used to assess categorical outcomes, adjusting for the clustering of twins within mothers. A secondary analysis of length of gestation after fetoscopy will be plotted on a staggered-entry Kaplan-Meier curve, where gestational age at birth (in days) will be the final event. A Mantel-Cox (log-rank) test will be performed to test for statistically significant differences in the time interval between fetoscopy and birth. All analyses will be carried out with the SPSS 19.0 statistical package (IBM Company SPSS Inc. Headquarters, Chicago, Illinois, USA). Statistical significance will be accepted in all cases at p < 0.05.
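The protocol's survival analysis is specified for SPSS; as one illustration of the same Kaplan-Meier and Mantel-Cox workflow, the sketch below uses Python's lifelines package on a small hypothetical extract (days from fetoscopy to delivery, with ongoing pregnancies censored).

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical extract: interval from fetoscopy to delivery in days,
# with delivery observed (event=1) or the pregnancy censored (event=0).
df = pd.DataFrame({
    "days":  [70, 95, 120, 55, 101, 88, 130, 64],
    "event": [1, 1, 0, 1, 1, 1, 0, 1],
    "arm":   ["pessary"] * 4 + ["control"] * 4,
})

kmf, ax = KaplanMeierFitter(), None
for arm, grp in df.groupby("arm"):
    kmf.fit(grp["days"], event_observed=grp["event"], label=arm)
    ax = kmf.plot_survival_function(ax=ax)  # one curve per arm

pess = df[df["arm"] == "pessary"]
ctrl = df[df["arm"] == "control"]
res = logrank_test(pess["days"], ctrl["days"],
                   event_observed_A=pess["event"], event_observed_B=ctrl["event"])
print("Mantel-Cox (log-rank) p =", res.p_value)
```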
We do not expect substantial missing data because this study has a short follow-up time. We will also perform a per-protocol analysis.
We also intend to perform a subgroup analysis according to cervical length: over 25 mm, 15 to 25 mm and less than 15 mm. We chose these cut-off lengths because 25 mm is around the 5th centile of cervical length for twins at less than 22 weeks' gestation [24], and some groups perform cervical cerclage in TTTS patients with a CL below 15 mm [13].
Interim analysis
An interim analysis of efficacy and futility will be performed after every 100 patients enrolled in the study, using the O'Brien & Fleming rule [23] for efficacy and a conditional power calculation to assess futility. The trial will be stopped early if after the first 100 patients the difference has a p-value of <0.0001, or if after 200 patients the p-value is <0.004. The interim analyses as well as the final outcome adjudication will be performed blinded to group assignment.
Study calendar
We expect patient recruitment to take approximately 3 years. One further year is estimated for analysis and publication of the results.
Discussion
Fetoscopic LASER coagulation of the placental anastomoses is the treatment of choice for severe twin-twin transfusion syndrome as this treatment has completely changed the prognosis in terms of survival of these babies. However, the morbidity of these babies is still very high, both at short and long term, mainly due to the high rates of prematurity.
The rate of spontaneous preterm delivery is high, even if the cervical length is in the normal range at the time of the surgery [25]. Almost half of these twins deliver around 32 weeks and most of them are electively delivered around 34 weeks [11,12]; we therefore decided that the best cut-off is "delivery before 32 weeks", gaining at least two weeks in this group. Likewise, most of the babies are delivered preterm due to complications of the monochorionic placenta such as selective intrauterine growth restriction, twin anemia-polycythemia sequence (TAPS) or recurrent TTTS. As this is an intention-to-treat study, the rate of iatrogenic births is expected to be the same in both groups.
In the last decade, some strategies to prevent preterm delivery in twins have been published: bed rest is not useful in twin pregnancies [26], and the use of the cervical cerclage has been questioned [27], despite some groups using it in cases of short cervical length after the fetal surgery [13].
The cervical pessary is a simple and easy-to-use method to prevent preterm birth that has been proven to be effective in singleton pregnancies [20], twins [18] and in the subgroup of monochorionic twins in the ProTwin Trial [22].
As far as we know, there is only one pilot study in this field [21], with favourable results for the pessary. Therefore, a trial that compares the use of the cervical pessary to expectant management in cases of twin-twin transfusion syndrome undergoing fetoscopic LASER coagulation is needed.
Acknowledgements
We thank Santiago Perez-Hoyos for his help in calculating the sample size.
Funding
No funding is provided for the study.
Availability of data and materials
The datasets supporting the conclusions of this article are available in a website repository: https://w3.icf.uab.es/nexus/html/en/home. The database can be accessed worldwide, and every participating centre will have its own randomisation list.
Study status
The study is currently recruiting.
Related articles
No publications have been submitted or published so far.
Authors' contributions
CR, SA and EC from Barcelona conceived the study and participated in its design and coordination. They also participated in the acquisition of data. KH and BH from Hamburg participated in the design of the study and in the acquisition of data. LL and IC from Leuven also participated in the design of the study and in the acquisition of data. All authors read and approved the final manuscript.
Ethics approval and consent to participate
The sponsor, participating centres and investigators ensure that this study is conducted in accordance with the protocol, the principles of the Declaration of Helsinki, the ICH Guidelines for Good Clinical Practice and in full conformity with relevant regulations as well as applicable national laws, and in accordance with regulations and guidelines applicable to clinical trials of medical devices. The protocol, informed consent form, participant information sheet and any applicable documents were submitted to the respective Ethics Committees (EC) and regulatory authorities, and written approval has been obtained. All substantial amendments to the originally approved documents will also be sent to the respective authorities for approval. The study did not begin until EC approval and the Director's consent were obtained. The Vall d'Hebron University Hospital Clinical Research Ethics Committee, as the reference EC, first approved this randomised controlled trial. Afterwards, all the participating centres also obtained their own EC approval.
Consent for publication
Not applicable.
Competing interests
The authors declare that they have no competing interests. | 2017-08-23T05:11:55.733Z | 2017-08-01T00:00:00.000 | {
"year": 2017,
"sha1": "a3b215c0cf8b875b8dd3dc6393f4e7d6fc9af4f1",
"oa_license": "CCBY",
"oa_url": "https://bmcpregnancychildbirth.biomedcentral.com/track/pdf/10.1186/s12884-017-1435-0",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a3b215c0cf8b875b8dd3dc6393f4e7d6fc9af4f1",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
17966273 | pes2o/s2orc | v3-fos-license | Phase I Study of Lenalidomide and Sorafenib in Patients With Advanced Hepatocellular Carcinoma
Lessons Learned Combination therapies in patients with hepatocellular carcinoma can be associated with overlapping toxicity and are therefore poorly tolerated. Using sorafenib at the maximum tolerated dose can lead to a higher incidence of toxicities. Consequently, combination studies might evaluate sorafenib at alternative schedules or doses to improve tolerance, recognizing this could affect sorafenib efficacy. Although this combination was poorly tolerated, it does not exclude further evaluation of new-generation immunomodulator drugs or immune checkpoint inhibitors in the hope of optimizing tolerance and safety. Background. Sorafenib is the standard treatment for advanced hepatocellular carcinoma (HCC), and to date, no combination therapy has demonstrated superior survival compared with sorafenib alone. The immunosuppressive microenvironment in HCC is a negative predictor for survival. Lenalidomide is an immunomodulator and antiangiogenic agent, with limited single-agent efficacy in HCC. Based on these data, we designed a phase I study of sorafenib plus lenalidomide to determine the safety and preliminary antitumor activity of this combination. Methods. This was an open-label, phase I study with a 3+3 dose escalation/de-escalation design. The starting dose of sorafenib was 400 mg p.o. b.i.d. and of lenalidomide was 15 mg p.o. daily with a planned dose escalation by 5 mg per cohort up to 25 mg daily. Dose de-escalation was planned to a sorafenib dose of 400 mg p.o. daily combined with two doses of lenalidomide: 10 mg p.o. daily for a 28-day cycle (cohort 1) and 10 mg p.o. daily for a 21- or 28-day cycle (cohort 2). Patients with cirrhosis, a Child-Pugh score of A-B7, and no previous systemic therapy were eligible. Results. Five patients were enrolled. Their median age was 56 years (range 39–61), and the ECOG status was 0–2. Four patients were treated at dose level (DL) 1. Because of the poor tolerance to the combination associated with grade 2 toxicities, one more patient was treated at DL −1. No dose-limiting toxicity was observed as specified per protocol. The most common toxicities were nausea, anorexia, pruritus, elevated liver enzymes, and elevated bilirubin. Three patients experienced one or more of the following grade 3 toxicities: fatigue (DL 1), increased bilirubin (DL 1), skin desquamation (DL −1), and elevated transaminase levels (DL 1). The median duration of therapy was 1 cycle (range 1–3). All patients discontinued the study, 4 because of progressive disease and 1 by patient preference. The best confirmed response was progressive disease. The median progression-free survival was 1.0 month (95% confidence interval 0.9–2.8), and the median overall survival was 5.9 months (95% confidence interval 3.68–23.4). Conclusion. In our small study, the combination of lenalidomide and sorafenib was poorly tolerated and showed no clinical activity. Although the study was closed early because of toxicity concerns, future studies assessing combinations of sorafenib with new-generation immunomodulator drugs or other immunomodulatory agents, should consider lower starting doses of sorafenib to avoid excessive toxicity.
Lessons Learned
x Combination therapies in patients with hepatocellular carcinoma can be associated with overlapping toxicity and are therefore poorly tolerated. x Using sorafenib at the maximum tolerated dose can lead to a higher incidence of toxicities. Consequently, combination studies might evaluate sorafenib at alternative schedules or doses to improve tolerance, recognizing this could affect sorafenib efficacy. x Although this combination was poorly tolerated, it does not exclude further evaluation of new-generation immunomodulator drugs or immune checkpoint inhibitors in the hope of optimizing tolerance and safety.
Author Summary: Abstract and Brief Discussion
Background Sorafenib is the standard treatment for advanced hepatocellular carcinoma (HCC), and to date, no combination therapy has demonstrated superior survival compared with sorafenib alone. The immunosuppressive microenvironment in HCC is a negative predictor for survival. Lenalidomide is an immunomodulator and antiangiogenic agent, with limited single-agent efficacy in HCC. Based on these data, we designed a phase I study of sorafenib plus lenalidomide to determine the safety and preliminary antitumor activity of this combination.
Methods
This was an open-label, phase I study with a 3+3 dose escalation/de-escalation design.
Results
Five patients were enrolled. Their median age was 56 years (range 39-61), and the ECOG status was 0-2. Four patients were treated at dose level (DL) 1. Because of the poor tolerance to the combination associated with grade 2 toxicities, one more patient was treated at DL −1. No dose-limiting toxicity was observed as specified per protocol. The most common toxicities were nausea, anorexia, pruritus, elevated liver enzymes, and elevated bilirubin. Three patients experienced one or more of the following grade 3 toxicities: fatigue (DL 1), increased bilirubin (DL 1), skin desquamation (DL −1), and elevated transaminase levels (DL 1). The median duration of therapy was 1 cycle (range 1-3). All patients discontinued the study, 4 because of progressive disease and 1 by patient preference. The best confirmed response was progressive disease. The median progression-free survival was 1.0 month (95% confidence interval 0.9-2.8), and the median overall survival was 5.9 months (95% confidence interval 3.68-23.4).
Conclusion
In our small study, the combination of lenalidomide and sorafenib was poorly tolerated and showed no clinical activity. Although the study was closed early because of toxicity concerns, future studies assessing combinations of sorafenib with new-generation immunomodulator drugs or other immunomodulatory agents should consider lower starting doses of sorafenib to avoid excessive toxicity.
Discussion
Patients with HCC have limited therapeutic options. Sorafenib, a multi-tyrosine kinase inhibitor, is the only Food and Drug Administration (FDA)-approved systemic therapy for this disease, with marginal improvement in median overall survival. HCC is commonly associated with chronic inflammation and is thought to be capable of evading local immune surveillance.
Tumor infiltration with regulatory T cells (T regs ) has been associated with disease progression and a higher risk of relapse after curative therapy.
Lenalidomide is a second-generation immunomodulator drug (IMID) and has been approved by the FDA for the therapy of multiple myeloma and 5q deletion myelodysplastic syndrome. Lenalidomide exhibits its antitumor effects through antiangiogenic and immunomodulating properties. Lenalidomide modulates mononuclear and activated macrophage secreted cytokines and increases the secretion of the T-cell lymphokines that stimulate clonal T-cell proliferation. In preclinical models, lenalidomide enhanced the antitumor activity of sorafenib, presumably through immune modulation, with increased CD8+ tumor-infiltrating lymphocytes (TILs) and decreased T regs among TILs. Lenalidomide as a single agent demonstrated preliminary efficacy in phase II clinical trials with a partial response (PR) rate of 15%, including 2 patients with durable responses of 32 and 36 months. In another study, the PR and stable disease (SD) rates were 5% and 36%, respectively.
On the basis of these data, we designed a phase I "3+3" dose escalation/de-escalation study to evaluate the safety, maximum tolerated dose, and preliminary activity of the combination of sorafenib and lenalidomide. In the present phase I study, 3 of 5 patients experienced symptomatic progressive disease (PD) within the first cycle. Poor tolerability was evident, even at substandard treatment doses in 1 patient (sorafenib 400 mg and lenalidomide 10 mg daily). Because of the high toxicity, especially fatigue and elevated transaminase levels, potentially attributed to both study agents, the study was discontinued early. Although no responses were seen on our study, the small sample size precluded the ability to judge the efficacy of this combination.
The prognosis remains poor for patients with advanced HCC, with a median overall survival of less than 12 months. The lack of predictive biomarkers, resistance to cytotoxic chemotherapy, and the underlying liver disease continue to be major challenges in successfully treating HCC. No sorafenib-based combination therapies have shown superior results to sorafenib alone. Although the combination with lenalidomide was intolerable, an ongoing clinical trial is evaluating a newer generation IMID (CC-122) combined with sorafenib (ClinicalTrials.gov identifier, NCT02323906). As novel combinations are being considered for this disease, it is crucial that we better understand the biology associated with different HCC etiologies and any overlapping toxicity with sorafenib. The recent success with immune checkpoint inhibitors in HCC is encouraging, but still, only 20% of patients benefited. With the evolving field of genomically and other biomarker-driven precision therapeutics, patients with HCC will benefit from rational combinations to further improve their outcome.
Discussion
Hepatocellular carcinoma (HCC) is the third leading cause of cancer death worldwide [1]. For patients with advanced disease, few effective options exist. Sorafenib is a multi-tyrosine kinase inhibitor against the vascular endothelial growth factor (VEGF) receptor and rapidly accelerated fibrosarcoma. In a randomized controlled clinical trial, sorafenib improved overall survival compared with placebo, 10.7 versus 7.9 months [2]. HCC is an inflammation-associated malignancy that is thought to be capable of evading local immune surveillance [3]. Indirect evidence suggests the immune microenvironment plays an important role in tumor progression [4][5][6][7]. Tumor infiltration with regulatory T cells (T regs ) has been associated with disease progression [4] and with a higher risk of relapse after curative therapy [5][6][7].
Lenalidomide is a second-generation immunomodulator drug (IMID) that modulates mononuclear and activated macrophage secreted cytokines such as tumor necrosis factor-α and interleukin (IL)-1, IL-6, and IL-12 [8]. Lenalidomide also increases the secretion of the T-cell lymphokines interferon-γ and IL-2, which stimulate clonal T-cell proliferation [9]. IMIDs also exhibit antiangiogenic activity by decreasing the secretion of VEGF and fibroblast growth factor from tumor and stromal cells [10]. VEGF has a significant role in impairing dendritic cell differentiation and their role as antigen-presenting cells. VEGF blockade can improve dendritic cell differentiation [11] and synergize with immunotherapy [12]. In preclinical models, lenalidomide enhanced the antitumor activity of sorafenib, presumably through immune modulation, with increased CD8+ tumor-infiltrating lymphocytes (TILs) and decreased T regs among TILs [13].
In a phase II study of thalidomide in advanced HCC, activity included 5% with partial responses (PRs), 5% with minor responses, and 31% with stable disease (SD) [14]. A retrospective analysis of low-dose thalidomide (100 mg/day) showed PR and SD rates of 5% and 21%, respectively, with an overall survival (OS) of 3.2 months [15]. Lenalidomide is a potent thalidomide analog with antiangiogenic and immunomodulating effects and has been approved by the Food and Drug Administration for therapy for multiple myeloma and 5q deletion myelodysplastic syndrome. It has been studied in 40 HCC patients (35 each with Child-Pugh A and B) with progression or intolerance to sorafenib, at a dose of 25 mg p.o. daily × 21 days in 28-day cycles. Lenalidomide was well tolerated, with rare grade 3 toxicities. The PR rate was 15%, including 2 patients with a durable response of 32 and 36 months [16].
Based on these data, we designed a phase I "3+3" dose escalation study to evaluate the safety, maximum tolerated dose, and preliminary activity of the combination of sorafenib and lenalidomide. In this phase I study, 3 of 5 patients experienced symptomatic PD within the first cycle. Poor tolerability was evident, even at substandard treatment doses in 1 patient (sorafenib 400 mg and lenalidomide 10 mg daily). Because of the high toxicity, especially fatigue and elevated transaminase levels, potentially attributed to both study agents, and no preliminary signs of efficacy, our study was discontinued early, without further attempts to reduce the sorafenib dose.
The prognosis remains poor for patients with unresectable, advanced HCC, with a median OS of less than 12 months. The lack of predictive biomarkers, relative resistance to cytotoxic chemotherapy, and the underlying liver disease continue to be major challenges in successfully treating HCC. No sorafenib-based combination therapies to date have shown superior results to sorafenib alone [17]. Sorafenib is currently used at the maximum tolerated dose; therefore, combining sorafenib with novel agents that have overlapping toxicities will likely be unsuccessful. Although the combination with lenalidomide was intolerable and ineffective in our small study, an ongoing clinical trial is evaluating a newer generation IMID (CC-122) combined with sorafenib (ClinicalTrials.gov identifier, NCT02323906). As novel combinations and therapies are being considered for this disease, it is crucial that we better understand the biology associated with different HCC etiologies (i.e., hepatitis B and C, alcohol-related cirrhosis versus nonalcoholic steatohepatitis) because these could be associated with differential responses to molecularly or immunologically targeted therapies [18]. The recent success with immune checkpoint inhibitors in HCC is encouraging, but still, only 20% of the patients benefited [19]. With the evolving field of genomically and other biomarker-driven precision therapeutics, patients with HCC will benefit from rational combinations or, rather, select therapeutics to further improve outcomes. | 2018-04-03T02:16:13.241Z | 2016-06-01T00:00:00.000 | {
"year": 2016,
"sha1": "49c2774c63803c080a50f1e8a25ac2160f15f9eb",
"oa_license": null,
"oa_url": "https://theoncologist.onlinelibrary.wiley.com/doi/pdfdirect/10.1634/theoncologist.2016-0071",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "49c2774c63803c080a50f1e8a25ac2160f15f9eb",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
255943922 | pes2o/s2orc | v3-fos-license | LASSO-based screening for potential prognostic biomarkers associated with glioblastoma
Background Glioblastoma is the most common malignancy of the neuroepithelium, yet existing research on this tumor is limited. LASSO is a penalised regression algorithm that shrinks feature coefficients to select a sparse set of predictors, by which genes associated with glioblastoma prognosis can be identified. Methods Glioblastoma-related data were selected from the Cancer Genome Atlas (TCGA) database, and information was obtained for 158 samples, including 153 cancer samples and five samples of paracancerous tissue. In addition, 2,642 normal samples were selected from the Genotype-Tissue Expression (GTEx) database. Whole-gene bulk survival analysis and differential expression analysis were performed on glioblastoma genes, and their intersections were taken. Finally, we determined which genes are associated with glioma prognosis. The STRING database was used to analyze the interaction network between genes, and the MCODE plugin under Cytoscape was used to identify the highest-scoring clusters. LASSO prognostic analysis was performed to identify the key genes. Gene expression validation allowed us to obtain genes with significant expression differences in glioblastoma cancer samples and paracancer samples, and glioblastoma independent prognostic factors could be derived by univariate and multivariate Cox analyses. GO functional enrichment analysis was performed, and the expression of the screened genes was detected using qRT-PCR. Results Whole-gene bulk survival analysis of glioblastoma genes yielded 607 genes associated with glioblastoma prognosis, differential expression analysis yielded 8,801 genes, and the intersection of prognostic genes with differentially expressed genes (DEGs) yielded 323 intersecting genes. PPI analysis of the intersecting genes revealed that the genes were significantly enriched in functions such as the formation of a pool of free 40S subunits and placenta development, and the highest-scoring clusters were obtained using the MCODE plug-in. Eight genes associated with glioblastoma prognosis were identified based on LASSO analysis: RPS10, RPS11, RPS19, RSL24D1, RPL39L, EIF3E, NUDT5, and RPF1. All eight genes were found to be highly expressed in the tumor by gene expression verification, and univariate and multivariate Cox analyses were performed on these eight genes to identify RPL39L and NUDT5 as two independent prognostic factors associated with glioblastoma. Both RPL39L and NUDT5 were highly expressed in glioblastoma cells. Conclusion Two independent prognostic factors in glioblastoma, RPL39L and NUDT5, were identified.
Introduction
Glioblastoma is a malignant primary brain tumor disease and is among the most common malignant tumors of the central nervous system (CNS) (1). It accounts for 30% of all brain and CNS tumors and 80% of all malignant brain tumors (2). Despite the current multimodal treatment modalities, patients with glioblastoma have a poor prognosis, with a median survival time of only 14.6 months (3). The median age at diagnosis is 64 years (4), and the five-year mortality rate is higher than 90% (5). It has been shown that the incidence of glioblastoma is higher in men than in women (4,6), the incidence is higher in developed countries than in developing countries (4), and the incidence of glioblastoma is higher in Asians, Latinos, and whites (7). Glioblastomas are aggressive tumors with a median survival time of only three months if left untreated (8). Surgery, radiation, and chemotherapy are available to improve the survival rates of glioblastoma patients.
The clinical treatment of glioblastoma may be facilitated by identifying genes and independent factors associated with glioblastoma prognosis. In this study, glioblastoma-related data were selected from the Cancer Genome Atlas (TCGA) database, comprising 153 tumor samples together with 2,647 non-tumor samples (paracancerous tissue and undiseased GTEx tissue). Whole-gene bulk survival analysis of glioblastoma genes was performed to identify genes associated with glioblastoma prognosis, and differential expression analysis was then conducted against the genes from normal samples. The interaction network between genes was analyzed using Metascape, and the highest-scoring clusters were identified using the MCODE plug-in. GO functional enrichment analysis was performed, and LASSO prognostic analysis was performed to identify key genes. Gene expression validation was performed to identify genes with significant expression differences in glioblastoma cancer samples compared with paracancerous samples, from which independent prognostic factors associated with glioblastoma prognosis were identified. Finally, qRT-PCR was used to verify the expression of the genes.
Due to the heterogeneity and complex pathogenesis of glioblastoma, the disease is still incurable (9). In this study, we explore the prognosis-related genes and independent prognostic factors of glioblastoma based on the LASSO bioinformatics analysis method.
Methods and materials

Sample source
This study is an exploratory study based on the TCGA (https://portal.gdc.cancer.gov/) dataset, which contains 153 tumor samples and 5 paracancerous tissue samples, together with 2,647 normal samples of undiseased tissue retrieved from the Genotype-Tissue Expression project (GTEx, https://commonfund.nih.gov/GTEx). The 153 tumor samples were analyzed along with the five samples of paracancerous tissue after obtaining gene expression matrices and clinical information data to identify genes associated with glioblastoma prognosis.
Glioblastoma whole-gene bulk survival analysis
RNA-seq data of glioblastoma and corresponding clinical sample information were obtained from the TCGA, and the whole-gene bulk survival of the dataset was analyzed using the R package ggplot2. Images were plotted using the R package forestplot. The data obtained from the whole-gene bulk survival analysis could be used for subsequent analyses. A value of p<0.05 was considered statistically significant.
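As a rough illustration of this screening step, the sketch below fits one Cox model per gene and keeps genes with p < 0.05. It is written in Python with the lifelines package rather than the R tooling named above, and the survival column names are assumptions.

```python
# Hypothetical sketch of the per-gene ("whole-gene bulk") survival screen.
# `expr` is a samples x genes expression DataFrame; `clin` holds overall
# survival time and event status for the same samples (assumed columns).
import pandas as pd
from lifelines import CoxPHFitter

def screen_genes(expr: pd.DataFrame, clin: pd.DataFrame, alpha: float = 0.05):
    hits = []
    for gene in expr.columns:
        df = pd.DataFrame({
            "time": clin["OS.time"],   # assumed column name
            "event": clin["OS"],       # 1 = death observed, 0 = censored
            "x": expr[gene],
        }).dropna()
        cph = CoxPHFitter()
        cph.fit(df, duration_col="time", event_col="event")
        p = cph.summary.loc["x", "p"]
        if p < alpha:                  # p < 0.05, as in the text
            hits.append((gene, p))
    return pd.DataFrame(hits, columns=["gene", "p"])
```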
Glioblastoma differential expression gene analysis
Clinical information was used to classify the samples into disease and control groups, and the limma package for R was used to analyze the differential expression of the study mRNAs. The results of the differential expression analysis for each data set are presented as volcano plots, and a Venn diagram shows the overlapping portion of the differentially expressed genes (DEGs) in the two groups; the overlapping genes were used for subsequent analysis. The screening threshold for DEGs was P<0.05 and |log2FC|>1.
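The thresholding itself reduces to a simple filter. A minimal Python sketch follows (limma is an R package; only the downstream cut on its exported statistics is shown, with hypothetical column names):

```python
import pandas as pd

def select_degs(de: pd.DataFrame, p_cut: float = 0.05, lfc_cut: float = 1.0):
    """Apply the DEG screen P < 0.05 and |log2FC| > 1 described above."""
    sig = (de["P.Value"] < p_cut) & (de["logFC"].abs() > lfc_cut)
    up = de.index[sig & (de["logFC"] > 0)]      # upregulated genes
    down = de.index[sig & (de["logFC"] < 0)]    # downregulated genes
    return set(up), set(down)

# The Venn overlap with the survival hits is then a set intersection:
# overlap = (up | down) & set(prognostic_genes)
```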
Glioblastoma protein-protein interaction network analysis
To further investigate the genes associated with glioblastoma functional pathways, we performed PPI analysis on the overlapping genes. The overlapping genes were analyzed using STRING (https://cn.string-db.org/) to obtain a protein-protein interaction relationship network. The relationship network was imported into Cytoscape for visualization, and the densely connected network components were identified using the MCODE plugin to obtain seven gene modules, of which the highest-scoring gene module was selected for subsequent analysis. Since the relationship network map drawn by STRING was rather complex, the protein-protein interaction network map was redrawn using Metascape (https://metascape.org/).
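MCODE itself is a Cytoscape plugin; as a loose stand-in for its density-based clustering, the sketch below extracts the innermost k-core of a STRING edge list with networkx. This illustrates the idea of pulling out a densely connected module and is not a reimplementation of MCODE.

```python
import networkx as nx

def densest_module(edges):
    """edges: iterable of (geneA, geneB) pairs, e.g. from a STRING export."""
    g = nx.Graph(edges)
    g.remove_edges_from(nx.selfloop_edges(g))
    core = nx.k_core(g)  # k omitted -> the main (maximal-k) core
    comps = sorted(nx.connected_components(core), key=len, reverse=True)
    return core.subgraph(comps[0]).copy()  # largest dense component
```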
Glioblastoma prognostic risk model construction
LASSO (least absolute shrinkage and selection operator) is a penalized regression method used here to build a risk-scoring model based on prognostic factors (10). The analysis used the R software survival package to conduct a multivariate Cox regression analysis, followed by an iterative analysis using the STEP function, thus selecting the optimal model as the final model. The prognostic risk model was constructed using the highest-scoring gene module obtained by the PPI algorithm to obtain a risk-scoring formula. For Kaplan-Meier curves, p-values and hazard ratios (HR) with 95% confidence intervals (CI) were derived by the log-rank test and univariate Cox regression. A value of p<0.05 was considered statistically significant.
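In Python, a LASSO-penalized Cox model can be sketched with scikit-survival (the paper itself worked in R); the variable names and the choice of a point on the regularization path are illustrative only.

```python
import numpy as np
import pandas as pd
from sksurv.linear_model import CoxnetSurvivalAnalysis
from sksurv.util import Surv

def fit_lasso_cox(X: pd.DataFrame, time, event):
    """X: expression matrix for the candidate genes (samples x genes)."""
    y = Surv.from_arrays(event=np.asarray(event, dtype=bool), time=time)
    model = CoxnetSurvivalAnalysis(l1_ratio=1.0)  # pure L1 penalty = LASSO
    model.fit(X.values, y)
    coefs = model.coef_[:, -1]  # coefficients at the last alpha on the path
    return {g: c for g, c in zip(X.columns, coefs) if c != 0.0}
```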
Comparison of glioblastoma gene expression
A comparative expression analysis of genes in the LASSO formula was implemented using the R software ggplot2 package to compare the distribution of the same gene in tumor tissue and normal tissue. A value of p<0.05 was considered statistically significant.
Finding independent prognostic factors for glioblastoma
To further investigate the independent prognostic factors of glioblastoma, univariate and multivariate Cox regression analyses of the genes were performed using the R software forestplot package; the results are presented using forest plots. A factor was considered to be an independent prognostic factor when the P value in both the univariate and the multivariate Cox analysis was less than 0.05.
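A compact Python rendering of this two-step screen (lifelines instead of the R forestplot workflow; column names are assumed):

```python
from lifelines import CoxPHFitter

def independent_factors(df, genes, duration="time", event="event", alpha=0.05):
    uni = {}
    for g in genes:  # univariate Cox, one gene at a time
        m = CoxPHFitter().fit(df[[duration, event, g]],
                              duration_col=duration, event_col=event)
        uni[g] = m.summary.loc[g, "p"]
    multi = CoxPHFitter().fit(df[[duration, event] + list(genes)],
                              duration_col=duration, event_col=event).summary["p"]
    # Independent prognostic factor: p < 0.05 in both analyses
    return [g for g in genes if uni[g] < alpha and multi[g] < alpha]
```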
Detection of RPL39L and NUDT5 gene expression by qRT-PCR
Total RNA was extracted from both groups of cells using TRIzol reagent (Invitrogen, USA). The extracted mRNA was reverse transcribed into cDNA using SuperReal PreMix Plus (SYBR Green) (FP205-02, Tiangen, China), and gene expression was detected using qRT-PCR. The relative expression of the genes was calculated using the 2^(−ΔΔCt) method. The experiment was repeated three times to determine the average. The expression levels of RPL39L and NUDT5 were detected using GAPDH as an internal reference. The primer sequences used are shown in Table 1.
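The 2^(−ΔΔCt) calculation reduces to a few lines; the Ct values below are purely illustrative.

```python
def relative_expression(ct_gene_tumor, ct_ref_tumor, ct_gene_norm, ct_ref_norm):
    d_ct_tumor = ct_gene_tumor - ct_ref_tumor   # ΔCt in glioblastoma cells
    d_ct_norm = ct_gene_norm - ct_ref_norm      # ΔCt in normal cells
    dd_ct = d_ct_tumor - d_ct_norm              # ΔΔCt
    return 2.0 ** (-dd_ct)

# e.g. with GAPDH as the internal reference:
print(relative_expression(22.1, 18.0, 25.3, 18.2))  # 2**3 = 8-fold higher
```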
Results
Glioblastoma whole-gene bulk survival analysis and differentially expressed gene analysis

A total of 607 genes significantly associated with prognosis were obtained from a whole-gene bulk survival analysis of 153 glioblastoma tumor samples. To further analyze the distribution of genes in glioblastoma compared with their distribution in paraneoplastic and normal tissues, a differential analysis was performed. The results showed that there were 7,919 upregulated genes and 882 downregulated genes (Figure 1A). The differential expression heat map demonstrated the expression trends of the top 50 upregulated and 50 downregulated genes in different tissues (Figure 1B). Three hundred and twenty-three genes that overlapped between the two groups were identified using a Venn diagram (Figure 1C).
Results of PPI analysis
Functional enrichment analysis of the 323 DEGs using Metascape revealed that the genes were significantly enriched in functions such as the formation of a pool of free 40S subunits and placenta development (Figure 2A). A protein-protein interaction network was constructed for the 323 genes to identify gene-to-gene interactions, as illustrated in Figure 2B. Nineteen gene modules were obtained via the MCODE algorithm, and the highest-scoring gene module was selected, which contained 17 nodes and 124 edges (Figure 2C).
LASSO analysis to identify genes associated with prognosis
Based on the above study, 17 genes associated with glioblastoma were identified, and a prognostic model was constructed using multivariate Cox regression analysis. The coefficients of the selected features were shown by the λ parameter; partial likelihood deviations were plotted against log(λ) using the LASSO Cox regression model (Figures 3A, B). The risk score formula is as follows: Riskscore = (−0.2159)*RPS10 + (−0.0121)*RPS19 + (0.2469)*RPL39L + (−0.0443)*RPS11 + (−0.0357)*RSL24D1 + (−0.3645)*RPF1 + (−0.6918)*NUDT5 + (−0.0858)*EIF3E. Based on the calculation of the risk score formula, the sample was divided into high-risk and low-risk groups; the distribution of the sample is illustrated in Figure 3C. Figure 3D demonstrates that the low-risk group has a more favorable prognosis. Figure 3E demonstrates that the model is accurate in predicting the three- and five-year survival of glioblastomas.
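For reference, the published formula can be transcribed directly; the split into risk groups is sketched here at the cohort median, which is a common convention but an assumption on our part.

```python
import numpy as np

# Coefficients copied from the risk score formula above
COEF = {"RPS10": -0.2159, "RPS19": -0.0121, "RPL39L": 0.2469,
        "RPS11": -0.0443, "RSL24D1": -0.0357, "RPF1": -0.3645,
        "NUDT5": -0.6918, "EIF3E": -0.0858}

def risk_score(expr: dict) -> float:
    """expr maps gene symbol -> expression value for one sample."""
    return sum(c * expr[g] for g, c in COEF.items())

# scores = np.array([risk_score(s) for s in samples])   # hypothetical cohort
# groups = np.where(scores > np.median(scores), "high", "low")
```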
Independent prognostic factors for glioblastoma
Gene expression analysis of the eight genes obtained from LASSO that were associated with glioblastoma prognosis was performed, and all eight genes were found to be highly expressed in the tumors (Figures 4A-H). These genes are presented as box line plots. An independent prognostic analysis of these eight genes was performed to obtain the results of the univariate (Figure 5A) and multivariate (Figure 5B) Cox analyses, which are presented as forest plots. As can be seen in the figures, two genes, RPL39L and NUDT5, were significant in both the univariate and multivariate Cox regressions; therefore, these two genes can be considered independent prognostic factors of glioblastoma.
Results of qRT-PCR
The expression of RPL39L and NUDT5 in glioblastoma cells and normal cells was detected using qRT-PCR. The results showed that both genes were highly expressed in glioblastoma cells (Figures 6A, B).
Discussion
In recent years, the number of cancer patients has increased dramatically, and finding a breakthrough treatment for cancer has become urgent (11). Malignant glioblastoma is the most common type of primary brain tumor in adults and is associated with a disproportionate amount of cancer-related morbidity and mortality (12), making it particularly important to find ways to treat glioblastoma tumors. The rising trend in the incidence of glioblastoma has been accompanied by an increase in concern (2). At present, the diagnosis of glioblastoma still depends mainly on pathological features and medical imaging, such as CT, MRI, DSA, PET, and SPECT, which need to be verified by a surgeon (6). In addition, numerous studies have been conducted on glioblastoma biomarkers; for example, the specificity of circRNAs in glioblastoma is expected to provide new biomarkers for the development of glioblastoma (13) and to lead to a future cure for glioblastoma tumors. In this study, information from clinical sample data was used to study some of the genes that may be correlated with glioblastoma prognosis. Genes significantly associated with glioblastoma prognosis can be identified through a whole-gene bulk survival analysis of clinical samples, and then key genes can be further identified by PPI and LASSO. Then, an expression validation analysis of these genes can be performed to identify potential prognostic biomarkers associated with glioblastoma. Finally, independent prognostic factors for glioblastoma can be obtained via an independent prognostic analysis of these genes.
In this study, we used the clinical data of glioblastoma patients available from the TCGA and identified genes associated with glioblastoma prognosis by performing a whole-gene bulk survival analysis. We also investigated the functional enrichment pathways of these genes and found that they were significantly enriched in functions such as building free 40S subunits as well as in placental development. A network of gene-gene interactions was also constructed, and the highest-scoring modules were further analyzed with the help of algorithms. Eight genes (RPS10, RPS11, RPS19, RSL24D1, RPL39L, EIF3E, NUDT5, and RPF1) were found to be associated with the prognosis of glioblastoma. Finally, univariate and multivariate Cox analyses were performed, which identified RPL39L and NUDT5 as independent prognostic factors for glioblastoma. The results were verified by qRT-PCR experiments. Ribosomal proteins are synthesized in the cytoplasm by RNA polymerase II and then imported into the nucleus, where they are assembled into small and large ribosomal subunits (14, 15). The small ribosomal subunit contains an 18S rRNA and approximately 32 ribosomal proteins (RPS proteins), and the large ribosomal subunit (60S) consists of one each of the 5S, 5.8S and 28S rRNAs and approximately 47 ribosomal proteins (RPL proteins) (16). Long-term studies have shown that ribosomal proteins not only constitute ribosomes as structural proteins but also play important roles in the cell cycle, proliferation, apoptosis/death, tumorigenesis, DNA repair, and other responses (17).
From the analysis conducted in this study, it is clear that most of the genes associated with glioblastoma prognosis are ribosomal protein genes, represented by RPS10 and RPS11. The RPS10 gene encodes the RPS10 protein, which is part of the small subunit of the mitochondrial ribosome (18) and is involved in ribosome biogenesis, as well as participating in cellular transformation mechanisms (16). RPS10 encodes the 165-amino-acid-long RPS10 protein, which is a component of the 40S ribosomal subunit (19) and can cross-link to the eukaryotic initiation factor 3 (eIF3) of translation. It has been shown that the RPS10 protein is part of the structural domain involved in the binding of the initiation factor to the 40S subunit at the onset of translation (20). RPS11 encodes the RPS11 protein, which is overexpressed in various malignancies and is associated with tumor recurrence (17). Elevated levels of RPS11 have been found to be associated with a poor prognosis in patients with glioblastoma (21). RPL19 is upregulated in many prostate cancers, and its downregulation leads to a milder malignant phenotype in vivo, suggesting a functional role in promoting tumorigenesis (22). It has been shown that RPL19 is associated with glioblastoma prognosis (23). NUDT5 encodes the NUDT5 hydrolase, which is associated with breast cancer prognosis (24). It can inhibit the proliferation of HeLa cells and T47D cells (25). No studies have been conducted on NUDT5 in relation to glioblastoma. GO analysis of important pathways identified the formation of the free 40S subunit pool and placental development, among others; studies have shown that 40S subunits can enter the free and membrane-bound polyribosomes from the cytoplasmic pool of newly made free natural subunits (26), thus affecting protein synthesis and cell development. There is a link between the placenta and cancer cells at the molecular level (27), and the placenta performs a transport function between the mother and the fetus during development. Genes may influence tumorigenesis by affecting the 40S subunit and placental development, among other processes.
In conclusion, two independent prognostic markers associated with glioblastoma prognosis, RPL39L and NUDT5, which could play a central role in the prognosis of glioblastoma, were obtained by bioinformatics using glioblastoma tumor samples and normal samples.
Data availability statement
The original contributions presented in the study are included in the article/supplementary material. Further inquiries can be directed to the corresponding author. | 2023-01-18T14:25:47.819Z | 2023-01-16T00:00:00.000 | {
"year": 2022,
"sha1": "0e094a1f693e317ee468118aee556aa8c51536fa",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "0e094a1f693e317ee468118aee556aa8c51536fa",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
14346488 | pes2o/s2orc | v3-fos-license | Symmetry-breaking thermally induced collapse of dipolar Bose-Einstein condensates
We investigate a Bose-Einstein condensate with additional long-range dipolar interaction in a cylindrically symmetric trap within a variational framework. Compared to the ground state of this system, little attention has as yet been paid to its unstable excited states. For thermal excitations, however, the latter are of great interest, because they form the "activated complex" that mediates the collapse of the condensate. For a certain value of the s-wave scattering length our investigations reveal a bifurcation in the transition state, leading to the emergence of two additional and symmetry-breaking excited states. Because these are of lower energy than their symmetric counterpart, we predict the occurrence of a symmetry-breaking thermally induced collapse of dipolar condensates. We show that its occurrence crucially depends on the trap geometry and calculate the thermal decay rates of the system within leading-order transition state theory with the help of a uniform rate formula near the rank-2 saddle which allows one to pass the bifurcation smoothly.
I. INTRODUCTION
Since the first experimental realization of a Bose-Einstein condensate (BEC) in 1995 [1], the field of ultracold quantum gases has developed rapidly. An important milestone in this development was the condensation of 52 Cr and 164 Dy atoms [2,3], which, due to their large magnetic dipole moments, interact via the anisotropic, long-range dipole-dipole interaction (DDI). Because the latter can be either attractive or repulsive, depending on the orientation of the dipoles, a wealth of new phenomena emerges in these BECs, such as stability diagrams that crucially depend on the trap geometry [4][5][6], isotropic as well as anisotropic solitons [7][8][9], biconcave or structured ground state density distributions [10][11][12], radial and angular rotons [11,13,14], as well as anisotropic collapse dynamics [15,16]. Investigations of the physics of dipolar systems may in the future be extended with the help of heteronuclear molecules [17][18][19][20] or by laser-induced electric dipole-dipole interaction [21].
The stability of a dipolar BEC is in general determined by the interplay of the two-particle interactions present, namely the contact interaction (described by the s-wave scattering length) as well as the DDI, and the geometry and the strength of the trap. In the case of an attractive scattering interaction, the ground state of a harmonically trapped dipolar quantum gas, which we consider in this paper, is metastable and the BEC can decay by a coherent collapse of the condensate. The collapse can be induced by macroscopic quantum tunnelling at T = 0 [22] or by decreasing the s-wave scattering length into a region where the BEC cannot exist anymore [11].
Another possibility investigated in this paper is the coherent collapse due to thermal excitations of the condensate at finite temperature. We consider temperatures which are, on the one hand, small compared to the critical temperature T_c where the ground state is populated macroscopically so that we have an almost pure condensate. Although modifications will be caused by the interaction of the bosons, a rough estimate of this regime can be obtained from the ideal Bose gas in a harmonic trap for which the fraction N_0/N of condensed particles is given by N_0/N = 1 − (T/T_c)³ [23]. For temperatures 0 < T ≲ 0.2 T_c we then have more than 99% of the bosons in the condensate and can neglect the influence of the thermal cloud. For a 52 Cr condensate that we investigate in the following the critical temperature is T_c ≈ 700 nK [2]. We will therefore consider temperatures of T ≲ 140 nK where the thermal excitations are of collective nature and describe the quasi-particle modes of the whole condensate.
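The ideal-gas estimate above is easy to check numerically:

```python
# N0/N = 1 - (T/Tc)**3 for the ideal Bose gas in a harmonic trap
for t in (0.1, 0.2, 0.3):
    print(f"T/Tc = {t:.1f} -> condensate fraction = {1 - t**3:.3f}")
# T/Tc = 0.2 -> condensate fraction = 0.992, i.e. more than 99%
```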
On the other hand, the temperature must be high enough so that collective oscillations of the BEC are activated. As will be discussed below, in the relevant region of the scattering length and for experimentally accessible particle numbers, the frequencies of the collective modes can be assigned to a temperature on the order of T ∼ 1 nK. Thus, in the temperature regime of several tens of nK the latter are sufficiently activated.
Note that, at higher temperatures than discussed above, a significant number of bosons will occupy excited states so that the Gross-Pitaevskii equation (GPE) will no more be adequate to such a system. In this case Hartree-Fock-Bogoliubov theory [24,25] can be applied, allowing the investigation of thermally excited BECs at finite temperatures up to the critical temperature. Note further that, at sub-nK temperatures, where collective oscillations are not present anymore, macroscopic quantum tunnelling will be the dominant decay mechanism. Both these temperature regimes are, however, not subject of this paper.
In the temperature regime described above dipolar quantum gases can be well described by a nonlocal GPE, which is usually solved either numerically or by variational approaches. The GPE possesses, apart from the stable ground state, also one or several excited stationary solutions. To date these solutions have received little attention in the literature. However, it is exactly these excited states which form the transition states (TS) on the way to the thermally induced collapse of the BEC, and they therefore play a key role in thermally excited condensates.
In this paper we investigate dipolar BECs using a Gaussian variational approach and reveal a remarkable bifurcation of the TS. The physical interpretation of the emerging additional states directly implies that there exist regions of the physical parameters of the system, i.e. the trap frequencies and the s-wave scattering length, in which a symmetry-breaking thermally induced collapse of the condensate would be observable in an experiment.
The BEC's thermal decay rate can be obtained by applying transition state theory (TST). However, the standard TST rate formula fails near bifurcations. With the help of a suitable normal form of the potential which describes the entire configuration of several saddle points we will derive a uniform rate formula which solves this problem.
The paper is organized as follows: In Sec. II A, we provide the description of the dipolar quantum gas within the variational framework, introduce the equivalent Hamiltonian picture and discuss the behavior of the potential when one varies the s-wave scattering length. Sec. II B demonstrates the calculation of the BEC's decay rate, and in Sec. III we present and discuss the results.
A. Description of the BEC
Assuming all dipoles to be aligned along the z-direction, we can write the extended GPE of dipolar BECs in axisymmetric harmonic traps in the form

i ∂ψ̃(r̃,t)/∂t = [ −Δ + γ_ρ²(x̃² + ỹ²) + γ_z²z̃² + 8π(a/a_d)|ψ̃(r̃,t)|² + ∫ d³r̃′ |ψ̃(r̃′,t)|² (1 − 3cos²θ)/|r̃ − r̃′|³ ] ψ̃(r̃,t).   (1)

Here, ψ̃(r̃,t) is the scaled condensate wave function, γ_ρ,z = ω_ρ,z/(2ω_d) are the dimensionless trap frequencies in the radial and z-direction, a/a_d denotes the scaled s-wave scattering length, and θ is the angle between the z-axis and the vector r̃ − r̃′. We use "natural units" [26] for the length, a_d = mμ_0μ²/(2πℏ²), energy, E_d = ℏ²/(2ma_d²), and frequency, ω_d = E_d/ℏ, which are defined using the mass m of the bosons, their magnetic moment μ and the vacuum permeability μ_0. Furthermore, we apply a particle number scaling r = Nr̃, ψ = N^(−3/2)ψ̃, E = N^(−1)Ẽ, β = Nβ̃, ω = N^(−2)ω̃ in order to eliminate the explicit occurrence of the particle number N in the interaction terms in Eq. (1). Also, in what follows, the inverse temperature is measured by the dimensionless quantity β = E_d/(k_B T).
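For orientation, the natural units defined above can be evaluated for 52 Cr (the species considered below); the constants are standard CODATA values, the isotopic mass is approximated as 52 u, and the result is only meant as a rough check.

```python
import math

hbar = 1.054571817e-34        # J s
mu0 = 4e-7 * math.pi          # vacuum permeability, N/A^2
muB = 9.2740100783e-24        # Bohr magneton, J/T
u = 1.66053906660e-27         # atomic mass unit, kg

m = 52 * u                    # 52Cr mass (approximation)
mu = 6 * muB                  # magnetic moment of 52Cr

a_d = m * mu0 * mu**2 / (2 * math.pi * hbar**2)   # length unit
E_d = hbar**2 / (2 * m * a_d**2)                  # energy unit
omega_d = E_d / hbar                              # frequency unit

print(f"a_d ~ {a_d*1e9:.1f} nm, E_d ~ {E_d:.1e} J, omega_d ~ {omega_d:.1e} /s")
# roughly: a_d ~ 4.8 nm, E_d ~ 2.8e-27 J, omega_d ~ 2.6e7 /s
```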
Since the unstable excited eigenstates of Eq. (1) are not accessible via imaginary time evolution on a grid, we will resort to a variational approach. Because the bosons are trapped harmonically a natural choice is a Gaussian trial wave function [27][28][29][30][31][32][33][34] which well approximates the true wave function as long as the interactions between the bosons are not too strong.
In this ansatz (2), N is the normalization factor of the wave function, ∫ d³r̃ |ψ̃(r̃,t)|² = 1, and q_σ, p_σ (σ = x, y, z) are time-dependent variational functions. Note that the Cartesian geometry of the ansatz is capable of describing m = 0 (breathing mode) and m = 2 (quadrupole mode) collective oscillations of the condensate, and therefore covers the two most important modes of the system. Even though it is well known that the simple ansatz (2) with a single Gaussian will only yield qualitative results, it is crucial because it is the only access to dipolar BECs that can globally be mapped to an equivalent Hamiltonian system H = p²/2 + V(q) [35]. The existence of a Hamiltonian, however, is essential for the application of TST and the derivation of the subsequent rate formula near a rank-2 saddle, since both are formulated in phase space. As shown in Refs. [35, 36], the potential V(q) takes a closed analytical form. For given physical values of the scattering length and the trap frequencies the potential fully describes the dynamics of the BEC in the Hilbert subspace of the variational ansatz (2). In what follows we fix the values of the mean trap frequency to N²(γ_ρ²γ_z)^(1/3) = 3.4×10⁴ and of the trap aspect ratio to λ = γ_z/γ_ρ = 50, if not stated otherwise, and vary a/a_d.
Note that, because of the large aspect ratio of the trap, the dipoles are predominantly aligned in a side-by-side configuration where they repel each other and stabilize the BEC against collapse. In the following, we will, therefore, only consider the regime of a negative s-wave scattering length (a/a_d < 0) which counteracts this effect. For scattering lengths below a critical value (Fig. 1a), there exists no stationary point of the potential. Two stationary points emerge in a tangent bifurcation at a_crit ≈ −0.22723, and both are cylindrically symmetric. One represents the stable ground state of the BEC, and the other is an unstable excited state (Fig. 1b). At a scattering length a_pb ≈ −0.22657 two additional and non-axisymmetric states emerge from the central saddle in a pitchfork bifurcation, forming two satellite saddles (Fig. 1c-d) and turning the central one into a rank-2 saddle.
The potential V(q) allows for a direct interpretation in terms of reaction dynamics of thermally excited dipolar condensates: In the case a_crit < a/a_d < a_pb, i.e. in the region where only the center saddle exists (Fig. 1b), a sufficient thermal excitation of the BEC may allow the system to cross the center saddle, and to escape to q_x, q_y → 0, which means the collapse of the BEC. In this case the reaction path will always be located on the angle bisector, and thus this represents a condensate which collapses in a cylindrically symmetric way. The situation changes qualitatively when the parameter region a/a_d > a_pb (Fig. 1c-d) is reached: Since the two satellite saddles are of lower energy than the central one the reaction path now breaks the cylindrical symmetry and crosses one of the satellite saddles, which means that the condensate collapses with an m = 2 symmetry.
B. Calculation of the reaction rate
The particle number scaled reaction rate can be calculated by applying TST and is given by [37]

Γ = (1/Z_0) ∫ d^d p d^d q′ δ(q′_1) p_1 Θ(p_1) e^(−βH(p,q′)),   (4)

where new variables q′ are defined in such a way that the reaction coordinate q′_1 = 0 defines a dividing surface S that separates the configuration space into a region of reactants (stable BEC) and products (collapsing BEC), d is the system's number of degrees of freedom, and Z_0 is the canonical partition function. Approximating the potential harmonically at the ground state (0) as well as at the activated complex (b) yields the reaction rate [37]

Γ = (1/2π) (Ω^(0)/Ω^(b)) exp(−βV_0^‡),   (5)

where Ω^(0) and Ω^(b) are the products of the oscillation frequencies ω_i^(0,b) at the ground state and the saddle, respectively (the unstable mode being omitted at the saddle), and V_0^‡ is the energy difference between the TS and the ground state.
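As a sketch, Eq. (5) is elementary to evaluate once the frequencies and the barrier are known; the inputs would come from the variational calculation and are not reproduced here.

```python
import math

def harmonic_tst_rate(omega_gs, omega_saddle, barrier, beta):
    """Leading-order TST rate, Eq. (5).
    omega_gs: all d frequencies at the ground state;
    omega_saddle: the d-1 stable frequencies at the saddle;
    barrier: the energy difference V_0 between TS and ground state;
    beta: (scaled) inverse temperature."""
    prefactor = math.prod(omega_gs) / (2 * math.pi * math.prod(omega_saddle))
    return prefactor * math.exp(-beta * barrier)

# For two equivalent satellite saddles the rate is doubled, as noted below.
```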
In the cases a/a_d ≪ a_pb and a/a_d ≫ a_pb, i.e. far away from the bifurcation, Eq. (5) will yield an appropriate approximation for the reaction rate, since then the reaction will either proceed over the central saddle or over one of the satellites. (In the latter case, the rate (5) must be doubled because there are two saddles.) However, in the vicinity of the bifurcation (a/a_d ≈ a_pb), Eq. (5) will fail: Mathematically this is because one of the frequencies ω_i^(b) occurring in the denominator will vanish at the bifurcation, leading to the divergence of the reaction rate. Physically speaking, it will fail because the center and satellite saddles are separated by energies of k_B T or less, and reactive trajectories can pass over the central saddle with nearly the same probability as over the satellites.
Since close to the bifurcation the quadratic expansion of the potential is obviously not adequate to reproduce the correct behavior, we need a more accurate approximation. It is provided by the classifications of catastrophe theory [38,39], and we therefore apply a change of coordinates q′ → x that maps the potential V(q′, q′_1 = 0) to a suitable normal form V_0 + U(x). The remaining integral in Eq. (4) then takes the form

∫ d^(d−1)x φ(x) e^(−βU(x)),   (6)

where φ(x) is the Jacobi determinant arising from the transformation, and the reaction rate reads

Γ = (1/(βZ_0)) (2π/β)^((d−1)/2) e^(−βV_0) ∫ d^(d−1)x φ(x) e^(−βU(x)).   (7)

A suitable normal form describing the bifurcation of the transition state in the axisymmetric trap is

U(x) = x_2⁴ + u x_2² + Σ_{i≠2} ω_i²x_i²/2,   (8)

which is quadratic in all variables but one. The number and type of stationary points of U depends on the value of the parameter u. By a suitable choice of u, we will reproduce the bifurcation of saddle points that is found in the physical potential V. For x_i = 0 (i ≠ 2) and u < 0 the function U(x) has a maximum at x_2,cs = 0 (center saddle) and two minima at x_2,ss = ±x_ss = ±(−u/2)^(1/2).

What remains is to determine the prefactor φ(x) in Eq. (6) in such a way that the flux integral reproduces the standard TST rate far away from the bifurcation. In the case u → ∞ (only the center saddle) we can return to the quadratic approximation of the potential, and because the prefactor varies slowly we can regard it as constant. In this limit we have φ(x) ≈ φ(0) ≡ const. (9), where "≡" denotes the requirement that the conventional TST result is to be reproduced. Analogously, in the limit u → −∞ we require the value φ(±x_ss) to reproduce the flux over the two satellite saddles (10). Since φ(x) must be an even function, we finally write φ(x_2) = φ(0) + [φ(x_ss) − φ(0)] x_2²/x_ss² as its lowest-order Taylor expansion and, once the values of φ(0) and φ(x_ss) have been determined from Eqs. (9) and (10), we solve the remaining integral in Eq. (7) numerically.

FIG. 2. Lines show the thermal decay rates obtained from the standard TST formula (5), and the dots show the corresponding reaction rate obtained from the uniform rate formula (7). The temperatures T as well as the decay rate Γ have been calculated for 52 Cr BECs with a particle number of N = 50 000.

In a different setting, the corrections to TST rates that are due to rank-2 saddles were recently estimated by Maronsson et al. [40], who calculated the energy ridge that connects the rank-1 saddle to the rank-2 saddles. In contrast to ours, their method takes account of the precise shape of the potential along the ridge. The present approach provides a rate formula that applies on both sides of, and arbitrarily close to, the bifurcation. It also
offers the advantage of greater simplicity because it only requires information about the saddle points themselves. Via the frequencies ω_i^(0,b), the influence of degrees of freedom transverse to the ridge is taken into account. Fig. 2 shows the thermal decay rates of the dipolar BEC in leading-order TST calculated from Eq. (5) in comparison with the results obtained from the uniform rate formula for the rank-2-rank-1 saddle configuration, Eq. (7). The first case solely considers the energetically lowest saddle(s) (lines) while the second case takes into account the complete configuration of saddles (dots). In the calculations using the conventional TST rate formula (lines), the divergence of the decay rate at a/a_d ≈ −0.22657 is obvious. By contrast, the uniform solution (dots) passes the bifurcation smoothly. We again emphasize that the collapse of the BEC will be cylindrically symmetric on one side of the bifurcation, and symmetry-breaking on the other side. Near the bifurcation, however, a clear distinction can no longer be made.
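A numerical sketch of the flux integral (6)/(7) along the bifurcating direction is given below, using the illustrative quartic normal form and the even interpolation of φ discussed above; all parameter values are placeholders, not values from this calculation.

```python
import numpy as np
from scipy.integrate import quad

def uniform_flux(u, beta, phi0, phi_ss=None):
    """Integrate phi(x)*exp(-beta*U(x)) across the bifurcating direction,
    assuming the illustrative normal form U(x) = x**4 + u*x**2."""
    if u < 0 and phi_ss is not None:
        x_ss = np.sqrt(-u / 2.0)                      # satellite positions
        phi = lambda x: phi0 + (phi_ss - phi0) * (x / x_ss) ** 2
    else:
        phi = lambda x: phi0                          # single-saddle regime
    integrand = lambda x: phi(x) * np.exp(-beta * (x**4 + u * x**2))
    return quad(integrand, -np.inf, np.inf)[0]        # smooth through u = 0
```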
III. RESULTS
In the calculation the particle number scaled temperatures have been adapted to a 52 Cr BEC with a magnetic moment of μ = 6μ_B (μ_B is the Bohr magneton) and a particle number of N = 50 000 as it has been realized experimentally by Griesmaier et al. [2]. For this number of bosons the values β̃ = 0.03 to β̃ = 0.06 correspond to temperatures between T = 65 nK and T = 130 nK which is clearly below the critical temperature of T_c ≈ 700 nK so that the treatment within the Gross-Pitaevskii framework is justified.
Note that, on the other hand, these temperatures are high enough to activate collective oscillations of the BEC: In the relevant region of the scattering length, the frequencies of the monopole and the quadrupole mode are, both, on the order of ω̃ ∼ 10 000. For the above mentioned particle number, this means an oscillation frequency of ω = 107 s⁻¹. Assigning to this frequency an energy of E ∼ ℏω as well as the temperature T ∼ E/k_B, we find a value of T = 0.8 nK to determine the order on which collective oscillations are activated. Thus, for the temperatures given above the latter are sufficiently present.
For experiments it will be of great interest in which region of the physical parameters (trap frequency and scattering length) a symmetry-breaking collapse is to be expected. Fig. 3 shows that the existence of the symmetry-breaking states and the corresponding regions of the scattering length crucially depend on the trap aspect ratio. While for small λ ≲ 2.8 (including prolately trapped condensates λ < 1, not shown) only the cylindrically symmetric excited states exist, the additional symmetry-breaking states appear for oblate condensates with λ ≳ 2.8. The more oblate the BEC the larger becomes the region in which these states are present. In contrast, increasing the trap aspect ratio, the parameter region of the scattering length with a_crit < a < a_pb becomes smaller and vanishes for λ → ∞. We therefore expect the trap aspect ratio to be the decisive tool to switch between the two scenarios in an experiment. Note that the curve in Fig. 3 for the critical scattering length of course corresponds to the one published by Koch et al. [4].
IV. CONCLUSION AND OUTLOOK
We have investigated a thermally excited dipolar Bose-Einstein condensate in a cylindrically symmetric trap. Within a variational framework we observed that the unstable excited state of the system which forms the activated complex on the way to the collapse of the condensate undergoes a bifurcation. This divides the parameter region of the s-wave scattering length into a region with cylindrically symmetrical collapse, and one where the collapse occurs with broken symmetry. With the help of a uniform rate formula, we were able to calculate the corresponding reaction rate over the whole range of the scattering length within leading order TST and to smoothly pass the bifurcation. Moreover, the occurrence of the additional bifurcation strongly depends on the trap geometry which allows one to switch between the two scenarios in experiments.
In order to improve the results quantitatively, the procedure described here can be extended to coupled Gaussian wave functions, which have already proven their power to reproduce or even to exceed the quality of numerical results [41,42]. We have shown elsewhere [43,44] that it is possible to construct a Hamiltonian also for the case of coupled Gaussians which then allows for the application of TST. While in the case of a long-range 1/r interaction we could show that converged results for the decay rate are only shifted to higher values of the scattering length, the situation is different in dipolar BECs: The bifurcation of the TS leading to the symmetry-breaking stationary states also exists in the case of coupled wave functions, however, in the latter case even more bifurcations occur when the number of Gaussians is increased. The even richer thermal collapse scenarios and decay rates of dipolar BECs described by coupled Gaussians are a challenge for currently ongoing research.
ACKNOWLEDGMENTS
This work was supported by Deutsche Forschungsgemeinschaft. A. J. is grateful for support from the Landesgraduiertenförderung of the Land Baden-Württemberg. | 2012-08-10T11:28:37.000Z | 2012-08-10T00:00:00.000 | {
"year": 2012,
"sha1": "b1354705f0cca210e222e7ca09037d9e9d5d330e",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1208.2147",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "b1354705f0cca210e222e7ca09037d9e9d5d330e",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
17913469 | pes2o/s2orc | v3-fos-license | Different flight behaviour of the endangered scarce large blue butterfly Phengaris teleius (Lepidoptera: Lycaenidae) within and outside its habitat patches
Understanding individual movements in heterogeneous environments is central to predicting how landscape changes affect animal populations. An important but poorly understood phenomenon is behavioural response to habitat boundaries and the way animals cross the inhospitable matrix surrounding habitat patches. Here, we analyze movement decisions, flight behaviour, and activity of the endangered scarce large blue Phengaris (Maculinea) teleius, focusing on the differences among the patterns observed in patch interior, at patch boundaries and within matrix. The probability of crossing an external patch boundary, regardless of the land use in the adjacent area, was considerably lower than that of crossing a ‘control line’ within patch interior. Movement distances, flight durations and net squared displacement were largest in matrix, and similarly smaller at patch boundaries and in patch interior. The distribution of angles between successive movements was clearly clustered around 0° (indicating flight in a straight line) in matrix and at patch boundaries, but not in patch interior. There were no differences in time spent on foraging, resting and ovipositing between patch interior and boundaries, but the first two activities rarely, and oviposition never, happened in matrix. Our results suggest that although P. teleius adults do not avoid using the resources located in the boundaries of habitat patches, they often return to the interior of the patches when crossing their boundaries. However, having entered the matrix the butterflies perform relatively long and straight flights. The estimated probability of emigration and net squared distance implies that dispersal between local populations is common in this species in the studied area.
Introduction
In landscapes altered by human activity many species are forced to live in habitat patches that are spatially isolated from each other (Hanski 1999;Debinski and Holt 2000;Bergman and Landin 2001;Fahrig 2003;Trakhtenbrot et al. 2005). Consequently, dispersal is a key process making it possible for the local populations to be functionally connected into a metapopulation system despite spatial isolation of their habitat patches (Levins 1970;Hanski 1999;Fleishman et al. 2002;Bowne and Bowers 2004).
Many authors studying animal dispersal in metapopulations have focused on the effects of local patch area and isolation (Hanski 1994;Matter 1997;Moilanen and Nieminen 2002). However, such a traditional approach tends to overlook other potentially important factors (Tischendorf and Fahrig 2000;Crone et al. 2001;Schultz and Crone 2001). Among such factors, of particular importance are individual behaviours at patch boundaries (Ovaskainen 2004;Tischendorf et al. 2005) and movement strategy in inhospitable environment separating patches termed matrix (Ricketts 2001;Ross et al. 2005;Kuefler et al. 2010).
Since crossing patch boundary is the first step in emigration, propensity to do so strongly affects the proportion of emigrants (Stamps et al. 1987;Schtickzelle and Baguette 2003). However, boundary crossing may depend on boundary type (Eycott et al. 2012;Schultz et al. 2012). It may be expected that dispersing individuals should easily cross boundaries of low habitat contrast (e.g. those between meadows with and without the foodplant of a focal species), but not hard boundaries between contrasting habitats (e.g. between meadow and forest) (Ries and Debinski 2001;Ross et al. 2005;Haynes and Cronin 2006;Kuefler et al. 2010;Eycott et al. 2012). In turn, movement patterns in matrix determine emigrant chances of reaching other habitat patches (Crone and Schultz 2008;Eycott et al. 2012). For example, these chances are strongly reduced if animals entering matrix move only short distances or tend to return to their natal patch (Conradt et al. 2000;Ries and Debinski 2001;Ross et al. 2005). Conversely, long distances and straight paths of animals moving in matrix may lead to a greater displacement (Kuefler et al. 2010;Schultz et al. 2012) and higher probability of reaching a suitable habitat patch (Schtickzelle and Baguette 2003).
Dispersal may be sex-biased as predicted by several theoretical models (Perrin and Mazalov 2000;Gros et al. 2008) and often confirmed empirically (Bergman and Landin 2002;Nowicki and Vrabec 2011;Schultz et al. 2012). If females are more mobile their dispersal allows the effective colonisation of empty patches (Bergman and Landin 2002) while dispersal restricted to males does not. Male dispersal, although often ignored, may also crucially contribute to gene flow among local populations (Piaggio et al. 2009;Solmsen et al. 2011). However, sex-specific behaviour may change depending on whether the individuals are in the habitat patch interior, at the patch boundaries or in matrix (Schultz et al. 2012). For example, males may be more willing to cross habitat patch boundaries than females but once in matrix, they may move shorter distances than females. This has important consequences for predicting levels of functional connectivity across the landscape and, hence, the persistence of metapopulations, but empirical data are still scarce (Ovaskainen 2004;Schultz et al. 2012). In this paper we describe how movements and other activities (foraging, resting and ovipositing) of the endangered scarce large blue butterfly Phengaris (Maculinea) teleius (Lycaenidae) differ between habitat patch interior, patch boundary and matrix. We tested the following predictions: (1) The probabilities of crossing the patch boundary and emigration depend on boundary type, being higher for low-contrast boundaries than for high-contrast ones. (2) If there are inter-sexual differences in dispersal, the more mobile sex should be characterised by higher probability of crossing habitat patch boundaries as well as longer and more linear movements.
(3) One should expect longer movement distances, longer time spent flying and larger net displacement at patch boundaries and in matrix as compared to patch interior. Independently from the above, angles between successive movements in matrix and at boundaries may be more clustered around 0° (indicating continuous flight in a straight line sensu Turchin 1998) than in patch interior (implying zig-zag flights) (Crist et al. 1992;Kindvall 1999;Roslin 2000;Doncaster et al. 2001).
Study species and area
The scarce large blue P. teleius is one of the most endangered butterflies in Europe (Thomas 1995;Wynhoff 1998;Settele et al. 2005). It has a highly specialized life-style, depending on two crucial resources. Females lay their eggs into the flowerheads of the Great Burnet Sanguisorba officinalis foodplants, where the larvae feed for their first weeks (Thomas 1995). The same plant is also the predominant nectar source for adult butterflies. Having reached their fourth instar, the larvae drop to the ground and are taken by the workers of Myrmica ants to their nest, where they lead a parasitic life, feeding on ant brood (Thomas et al. 1998). The host ants of P. teleius are several species of Myrmica: mostly M. scabrinodis, M. rubra and M. rugulosa (Thomas et al. 1989;Wynhoff et al. 2008;Witek et al. 2010, 2011).
The study was carried out in the vast complex of wet meadows located in the Vistula River valley, west of the Kraków city centre (southern Poland; Fig. 1). The meadows represent various types, however the dominant one is the Molinietalia association with the following typical plant species: Molinia caerulea, Deschampsia caespitosa, Achillea ptarmica, Angelica sylvestris, Carex hartmannii, Cirsium palustre, Galium uliginosum, Lychnis flos-cuculi, Trollius europaeus, S. officinalis. In recent decades a large part of the meadows have been abandoned and an invasion of shrubs, alien goldenrods, reeds and trees has followed. Over 60 patches of S. officinalis were present in these meadows (Nowicki et al. 2007) (Fig. 1). The mean patch size was 2.9 ± 0.7 ha (range: 0.005-33 ha) and the distances between neighbouring patches were usually within the range of 100-300 m (Nowicki et al. 2007). These foodplant patches constitute habitat patches of P. teleius, since the aforementioned host ant species are widespread and abundant in all meadow types in our study area (Witek et al. 2008, 2010).
Study design
In order to assess the effect of various habitat boundaries on the probability of P. teleius emigration from the patch, six types of patch boundaries of different contrast were defined. The boundary types, listed in the order of decreasing contrast, included boundaries: (1) between a foodplant patch and a forest (difference in vegetation height |d| = 1,500 cm; the percentage share of this boundary type in the total length of all patch boundaries in our study system ps = 18 %), (2) between a foodplant patch and a road (|d| = 130 cm; ps = 19 %), (3) between a foodplant patch and an arable field (|d| = 100 cm; ps = 5 %), (4) between a foodplant patch and a mown meadow (|d| = 90 cm; ps = 6 %), (5) between a foodplant patch and reeds (|d| = 30 cm; ps = 19 %), (6) between a foodplant patch and a meadow without this plant (|d| = 10 cm; ps = 29 %). All boundaries but one were resource boundaries sensu Schultz et al. (2012). The boundary between a patch with foodplant and mown meadow could be regarded as a structural boundary (Schultz et al. 2012), because habitats on both sides of the boundary line differed only in height of vegetation (due to intensive mowing two times per year) and both contained the foodplant. All the investigated boundaries were sharp, there was no ''ecotone'' (transition zone) between the butterfly habitats and the matrix and thus it was straightforward to delineate boundary line (the foodplants grew in high densities within patches, but they were not present in the adjacent areas). In each case we selected at least 80 m long straight line section of a boundary.
In each of the boundary types one point, located in the middle of the 80 m section, was selected, where butterflies were released and their behaviour was observed. Butterflies were captured at the patch and marked individually with a number on their right underwing. They were then put into small paper bags and were placed in a cooler box (at 10°C temperature) for 10 min to calm down (for the rationale see Schultz 1998). Subsequently the butterfly was taken out, gently placed on a foodplant, and observed. Butterflies were always released by placing them on a host plant located inside the patch within 1 m from the boundary line.
Because capturing butterflies and keeping them in paper bags could influence their behaviour, we compared the behaviour in a sample of 27 (15 females and 12 males) individuals randomly encountered in the patch interior with a similar number of butterflies captured, kept in a cooler, and released. Because the recorded parameters (i.e. movement distances, turning angles and flight activity; see below) did not differ significantly between the two groups we assumed that the effect of the experimental manipulation was negligible.
We also released butterflies in the centre of the habitat patch and they constituted a control group for the butterflies released at the boundaries as well as in the matrix (see below). In the centre of the habitat patches we established a 'control line' (imaginary boundary) at the mid point of which butterflies were released. Finally, to assess P. teleius behaviour in the matrix we released butterflies outside the habitat patch. The matrix selected was a meadow with flowering plants and with a high density of Myrmica nests but without S. officinalis, which is the most common matrix type in our study area. The release point was located 75 m from the habitat patch boundary. The procedure of behavioural observations in patch interior as well as in matrix was identical as for the butterflies released at habitat patch boundaries.
The observers followed the butterflies, keeping at the distance of about 5 m so as not to disturb butterfly behaviour. Wooden sticks with numbered flags were placed wherever the butterfly stopped. Subsequently, for each butterfly we measured the distances between stopping points as well as the angles between successive movements. We measured up to ten distances per individual to prevent the inclusion of flights that were certainly within-patch movements. We also recorded the time the butterfly spent flying, foraging, resting and ovipositing (in females). In addition, we recorded if a butterfly (1) crossed the boundary and (2) emigrated from the patch. Crossing was recorded whenever the butterfly crossed the boundary line. Emigration was recorded when the butterfly once crossed the boundary line and flew at least 20 m from the boundary or reached another habitat patch.
We conducted detailed behavioural observations for at least 30 individuals (15 females and 15 males) for each habitat patch boundary type, patch interior and matrix. Additionally, to obtain more accurate estimates of the probability of crossing habitat patch boundaries we released ca. 20 more butterflies at each boundary type. The only measurement taken for these additional butterflies was a record of whether the butterfly crossed the boundary, emigrated from the natal patch or returned to it. We never used the same individual butterfly twice during the observations.
It is important to note that each boundary type, patch interior and matrix were selected to be similar in respect to the density of Myrmica ants (overall mean ± SE = 3.5 ± 0.3 nests per 40 m², as assessed at three 20 × 2 m transects for each release point; one-way ANOVA F 7,16 = 0.307, P = 0.940) and the density of S. officinalis flowerheads (overall mean ± SE = 69.4 ± 4.2, as assessed at ten 1 m circular plots for each release point except for the matrix; one-way ANOVA F 6,63 = 1.891, P = 0.096).
The study was carried out between 10th July and 10th August in 2005 and 2006. Observations were conducted between 10 a.m. and 4 p.m. in favourable weather conditions (minimum temperature of 20°C, maximum wind of 3 on the Beaufort Scale, maximum cloud cover of 50 %). In total 313 individuals were examined and 1,533 distances were measured, with additional 218 individuals used to estimate the probabilities of crossing habitat patch boundaries and emigration.
Statistical analysis
A generalized linear mixed model (GLMM) with logit-link function and binomial error variance was applied to test for the differences in the probability of crossing the habitat patch boundary as well as of emigration (non-returns were treated as emigration).
Explanatory factors in the model were patch boundary type including the control line (imaginary boundary) in patch interior, sex, and their interaction. Temperature was used as a covariate, whereas the year of the study was treated as a random factor. The non-parametric Spearman correlation coefficient was calculated to test if the rate of boundary crossing and the probability of emigration were correlated with the contrast of the boundaries (including the imaginary boundary inside the habitat patch). To compare distances covered by butterflies at various boundary types, in patch interior and in matrix we used a GLMM with identity link function. Explanatory factors were release site (all boundary types, patch interior and matrix), sex and their interaction. Temperature was again included as a covariate, while butterfly ID and year of the study constituted random factors. The same GLMM structure was applied to compare flight activity, defined as the proportion of time spent flying, as well as the duration of foraging, resting and egg laying at various release sites. Flight activity was calculated as the time spent by the individual in flight divided by the total time of its observation. The GLMM was also used to test the effects of boundary type, matrix and patch interior and sex on the net squared displacement (NSD) per time unit of individual butterflies. NSD is the squared Euclidean distance from the start of the animal's movement trajectory to its position after n moves (Turchin 1998).
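For clarity, NSD is cheap to compute from the stopping-point coordinates; the track below is invented purely for illustration.

```python
def net_squared_displacement(points):
    """points: list of (x, y) stopping points, starting at the release site."""
    x0, y0 = points[0]
    return [(x - x0) ** 2 + (y - y0) ** 2 for x, y in points]

track = [(0, 0), (3, 1), (7, 2), (12, 4)]       # hypothetical flight path, m
print(net_squared_displacement(track))          # [0, 10, 53, 160]
```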
We used the Watson-Williams test and the Watson U² test (Fisher 1993) to compare, respectively, the means of the turning angles and their distributions among various boundary types, patch interior, and matrix. The turning angle distributions were symmetrical in all the cases and they were expressed in 30° intervals. For instance, the turning angle of 15° is equivalent to the angle of −15° or 345° in terms of circular statistics.
The GLMMs were calculated in SAS 9.1, while the turning angle analysis was done in the Oriana 2.0 software.
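The circular summaries reported below (mean angle and concentration around 0°) can be sketched in a few lines of Python; note that Oriana's concentration coefficient may be defined differently from the mean resultant length used here.

```python
import numpy as np

def circular_summary(angles_deg):
    a = np.deg2rad(np.asarray(angles_deg))
    C, S = np.cos(a).sum(), np.sin(a).sum()
    mean_angle = np.degrees(np.arctan2(S, C))   # mean turning angle, deg
    r = np.hypot(C, S) / len(a)                 # mean resultant length, 0..1
    return mean_angle, r

print(circular_summary([15, -15, 345, 10]))     # clustered near 0 degrees
```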
Probability of boundary crossing and probability of emigration
The probability of crossing the control line within patch interior was significantly higher than in the case of any type of real patch boundary (GLMM F 6,374.3 = 4.321, P < 0.001, n = 462 butterflies; Fig. 2a). On the other hand, we did not find significant differences in the probabilities of crossing for different types of real boundaries (GLMM F 5,320.6 = 1.621, P = 0.154; Fig. 2a, control line inside habitat patch excluded, n = 375). Altogether 94 (25 %) of 375 investigated butterflies crossed the habitat patch boundary, but 53 (56 %) of them later returned to the natal patch. The analysis restricted to the remaining fraction of 41 (11 %) individuals regarded as emigrants also revealed no significant effect of boundary type (GLMM F 5,322.1 = 1.634, P = 0.151; control line inside habitat patch excluded, Fig. 2b, n = 375 butterflies).
Females crossed the external habitat patch boundaries twice as often as males (79 females (35 %) versus 47 males (20 %); GLMM F 1,323.5 = 10.452, P = 0.001; Fig. 2a, n = 375 butterflies) and the probability of emigration (non-return) among individuals crossing the boundary was also twice as high in females as in males (33 females (9 %) vs 16 males (4 %); GLMM F 1,325.8 = 5.815, P = 0.016; Fig. 2b, n = 375 butterflies). The interaction between boundary type and sex as well as the effects of temperature and year proved to be nonsignificant in all the cases.
There was no statistically significant correlation between the probability of boundary crossing and the boundary contrast, both for females (r s = -0.678, P = 0.094, n = 7 boundary types, including the imaginary one in the interior of the habitat patch) and males (r s = -0.643, P = 0.119, n = 7). We also did not find any statistically significant correlation between the probability of emigration and the boundary contrast, both for females (r s = -0.321, P = 0.482, n = 7) and males (r s = -0.486, P = 0.268, n = 7).

Movement distances and flight activity at patch boundaries, in patch interior, and in matrix

Distances covered by butterflies did not differ among boundary types or between the boundaries and patch interior, but they were over three times shorter than movement distances in the matrix (GLMM F 7,1218 = 37.158, P < 0.001, n = 313 butterflies; Fig. 2c). Similarly, the proportion of time spent flying was higher in the matrix, and lower in other locations (GLMM F 7,287 = 3.444, P < 0.001, n = 313 butterflies; Fig. 2d), with no particular differences between patch interior and boundaries or among boundary types (Tukey post hoc tests: P > 0.05 in each case).
Females typically flew longer distances than males (GLMM F1,1218 = 26.191, P < 0.001, Fig. 2c), but the proportion of time spent flying was similar in both sexes (GLMM F1,287 = 1.723, P = 0.201, Fig. 2d). Among all the other effects tested in the models, only butterfly ID significantly influenced movement distance (estimate ± SE: 0.10 ± 0.02, Z = 4.15, P < 0.001), which implies strong heterogeneity in mobility among individuals.
Turning angles at patch boundaries, in patch interior, and in matrix

The mean turning angle between successive movements did not differ among the investigated locations (Watson-Williams F test, F7,1358 = 1.046, P = 0.397). However, the analysis of the angle distributions showed that they were strongly clustered around 0° in butterflies released at patch boundaries (concentration coefficient = 1.845; mean angle ± SE = 4.8° ± 5.5°) and in the matrix (concentration coefficient = 1.223; mean angle ± SE = 0.7° ± 5.5°), but not in those released in the patch interior, for which the distribution was fairly uniform (concentration coefficient = 0.294; mean angle ± SE = 2.2° ± 21.3°; Fig. 3). In addition, female turning angles (concentration coefficient = 1.530; mean angle ± SE = 1.3° ± 5.4°) were less concentrated around 0° than those of males (concentration coefficient = 0.982; mean angle ± SE = 2.0° ± 2.4°; Watson U2 test, U2 = 0.782, df1 = 602, df2 = 681, P < 0.001; Fig. 3), implying that zig-zag movements were performed more frequently by the former sex. The outcomes of the Watson U2 tests applied for comparisons of the turning angle distributions between the investigated locations are given in Table S1 in the Supplementary Material. In general, the distributions of turning angles were similar for the various boundary types, although we found a slightly more peaked distribution in the case of the road boundary. Nevertheless, it must be emphasised that any differences between the road boundary and other boundary types in this respect became nonsignificant when the Bonferroni correction for multiple testing was applied.
The rate of area exploration by butterflies at patch boundaries, in patch interior, and in matrix

The NSD of P. teleius individuals was significantly higher in the matrix than in the patch interior and at the boundaries (GLMM F7,279.4 = 16.193, P < 0.001, n = 313 butterflies; with Tukey post hoc test; Fig. 1e). There was no apparent influence of boundary type on the NSD when tested against the patch interior (Tukey post hoc tests: all P > 0.05). However, the NSD at the reed boundary was higher than at the boundaries with the field and the meadow without the foodplant (Tukey post hoc test: P < 0.05). Overall, the NSD of females was higher than that of males (GLMM F1,279.4 = 10.376, P = 0.001, n = 313; Fig. 1e) at all boundaries, in the patch interior, and in the matrix (nonsignificant interaction term between sex and release site in GLMM: F7,279.2 = 0.696, P = 0.675, n = 313; Fig. 1e). All other factors considered in the analysis played a nonsignificant role.
Behaviour at patch boundaries, in patch interior, and in matrix
Time spent on foraging and resting by P. teleius individuals was significantly shorter in the matrix than in the patch interior and at the boundaries (foraging: GLMM F7,301 = 2.398, P = 0.021, n = 230 butterflies; resting: GLMM F7,533 = 2.602, P = 0.012; n = 291; Fig. 4). There was no apparent influence of boundary type on the duration of the aforementioned activities, except for a shorter duration of resting at the road boundary (Tukey post hoc test: P < 0.05 when tested vs patch interior). Neither sex nor any other factor considered in the analysis played a significant role. Female oviposition time was similar between the patch interior and the boundaries as well as among different boundary types (GLMM F6,87 = 0.108, P = 0.996, n = 56 ovipositing females; Fig. 4). The matrix was excluded from the model as no cases of oviposition were observed there, obviously owing to the lack of foodplants. Interestingly, the oviposition time was positively related to temperature (estimate ± SE: 0.321 ± 0.157, GLMM F1,87 = 4.618, P = 0.035). Concerning random factors, it was also significantly affected by butterfly ID (estimate ± SE: 0.13 ± 0.06, Z = 2.17, P = 0.015), but not by year.
Discussion
In the light of our results, it appears that P. teleius adults use the resources located both in the centre and at the edges of their habitat patches. We recorded no differences in the duration of foraging, resting, and ovipositing between the patch interior and the patch boundaries. However, our study has demonstrated that the external boundaries of habitat patches may constitute a barrier to P. teleius movements, since the probability of crossing such boundaries was significantly lower than that of crossing a control line within the patch interior. Avoidance of boundary crossing may have serious consequences for the functioning of metapopulations. Theoretical metapopulation models assume that emigration is a stochastic process depending on the frequency of animal encounters with their patch boundary, and thus it is a function of the ratio of patch perimeter to area (Hanski 1994; Haddad 1999; Golden and Crist 2000). Our results, as well as those of several earlier studies (Merckx et al. 2003; Schtickzelle and Baguette 2003; Conradt and Roper 2006; Kuefler et al. 2010; Schultz et al. 2012), suggest that behaviour at patch boundaries also plays a role, and consequently the probability of emigration may be lower than predicted purely on the basis of patch geometry.
The proportion of emigrants assessed in the present study, at ca. 10 % of butterflies, is in good agreement with the coarse estimates of the proportions of P. teleius individuals changing habitat patches derived for the same study area on the basis of mark-recapture studies (Nowicki et al. 2005b, 2007). However, the advantage of the present analysis is that it not only documents the pattern but also helps to understand the underlying processes, indicating that the moderate level of emigration stems from the fact that the prevailing majority of butterflies do not cross patch boundaries, and among those that do, more than half return to the natal patch. Moreover, not all emigrants become immigrants elsewhere; in other words, emigration is not tantamount to successfully reaching another habitat patch. Although assessing this aspect of dispersal was beyond the scope of our research, the study by Nowicki and Vrabec (2011), conducted in a Czech region with both habitat configuration and matrix composition very similar to those in our study area, revealed that mortality during dispersal was fairly low: at most 28 % in the year when butterfly numbers peaked above carrying capacity and there was an emigration outbreak, but close to zero in 'normal' years. Dispersal mortality is highly dependent on the geometry of the landscape as well as on the dispersal biology of the species, but several other studies suggested that mortality during dispersal may be low in butterfly metapopulations (Matter 2006; Rabasa et al. 2007; Fric et al. 2010; but see Wahlberg et al. 2002). Moreover, our estimates of the NSD showed that P. teleius may explore a relatively large area in a very short time when moving in the matrix. The low inter-patch distances in the study region, usually between 100 and 300 m, imply that butterflies are able to cover a large area during a few minutes of movement in the matrix. This is also in agreement with our earlier study (Nowicki et al. 2007) documenting a weakly fragmented metapopulation with very high patch occupancy (93-100 %).
Virtually all of the results concerning inter-sexual differences suggest that females are the more mobile sex. They had a considerably higher probability of crossing patch boundaries and a lower probability of subsequent return to the natal patch, which is consistent with the higher female emigration rate reported for P. teleius (Nowicki and Vrabec 2011). We also found that P. teleius females flew significantly longer distances than males, and they explored a larger area per unit time. These are interesting results because the heavier body weight of females may negatively affect, for example, flight activity (Kingsolver and Srygley 2000). However, body weight is also often strongly correlated with flight speed and, thus, with the distance covered (Dudley and Srygley 1994; Dudley 2000). Larger distances covered by females of P. teleius were also reported by Kőrösi et al. (2012). These findings are in contrast with studies reporting higher mobility of males in butterfly populations. For example, Schultz et al. (2012) found that males were more mobile and more willing to cross habitat patch boundaries than females in another lycaenid, Fender's blue (Icaricia icarioides fenderi). The discrepancy can potentially be explained by different resource requirements in these species. The S. officinalis foodplant is also a nectar source for P. teleius (Thomas et al. 1998), but Icaricia icarioides fenderi uses plants other than its lupine (Lupinus spp.) foodplants as nectar sources (Schultz and Crone 2001). Consequently, while females of Icaricia icarioides fenderi stay close to their foodplant patches as oviposition sites, males may be attracted to leave them in search of nectar sources. In turn, our explanation for the higher emigration propensity of P. teleius females is in agreement with the concept of the fitness benefits of distributing reproductive effort over several patches (den Boer 1968; Brown and Ehrlich 1980). All things considered, females probably have higher chances of reaching other habitat patches. Therefore, gene flow among local populations of P. teleius appears more dependent on females. More importantly, since even a single female is able to successfully colonise a vacant habitat patch, female-biased dispersal has positive consequences for the colonisation rate, thus enhancing metapopulation viability.
Our study is one of the few that have examined animal behaviour at various types of habitat patch boundaries (cf. Kuefler et al. 2010; Schultz et al. 2012). Interestingly, all the investigated boundary types turned out to have fairly similar permeability for P. teleius. Some earlier studies showed that crossing high-contrast boundaries, such as those with forests, is particularly avoided by butterflies (Ricketts 2001; Ross et al. 2005; Eycott et al. 2012; but see Kuefler et al. 2010). In our analysis, the boundary with the forest also had the lowest permeability, but the difference in relation to other boundary types was nonsignificant and rather small. Even the boundary between the foodplant patch and the meadow without the foodplant, with little structural contrast between the two habitats, acted as a barrier restricting butterfly movements. The above findings confirm the concept of Schultz et al. (2012) that butterfly movements are shaped primarily by their responses to resource distribution rather than to the physical structure of habitats. Strong site fidelity towards foodplant patches is an especially beneficial strategy in strongly specialist species, such as Phengaris butterflies. In addition, home-range behaviour is also likely to play a role. Hovestadt and Nowicki (2008), who reported it for P. teleius, suggested that keeping close to the place of eclosion is an adaptation to myrmecophily.
To our surprise, we found that mown fragments also restrict butterfly movements. It is generally believed that a mosaic of mown and abandoned fragments within meadows is helpful for the local population persistence of grassland species (Cremene et al. 2005). This is, however, based on the implicit assumption that animals move freely between different parts of their patches. As we demonstrated, this assumption may not necessarily be valid. Even if mown fragments are relatively small in area, and hence have no effect on the overall availability of resources, they may increase the functional fragmentation of local populations, possibly impeding gene flow and reducing effective population size. From the conservation point of view, this is yet another argument for mowing to be done after the flight period, in the case of P. teleius habitats preferably no earlier than mid-September (cf. Grill et al. 2008). Mowing should also be done before the flight period, in the second week of June, every 5-7 years (Wynhoff et al. 2011). Such early mowing is needed when the soil is rich in nutrients or wet (as in our study region), because it is the only way to reduce the coverage of reed and willows effectively (Wynhoff et al. 2011).
Working in a natural landscape is often disadvantageous with respect to experimental design. Our data on the behaviour of P. teleius at different boundaries, in the patch interior, and in the matrix have some limitations which should be taken into account when generalizing to other areas and species. All the butterflies within a boundary treatment were released at one point. Having replicates within boundary types would be desirable. However, a sampling design of that nature was unattainable for practical reasons, since, despite extensive efforts in the field, it proved impossible to find other release points with comparable resource densities and boundary shape within our study area. The behaviour of P. teleius, including its movements, is highly dependent on two critical resources: the larval foodplant S. officinalis and the Myrmica host ants (Maes et al. 2004; Batáry et al. 2007, 2009; Wynhoff et al. 2008; Van Langevelde and Wynhoff 2009). To take into account the strong effect of this highly variable resource availability, several dozen replicates per boundary type would have been needed. This was not feasible owing to logistic constraints, such as, for example, the single and relatively heavy cooler box in which the butterflies were cooled before release. Instead, we therefore decided to control for resource availability by choosing release points with the same foodplant and host ant densities, which were also average for the entire study area. We believe that this approach made it possible to focus on the main objective of the study, namely a comparison of the permeability of different patch boundary types, rather than on all the factors determining flight behaviour at boundaries, which are likely to be dominated by the effects of resource densities already documented in other papers. Adding more release points along one edge line would also have introduced other uncontrolled effects, such as, for instance, a different amount of boundary perceptible to the butterflies at release points near the end of the boundary line. Moreover, from another of our studies (Skórka et al., in preparation) on the dispersal behaviour of P. teleius and other lycaenid butterfly species at road verges, we have learned that road crossing by this species is mostly affected by resource density, and the replicates of the boundary had no impact on the overall results.
Further conservation implications of our study concern the ongoing debate about the applicability of corridors and stepping stones as measures facilitating animal movements (Primack 2002). Although we did not test these two approaches, the results on the behaviour of P. teleius inside habitat patches, at their boundaries, and in the matrix may provide some clues as to how to increase connectivity between local populations. While the effectiveness of corridors is sometimes questioned (Simberloff et al. 1992; Mann and Plummer 1995), in butterflies they have frequently been shown to enhance inter-patch movements (Dirig and Cryan 1991; Sutcliffe and Thomas 1996; Tewksbury et al. 2002). Assuming that entering a corridor does not require crossing a boundary (which is not always the case), corridors may also be expected to increase the number of P. teleius individuals emigrating from their natal patches. This, however, does not seem desirable in our study system. Approximately 10 % emigration, which in fact appears quite typical for Phengaris and other butterflies (see review in Nowicki et al. 2005a), has been proven to be enough to ensure rapid colonisation of vacant patches within reach (Nowicki et al. 2007; Van Langevelde and Wynhoff 2009). Furthermore, it should be noted that the densities of the investigated populations were at an average level in both years of the study (authors' unpublished data). Since Phengaris populations are known to experience strong fluctuations (Nowicki et al. 2009), and positive density dependence of emigration has been reported in this genus (Nowicki and Vrabec 2011), one should expect the proportions of individuals leaving their natal patches recorded in the present study to at least double in years when population density is high.
A great majority of emigrants move between the nearest neighbouring patches, and only a few percent of them undertake genuine dispersal that makes it possible to reach distant patches (Hovestadt et al. 2011). Consequently, conservation efforts should be focused on increasing the distances covered by emigrants. As our analysis demonstrated, movements within patches, both in their interior and at boundaries, are very short, and they are likely to be so within corridors composed of a similar habitat. In contrast, we found that P. teleius covered much longer distances in straight movements and explored a larger area in the matrix, where it also spent less time on resting, foraging, or ovipositing. The findings of other authors also indicate that animals try to cross the matrix relatively quickly (Miyatake et al. 1995; Schultz 1998; Conradt and Roper 2006; Schtickzelle et al. 2007; Schultz et al. 2012). However, moving through the matrix may be associated with a lower supply of resources, higher mortality due to predation, and a low probability of finding another habitat patch (Rankin and Burchsted 1992; Zollner and Lima 1999; Schtickzelle and Baguette 2003; Stamps et al. 2005; Schtickzelle et al. 2007). Therefore, in order to facilitate movements of P. teleius through the matrix, corridors of low quality may be a useful solution. These should be elongated landscape structures (road verges, forest edges) which can channel the dispersal of butterflies between habitat patches. P. teleius is known to utilise very small habitat fragments of a few tens of square metres (Nowicki et al. 2007), so if the host plant were present in the corridors, it could actually lead to reproduction there and possibly lower dispersal. If the primary aim is to increase species mobility across a landscape, the corridors should not provide enough host plants, which is in line with the results of the experiments published by Haddad and Tewksbury (2005). Nevertheless, they should contain other flowering plants so that butterflies can replenish their energy resources during dispersal. Alternatively, stepping stones, which are much smaller landscape elements than corridors, may also be a useful solution. In specific terms, stepping stones for P. teleius should be located 50-100 m apart, possibly with higher numbers near habitat patches, so that butterflies that fly out of the patches are encouraged to continue their movements rather than return to the patches. Stepping stones also have an important practical advantage over corridors: they are much easier to create in agricultural landscapes, typically characterised by diverse ownership and mosaic land use. Setting up a corridor (as defined above), or managing an existing one, would require agreements with all the owners of the land it is going to cross, while in the case of stepping stones there is some flexibility in choosing their locations, which among other things gives freedom to negotiate only with the landowners that are willing to cooperate. For example, in many agricultural landscapes, stepping stones for this butterfly would be easy to create within the framework of agri-environmental schemes by establishing field margins. This is a clear advantage in a region with very diverse landownership like our study area.
"year": 2013,
"sha1": "5d9fadcdcb2a53bc6a171a70a23fa9bd821aa5f5",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10980-013-9855-3.pdf",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "d1111b0b2ce3a173476d83c3c685ce5fe1232e17",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology"
]
} |
229387725 | pes2o/s2orc | v3-fos-license | Implementation of Passive Radiative Cooling Technology in Buildings: A Review
Abstract: Radiative cooling (RC) is attracting increasing interest from building engineers and architects. Using the sky as a heat sink, a radiative cooling material can be passively cooled by emitting heat towards it. As a result of developments in material technology, RC research has been revived, with the aim of increasing the materials' cooling power as well as finding reliable ways to utilize it for cooling buildings. This review identifies some issues in the current implementation of RC technologies in buildings from an architectural point of view. Besides the technical performance of the RC technologies, some architectural aspects, such as integration with architectural features, aesthetic requirements, as well as fully passive implementations of RC, also need to be considered for building application. In addition, performance evaluation of a building-integrated RC system should begin to account for its benefits to the occupants' health and comfort alongside the technical performance. In conclusion, this review of RC implementation in buildings provides a meaningful discussion of the direction of the research.
Introduction
Global warming forces buildings to consume more energy for cooling. Buildings in urban areas also experience the so-called urban heat island (UHI) effect, which increases the cooling demand even further in warm and hot climate regions [1]. Some locations that used to maintain thermal comfort via passive cooling now also need mechanical assistance. The International Energy Agency (IEA) reported that energy consumption for cooling has tripled during the last three decades [2]. The reliance of energy generation on fossil fuels makes the cooling demand somewhat paradoxical, i.e., by cooling down our buildings, we make the earth even warmer [3]. Thus, passive cooling in buildings plays a crucial role for the environment [4].
Various passive cooling techniques have been developed by researchers and engineers. The general mechanism of passive cooling is dissipating heat from buildings to an environmental heat sink [5]. The most commonly utilized mechanisms are convection and evaporation. These two heat transfer mechanisms mainly use the ambient air or the ground as heat sinks. Nowadays, there is an emerging field of study in which thermal radiation is used as a means of cooling, with the sky as the heat sink [6]. This mechanism is not new in nature; e.g., plants experience it in the form of dew and frost formation on their leaves [7]. In buildings, radiative cooling (RC) can be applied to building envelopes, especially those that have the highest sky view factor (SVF).
Despite their potential, RC techniques have not been widely used in buildings. The challenges of RC application are well defined by researchers. The two most frequently mentioned challenges are technical and cost problems. Technical problems are related to low cooling power, the sophisticated material technology needed to produce the radiator, complicated systems in implementation, durability, and maintenance issues [8]. Cost problems consist of high production cost and high installation cost [9,10].
There is also a problem regarding geographical constraints. Generally, an RC panel is highly dependent on climate and geographical conditions. Factors such as sky condition, wind speed, atmospheric particles, etc. strongly affect the performance of RC panels [5,7,11-13]. RC performs badly in humid conditions. Moreover, there is a mismatch between cooling demand and supply: the highest cooling power of an RC panel occurs at night, while building occupants require cooling in the daytime [5,14]. From an architectural point of view, the design options are even more constrained. RC panels should be placed on the side that is fully exposed to the sky, which already limits the design options for the building roof or other smaller sky-facing building components [15]. Moreover, the roof-to-floor area ratio is significantly low for multi-story buildings. This again constrains the design options for architects [9]. Furthermore, structural considerations could also limit the design options [15].
In short, implementing passive RC in buildings does not yet appear to satisfy both utility and architectural demands. RC power is not enough for larger buildings. Moreover, the technology needs to be more building-integrated [5,7,16]. Many proposals for the implementation of RC in buildings are still in the research and development stage, and they mainly concern the emitter material or the use of RC to assist active cooling technology [17]. These challenges remain open research topics in the RC field.
Although there are several reviews on RC technologies, and some of their conclusions are cited above, to the authors' best knowledge only two reviews have focused on the applications of RC in buildings. The first paper, by Lu et al. [7], elaborates on the cooling power of RC materials, which back in 2016 seemed to be the main barrier to the adoption of RC in buildings. A recent review on a similar topic by Chen et al. [18] in 2020 updated the state of RC material exploration, which has improved since 2016 but still lacks a real building application, not to mention a large-scale one. Chen et al. [18] suggested that the potential for real RC application in buildings might lie in combining RC with a heating, ventilation, and air conditioning (HVAC) system. Moreover, the two reviews also mentioned the need for studies on the economic aspects of many prototype RC systems.
However, one stakeholder that is also important for the application of RC in buildings, i.e., the architect, was excluded from those reviews. Architects should be taken into consideration as they implement RC concepts in their designs [19]. Thus, this review fills the gap by analyzing the current development of RC technology in buildings from an architectural point of view and proposes some possible research directions for passive RC application in buildings. This review does not only include works that directly implement RC in buildings, but also looks at some relevant papers on the technological development of RC. The review is arranged in six sections, with the main content, besides the introduction and conclusions, describing RC principles, state-of-the-art applications in buildings, the architectural features involved in current applications, and an outlook for the architectural application of passive RC.
Radiative Cooling Principles
All solid surfaces radiate heat in the form of electromagnetic radiation, whose power is proportional to temperature and emissivity and is distributed across the frequency spectrum. Thus, the term total emissive power is differentiated from spectral emissive power, where the former refers to energy emitted over the entire spectrum, while the latter refers to energy emitted at a specific wavelength interval. The spectral emissive power of an ideal blackbody is governed by Planck's law as stated by Equation (1).

E_{B\lambda}(T, \lambda) = \frac{2\pi h c_0^2}{n^2 \lambda^5 \left[ \exp\left( \frac{h c_0}{n \lambda k T} \right) - 1 \right]}    (1)

E_{B\lambda}(T, \lambda) is the spectral emissive power of a blackbody at a certain temperature T for a particular wavelength λ, h is Planck's constant (6.626 × 10^-34 J·s), c_0 is the speed of light in vacuum (2.998 × 10^8 m/s), n is the refractive index of the medium (1 for vacuum), and k is Boltzmann's constant (1.3807 × 10^-23 J·K^-1). Figure 1 plots the distribution of blackbody emissive power for some temperatures against the wavelength. It indicates that emissions from different blackbody temperatures peak at different wavelengths. The maximum wavelength λ_max, the wavelength at which a blackbody at a certain temperature T emits the maximum power of radiation, can be calculated from Equation (2), Wien's displacement law, which is obtained from Equation (1).

\lambda_{max} = \frac{2898\ \mu m \cdot K}{T}    (2)

When applied to terrestrial bodies with a temperature around 300 K, Equation (2) gives a λ_max of 9.6 µm [20].
Figure 1. Distribution of spectral emissive power of a blackbody for different temperatures, replotted from SpectralCal [21].
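As a quick numerical check of Equations (1) and (2), the following Python sketch evaluates the blackbody spectral emissive power and the peak wavelength for a terrestrial body; the constants are those listed above, and the vacuum case n = 1 is assumed.

```python
import math

H = 6.626e-34    # Planck's constant, J*s
C0 = 2.998e8     # speed of light in vacuum, m/s
K = 1.3807e-23   # Boltzmann's constant, J/K

def spectral_emissive_power(T, lam, n=1.0):
    """Equation (1): blackbody spectral emissive power, W/m^2 per metre."""
    num = 2.0 * math.pi * H * C0 ** 2
    den = n ** 2 * lam ** 5 * (math.exp(H * C0 / (n * lam * K * T)) - 1.0)
    return num / den

def lambda_max_um(T):
    """Equation (2), Wien's displacement law: peak wavelength in micrometres."""
    return 2898.0 / T

print(lambda_max_um(300.0))                     # ~9.66 um for a ~300 K body
print(spectral_emissive_power(300.0, 9.66e-6))  # spectral power near the peak
```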
The earth's thermal radiation, which peaks at 9.6 µm, is radiated to outer space through the atmosphere. Fortunately, the earth's atmosphere is relatively transparent to thermal radiation at 8-13 µm, the band in which the λ_max of terrestrial radiation falls. This band is called the atmospheric window. The atmospheric window makes possible the cooling of the earth's surface via radiation in the direction of the sky. Figure 2 superimposes the thermal radiation of a terrestrial body at 300 K on the value of atmospheric transmittance.

Passive RC techniques utilize this phenomenon. A typical RC panel places the emitter inside a fully insulated frame and protects it from convective heat loss with a transparent cover, usually a polyethylene film, as illustrated by Figure 3. The cooling power (Q_net) of a radiative surface is defined as the net heat transfer from the surface. A surface exposed to the sky absorbs radiation from the sun (Q_sun) and the atmosphere (Q_atm) and emits its own radiation (Q_rad). In addition to the radiative heat emitted or received by the surface, it also gains and loses heat through conduction and convection (Q_cond+conv). Thus, the energy balance equation of a radiative surface can be written as Equation (3).
Q_{net} = Q_{rad}(T) - Q_{atm}(T_{amb}) - Q_{sun} \pm Q_{cond+conv}    (3)

Detailed equations for each component of the above equation are summarized by Raman et al. [12]. Q_rad is the emissive power of an RC surface with area A at temperature T for the wavelength λ, as shown in Equation (4). The absorbed heat due to incident atmospheric thermal radiation (Q_atm) at ambient temperature T_amb is given by Equation (5).

Q_{rad}(T) = A \int d\Omega \cos\theta \int_{0}^{\infty} d\lambda\, E_{B\lambda}(T, \lambda)\, \epsilon(\lambda, \theta)    (4)

Q_{atm}(T_{amb}) = A \int d\Omega \cos\theta \int_{0}^{\infty} d\lambda\, E_{B\lambda}(T_{amb}, \lambda)\, \epsilon(\lambda, \theta)\, \epsilon_{atm}(\lambda, \theta)    (5)

In Equations (4) and (5), A represents the area of the emitter, and ∫dΩ denotes the angular integral over a hemisphere as shown in Equation (6). Moreover, E_{Bλ}(T, λ) represents the spectral emissive power of a blackbody at the emitter's temperature T for the wavelength λ as formulated by Equation (1), whereas ε(λ, θ) is the angle-dependent emissivity of the emitter. For Equation (5), there is an additional emissivity, ε_atm(λ, θ), which is the angle-dependent emissivity of the atmosphere.

\int d\Omega = 2\pi \int_{0}^{\pi/2} d\theta \sin\theta    (6)

Further, the absorbed solar radiation is formulated using Kirchhoff's radiation law as shown by Equation (7), where ε(λ, θ_sun) is the emissivity of the emitter at the angle of the sun's position (θ_sun), and I_{AM1.5} is the solar intensity using the AM 1.5 spectrum.

Q_{sun} = A \int_{0}^{\infty} d\lambda\, \epsilon(\lambda, \theta_{sun})\, I_{AM1.5}(\lambda)    (7)

Conduction and convection heat transfer Q_{cond+conv}(T, T_amb) between the absorber/emitter and the surroundings is also considered in Equation (8), where h_c is the heat transfer coefficient, T_amb is the ambient temperature, and T_emitter is the surface temperature of the RC emitter.

Q_{cond+conv}(T, T_{amb}) = A h_c (T_{amb} - T_{emitter})    (8)
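To illustrate how Equations (3)-(8) combine, the following sketch estimates the nocturnal net cooling power of an idealized selective emitter (emissivity 1 inside the 8-13 µm window, 0 outside, so Q_sun = 0 at night). The window-averaged atmospheric emissivity and the heat transfer coefficient are assumed values for a clear sky and a convection-shielded panel, not measured data.

```python
import math

H, C0, K = 6.626e-34, 2.998e8, 1.3807e-23

def planck(T, lam):
    """Blackbody spectral emissive power per Equation (1), n = 1, W/m^2 per metre."""
    return (2 * math.pi * H * C0 ** 2) / (lam ** 5 * (math.exp(H * C0 / (lam * K * T)) - 1))

def band_power(T, lam1, lam2, steps=2000):
    """Trapezoidal integration of Planck's law over [lam1, lam2], W/m^2."""
    dl = (lam2 - lam1) / steps
    vals = [planck(T, lam1 + i * dl) for i in range(steps + 1)]
    return dl * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

def net_cooling_power(T_emitter, T_amb, eps_atm_window=0.3, h_c=3.0):
    """Simplified Equation (3) per unit area for an ideal selective emitter at night."""
    q_rad = band_power(T_emitter, 8e-6, 13e-6)                # Equation (4), window only
    q_atm = eps_atm_window * band_power(T_amb, 8e-6, 13e-6)   # Equation (5), simplified
    q_cond_conv = h_c * (T_amb - T_emitter)                   # Equation (8), parasitic gain
    return q_rad - q_atm - q_cond_conv

# An emitter 10 K below ambient still shows a positive net cooling power.
print(net_cooling_power(T_emitter=290.0, T_amb=300.0))
```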
From the mathematical model of a typical RC emitter shown above, the factors for a successful RC emitter can be derived. Firstly, the RC emitter needs to be properly insulated from conduction and protected from unwanted convective loss. Secondly, the emitter must have high emissive power in the atmospheric window band. This also means the convection cover needs to be transparent in the same band to transmit the thermal radiation from the emitter. Thirdly, the emitter needs to reflect as much of the incident solar radiation as possible to work in the daytime. Another important factor is the atmospheric or sky condition, i.e., a humid atmosphere limits the transparency of the atmospheric window. In other words, a clear sky is more beneficial to the RC emitter than an overcast sky.
Research on the Application of Radiative Cooling in Buildings
Attempts to utilize RC in buildings can be traced back to the 1970s, when Bartoli et al. [23] and Harrison and Walton [24] conducted experiments using two similar RC emitter designs with different materials, namely TEDLAR (a polyvinyl fluoride film) and TiO2 white paint as the emitter, respectively. In the same period, Givoni [25] proposed another design of a passive RC system that can provide both heating and cooling for buildings. The field started to attract more attention from researchers during the 1990s. During that decade, besides explorations that focused on the cooling power of emitter materials [26-30], some proposed RC systems involved a working fluid to extract the cooling and other elements such as thermal storage [31-33] and desiccants [34] to improve the system's performance. Since then, the number of explorations of the application of RC in buildings has grown significantly.
There are different classifications of the RC technologies applied in buildings. Erell and Santamouris [15] classified RC technologies into two categories based on how the RC system is utilized, namely movable insulation and heat exchangers. In an RC system with movable insulation, the emitter, which is actually a solar thermal collector, is protected from solar radiation in the daytime and exposed to the sky at night by moving the insulation off the emitter [31,35]. In contrast, in an RC system with a heat exchanger, a working fluid, either water or air, is used as a medium to "carry the coldness" of the emitter to the building interior [16,36]. On the other hand, Zeyghami et al. [16] classified RC technologies into two categories based on working time, namely nocturnal and diurnal. Nocturnal cooling comprises two general designs, i.e., a gray emitter, which emits over the whole wavelength range, and a selective emitter, which is designed to have higher or lower emissivity at certain wavelengths. Diurnal RC, meanwhile, is preferably equipped with a selective emitter material and assisted by a cover shield.
The classifications of RC technology performed by researchers mark a historical, or rather sequential, development of RC technology. In this review, we classify RC technologies based on the type of improvement carried out by researchers in order to obtain technology applicable to buildings. The first category is "material improvement", which includes studies that focus on new materials or enhance the current materials for emitters. The second category is "design improvement", i.e., researchers tried to modify the panel configuration, design, or supporting elements to improve the emitter performance. The last category is "combination with other technologies", which includes applications of RC "to assist" or "with the assistance of" other technologies. Table 1 summarizes the classification of improvement strategies and the reported performance.
Material Improvement
The development of nanomaterial technology has helped to increase the cooling power of RC materials. Material improvement involves two parts, namely the emitter material and the convection cover material. Detailed reviews of the emitter materials have been carried out by Zhao et al. [14] and Family and Menguc [106]. In their work, Zhao et al. [14] categorize the material technologies for passive RC into four categories, i.e., natural emitters, film-based emitters, nanoparticle-based emitters, and photonic emitters. Examples of these different emitters are shown in Figure 4. In this review, only material examinations that are relevant to the application of RC in buildings are included.
There are at least three goals in the RC emitter material field of study, namely improving cooling power in the daytime [12], improving performance in humid conditions [44], and making a cost-efficient material [25,58]. Efforts on daytime RC were conducted using different approaches by material researchers, i.e., film-based emitters [8,41-44], nanoparticle-based emitters [46,47], and photonic emitters [45,48-50]. The engineered material must reflect most of the solar and atmospheric radiation and at the same time be able to produce thermal radiation in the specific atmospheric window band. A film-based emitter could produce all-day cooling of between 2-9 °C for buildings on a typical sunny day in northern US latitudes [41]. A photonic emitter recorded a 110 W/m2 cooling power under direct sunlight [45]. Further, the nanoparticle-based emitters in the works of Liu et al. [46] and Kim and Lenert [47] also reached sub-ambient temperatures in the daytime, at 25.5 °C and 35 °C below ambient, respectively.
In the subtropical climate, the daytime RC emitter in the work of Jeong et al. [49] gives a remarkable result, namely 7.2 °C below ambient in the daytime. However, in tropical climate regions with higher humidity, daytime RC is hardly achieved [39,40,44]. The best results in experimenting with RC materials for a humid climate came from an enhanced specular reflector (ESR) material by 3M [109] that could reach sub-ambient temperature on a very humid and cloudy night [39,40].

Despite many scientists pursuing higher RC power both in the daytime and in humid conditions, only a few have focused on the affordability of the materials. For instance, Givoni [25] and Erell and Etzion [58] proposed cheaper RC emitter options, but their examination resulted in a very low cooling power of RC compared to the other materials. Current high-performance RC materials are still expensive to produce and have limited durability [8].
In terms of convection cover materials, the spectral properties and durability are key issues. Benlattar et al. [53,54] were among the first to modify the spectral properties of the convection cover. Using a chemical solution deposition method, they created a cadmium sulfide (CdS) thin film that is transparent to infrared radiation in the 8-13 µm band. They estimated a temperature reduction of 65 K between the uncovered nocturnal emitter and the covered one [54]. In another study, Naghshine and Saboonchi [52] compared different thin-film multilayer structures for the RC convection cover. Among the 30 possible multilayer structures from a combination of 16 thin-film materials, structures that involved cubic ZnS in their layers were better at protecting the RC emitter from parasitic heat loss during the day and night. Their schematic thin-film multilayer structure is shown in Figure 5. Moreover, investigations into durable alternatives for the convection cover were conducted by Bathgate and Bosi [51]. They found that zinc sulfide (ZnS) was the most promising material for the RC emitter cover, as shown in Figure 6.
Design Improvement

Besides enhancements in RC materials, improvements in the design of RC systems have also been proposed. Most researchers focused on two main aspects of RC system design, namely the emitter's insulation and the emitter's contact with the working fluid. The emitter's insulation is one of the crucial elements in the roof-integrated RC systems designed by Dimoudi and Androutsopoulos [56] and Khedari et al. [62]. Craig et al. [110] went further by suggesting that improving the RC emitter's insulation on the roof not only increases its performance, but that, by modifying the configuration of the roof's insulation, a conventional roof material could even act as an RC emitter. Figure 7 shows the roof-integrated RC system and how the roof insulation was structured [56].

Regarding the insulation from convection between the emitter material and the cover shield, some researchers proposed the use of a vacuum to minimize the parasitic thermal load on the emitter [60,63]. Chen et al. [63] were the first to experiment with a vacuum-enhanced RC emitter. Their design was intended to achieve daytime RC and succeeded in reaching a maximum of 42 °C below ambient under intense solar radiation. Tso et al. [60], however, did not achieve a daytime RC effect but could deliver nocturnal RC in the more humid climate of Hong Kong. Their design, shown in Figure 8, could provide a cooling power of 38 W/m2 at night.

A different approach was used in the cover design by Falt et al. [61,64]. They proposed a triple-glazing skylight filled with a gas featuring high infrared absorptivity. The gas blocks the infrared part of solar radiation and thus reduces heat gain to the building's interior, while at night it releases heat via radiation to the sky. The novelty of their design lies in the middle pane, which can tilt, allowing the formation of a gap between the glass and the skylight's edge. This gap, in turn, enables the gas to move between the upper and lower parts of the skylight; thus, when the upper gas is cooled by nocturnal radiation, it is replaced by the warmer lower gas. See Figure 9 for an illustration of the design.
Moreover, to utilize cooling from an RC emitter in the building, the most feasible way is by using a working fluid, in either a water-based or an air-based system. Furthermore, the water-based system is divided into an open system and a closed system. In the water-based open system, the cold storage water directly contacts the RC emitter, without any circulation of a working fluid. In the closed system, however, circulated water is used as a working fluid to deliver coldness either to storage or to a heat exchanger. The conceptual drawing of the typical water-based and air-based RC systems is shown in Figure 10. The advantages and disadvantages of the water- and air-based systems are already summarized by Lu et al. [7] and Zhang et al. [81]. The lower installation cost and the simplicity of the system are among the advantages of the air-based system, while the water-based system is better in terms of cooling performance because water has a higher heat capacity than air. It is important to note that the effectiveness of the heat transfer between the emitter and the fluid has also been the focus of investigations for both water- and air-based RC [11,57-59]. The investigations prescribed the optimum mass flow rate of the fluid to obtain the maximum cooling effect, as Hosseinzadeh and Taherian [57] indicated that the mass flow rate of the fluid is critical in achieving the best cooling performance of an RC emitter.
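The sensitivity to the mass flow rate can be illustrated with a simple steady-state energy balance, Q = ṁ·cp·(Tin − Tout). The sketch below is illustrative only, with assumed panel cooling power and flow rates; a real design would also need the emitter-fluid heat-transfer effectiveness discussed above.

```python
def outlet_temperature(t_in_c, q_cool_w, m_dot_kg_s, cp=4186.0):
    """Fluid outlet temperature across an RC panel from Q = m_dot * cp * dT.

    cp defaults to water; q_cool_w is the total heat extracted by the panel.
    """
    return t_in_c - q_cool_w / (m_dot_kg_s * cp)

# A 2 m^2 panel at an assumed 50 W/m^2: lower flow gives a larger temperature drop.
for m_dot in (0.005, 0.02, 0.08):   # assumed flow rates, kg/s
    print(m_dot, round(outlet_temperature(28.0, 100.0, m_dot), 2))
```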
Other, smaller design considerations were also studied, such as the aesthetic appearance of the emitter. The appearance of the RC emitter is obviously interesting for architects and might accelerate the implementation of RC in building design. Lee et al. [66] and Son et al. [111] employed different techniques to create colored emitters. By adding a photonic nanolayer in the order of metal-insulator-metal (MIM) below the emitter, Lee et al. [66] could decorate their RC emitter. The MIM layers consisted of Ag-SiO2-Ag, and a variation of the colors was achieved by varying the thickness of the SiO2 layer. On the other hand, Son et al. [111] coated the emitter with silica-embedded perovskite to color it. Figure 11 displays the colored RC emitter by Son et al. [111]. Both colored RC emitters could achieve sub-ambient temperatures during the daytime.
Combination with Other Technologies
Application of RC in buildings often appears in combinations with other cooling technologies. These combinations can be categorized as active systems and passive systems. Many of the water-based RC systems are active systems, i.e., assisted by a pump to circulate the water. The passive water-based system is found where a thermosyphon mechanism drives the flow of the fluid. Air-based systems, on the other hand, are more often found as passive systems. Some strategies involve a fan as the active part of an air-based RC to maintain the airflow to the RC panel. A detailed comparison of the precedent RC combinations is shown in Table 2.
Table 2. Precedents of RC combinations in buildings, listing researcher, architectural feature, means of implementation, and combination.
Active System
Specifically, the solar collector [25,34,77,90], photovoltaics (PV) [17,76,77,85], air conditioning (AC) [81,83,86,95,96], and cold water storage [33,89,99,102] are among the frequently studied combinations for the active systems. Givoni [25] was among the first to utilize a solar collector panel as an RC panel. The strategy is to use the absorber of the solar collector during the day and as an emitter at night. This so-called dual-functional collector was further developed using more advanced techniques and materials [17,88,90]. A spectrally selective coating on the solar thermal absorber was used, as well as a low-density polyethylene (LDPE) film as the cover, replacing the glass cover in the conventional solar thermal collector. The latest results by Hu et al. [90] produce 55.1 W/m2 cooling power at night. Their design is illustrated in Figure 12.
Combining PV with RC was initially visualized in the design of Harbeman House by Saitoh and Fujino [77], a so-called sustainable house proposal that attempted to integrate various sustainable technologies in the house. A more persistent study on the possible application of the PV-RC combination was conducted by Zhao et al. [70,75,92,115]. Using photonic material, they developed several strategies in PV-RC ranging from nocturnal to diurnal cooling. In terms of building energy consumption, PV-RC can be more beneficial because the combined electricity and cooling energy resulting from the system is more than the output energy from PV alone [74,75]. Both RC combinations with solar thermal and PV can be divided into two types when installed on the roof, i.e., similar orientation or opposite orientation. With similar orientation, the researchers placed the RC panel on the same side of the roof as the solar collector or PV, normally the sun-facing side [17,70,88,90]. In contrast, the opposite orientation used the opposite side of the roof to reduce solar heat gain to the emitter [75,77,85].
Furthermore, RC is also commonly used to assist HVAC systems. Usually, the emitter is used to provide chilled water for the cooling coil of AC, enabling the system to be more energy efficient [95,96]. The design by Jeong et al. [95], for instance, used two types of cooling coils, conventional cooling coils and RC-supplied cooling coils; thus, RC acted as a supplementary cooling supplier. The system was claimed to be able to reduce cooling energy consumption by 35%. Another variant of the RC-HVAC system came from Zhang et al. [81], who added cold water storage to stock cooling energy from RC. Figure 13 displays the schematic diagram of an RC-assisted HVAC system.
Passive System
In terms of the passive system, RC has been combined with more diverse techniques such as wall-mounted RC [68,98,103], phase change material (PCM) [100,101], thermal mass [30,31], and the Trombe wall [67,113]. Oliveti et al. [103] attempted to include thermal radiation from the wall to the sky in the overall heat exchange model of a wall. Yong et al. [68] went further than developing a mathematical model by proposing an RC system mounted on the wall. Their system is a dual-functional solar collector that can provide heating in winter and cooling in summer, as shown in Figure 14. However, the system is an active system, involving pumps to circulate water to be stored in the cold and hot water storage. The fully passive implementations of a wall-mounted RC were performed by Shen et al. [100] and He et al. [101]. Their designs are quite similar in principle, using the thermosyphon method to extract cooling from the wall and storing the heat in a PCM (see Figure 15). In terms of cooling performance, the wall-mounted dual-functional heating-cooling emitter was predicted to be able to reduce building energy consumption by 47.9% [68].
Whereas the aforementioned researchers used PCM to regulate the heat gain and loss in the wall or building enclosure, some researchers have attempted to use insulation and thermal mass to regulate heat transfer from an RC emitter to the building. Etzion and Erell [31] mentioned at least two functions of thermal mass or other types of thermal storage strategies when combined with nocturnal RC for a building. Firstly, thermal mass can absorb the excessive heat received by the RC emitter during the daytime. Secondly, it maintains the cooling rate of the RC emitter at a desired rate; thus, heat does not dissipate rapidly from the building, and the RC emitter becomes steady. Thus, Etzion and Erell [31] examined the best location for placing thermal mass. They found that thermal mass should be placed on the roof or, in more general terms, should be closely coupled with the radiative emitter [31].
Furthermore, Liu et al. [112] also developed a temperature-regulating module (TRM) for solar heating and RC. The TRM consists of polyethylene film as the convection cover, a porous RC material, an aluminum sheet, and a solar absorber (Figure 16). The layer order is reversed for heating mode. The TRM maintained a maximum indoor temperature of 27.5 °C on the hottest days of summer and 25 °C for some hours on winter days. The heating and cooling provided by the TRM correspond to a 42.4% saving in the electricity bill.
Architectural Features of Current Radiative Cooling Systems
As the previous section summarizes, RC for buildings has been prototyped in very diverse ways. Design alternatives are even more numerous when RC is combined with other cooling technologies. Nevertheless, analysis of the RC systems from an architectural point of view should be conducted before it is widely accepted by the architectural community as one of the promising passive design strategies for sustainable buildings [19]. One way of doing so is by analyzing the precedents of architectural features involved in the proposals of passive applications of RC. As compiled in Table 2, some building components or architectural features that have been involved in passive RC systems are revealed. Theoretically, the roof is the best location to place an RC emitter compared to other building envelopes. However, architects might want more flexibility in their design, and a few researchers have applied RC in the wall and façade. These studies, although very few in number, offer alternatives in architectural implementation.
Roof
Installing the RC emitter on the roof is the simplest and most promising way. Besides its highest sky view factor compared to the wall or other building components, the roof is also a common place for building service installations. Available roof-integrated passive RC systems consist of both air- and water-based systems. The roof water-based RC is an open system which is quite similar to a roof pond design [94,116] (Figure 17). Disadvantages of the roof water-based RC are more or less the same as those of the roof pond, such as difficulty in waterproofing the roof, additional load on the roof structure, and maintenance of the cleanliness of the water. It can also only be installed on top of flat roofs and affects the accessibility of the roof for other uses [117].
Moreover, the roof air-based system offers more techniques. The most straightforward use of an RC emitter was first proposed by Etzion [31], where the RC emitter is attached to a concrete roof slab. With this design, the cooling effect of the RC emitter is absorbed by the thermal mass of the roof slab and in turn transmitted to the room. Another air-based roof system uses an air channel to extract the cooling from the RC emitter [25,98]. By using an air channel attached to an RC emitter, cooling is provided by means of cool airflow from the air channel instead of convection of the interior air with the building envelope. This is arguably better for the distribution of the cool and fresh air. Figure 18 shows one example of how the air channel was used to extract cooling from the RC emitter [98]. In that design, the air was used for heat removal in the attic, although it can be further explored for the room's heat removal as well.
Wall
The second most appropriate architectural feature to place the RC emitter on is the wall. The wall has the advantage of providing a large surface when compared to the roof or other building envelopes. As with the roof RC, the wall RC also appears in two systems, air-based and water-based. It is worth noting that most of the existing proposals on passive RC systems mounted on the wall are dual-functional (heating-cooling) modules. For instance, the air-based wall RC system is a combination with the Trombe wall, developed by Sameti and Kasaeian [67], and consists of a glass cover and a thermal mass located directly behind the glass. The thermal mass functions to collect the sunlight entering the façade during heating mode and to dissipate the heat to the night sky during cooling time. The glass cover is open during the heating days to protect it from solar radiation and closed during the heating nights to prevent radiative heat loss. The reverse is applied for cooling days. Nevertheless, it is important to note that an RC-Trombe wall system has some features that can affect its performance, such as external glazing material, vent geometry and position, thermal storage, and Trombe wall area [118,119].
For the water-based RC wall, the system is accompanied by PCM to store the coldness and uses thermosyphon phenomena to extract it from the emitter, as described in Section 3 (see Figure 15) [101]. A similar design to that of He et al. [101] was also tested by Shen et al. [100]. Compared to a testing room with a brick wall, the cooling load in the RC-PCM wall room was 42% and 25% lower under ideal and moderate conditions, respectively. Furthermore, there are factors that affect the performance of an RC-PCM wall system, i.e., the parasitic heat loss due to outdoor wind. Their system was not equipped with a convection cover; thus, the effect of wind speed was significant. The implementation of the RC-PCM wall in the testing room can be seen in Figure 19.
Glazing Material
Openings on the building envelope are the source of solar fenestration into the building's interior. Various glazing materials have been developed to reduce their transmissivity in the solar and infrared bands. Furthermore, researchers intended to also maximize thermal radiation of the glazing material in the atmospheric window band. With this strategy, the glazing materials not only reduce heat gain but also produce cooling for the building. Two prototypes, namely a transparent film and a coating to be added on top of glazing materials, have recently been developed [105,114,120]. Currently, the transparent RC materials are only studied for skylight application, as shown in Figure 20. Future development of transparent RC film or coating might be evaluated for window application.
Paints
Recently, paint was proposed as a means to act as a scalable RC emitter. It is well known by researchers of passive RC that the currently available technologies are not yet scalable and feasible for building use. Considering paint as a mature technology that is almost always used in buildings (whether for the roof, walls, or other parts of a building), for Mandal et al. [121], an RC paint might be the answer to the problem of scalability of RC technologies. For them, material technologies have the capability to develop a scalable and effective RC paint. The current development of cool roof coatings is an example of the success of material technologies in enhancing paint performance. However, they also highlighted some general challenges for the development of RC paint besides the technical difficulties. The challenges are the assessment of the geographical conditions in which RC paint benefits the most, as well as the examination of the effect of pollution, dirt, and dust on the durability and performance of the paint.
Furthermore, RC paint can also be seen from the perspective of the aesthetic appearance of RC surfaces. Currently, research on this aspect is scarce. The study conducted by Lee et al. [66] is one example of an attempt to address the aesthetic appearance of an RC emitter (see Figure 21). Research on RC paint might promote progress on aesthetic studies of RC surfaces.
Outlook for Architectural Application of Passive Radiative Cooling
The previous sections have discussed the development of RC technologies and how they have been applied to reduce the cooling energy of buildings. From an architectural perspective, some issues regarding the implementation of passive RC in buildings arise from the discussions.
• In the range of the current RC power and the challenge to overcome the mismatch in cooling supply and demand, studies on the application of RC in buildings can search for an efficient RC system or effective storage mechanism. Additionally, exploration of the combination of RC with other passive or active cooling techniques should be continued and even extended, because in this way, the disadvantages of RC technologies can be compensated by the advantages of other cooling techniques. Moreover, in terms of exploration of potential RC combinations, there might be other passive design strategies in architecture, outside the cooling techniques, that have not yet been taken into consideration by RC researchers, such as natural ventilation and daylighting strategies. Therefore, a review of the types of passive design strategies in architecture that are suitable for combination with RC is still outstanding.
• Regarding architectural aspects, there are many considerations neglected by current RC studies. The roof may have an advantage in regard to the sky view factor, but other building elements, such as the wall, may offer advantages in surface area as well as design flexibility. Additional façade elements on the wall, such as a shading device, secondary skin, cladding, and window, are potential locations for the RC emitter. In addition, the aesthetic aspect is also important. Thus, research on transparent and colored RC materials or even RC paints would encourage more flexibility in the architectural application.
• Following the notions on architectural aspects, another important point arises, that is, the lack of research on the integration of RC systems in building design. Observations of the implementation of the RC system in real buildings should be introduced. The design process of such an observation and the observation itself might reveal some influential details that have not yet been considered.
• Most, if not all, of the investigations of RC in buildings have focused on reducing cooling energy. Besides this, the benefit of the RC system, if working ideally, may lead to healthy and comfortable buildings. This area of study, namely the contribution of passive RC in creating thermal comfort for building occupants as well as its further effect on health (and productivity in the working space), will eventually arise.
• Lastly, two general factors should not be forgotten, namely, the durability of the radiative material and the cost of the material. Since many studies are still at the lab scale, these factors have not been calculated by many researchers. Nevertheless, these two aspects can be determinant in terms of real application. Architects and building owners usually prefer to directly know the cost of installation, saving potential, and payback period of the implemented RC systems. A full life-cycle analysis of the system can also be an object of study by researchers in the field.
Conclusions
The RC research field was revived by the development of new materials. Many high-performance RC materials have resulted from these experimentations. The present challenge in this field is to provide scalable and durable RC materials. Besides these two purposes, research on colored and transparent RC materials could also widen the application of RC in buildings. Likewise, pursuing RC paints might be an alternative way to create scalable and colorful emitters, and thus could attract more attention from the architectural community. Furthermore, the available RC materials have been implemented in various RC module designs, and their utilization to reduce the cooling energy demand of buildings has also been investigated. Such efforts can continue to be pursued with emphasis on the combination of RC with other passive design strategies. The combination is not limited only to other passive cooling techniques but could also be carried out with natural ventilation, heat storage, daylighting, etc. In addition to this, the designs of building-integrated RC should begin to look at building components other than the roof as the place for installation. Only a handful of building features have been involved in the current explorations, such as walls and skylights.
Another research direction for the application of RC in buildings is the evaluation of RC performance in terms of the occupants' health and comfort. The two indicators could be supplementary to the current performance evaluation, i.e., cooling power or energy saving. This is especially relevant when RC is combined with other passive design strategies, which may require multi-perspective performance evaluation. At a later stage, a life-cycle analysis of a building-integrated RC system could also be included. Nevertheless, the efforts to apply RC in buildings need to be more integrated into the architectural design. One way of achieving this is by implementation of the currently available RC materials or panels in a real building, which can be an existing building or a newly constructed building. This type of case study using a real building would necessitate design integration and could uncover some unanticipated aspects of building-integrated RC. Moreover, the uncovered aspects can be further examined in future studies in the field.
Funding: This review is supported by a PhD studentship funded by the Indonesia Endowment Fund for Education (Lembaga Pengelola Dana Pendidikan), Ministry of Finance, Republic of Indonesia, reference number S-2401/LPDP.4/2019, and H2020 Marie Skłodowska-Curie Actions-Individual Fellowships (842096).
Conflicts of Interest:
The authors declare no conflict of interest.
Figure 3. Illustration of typical radiative cooling (RC) panel design and heat transfer on the panel.
Figure 4. Different mechanisms used in four categories of passive RC materials: (a) natural emitter, a mechanism found in Saharan silver ants [107] (reprinted from Solar Energy Materials and Solar Cells, 206, Jeong et al., Daytime passive radiative cooling by ultra-emissive bio-inspired polymeric surface, 110296, Copyright (2020), with permission from Elsevier); (b) film-based emitter design by Fan et al. [44] (reprinted from Applied Thermal Engineering, 165, Fan et al., Yttria-stabilized zirconia coating for passive daytime radiative cooling in humid environment, 114585, Copyright (2020), with permission from Elsevier); (c) nanoparticle-based emitter design by Huang and Ruan [108] (reprinted from International Journal of Heat and Mass Transfer, 104, Huang and Ruan, Nanoparticle embedded double-layer coating for daytime radiative cooling, 890-896, Copyright (2017), with permission from Elsevier); (d) photonic emitter design by Gao et al. [50] (reprinted from Solar Energy Materials and Solar Cells, 200, Gao et al., Approach to fabricating high-performance cooler with near-ideal emissive spectrum for above-ambient air temperature radiative cooling, 110013, Copyright (2019), with permission from Elsevier).
Figure 6. Typical RC panel design with insulation and convection cover [51] (reprinted from Solar Energy Materials and Solar Cells, 95/10, Bathgate and Bosi, A robust convection cover material for selective radiative cooling applications, 2778-2785, Copyright (2011), with permission from Elsevier).
Figure 8. Design of vacuum chamber as insulation and part of the convective cover for the RC emitter [60] (reprinted from Renewable Energy, 106, Tso et al., A field investigation of passive radiative cooling under Hong Kong's climate, 52-61, Copyright (2017), with permission from Elsevier).
Figure 9. Design of a triple-glazing skylight that can operate as an RC emitter by incorporating a high-absorptivity gas. RC occurs for the upper gas, while the lower gas obtains heat from the building's interior. The middle glass then switches the cool and warm air in the upper and lower part of the glazing, enabling the cycle to continue [61] (reprinted from Building and Environment, 126, Falt et al., Modified predator-prey algorithm approach to designing a cooling or insulating skylight, 331-338, Copyright (2017), with permission from Elsevier).
Figure 10. Configuration of different ways to utilize cooling from the RC emitter: (a) water-based open system; (b) water-based closed system; (c) air-based system [81] (reprinted from Applied Energy, 224, Zhang et al., Energy saving and economic analysis of a new hybrid radiative cooling system for single-family houses in the USA, 271-281, Copyright (2018), with permission from Elsevier).
Figure 12. Cross-section schematic drawing of the photo-thermal and RC (PTRC) design by Hu et al. [90] (reprinted from Renewable Energy, 139, Hu et al., Experimental study on a hybrid photo-thermal and radiative cooling collector using black acrylic paint as the panel coating, 1217-1226, Copyright (2019), with permission from Elsevier).
Figure 13. Schematic of a typical RC-assisted HVAC system [81] (reprinted from Applied Energy, 224, Zhang et al., Energy saving and economic analysis of a new hybrid radiative cooling system for single-family houses in the USA, 271-281, Copyright (2018), with permission from Elsevier).
Figure 14. Structure of a solar absorber that serves two functions, i.e., heating and cooling, as designed by Yong et al. [68]: (a) design without an air gap for insulation purposes; (b) design with an air gap for an area where the nighttime ambient temperature is low enough to be used for cooling, nocturnal cooling thus being provided by both the sky and the surroundings; (c) schematic diagram showing the mechanism by which the dual-functional system works (red line for heating, blue line for cooling) (reprinted from Renewable Energy, 74, Yong et al., Performance analysis on a building-integrated solar heating and cooling panel, 627-632, Copyright (2015), with permission from Elsevier).
Figure 15. Dual-functional RC-PCM wall design by He et al. [101]. During the daytime, the absorbed heat is stored by the PCM and is later released at night via RC; thus, the temperature of the room can be kept comfortable, and the PCM can "recharge" (reproduced from Energy and Buildings, 199, He et al., Experimental study on the performance of a novel RC-PCM-wall, 297-310, Copyright (2019), with permission from Elsevier).
Figure 16. Temperature-regulating module by Liu et al. [112]: (a) cooling mode; (b) heating mode; (c) when applied on the roof (reproduced from Energy Conversion and Management, 205, Liu et al., Research on the performance of radiative cooling and solar heating coupling module to direct control indoor temperature, 112395, Copyright (2020), with permission from Elsevier).
Figure 18. Air-based RC system on the roof using an air channel to utilize the cooling [98]: (a) illustration of how the system works to remove heat from the attic; (b) schematic of the air channel and the RC panel (reproduced from Energy and Buildings, 203, D. Zhao et al., Roof-integrated radiative air-cooling system to achieve cooler attic for building energy saving, 109453, Copyright (2019), with permission from Elsevier).
Figure 19. Drawing of the experimental room for an RC-PCM wall by Shen et al. [100]. The RC-PCM wall was mounted on the south wall. The measurement data from this room were compared with a conventional brick wall room (reprinted from Applied Thermal Engineering, 176, Shen et al., Investigation on the thermal performance of the novel phase change materials wall with radiative cooling, 115479, Copyright (2020), with permission from Elsevier).
Figure 20. A transparent RC emitter used on a skylight to provide daylighting as well as passive cooling for buildings [114] (reproduced from Solar Energy Materials and Solar Cells, 213, Ziming et al., Low-cost radiative cooling blade coating with ultrahigh visible light transmittance and emission within an "atmospheric window", 110563, Copyright (2020), with permission from Elsevier).
Author Contributions: Conceptualization, S. and M.H.; writing-original draft preparation, S.; writing-review and editing, S., M.H., Y.S., J.D. and S.R.; visualization, S. All authors have read and agreed to the published version of the manuscript.
Table 1. Improvement strategies for RC technology application in buildings.
"year": 2020,
"sha1": "109140aba3fa605521f5e6b7f54f32ff2291976f",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2075-5309/10/12/215/pdf?version=1608519719",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "13193e031aecd654df1c06dc122675ddf9add41f",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
of methods to calculate congestion cost allocation in deregulated
With the maturing deregulated environment of the electricity market, the cost of transmission congestion has become a major issue for power system operation. Uniform Marginal Price and Locational Marginal Price (LMP) are the two practical pricing schemes for energy pricing and congestion cost allocation, and they are based on different mechanisms. In this paper, these two pricing schemes are introduced in detail. Also, the modified IEEE-14-bus system is used as a test system to calculate the allocated congestion cost under these two pricing schemes.
Introduction
Since 1989, many countries have followed the trend of unbundling their vertically integrated power utilities into several components in order to bring competition to the energy supply industry [1]. However, transmission congestion has added complication to the operation of the system. With the deregulation process, congestion management becomes more complex, since transmission network access has to be open to all market participants and each participant should take responsibility for their congestion contribution [2]. Congestion could cause cheaper power not to be delivered to the most desired load and the congestion relief cost to increase [3]. It is a challenge for the system operator to draw up a set of rules which must be robust, fair, and transparent to the market while maintaining the efficiency and reliability of the network [4]. Congestion cost allocation is based on two pricing schemes: uniform marginal price and locational marginal price [3]. The major difference between them is that the uniform marginal price allocates cost uniformly to all loads without considering their locations and power flow contributions, while the locational marginal price does [5]. This paper reviews the above two pricing schemes, covering their mechanisms, pricing calculation, and pros and cons. Then an IEEE-14-bus system is used as a test system, simulated with the software Matpower, to test the two methods on congestion cost allocation.
Mechanism
The old England & Wales Pool was one of the pioneers of electricity industry deregulation in the world [2]. Uniform Marginal Price was implemented in this market as the pricing scheme and congestion management method. Here the National Grid Company has two roles: transmission asset owner (TO) and Independent System Operator (ISO) [5,6]. The ISO adopts the principle referred to as "re-dispatch first, compensate later" to manage transmission congestion, which means it is a two-stage operation [3]. In the unconstrained dispatch stage, generation companies send their generation bidding quantity and price for the following day to the ISO, who has already forecasted the power demand for each half-hour period [1]. Then the ISO starts to accept bids from the cheapest price to higher prices until the forecasted demand is satisfied. Then, the ISO sorts out a bid list which contains the generation companies who have been chosen to generate electricity.
Those generation companies are called "in merit" generation companies, and those who have not been accepted are called "out of merit" generation companies [2]. If there is no congestion violation, the unconstrained dispatch will be executed [5]. When transmission congestion occurs, it comes to the security-constrained stage, and the ISO will re-dispatch the generation list while ensuring that the re-dispatch cost is the minimum. An inequality constraint is added, and the security-constrained re-dispatch is decided by the new algorithm. The congestion relief cost is the generation cost in the security-constrained dispatch minus the generation cost in the unconstrained dispatch. The congestion cost is allocated in equal proportion to each load, while generators are not charged for transmission congestion [7].
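A minimal sketch of the unconstrained merit-order stage described above, written in Python with hypothetical bids (the real Pool also handled half-hourly scheduling and unit constraints, which are omitted here):

```python
# Unconstrained merit-order dispatch: accept bids from the cheapest price
# upward until the forecast demand is met. All bid data are hypothetical.
bids = [("G1", 200.0, 18.0),   # (generator, capacity MW, bid price $/MWh)
        ("G2", 150.0, 25.0),
        ("G3", 100.0, 32.0)]
demand = 300.0                 # forecast demand for the period, MW

dispatch, remaining, smp = {}, demand, 0.0
for name, capacity, price in sorted(bids, key=lambda b: b[2]):
    if remaining <= 0:
        break                  # remaining generators are "out of merit"
    accepted = min(capacity, remaining)
    dispatch[name] = accepted  # "in merit" generator
    remaining -= accepted
    smp = price                # bid of the last (marginal) generator sets SMP

print(dispatch)        # {'G1': 200.0, 'G2': 100.0}
print("SMP =", smp)    # 25.0 $/MWh
```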
Pricing Calculation
Congestion management is implemented through power energy prices and transmission usage charges [8]. In the unconstrained dispatch, the bid price of the last dispatched generator becomes the system marginal price (SMP) [2]. If there is no congestion violation, the ISO will execute the unconstrained dispatch and market participants will be paid and charged at the SMP. Once transmission congestion occurs, the ISO will implement the re-dispatch and set a group of prices for generators and loads as follows [2]:

$$PPP = SMP + LOLP \times (VOLL - SMP)$$
$$PSP = PPP + Uplift$$

where $PPP$ is the pool purchase price, $PSP$ is the pool selling price, $VOLL$ is the value of lost load, and $LOLP$ is the probability that the electric power capacity is unable to support the actual demand [2]. After the security-constrained dispatch, the ISO pays generators at the PPP and charges loads at the PSP. Neglecting power losses and ancillary services, the Uplift can be regarded as the cost of congestion relief, i.e., the total generation cost of the security-constrained dispatch minus that of the unconstrained dispatch [2]:

$$Uplift = C_{constrained} - C_{unconstrained}$$

The congestion cost is then assigned to all loads in proportion to their demand, so the congestion cost assigned to load $i$ is [5]:

$$CC_i = \frac{P_{Di}}{\sum_j P_{Dj}} \times Uplift$$
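Continuing the hypothetical bids from the merit-order sketch above: suppose a line limit forces 30 MW of G2's accepted output to be replaced by G3. The dispatch cost rises from 200 × 18 + 100 × 25 = 6100 $/h (unconstrained) to 200 × 18 + 70 × 25 + 30 × 32 = 6310 $/h (constrained), giving an Uplift of 210 $/h. Allocated in proportion to demand, each load then pays 210/300 = 0.7 $/MWh on top of the SMP, regardless of where it sits in the network.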
Pros and Cons
Uniform marginal pricing was a good innovation for managing congestion after industry deregulation. However, electricity prices under this scheme barely reflect the congestion cost, since the ISO ignores loads' locations and power flow contributions. Generators are not charged for congestion, so correct signals cannot be passed to new market participants or guide transmission investment [7].
Mechanism
Locational Marginal Price (LMP) is the primary pricing scheme in the US electricity markets for congestion cost allocation. The LMP is defined as the marginal cost of supplying the next increment of 1 megawatt-hour of power at a specific bus [9]. If there were no transmission congestion and no losses, the LMP of each node would be the same; in reality, however, transmission congestion and losses inevitably exist. When congestion happens, the LMPs of different nodes become distinct due to the variability of supply cost and available transmission capacity [10]. At a node, generators are paid at their bid prices, and loads are charged based on the LMP, which is determined by the system operator (SO) [5].
There is a possible trend that the LMP will become the dominant congestion management approach, since it has been adopted by many electricity markets in the US [11]. Taking PJM as an example, the LMP is utilized to calculate charges and payments in power delivery, including the spot market price and the congestion cost [12]. There are two major markets in PJM: a day-ahead market and a real-time balancing market. Prices in both markets are calculated based on the concept of the LMP [13]. The LMP calculation is based on an optimization problem that maximizes the total social welfare function subject to the balance equality, which is equivalent to minimizing an economic objective function subject to the equality and inequality constraints of transmission network operation [10]. The SO uses optimal power flow (OPF) to calculate the dispatch of each generator [14]. The OPF model has two types: DCOPF and ACOPF. Because of its non-linear equations, ACOPF requires a very long time to solve for large-scale power system data. Compared with ACOPF, DCOPF is a much simpler and more convenient approach, since it is linear and considers only active power flow while neglecting voltage, reactive power, and transmission losses. As a result, DCOPF is often used for generator dispatch and LMP calculation [15].
Pricing Calculation
It is known that the sum of all power injected into all nodes is equal to the sum of all power withdrawn from all nodes plus the transmission losses, which can be written as [2]:

Σ_i P_Gi = Σ_i P_Li + P_loss.   (6)

Forming the Lagrangian of the OPF problem with the balance constraint (6) and the line-flow limits, the equation of the LMP of node i can be obtained as follows [2]:

LMP_i = λ + λ × (∂P_loss/∂P_i) + Σ_l µ_l × T_{l-i},   (7)

where λ is the shadow price of the system energy balance, µ_l is the shadow price of the flow limit on line l, and T_{l-i} is the sensitivity factor for real power at node i with respect to the line l constraint. From Equation (7), the LMP of a node i can be divided into three components as follows [11]:

LMP_i = LMP_energy + LMP_loss,i + LMP_congestion,i.   (8)
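To make the congestion component concrete, the following sketch solves a lossless DCOPF on a hypothetical two-bus network with SciPy and recovers the LMPs numerically as the marginal cost of serving one extra MW at each bus. All network data are invented, and this is a toy illustration rather than Matpower's implementation.

```python
# Minimal 2-bus DCOPF sketch: LMPs are computed by perturbing the load at
# each bus by 1 MW and re-solving, so LMP_i = d(total cost)/d(load_i).
from scipy.optimize import linprog

COST = [20.0, 35.0]     # $/MWh bids of the generators at bus 1 and bus 2
G_MAX = [200.0, 200.0]  # generator capacity limits (MW)
LINE_LIMIT = 100.0      # flow limit on the single line 1-2 (MW)

def dispatch_cost(load):  # load = [d1, d2]
    # min c.g  s.t.  g1 + g2 = d1 + d2 (balance), |g1 - d1| <= LINE_LIMIT (flow)
    A_eq, b_eq = [[1.0, 1.0]], [sum(load)]
    A_ub = [[1.0, 0.0],   # g1 - d1 <= LINE_LIMIT
            [-1.0, 0.0]]  # d1 - g1 <= LINE_LIMIT
    b_ub = [LINE_LIMIT + load[0], LINE_LIMIT - load[0]]
    res = linprog(COST, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, G_MAX[0]), (0, G_MAX[1])], method="highs")
    return res.fun

base_load = [0.0, 150.0]  # 150 MW load at bus 2, none at bus 1
base_cost = dispatch_cost(base_load)
for i in range(2):
    bumped = list(base_load)
    bumped[i] += 1.0      # one extra MW withdrawn at bus i
    print(f"LMP at bus {i + 1}: {dispatch_cost(bumped) - base_cost:.2f} $/MWh")
```

With the line congested, the sketch prints 20 $/MWh at bus 1 and 35 $/MWh at bus 2: the congestion component alone separates the nodal prices, since the toy model is lossless.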
Pros and Cons
By utilizing the LMP approach, economic signals are revealed and reflected to market participants. The influence of transmission congestion and losses is reflected in the LMP variation across nodes, so the electricity market is transparent. From a longer-term view, LMP gives incentives for generation and transmission investments.
Nevertheless, LMP cannot be regarded as a perfect approach. Because the generation bids submitted to the SO are bid-based rather than cost-based, generation companies still have a chance to engage in gaming behavior [16]. Under transmission congestion circumstances, even though LMP can be effective, the congestion revenue collected by the SO can cause inefficiency in the economic operation of the electricity market [14]. Power system networks are always huge, so the design work of LMP is significantly complex and requires a large degree of coordination [2].
Case Study
A modified IEEE-14-bus system has been built in the software package Matpower.
[Figure: diagram of the modified IEEE-14-bus model.]
Using Matpower, the locational marginal prices of each load in the unconstrained dispatch and in the security-constrained dispatch are obtained.
[Tables: generation and load data; branch flows of the unconstrained and security-constrained dispatches.]
Taking the fourth column of Table IV and the fifth column of Table VI, a comparison of the congestion cost allocated to each load under the two methods is obtained; taking the third column of Table IV and the sixth column of Table VI, a comparison of the congestion cost allocation per MW under the two methods is also obtained.
[Figure: comparison of congestion cost allocation and congestion cost allocation per MW.]
Conclusion
From this comparison, it is indicated that locational marginal pricing considers the load's location and power flow contribution, so the allocated congestion cost for L2 is larger than that of the other loads, since the transmission congestion occurred on branch 1-2. It is also observed that congestion cost allocation under uniform marginal pricing is based on one non-discriminating price: it does not reflect each load's contribution to the transmission congestion, which means every load in the market shares the congestion cost uniformly. Locational marginal pricing provides economic signals that tell market participants where the congestion occurred, and a participant who contributes more to the congestion is required to pay for the congestion relief at a higher price. | 2018-12-07T08:47:22.179Z | 2016-10-20T00:00:00.000 | {
"year": 2016,
"sha1": "d8cb90095e9d28b448bc494bb74d4a7d5cf9ebe7",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=71320",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "c8aa8c2a22263e03f1aa015ef687db173a667d87",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Economics"
]
} |
18395548 | pes2o/s2orc | v3-fos-license | Mobile Data Offloading through A Third-Party WiFi Access Point: An Operator's Perspective
WiFi offloading is regarded as one of the most promising techniques to deal with the explosive data increase in cellular networks due to its high data transmission rate and low requirement on devices. In this paper, we investigate the mobile data offloading problem through a third-party WiFi access point (AP) for a cellular mobile system. From the cellular operator's perspective, by assuming a usage-based charging model, we formulate the problem as a utility maximization problem. In particular, we consider three scenarios: (i) successive interference cancellation (SIC) available at both the base station (BS) and the AP; (ii) SIC available at neither the BS nor the AP; (iii) SIC available at only the BS. For (i), we show that the utility maximization problem can be solved by considering its relaxation problem, and we prove that our proposed data offloading scheme is near-optimal when the number of users is large. For (ii), we prove that with high probability the optimal solution is One-One-Association, i.e., one user connects to the BS and one user connects to the AP. For (iii), we show that with high probability there is at most one user connecting to the AP, and all the other users connect to the BS. By comparing these three scenarios, we prove that SIC decoders help the cellular operator maximize its utility. To relieve the computational burden of the BS, we propose a threshold-based distributed data offloading scheme. We show that the proposed distributed scheme performs well if the threshold is properly chosen.
I. INTRODUCTION
The rapid development of mobile phones and mobile internet services in recent years has generated a lot of data usage over the cellular network [1]. The unprecedented explosion of mobile data traffic has led to overloaded cellular networks. For example, in metro areas and during peak hours, most 3G networks are overloaded [2]. Mobile users in overloaded areas will have to experience degraded cellular services, such as low data transmission rate and low quality phone calls.
A straightforward approach to address the above problem is to upgrade the cellular network to the more advanced 4G network. Another approach is to deploy more base stations (BSs) with smaller cell sizes, such as femtocells [3], [4]. However, these approaches incur an increase in infrastructure cost. A more cost-effective approach is to offload some of the mobile traffic to WiFi networks, which is often referred to as WiFi offloading. It has a few advantages: (i) No user equipment upgrade is required, because most mobile data services are created by smartphones, which already have built-in WiFi modules. (ii) No licensed spectrum is required: WiFi devices operate in the unlicensed and world-unified 2.4GHz and 5GHz bands. (iii) High data rates: IEEE 802.11n WiFi can deliver data rates as high as 600Mbps and IEEE 802.11ac can deliver up to 6.933Gbps [5], which is much faster than 3G. (iv) Low infrastructure cost: WiFi routers are much cheaper than cellular BSs.
For the aforementioned reasons, WiFi offloading has become a hot research topic and has attracted the attention of many researchers all over the world [6]-[20]. The feasibility of augmenting 3G using WiFi was investigated in [6]. The performance of 3G mobile data offloading through WiFi networks for metropolitan areas was studied in [7]. The number of APs needed for WiFi offloading in a large metropolitan area was studied in [8]. Different approaches to implement WiFi offloading and to improve its performance were investigated in [9]-[14]. The load-balancing and user-association problems for offloading in heterogeneous networks with cellular networks and small cells are investigated in [15]-[18]. In [15], the authors investigated the outage probability and ergodic rate when a flexible cell association scheme is adopted. In [16], the authors developed a general and tractable model for data offloading in heterogeneous networks with different tiers of APs. In [17], the authors investigated the downlink user association problem for load balancing in heterogeneous cellular networks. In [18], the authors investigated data offloading schemes for load-coupled networks and showed that the optimal loading is tractable when proportional fairness is considered. Recent works [19]-[22] investigated the network economics of data offloading through WiFi APs using game theory [23]. Different from the above work, in this paper we consider the scenario in which a third-party WiFi AP provides data offloading service with a usage-based charging policy. We investigate the data offloading problem through such a third-party WiFi AP for a cellular mobile communication system. From a business perspective, the cellular operator aims to maximize its revenue; thus, we investigate the data offloading problem from an economic point of view. We formulate the problem as a utility maximization problem and derive the corresponding data offloading schemes for the cellular operator. In particular, we consider three scenarios, namely, SIC available at both the BS and the AP, SIC available at only the BS, and SIC available at neither the BS nor the AP. We study the different utility functions and propose different data offloading schemes.
The main contribution and results of this paper are summarized as follows.
• SIC available at both the BS and the AP: The utility maximization problem for this case is solved by considering its relaxation problem. We show that the relaxation problem is a convex optimization problem. By using the convex optimization techniques, we prove that there is at most one user with fractional indicator function. A data offloading scheme is then obtained by rounding the fractional indicator function to its nearest integer. It is strictly proved that the proposed data offloading scheme is near-optimal when the number of users is large. • SIC available at neither the BS nor the AP: For this case, we rigorously prove that when the number of users is large, the optimal solution is One-One-Association, i.e., the user with the best user-to-BS channel connects to the BS and that with the best user-to-AP channel connects to the WiFi AP. • SIC available at only the BS: For this case, we show that when the number of users is large, there is at most one user connecting to the WiFi AP, and all the other users connect to the BS. A polynomial-time algorithm is developed to find the optimal offloading scheme. • SIC is beneficial for the cellular operator: We rigorously prove that SIC decoders are beneficial for the cellular operator in terms of maximizing its utility. • Distributed data offloading scheme: We propose a threshold-based distributed data offloading scheme for the case when SIC decoders are available at both the BS and the AP. We prove that the proposed distributed scheme can achieve the same performance as the centralized data offloading scheme once the threshold is properly chosen. The rest of this paper is organized as follows: In Section II, we describe the system model and the problem formulation. In Section III, we present the results obtained for the case when SIC decoders are available at both the BS and the WiFi AP. In Section IV, we present the results obtained for the case when SIC decoders are not available at both the BS and the WiFi AP, and the results for the case when the SIC decoder is available at the BS side are given in Section V. Then, in Section VI, we show that SIC decoders are beneficial for the cellular operator. We also present a high-efficiency distributed data offloading scheme for the case when SIC decoders are available at both the BS and the WiFi AP. Simulation results are given in Section VII. Section VIII concludes the paper.
II. SYSTEM MODEL

In this paper, as shown in Fig. 1, we consider a cellular network with N users served by a base station (BS). We assume that there is a third-party WiFi access point (AP) within the coverage area of the BS. The WiFi AP and the BS use orthogonal frequencies; thus, there is no inter-network interference between the WiFi and cellular networks. To maximize the network throughput and improve the overall network performance, the cellular operator may direct some of its users to be served by the WiFi AP. Since the WiFi AP belongs to a third-party operator, data offloading through the AP is not free: the cellular operator has to pay the AP operator an incentive while guaranteeing an optimized utility.
In this paper, we focus on the uplink scenario. We assume that all the users adopt fixed-power transmission, i.e., P_i for user i; for the convenience of analysis, we assume that P_i = P, ∀i. We also assume the users are uniformly distributed in the coverage area. The channel power gain between user i and the BS is denoted by g_{i,B}, and that between user i and the WiFi AP is denoted by g_{i,A}. Unless otherwise specified, we assume that the g_{i,B}'s and g_{i,A}'s are strictly positive, mutually independent, and have continuous probability density functions (pdf). The powers of the additive Gaussian noise at the BS and the AP are denoted by σ_B² and σ_A², respectively. We also assume that all the channel state information (CSI) and the users' transmit powers are known at the BS. Now, we define x_i ∈ {0, 1} and y_i ∈ {0, 1} as two indicator functions for user i's connection to the BS and the AP, respectively. If user i connects to the BS, x_i = 1; otherwise, x_i = 0. Similarly, if user i connects to the AP, y_i = 1; otherwise, y_i = 0. Besides, at any time, user i is only allowed to connect to either the BS or the AP, but not to both of them simultaneously, i.e., x_i + y_i ≤ 1, ∀i.
In this paper, we assume that the cellular operator charges its users at λ per nat of data usage, and it pays the third-party WiFi operator at µ per nat of data usage over the AP. For convenience, throughout the paper, we use the natural logarithm; hence, the data is measured in nats rather than in bits. Then, the utility function of the operator is defined as

U(x, y) = λR_B(x) + (λ − µ)R_A(y),   (1)

where R_B(x) is the sum-rate at the BS and R_A(y) is the sum-rate at the WiFi AP. The exact form of the sum-rate depends on whether a SIC decoder is available. As implied by the name, in a receiver with a SIC decoder, users' signals are extracted from the composite received signal successively, rather than in parallel. The SIC decoder is able to remove the interference of the most recently decoded user from the current composite received signal by subtracting it out. According to [24], if a SIC decoder is available at the BS, the sum-rate at the BS can be written as

R_B^w(x) = ln(1 + Σ_{i=1}^N x_i g_{i,B} P / σ_B²),

while, without a SIC decoder, each user's signal is decoded by treating the other users' signals as noise, so that

R_B^o(x) = Σ_{i=1}^N x_i ln(1 + g_{i,B} P / (σ_B² + Σ_{j≠i} x_j g_{j,B} P)).

The sum-rates R_A^w(y) and R_A^o(y) at the AP are defined analogously with g_{i,A}, σ_A², and y in place of g_{i,B}, σ_B², and x. In the rest of the paper, we study the optimal data offloading schemes for the above four cases.
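As a rough illustration of Equation (1), the sketch below evaluates the operator's utility for a given association under the with-SIC and without-SIC sum-rates written above; all channel gains, prices, and noise powers are made-up values, and the naive association rule is only there to produce a concrete x.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 6
P, sigma2_B, sigma2_A = 1.0, 1.0, 1.0  # transmit power and noise powers
lam, mu = 1.0, 0.3                     # operator price and AP price per nat
g_B = rng.exponential(1.0, N)          # user-to-BS channel power gains
g_A = rng.exponential(1.0, N)          # user-to-AP channel power gains

def sum_rate_sic(x, g, sigma2):
    # With a SIC decoder: ln(1 + sum_i x_i g_i P / sigma^2).
    return np.log(1.0 + np.dot(x, g) * P / sigma2)

def sum_rate_no_sic(x, g, sigma2):
    # Without SIC: each user's signal is decoded treating the others as noise.
    total = np.dot(x, g) * P
    return np.sum(x * np.log(1.0 + x * g * P / (sigma2 + total - x * g * P)))

def utility(x, rate_B, rate_A):
    y = 1 - x  # every user connects to either the BS or the AP
    return lam * rate_B(x, g_B, sigma2_B) + (lam - mu) * rate_A(y, g_A, sigma2_A)

x = (g_B >= g_A).astype(float)  # naive best-channel association, for illustration
print("Utility, SIC at both sides:    ", utility(x, sum_rate_sic, sum_rate_sic))
print("Utility, no SIC at either side:", utility(x, sum_rate_no_sic, sum_rate_no_sic))
```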
III. WITH SIC DECODERS AT BOTH SIDES
In this section, we investigate the case where both the BS and the WiFi AP are equipped with a SIC decoder. The utility maximization problem of the cellular operator can then be formulated as

Problem 3.1:

max_{x,y} λ ln(1 + Σ_{i=1}^N x_i S_{i,B}) + (λ − µ) ln(1 + Σ_{i=1}^N y_i S_{i,A})   (6)
s.t. x_i ∈ {0, 1}, ∀i,   (7)
y_i ∈ {0, 1}, ∀i,   (8)
x_i + y_i ≤ 1, ∀i,   (9)

where S_{i,B} ≜ g_{i,B}P/σ_B² and S_{i,A} ≜ g_{i,A}P/σ_A². It is observed from this problem formulation that the third-party operator's pricing strategy µ has a great influence on the optimal solution of the above problem. When µ is larger than λ, the cellular operator will not assign any user to the AP. This is rigorously proved by the following proposition.

Proposition 3.1: When λ ≤ µ, the optimal solution of Problem 3.1 is x* = 1_N, y* = 0_N, where 1_N and 0_N denote the N-dimensional all-one vector and all-zero vector, respectively.
Proof: To prove that x* = 1_N and y* = 0_N is the optimal solution of Problem 3.1, we have to show that f(x*, y*) is larger than f(x, y), where f(x, y) denotes the objective function of Problem 3.1 and (x, y) is any feasible solution of Problem 3.1. Suppose (x̃, ỹ) is a feasible solution of Problem 3.1; then it follows that

f(x̃, ỹ) = λR_B^w(x̃) + (λ − µ)R_A^w(ỹ) ≤(a) λR_B^w(x̃) ≤(b) λR_B^w(1_N) = f(x*, y*),

where "(a)" follows from the fact that λ − µ ≤ 0 and R_A^w(y) is always nonnegative, and "(b)" follows from the fact that R_B^w(x) is an increasing function of x, so that the equality holds only when x̃ = 1_N. Proposition 3.1 indicates that the cellular operator will not offload any mobile data to the WiFi AP if the third-party operator charges at a price higher than its revenue, i.e., µ ≥ λ. On the other hand, from the third-party operator's perspective, if the cellular operator does not offload mobile data through its WiFi AP, it will earn nothing, which is a lose-lose situation. Thus, a reasonable third-party operator will charge a price lower than λ, which is the scenario we consider in the following studies, i.e., µ < λ.
Proposition 3.2: The optimal solution of Problem 3.1 is obtained when (9) holds with equality for arbitrary i.
Proof: This can be proved by contradiction. Suppose (x*, y*) is the optimal solution of Problem 3.1, and it has an element (x*_k, y*_k) satisfying x*_k + y*_k < 1. Then, from (7) and (8), it follows that x*_k = 0, y*_k = 0. Now, we show that we can always find a feasible solution (x̃, ỹ) with its elements satisfying x̃_i + ỹ_i = 1, ∀i, with a higher value of (6). We let x̃_{−k} = x*_{−k} and ỹ_{−k} = y*_{−k}, where the minus sign before the letter k in the subscript of a vector refers to all the elements of the vector except the kth element. Then, since the logarithm function is an increasing function, it is clear that setting either x̃_k = 1, ỹ_k = 0 or x̃_k = 0, ỹ_k = 1 will result in a higher value of (6) than that obtained with x*_k = 0, y*_k = 0. This contradicts our presumption. Proposition 3.2 is thus proved.
With the result given in Proposition 3.2, we can reduce the complexity of Problem 3.1 by setting y_i = 1 − x_i. Problem 3.1 can then be converted to the following problem.

Problem 3.2:

max_x λ ln(1 + Σ_{i=1}^N x_i S_{i,B}) + (λ − µ) ln(1 + Σ_{i=1}^N (1 − x_i) S_{i,A})
s.t. x_i ∈ {0, 1}, ∀i.

This is a nonlinear integer programming problem. When the number of users is small, it can be solved by exhaustive search. However, when the number of users is large, exhaustive search is not applicable due to the high complexity. In this paper, we solve Problem 3.2 by solving its relaxation problem, and rigorously prove that the gap between the relaxation problem and Problem 3.2 is negligible when the number of users is large.
The relaxation problem of Problem 3.2 is given as follows.

Problem 3.3:

max_x λ ln(1 + Σ_{i=1}^N x_i S_{i,B}) + (λ − µ) ln(1 + Σ_{i=1}^N (1 − x_i) S_{i,A})
s.t. 0 ≤ x_i ≤ 1, ∀i.

Problem 3.3 is a convex optimization problem. To show its convexity, we only need to show that the objective function is concave, since all the constraints are linear. Denote the objective function of the relaxation problem as f_r; then f_r is concave if its Hessian is negative semidefinite. Denoting the Hessian of f_r as H, we show that H is negative semidefinite by the following proposition.
Proposition 3.3:
The Hessian H is negative semidefinite.
Proof:
The (i, j)th entry of the Hessian of f_r can be written as

∂²f_r/∂x_i∂x_j = −λ S_{i,B}S_{j,B}/(1 + Σ_k x_k S_{k,B})² − (λ − µ) S_{i,A}S_{j,A}/(1 + Σ_k (1 − x_k) S_{k,A})².
It is observed that H can be rewritten as H = −λB − (λ − µ)A,
where the matrices B and A have the same structure as the following matrix X with entries X_{ij} = s_i s_j / (1 + Σ_k s_k x_k)². It can be shown that, for any vector c = [c_1 · · · c_N]^T,

c^T X c = (Σ_{i=1}^N c_i s_i)² / (1 + Σ_k s_k x_k)² ≥ 0.

Thus, it is clear that both B and A are positive semidefinite. Then, since both λ and λ − µ are non-negative, it is easy to see that H is negative semidefinite. Therefore, the objective function is strictly concave. Problem 3.3 is shown to be convex, and it can be easily verified that Slater's condition holds for this problem. Thus, the duality gap between Problem 3.3 and its dual problem is zero, and solving its dual problem is equivalent to solving the original problem. Now, we consider its dual problem. The Lagrangian of Problem 3.3 is

L(x, α, β) = f_r(x) − Σ_{i=1}^N α_i (x_i − 1) + Σ_{i=1}^N β_i x_i,

where α = [α_1 · · · α_N]^T and β = [β_1 · · · β_N]^T are the nonnegative dual variables associated with the constraints x_i ≤ 1 and x_i ≥ 0, respectively.
The dual function is q(α, β) = max_x L(x, α, β). The Lagrange dual problem is then given by min_{α⪰0, β⪰0} q(α, β). Therefore, the optimal solution needs to satisfy the following Karush-Kuhn-Tucker (KKT) conditions [25]:

α_i (x_i − 1) = 0, ∀i,   (20)
β_i x_i = 0, ∀i,   (21)
α_i ≥ 0, β_i ≥ 0, ∀i,   (22)
0 ≤ x_i ≤ 1, ∀i,   (23)
λS_{i,B}/(1 + Σ_k x_k S_{k,B}) − (λ − µ)S_{i,A}/(1 + Σ_k (1 − x_k) S_{k,A}) − α_i + β_i = 0, ∀i.   (24)

Due to the complexity of the problem, solving the above KKT conditions will not render us a closed-form solution. However, from these KKT conditions, we are able to gain some significant features of the optimal solution.

Theorem 3.1: The optimal solution of the relaxation problem has at most one user, indexed by k (k ∈ {1, 2, · · · , N}), with a fractional x_k satisfying 0 < x_k < 1.
Proof: This theorem can be proved by contradiction. Suppose that there are two arbitrary users, denoted by m and n, having fractional x_m and x_n, respectively, i.e., 0 < x_m < 1 and 0 < x_n < 1. From (20) and (21), it follows that α_m = 0, α_n = 0, β_m = 0, and β_n = 0. Then, applying these facts to (24), it follows that

λS_{i,B}/(1 + Σ_k x_k S_{k,B}) = (λ − µ)S_{i,A}/(1 + Σ_k (1 − x_k) S_{k,A}), for i = m, n.

Then, for these two users, the following equality must hold:

S_{m,B}/S_{m,A} = S_{n,B}/S_{n,A}.   (27)

It is easy to observe that (27) is satisfied with zero probability, since the channel power gains are mutually independent and have continuous pdf. This result contradicts our presumption. Thus, it is concluded that there is at most one user with a fractional x_k, i.e., 0 < x_k < 1. Theorem 3.1 is thus proved. From Theorem 3.1, it is observed that there is at most one user with a fractional indicator in the optimal solution of Problem 3.3. This indicates that the optimal solution of Problem 3.3 is either equal to or just one user away from that of Problem 3.2. Thus, the following scheme (Table I) is proposed to find a solution of Problem 3.2. 1) Solve the relaxation Problem 3.3 by standard convex optimization methods, e.g., the interior-point method [26], or existing solvers such as CVX [27]. 2) Convert the obtained solution into a feasible solution of Problem 3.2 by rounding the fractional indicator function to its nearest integer (0 or 1).
In general, the above algorithm provides a sub-optimal solution to Problem 3.2. However, due to the special feature presented in Theorem 3.1, we are able to prove that the solution proposed in Table I is near-optimal when the number of users is large.
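The two steps of Table I can be prototyped compactly, assuming CVXPY with an exponential-cone-capable solver (e.g., ECOS) is installed; the channel data below are randomly generated for illustration.

```python
# Sketch of the relax-and-round scheme of Table I using CVXPY.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(1)
N, lam, mu = 10, 1.0, 0.3
S_B = rng.exponential(2.0, N)  # S_{i,B} = g_{i,B} P / sigma_B^2
S_A = rng.exponential(2.0, N)  # S_{i,A} = g_{i,A} P / sigma_A^2

# Step 1: solve the relaxation (Problem 3.3). The log of an affine
# expression is concave, so this is a convex problem.
x = cp.Variable(N)
objective = cp.Maximize(lam * cp.log(1 + S_B @ x)
                        + (lam - mu) * cp.log(1 + S_A @ (1 - x)))
cp.Problem(objective, [x >= 0, x <= 1]).solve()

# Step 2: round the (at most one) fractional indicator to its nearest integer.
x_int = np.round(x.value).astype(int)
print("Association (1 = BS, 0 = AP):", x_int)
```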
Theorem 3.2: The gap between the optimal solution of Problem 3.2 and the proposed solution given in Table I is negligible when the number of users is large.
Proof: For the convenience of exposition, we denote the maximum values of Problem 3.2 attained at the optimal solution and at the proposed solution given in Table I as f*_o and f*_s, respectively. Since the solution given in Table I is also a feasible solution of Problem 3.2, it follows that f*_s ≤ f*_o. On the other hand, it is clear that the maximum value of Problem 3.2 is upper bounded by that of its relaxation problem. Thus, if we denote the maximum value of the relaxation problem attained at the optimal solution as f*_r, it follows that f*_o ≤ f*_r. Combining the above facts, we have f*_s ≤ f*_o ≤ f*_r. Thus, if we are able to show that the gap between f*_s and f*_r is negligible when the number of users is large, it is clear that the gap between f*_s and f*_o will also be negligible. Now, we show that the gap between f*_s and f*_r is negligible when the number of users is large. Suppose x* is the optimal solution of the relaxation problem, and user k is the user with a fractional indicator function x*_k,
while the value of f*_s is obtained by either setting x_k = 0 when x*_k < 0.5 or setting x_k = 1 when x*_k ≥ 0.5. A lower bound on f*_s is obtained by dropping user k's contribution altogether, which corresponds to the scenario that user k connects to neither the BS nor the AP.
Then, the gap ∆ between f*_s and f*_r satisfies

∆ ≤ λ ln(1 + S_{k,B}/(1 + Σ_{i≠k} x*_i S_{i,B})) + (λ − µ) ln(1 + S_{k,A}/(1 + Σ_{i≠k} (1 − x*_i) S_{i,A})).

Since the users are uniformly distributed in the area, when the number of users is large, the denominators in the above bound will be very large. Consequently, the value of ∆ is close to zero. Theorem 3.2 is thus proved.
IV. WITHOUT SIC DECODERS AT BOTH SIDES
In this section, we consider the scenario where neither the BS nor the WiFi AP implements a SIC decoder. The utility maximization problem of the cellular operator for this case can be formulated as

Problem 4.1:

max_{x,y} λ R_B^o(x) + (λ − µ) R_A^o(y)
s.t. x_i ∈ {0, 1}, y_i ∈ {0, 1}, ∀i,
x_i + y_i ≤ 1, ∀i.   (35)

To solve Problem 4.1, we first consider the following two subproblems, obtained by dropping the coupling constraint (35).
Subproblem 4.1a: max_x λ R_B^o(x), s.t. x_i ∈ {0, 1}, ∀i.
Subproblem 4.1b: max_y (λ − µ) R_A^o(y), s.t. y_i ∈ {0, 1}, ∀i.
Denote the optimal solution of Subproblem 4.1a as x*_i, ∀i ∈ {1, 2, · · · , N}, and that of Subproblem 4.1b as y*_i, ∀i ∈ {1, 2, · · · , N}. Then, it is clear that if x*_i and y*_i satisfy the constraints (35) for all i ∈ {1, 2, · · · , N}, then x*_i and y*_i form the optimal solution of Problem 4.1. In the following, we will show that when the number of users is large, Problem 4.1 can be solved by individually solving Subproblem 4.1a and Subproblem 4.1b. Subproblem 4.1a and Subproblem 4.1b have the same structure; as a result, the optimal solutions of these two subproblems also have the same structure. In the following, using Subproblem 4.1b as an example, we present the optimal solution of the two subproblems.

Lemma 4.1: i) Sort the users according to their channel power gains in descending order: g_{1,A} ≥ g_{2,A} ≥ · · · ≥ g_{N,A}. At an optimal solution, only the first k* (≤ N) users transmit. ii) If g_{1,A} ≥ (e − 1)σ_A²/P, then k* = 1. That is, only the user with the largest channel gain transmits.
Proof: To solve Subproblem 4.1b, we first consider its relaxation problem, which is given as follows.
Problem 4.2:

max_y (λ − µ) Σ_{i=1}^N ln(1 + y_i g_{i,A} P / (σ_A² + Σ_{j≠i} y_j g_{j,A} P)), s.t. 0 ≤ y_i ≤ 1, ∀i.

Let P_i ≜ y_i P, ∀i. It is not difficult to observe that Problem 4.2 can be converted to the equivalent power allocation problem

max_{P_i} (λ − µ) Σ_{i=1}^N ln(1 + g_{i,A} P_i / (σ_A² + Σ_{j≠i} g_{j,A} P_j)), s.t. 0 ≤ P_i ≤ P, ∀i.

This problem is shown to be Schur convex in [28]. By using the Schur convexity properties, it is shown in [28] that the optimal power allocation is binary, i.e., P_i is either 0 or P for all i. This indicates that the optimal solution of Problem 4.2 is either 0 or 1 for all i, and Lemma 4.1 follows by comparing the objective values of the binary solutions.

Theorem 4.1: When the number of users is large, with high probability the optimal solution of Problem 4.1 is One-One-Association: the user with the best user-to-BS channel connects to the BS and the user with the best user-to-AP channel connects to the AP. This holds when the following three conditions are satisfied: (a) g_{1,A} ≥ (e − 1)σ_A²/P; (b) g_{1,B} ≥ (e − 1)σ_B²/P; and (c) the users in (a) and (b) are distinct.

We now show that, when the number of users is large, these three conditions ((a)-(c)) hold with high probability. Define
• Event A: condition (a) is violated, i.e., no user satisfies g_{i,A} ≥ (e − 1)σ_A²/P;
• Event B: condition (b) is violated, i.e., no user satisfies g_{i,B} ≥ (e − 1)σ_B²/P;
• Event C: there exists one user having the best user-to-BS channel, and simultaneously having the best user-to-AP channel.
Hence, the probability that at least one of the three conditions ((a)-(c)) is violated can be written as

Prob{A ∪ B ∪ C} ≤ Prob{A} + Prob{B} + Prob{C},

where the inequality results from the well-known union bound.
In the following, we show that Prob{A} → 0, Prob{B} → 0, and Prob{C} → 0 as N → ∞. First, we look at Prob{A}, which is given by

Prob{A} = Π_{i=1}^N Prob{g_{i,A} < (e − 1)σ_A²/P} =(a) (∫_0^{(e−1)σ_A²/P} dF(g_A))^N,

where the equality "(a)" results from the fact that the channel power gains are i.i.d., and F(g_A) denotes the CDF of the channel power gain. Since ∫_0^{(e−1)σ_A²/P} dF(g_A) is strictly less than 1, Prob{A} → 0 as N → ∞. Prob{B} → 0 as N → ∞ follows in exactly the same way. Finally, Prob{C} is given by
where the equality "a" results from the fact that the channel power gains are i.i.d. and have continuous pdf. From (48), Prob{C} → 0 as N → ∞.
Combining the above results, 1 − Prob{A ∪ B ∪ C} → 1 as N → ∞, which completes the proof of Theorem 4.1.
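The resulting One-One-Association rule is straightforward to apply; the sketch below implements Theorem 4.1 on randomly generated (illustrative) channel gains and falls back to exhaustive search when conditions (a)-(c) fail.

```python
# Sketch of One-One-Association (Theorem 4.1): with no SIC decoder at either
# side, serve only the best user of each network when conditions (a)-(c) hold.
import numpy as np

rng = np.random.default_rng(2)
N, P, sigma2_B, sigma2_A = 20, 1.0, 1.0, 1.0
g_B = rng.exponential(1.0, N)
g_A = rng.exponential(1.0, N)

best_B = int(np.argmax(g_B))  # user with the best user-to-BS channel
best_A = int(np.argmax(g_A))  # user with the best user-to-AP channel

thr_B = (np.e - 1) * sigma2_B / P
thr_A = (np.e - 1) * sigma2_A / P
if best_A != best_B and g_A[best_A] >= thr_A and g_B[best_B] >= thr_B:
    print(f"user {best_B} -> BS, user {best_A} -> AP, all other users idle")
else:
    print("conditions (a)-(c) not met; fall back to exhaustive search")
```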
V. WITH A SIC DECODER AT ONE SIDE
In this section, we consider the scenario that the SIC decoder is only available at one side. Particularly, we only study the case that the SIC decoder is only available at the BS. The case that the SIC decoder is only available at the WiFi AP is a symmetric case, and thus can be solved in the same way.
Problem 5.1:

max_{x,y} λ R_B^w(x) + (λ − µ) R_A^o(y)   (49)
s.t. x_i ∈ {0, 1}, ∀i,   (50)
y_i ∈ {0, 1}, ∀i,   (51)
x_i + y_i ≤ 1, ∀i.   (52)

Similar to Problem 4.1, we are not able to solve this problem directly or by solving its relaxation problem. To solve Problem 5.1, we first consider the following two subproblems.
Subproblem 5.1a: max_x λ R_B^w(x), s.t. x_i ∈ {0, 1}, ∀i.
Subproblem 5.1b: max_y (λ − µ) R_A^o(y), s.t. y_i ∈ {0, 1}, ∀i.
Denote the optimal solution of Subproblem 5.1a as x*_i, ∀i ∈ {1, 2, · · · , N}, and that of Subproblem 5.1b as y*_i, ∀i ∈ {1, 2, · · · , N}. Subproblem 5.1a is easy to solve, and its optimal solution is x*_i = 1, ∀i. Subproblem 5.1b is exactly the same as Subproblem 4.1b, and thus its optimal solution can be obtained from Lemma 4.1. It is obvious that these x*_i and y*_i cannot satisfy the constraints (52) for all i ∈ {1, 2, · · · , N}. Thus, Problem 5.1 cannot be solved by directly solving Subproblem 5.1a and Subproblem 5.1b, which makes Problem 5.1 more challenging than Problem 4.1.
To solve Problem 5.1, we need the following lemma. Lemma 5.1: The optimal solution of Problem 5.1 is obtained when (52) holds with equality for all i.
Proof: This can be proved by contradiction. Suppose (x*, y*) is the optimal solution of Problem 5.1, and it has an element (x*_k, y*_k) satisfying x*_k + y*_k < 1. Then, from (50) and (51), it follows that x*_k = 0, y*_k = 0. Now, we show that we can always find a feasible solution (x̃, ỹ) with its elements satisfying x̃_i + ỹ_i = 1, ∀i, that results in a higher value of (49). We let x̃_{−k} = x*_{−k} and ỹ_{−k} = y*_{−k}. Clearly, setting x̃_k = 1, ỹ_k = 0 will result in a higher value of (49) than that obtained with x*_k = 0, y*_k = 0, since the logarithm function is an increasing function. This contradicts our presumption. Lemma 5.1 is thus proved.
Based on Lemma 5.1 and the optimal solutions of Subproblems 5.1a and 5.1b, we are now able to obtain the following lemma, Lemma 5.2, which will be used to prove Theorem 5.1. The proof of Lemma 5.2 requires assuming a path loss model for the users' channel gains; that is, the channel gain is given by g = αz^{−γ}, where γ = 2 is the path loss exponent, z is the distance to either the AP or the BS, and α ≥ 0 is a constant factor. Consequently, the results in Lemma 5.2 and Theorem 5.1 rely on the path loss model, a geometry for the users, BS, and AP, and a probability distribution of the users over the specified geometry. For the convenience of exposition, we consider a 1 by 1 square area with the BS at coordinate (0, 0) and the WiFi AP at (1, 1), and we assume that the users are uniformly distributed. For simplicity, we give the proof of Lemma 5.2 based on the geometry specified in Fig. 2, but it is worth pointing out that the proof extends to more general geometries with minor modifications.

Lemma 5.2: Suppose there is at least one user in the quarter circle with radius D centered at the AP, as shown in Fig. 2, where D ≤ 0.67 is chosen such that αD^{−2} ≥ (e − 1)σ_A²/P. Then, at the optimal solution to Problem 5.1, at most one user connects to the AP and all the other users connect to the BS.
Proof: We first consider Subproblem 5.1b. Since there is at least one user in the stated quarter circle, the user with the strongest channel gain to the AP has a channel gain of at least (e − 1)σ_A²/P. From Lemma 4.1 part (ii), at the optimal solution to Subproblem 5.1b, only the user with the strongest channel gain transmits. We denote the transmitting user as user k* and refer to it as the dominant user.
Next, returning to Problem 5.1, let S* be the set of users connected to the WiFi AP under the optimal solution. Based on Lemma 5.1, all users in the complement of S* connect to the BS. Let |S*| denote the number of users in S*, where |·| denotes the cardinality of a set. We now show that |S*| ≤ 1 by contradiction. Suppose first that |S*| = 2. We have two possible cases:
• Case 1: The dominant user is in S*, i.e., user k* ∈ S*. In this case, if we assign the non-dominant user to the BS, the utility at the BS side λR_B^w(x) will increase. On the other hand, from Lemma 4.1 part (ii), the utility at the AP side is maximized when only user k* connects to the AP. Thus, by assigning the non-dominant user to the BS, we also increase the utility at the AP side (λ − µ)R_A^o(y). Hence, the total utility of the operator increases if we assign only user k* to the AP and the rest to the BS. This contradicts the assumption that |S*| = 2.
• Case 2: The dominant user is not in S*, i.e., user k* ∉ S*. Denote the channel power gain between the dominant user and the BS as h_{k*,B}, and the channel power gains between the two non-dominant users and the BS as h_{m,B} and h_{n,B}, respectively. Now, consider the case where we switch the connections of k* with those of m and n. In this case, the utility of the AP clearly increases by Lemma 4.1 part (ii), but the utility at the BS may not increase. However, it is straightforward to verify that the utility at the BS increases if the following condition holds:

h_{m,B} + h_{n,B} ≥ h_{k*,B}.   (57)
Now, referring to Fig. 2, let the dominant user be a distance of √2 − d away from the BS, where d ≤ D under the conditions stated in the Lemma. The two users in S* have to be at least a distance d away from the AP, since their channel gains to the AP are weaker than the dominant user's. Considering the worst-case scenario given in Fig. 2, we have

h_{m,B} + h_{n,B} ≥ 2α/((1 − d)² + 1) ≥(a) α/(√2 − d)² ≥ h_{k*,B},

where "(a)" follows from d ≤ D ≤ 0.67. Hence, inequality (57) holds under the conditions stated in Lemma 5.2. Therefore, the total utility increases by switching the two non-dominant users with the dominant user, which contradicts our assumption that |S*| = 2. Using the same arguments, we can show that any |S*| > 2 results in a contradiction under the conditions stated in Lemma 5.2. Hence, |S*| ≤ 1, which completes the proof of Lemma 5.2.
We are now ready to prove Theorem 5.1. Theorem 5.1: When the number of users N is large, with high probability the optimal solution for Problem 5.1 under path loss model is: At most one user connects to the AP and all the other users connect to the BS.
Proof: Since the users are uniformly distributed over the square of area one given in Fig. 2, the probability that there is at least one user in the quarter circle with radius D and centered at the AP is Prob(AP ) = 1 − (1 − πD 2 /4) N . Since D > 0, Prob(AP ) → 1 as N → ∞. Hence, the condition in Lemma 5.2 holds with high probability, which implies that the assertion in Theorem 5.1 holds with high probability.
Based on the result given in Theorem 5.1, the optimal solution of Problem 5.1 can be easily found by the following algorithm (Table II): for each candidate user k ∈ {1, · · · , N}, evaluate the utility (49) obtained when only user k connects to the AP and all other users connect to the BS; also evaluate the utility when all users connect to the BS; among these N + 1 candidates, select the association with the largest utility.
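Since Theorem 5.1 leaves only N + 1 candidate associations, the Table II search is easy to prototype; the sketch below uses illustrative random channel gains and the sum-rate expressions of Section II.

```python
# Sketch of the Table II search: by Theorem 5.1 at most one user connects to
# the AP, so it suffices to compare the N + 1 candidate associations.
import numpy as np

rng = np.random.default_rng(3)
N, P, s2B, s2A, lam, mu = 8, 1.0, 1.0, 1.0, 1.0, 0.3
g_B = rng.exponential(1.0, N)
g_A = rng.exponential(1.0, N)

def utility(ap_user):
    # ap_user = None: everyone on the BS; otherwise user ap_user is on the AP.
    on_bs = np.ones(N, dtype=bool)
    r_A = 0.0
    if ap_user is not None:
        on_bs[ap_user] = False
        r_A = np.log(1.0 + g_A[ap_user] * P / s2A)  # lone AP user, no interference
    r_B = np.log(1.0 + g_B[on_bs].sum() * P / s2B)  # SIC sum-rate at the BS
    return lam * r_B + (lam - mu) * r_A

candidates = [None] + list(range(N))
best = max(candidates, key=utility)
print("best AP user:", best, "utility:", round(utility(best), 4))
```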
VI. DISCUSSION

A. Benefit of SIC Decoders
In this subsection, we investigate the role of SIC decoders in the utility maximization of the cellular operator. We rigorously prove that the SIC decoder is beneficial for the cellular operator in terms of maximizing its utility.

Theorem 6.1: Let (x*, y*), (x̂*, ŷ*), and (x̃*, ỹ*) be the optimal solutions of Problems 3.1, 4.1, and 5.1, respectively. In general, the following inequality always holds:

U_ww(x*, y*) ≥ U_wo(x̃*, ỹ*) ≥ U_oo(x̂*, ŷ*).

Proof: To prove Theorem 6.1, we first show that U_ww(x*, y*) ≥ U_wo(x̃*, ỹ*). It can be observed that U_ww(x*, y*) ≥ U_ww(x̃*, ỹ*), due to the fact that (x̃*, ỹ*) is a feasible solution of Problem 3.1, while (x*, y*) is the optimal solution of Problem 3.1. Thus, if we can show that U_ww(x̃*, ỹ*) ≥ U_wo(x̃*, ỹ*) holds, then U_ww(x*, y*) ≥ U_wo(x̃*, ỹ*) will hold. Since U_ww(x̃*, ỹ*) = λR_B^w(x̃*) + (λ − µ)R_A^w(ỹ*) and U_wo(x̃*, ỹ*) = λR_B^w(x̃*) + (λ − µ)R_A^o(ỹ*), it suffices to show that R_A^w(ỹ*) ≥ R_A^o(ỹ*) always holds, which is presented below.
Assume that K elements of ỹ* are equal to 1, where K ∈ {1, 2, · · · , N}; without loss of generality, index these users by 1, · · · , K. Then, it follows that

R_A^w(ỹ*) = ln(1 + Σ_{i=1}^K g_{i,A}P/σ_A²) =(a) Σ_{i=1}^K ln(1 + g_{i,A}P/(σ_A² + Σ_{j=i+1}^K g_{j,A}P)) ≥(b) Σ_{i=1}^K ln(1 + g_{i,A}P/(σ_A² + Σ_{j≤K, j≠i} g_{j,A}P)) = R_A^o(ỹ*),

where we introduce a dumb item Σ_{i=K+1}^K g_{i,A}P = 0 in the equality "(a)" for notational convenience. The inequality "(b)" follows from the fact that Σ_{j=i+1}^K g_{j,A}P ≤ Σ_{j≤K, j≠i} g_{j,A}P. Then, it is clear that U_ww(x*, y*) ≥ U_wo(x̃*, ỹ*) always holds. Using the same approach, we can easily show that U_wo(x̃*, ỹ*) ≥ U_oo(x̂*, ŷ*) always holds. Theorem 6.1 is thus proved.
From Theorem 6.1, it is observed that SIC decoder plays an important role in the utility maximization of the cellular operator. It is beneficial for the operator to equip the BS with SIC decoders in terms of maximizing its utility.
B. Distributed Data Offloading
In the previous sections, we have obtained the optimal data offloading schemes for Problems 3.1, 4.1, and 5.1 when the number of users is large. However, the proposed data offloading schemes are centralized schemes, which require the users to send the user-to-AP and user-to-BS channel power gains to the BS; the BS then has to compute the optimal user association and feed the decisions back to the users. For Problems 4.1 and 5.1, due to the special structure of the problems, the proposed centralized algorithms can find the optimal solution in polynomial time. However, for Problem 3.1, due to the complexity of the problem, the proposed algorithm puts a heavy computational burden on the BS. Thus, to relieve the computational burden on the BS and reduce the overhead of CSI and decision transfer, in this section we propose a simple but highly efficient distributed data offloading scheme for Problem 3.1, which is given in Table III: the BS broadcasts a predetermined threshold T, and each user i connects to the BS if S_{i,B}/S_{i,A} > T and to the AP otherwise. It is observed from Table III that the BS does not have to collect the CSI from the users; it only needs to broadcast the threshold T. Thus, the network overhead of the distributed algorithm is much lower than that of the centralized algorithm. The computational complexity is also much lower: for the centralized algorithm, the BS has to solve a relaxed integer programming problem to decide the optimal association for each user, whose worst-case computational complexity is O(N³) [29], while for the distributed scheme the complexity is O(N), since each user only has to compute the ratio S_{i,B}/S_{i,A} to decide its association. However, it is worth pointing out that the performance of the distributed algorithm greatly depends on the value of the threshold T.
In the following, we show that the distributed data offloading scheme can achieve the same performance as the centralized one given in Table I if the threshold T is properly chosen.
Theorem 6.2: There exists an optimal threshold T* such that, for any user i other than the user with the fractional indicator function, the following holds:

x*_i = 1 if S_{i,B}/S_{i,A} > T*, and x*_i = 0 if S_{i,B}/S_{i,A} < T*,

where T* = (λ − µ)(1 + Σ_k x*_k S_{k,B}) / [λ(1 + Σ_k (1 − x*_k) S_{k,A})], and x*_i, ∀i, is the optimal solution of Problem 3.3.
Proof: This proof is based on the KKT conditions given in Section III. It is observed from (24) that if S_{i,B}/S_{i,A} > T*, then α_i − β_i = λS_{i,B}/(1 + Σ_k x*_k S_{k,B}) − (λ − µ)S_{i,A}/(1 + Σ_k (1 − x*_k) S_{k,A}) > 0. From (20) and (21), it is also observed that α_i = 0 and β_i = 0 cannot hold simultaneously in this case. Since α_i and β_i are nonnegative, if β_i > 0, then α_i must be equal to zero; consequently, we would have α_i − β_i < 0, which contradicts the fact that α_i − β_i > 0. Thus, it is clear that α_i > 0 and β_i = 0. Then, from (20), it follows that x_i = 1. Similarly, when S_{i,B}/S_{i,A} < T*, it can be shown that α_i = 0 and β_i > 0, which indicates that x_i = 0. Theorem 6.2 is thus proved.
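The Table III rule then amounts to one broadcast and one local comparison per user, as in the following sketch; the channel data are illustrative, and the broadcast T here is an arbitrary value rather than the optimal T* of Theorem 6.2.

```python
# Sketch of threshold-based distributed offloading (Table III): the BS
# broadcasts T and each user decides locally from its own channel ratio.
import numpy as np

rng = np.random.default_rng(4)
N = 12
S_B = rng.exponential(2.0, N)  # S_{i,B}, known locally by user i
S_A = rng.exponential(2.0, N)  # S_{i,A}, known locally by user i

def user_decision(i, T):
    # O(1) per user and O(N) overall, with no CSI sent to the BS.
    return "BS" if S_B[i] / S_A[i] > T else "AP"

T = 1.0  # broadcast threshold; Theorem 6.2 guarantees an optimal T* exists
print({i: user_decision(i, T) for i in range(N)})
```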
C. Fading Scenarios
In this paper, we consider three cases: (1) with SIC decoders at both sides; (2) without SIC decoders at both sides; (3) with a SIC decoder at one side. It is worth pointing out that we do not assume any specific distribution of the channel power gains for Cases (1) and (2). Thus, the results obtained for Cases (1) and (2) can be directly applied to the block-fading scenario [30], where the channel remains constant during each fading block but possibly changes from one block to another. For the block-fading scenario, we can solve the utility maximization problem for each fading block and update the user association scheme every fading block. This is due to the fact that there are no coupling constraints between the fading blocks, and thus maximizing the utility function for each fading block is equivalent to maximizing the long-term utility function [31], i.e., E[U(x, y)], where U(x, y) is given by Equation (1) and the expectation is taken over the probability distribution of all the involved channel power gains. Since we do not assume any specific distribution of the channel power gains, the result holds for block-fading channels with any fading distribution, such as Rayleigh, Rician, or Nakagami fading. However, for Case (3), we assumed the path loss model when deriving the results, for the following reason: using other fading channel models instead of the path loss model makes the utility maximization problem for this case mathematically intractable. Thus, the offloading scheme proposed for this case may not be optimal if fading channel models are adopted. However, according to the simulation results presented in Section VII, the offloading scheme proposed for this case also works well when fading channel models are considered.
D. Downlink Scenarios
In this paper, we focus on the uplink scenario. In this subsection, we show how to extend the obtained results to the downlink scenario, where the system model becomes a broadcast channel. For broadcast channels, there are usually two implementations:
• Superposition coding with SIC. The transmitter encodes the messages for all the receivers using superposition coding, and each receiver decodes its message using SIC. This case is similar to the uplink scenario with SIC. If both the BS and the AP adopt this scheme with equal power allocation, the resultant utility maximization problem has the same structure as Problem 3.1, with the uplink sum-rates replaced by the corresponding downlink sum-rates.
• Orthogonal schemes. If SIC decoders are not available at the receivers, the transmitter will not encode the messages for all users together. Instead, orthogonal schemes such as TDMA are used. For this case, the resultant user association is trivial: in each time slot, one user is selected to connect to the BS, and one user is selected to connect to the AP.
VII. NUMERICAL RESULTS
In this section, numerical results are provided to evaluate the performance of the proposed data offloading schemes.
A. Simulation Parameters
The simulation setup is as follows. We consider a 1 by 1 square area with the base station at coordinate (0, 0) and the WiFi AP at (1, 1). The number of users is denoted by N, and these N users are uniformly scattered in the square. For simplicity, we assume that the transmit power of each user is the same and given by 1. Unless otherwise stated, the path loss model is adopted to model the channel power gain. Let (posx_i, posy_i) denote the position of user i; under the path loss model, the channel power gain between it and the BS can then be written as g_{i,B} = α[(posx_i)² + (posy_i)²]^{−1}, and that between it and the AP as g_{i,A} = α[(posx_i − 1)² + (posy_i − 1)²]^{−1}.

B. With SIC Decoders at Both Sides

1) Performance of the Centralized Data Offloading Scheme: In Fig. 3, we investigate the gap between the proposed centralized data offloading scheme given in Table I and the optimal solution. The optimal solution is obtained by exhaustive search. For the purpose of illustration, the gap is normalized by the utility of the optimal solution. The result presented in Fig. 3 is averaged over 1000 channel realizations for each N. It is observed from Fig. 3 that the normalized utility gap decreases with the increase of the number of users. When there are only two users in the network, the normalized utility gap is as large as 0.85%. When the number of users goes up to 16, the normalized utility gap is almost zero. This is in accordance with the results presented in Theorem 3.2.
2) Performance of the Distributed Data Offloading Scheme: In Fig. 4, we investigate the system performance of the proposed distributed data offloading scheme given in Table III. The results presented in this figure are averaged over 1000 channel realizations. The red dashed lines represent the values obtained by the centralized data offloading scheme. The green dash-dotted lines represent the values obtained by exhaustive search. In this figure, we study how the value of the threshold T affects the performance of the proposed distributed data offloading scheme.
It is observed from Fig. 4 that the centralized algorithm can achieve almost the same performance as the exhaustive search, especially when N is large, which is in accordance with our theoretical results. It is also observed that the threshold T plays a significant role in the distributed algorithm when the number of users is large: the utility gap between the distributed algorithm and the exhaustive search is as large as 1.2 when N = 16 if T is not properly chosen, whereas when N = 2 the largest utility gap is less than 0.2. It is also observed from Fig. 4 that, for each N, there does exist an optimal T which produces a utility almost the same as that of the centralized data offloading scheme. This is in accordance with the results presented in Theorem 6.2.
C. Without SIC Decoders at Both Sides
In Fig. 5, we investigate the performance of the proposed data offloading scheme for the case that SIC decoders are not available at either the BS or the AP side. In Fig. 5, we generate 1000 channel realizations for each N and count the number of realizations in which the proposed data offloading scheme is optimal. First, it is observed that for all the curves, the number of realizations in which the proposed data offloading scheme is optimal increases with the number of users. This is in accordance with our theoretical analysis given in Section IV. Secondly, it is observed that the transmit power of the users also plays an important role in the performance of the proposed data offloading scheme. For the same number of users, when the transmit power of the users is large, the number of realizations in which the proposed scheme is optimal is large. This is due to the fact that when P is large, the value of (e − 1)σ_A²/P is small, and thus the probability that g_{1,A} ≥ (e − 1)σ_A²/P is large for the same number of users. Thirdly, it is observed that when the number of users is larger than 10, the proposed data offloading scheme is always optimal for all the cases. This indicates that the proposed data offloading scheme can achieve a satisfactory performance even when the number of users is not very large.
D. With A SIC Decoder at One Side
In Fig. 6, we investigate the performance of the proposed data offloading scheme for the case that a SIC decoder is only available at the BS side. In Fig. 6, we generate 1000 channel realizations for each N and count the number of realizations in which the proposed data offloading scheme is optimal. It is observed that for all the curves, the number of realizations in which the proposed data offloading scheme is optimal increases with the number of users. This is in accordance with our theoretical analysis in Section V. Secondly, it is observed that the transmit power of the users almost does not affect the performance of the proposed data offloading scheme, which is quite different from the results obtained in Fig. 5. This is due to the fact that, for this case, the proposed data offloading scheme is optimal only when both g_{1,A} ≥ (e − 1)σ_A²/P and d < 0.67 are satisfied simultaneously, and for the case considered here the condition d < 0.67 always dominates. Since this condition is independent of the transmit power, the performance of the proposed data offloading scheme is not affected by the transmit power of the users. Finally, it is observed that when the number of users is larger than 12, the proposed data offloading scheme is always optimal. This indicates that the proposed data offloading scheme can achieve a good performance even when the number of users is not large.
In Fig. 7, we investigate the performance of the proposed data offloading scheme for different fading channel models. For the Rayleigh fading model, the channel power gains are exponentially distributed [25], and we assume that the mean of the channel power gains is one. For the Nakagami-m fading model, we consider the case of m = 2, and we assume that the mean of the channel power gains is one. The transmit power of each user is assumed to be the same and equal to 1. The results are averaged over 1000 channel realizations. The optimal offloading schemes are obtained by exhaustive search. It is observed from Fig. 7 that when the number of users is small, there is a small gap between the proposed offloading scheme and the optimal offloading scheme. However, when the number of users is larger than six, the proposed offloading scheme achieves the same performance as the optimal offloading scheme. This is due to the fact that when the number of users is large, the condition given in (57) holds with high probability, and thus the proposed offloading scheme is optimal with high probability. Overall, the proposed offloading scheme works well under different fading channel models.
E. Benefit of SIC Decoders
In Fig. 8, we compare the utility of the cellular operator for the three cases studied in this paper. The utility values for each case are obtained under their respective optimal data offloading schemes. The results presented in Fig. 8 are averaged over 1000 channel realizations for each N. It is observed that the utility increases with increasing N for all three cases, which is in accordance with the theoretical results presented in previous sections. It is also observed that U_ww > U_wo > U_oo for the same N. This indicates that SIC decoders have a significant effect on the utility of the cellular operator: it is always beneficial for the operator to equip the BS and/or AP with SIC decoders so as to maximize its utility. This is in accordance with the results obtained in Theorem 6.1.
VIII. CONCLUSIONS
In this paper, we have investigated the mobile data offloading problem through a third-party WiFi AP for a cellular mobile system. From the cellular operator's perspective, we have formulated the problem as a utility maximization problem. Different cases are considered according to whether SIC decoders are available at the BS and/or the WiFi AP. When SIC decoders are available at both the BS and the WiFi AP, the utility maximization problem can be solved by considering its relaxation problem, and it is rigorously proved that the proposed data offloading scheme is near-optimal when the number of users is large. We have also proposed a threshold-based distributed data offloading scheme, which can achieve the same performance as the centralized data offloading scheme if the threshold is properly chosen. When SIC decoders are available at neither the BS nor the WiFi AP, we have rigorously proved that the optimal solution is One-One-Association, i.e., one user connects to the BS and another user connects to the WiFi AP. When the SIC decoder is only available at the BS, we have shown that at most one user connects to the WiFi AP, and all the other users connect to the BS. We have also rigorously proved that SIC decoders are beneficial for the cellular operator in terms of maximizing its utility. | 2014-08-22T02:52:00.000Z | 2013-12-01T00:00:00.000 | {
"year": 2014,
"sha1": "4dc67d30c9a6737d739893158b8a8f54c6a9f1ff",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1408.5245",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "4dc67d30c9a6737d739893158b8a8f54c6a9f1ff",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
21910887 | pes2o/s2orc | v3-fos-license | Production of cytokine and chemokines by human mononuclear cells and whole blood cells after infection with Trypanosoma cruzi
Introduction: The innate immune response is the first mechanism of protection against Trypanosoma cruzi, and the interaction of inflammatory cells with parasite molecules may activate this response and modulate the adaptive immune system. This study aimed to analyze the levels of cytokines and chemokines synthesized by the whole blood cells (WBC) and peripheral blood mononuclear cells (PBMC) of individuals seronegative for Chagas disease after interaction with live T. cruzi trypomastigotes. Methods: IL-12, IL-10, TNF-α, TGF-β, CCL-5, CCL-2, CCL-3, and CXCL-9 were measured by ELISA. Nitrite was determined by the Griess method. Results: IL-10 was produced at high levels by WBC compared with PBMC, even after incubation with live trypomastigotes. Production of TNF-α by both PBMC and WBC was significantly higher after stimulation with trypomastigotes. Only PBMC produced significantly higher levels of IL-12 after parasite stimulation. Stimulation of cultures with trypomastigotes induced an increase of CXCL-9 levels produced by WBC. Nitrite levels produced by PBMC increased after the addition of parasites to the culture. Conclusions: Surface molecules of T. cruzi may induce the production of cytokines and chemokines by cells of the innate immune system through the activation of specific receptors not evaluated in this experiment. The ability to induce IL-12 and TNF-α contributes to shifting the adaptive response towards a Th1 profile.
Trypanosoma cruzi is an intracellular parasite and the causative agent of Chagas disease, an illness that affects about eight million people in Central and South America, with 75 million living in risk areas. The global incidence of the disease is 300,000 new cases per year 1,2.
Resistance to the parasite observed in humans and in experimental models is due, at least in part, to the cellular immune response, which is responsible for the production of cytokines, chemokines, and oxygen and nitrogen intermediates 3,4. In vitro, peripheral blood mononuclear cells (PBMC) can eliminate the parasite after phagocytosis 5. Studies have demonstrated an increase in the number of PBMC in rats infected with T. cruzi. During the acute phase of infection, the presence of the parasite induces a rapid increase in the production, maturation, and activation of monocytes/macrophages in an attempt to control its replication 6. In vivo, these cells secrete hydrogen peroxide and nitric oxide (NO) when in contact with the parasite 7,8.
The interaction of T. cruzi with macrophages and other cells involved in the innate immune response is mediated by pathogen-specific pattern-recognition receptors such as Toll-like receptors (TLRs). These receptors are activated by molecules present on the surface of the pathogen and induce the synthesis of various proinflammatory cytokines, such as interleukin 6 (IL-6), tumor necrosis factor alpha (TNF-α), and IL-12. In addition, these receptors activate inducible nitric oxide synthase (iNOS) [9][10][11]. In this respect, TLR2 plays an important role in the regulation of the initial proinflammatory response during infection 12. In addition to these cytokines, macrophages and other cells of the innate immune system synthesize immunomodulatory chemokines such as CCL5, CXCL9, CCL2, and CCL3, among others. The result of this interaction is crucial for the evolution of infection, permitting the elimination of the microorganism at an early stage or guiding an adaptive immune response. The immunological mechanisms relevant for both resistance to and pathogenesis of Chagas disease are numerous but are still not completely understood, especially in humans. These mechanisms are considered to be important for the control of T. cruzi infection and involve many cell types and mediators of the host's innate and adaptive immune system 13,14.
In view of the marked importance of the interaction between T. cruzi and the host cell, the objective of the present study was to analyze the levels of cytokines and chemokines produced by cells of the innate immune system of seronegative subjects after the addition of trypomastigote forms of T. cruzi strain Y to the culture. The innate immune response of the host to parasite antigens was evaluated by investigating the synthesis of cytokines (TGF-β, IL-10, IL-12, and TNF-α) and chemokines (CCL5, CXCL9, CCL2, and CCL3), as well as the production of NO.
Parasites
Trypomastigotes of Trypanosoma cruzi strain Y maintained in kidney epithelial cells of the African monkey Cercopithecus aethiops (VERO CCL-81) were studied. The cultures were maintained in RPMI 1640 medium (Sigma, USA) supplemented with 40mg/ml garamycin (Schering-Plough, Brazil) and 5% fetal bovine serum (Gibco BRL, USA). The medium was changed daily to obtain the maximum number of trypomastigote forms and to eliminate amastigotes in the supernatant.
Subjects
Sixteen healthy volunteers ranging in age from 18 to 40 years, with negative serology for Chagas disease, were invited to participate in this study. After the volunteers had signed an informed consent form, venous blood (20ml) was collected from normal noninfected blood donors. Negative serology was confirmed by hemagglutination and enzyme-linked immunosorbent assay (ELISA). Cells were collected from 20ml of heparinized blood. The blood samples were centrifuged at 400×g on a Ficoll-Hypaque gradient (Pharmacia, Sweden) for 20min at room temperature for the separation of mononuclear cells. For the analysis of whole blood cells (without separation of mononuclear cells), samples were centrifuged three times at 300×g for 15min at 4ºC and then resuspended in Dulbecco's Modified Eagle Medium supplemented with 5% fetal bovine serum, gentamicin, and 2-beta-mercaptoethanol at a concentration of 2 × 10^6 cells/ml. The cells were cultured in a volume of 1,000µl in 24-well plates and incubated for 24h in the presence (5 parasites/1 host cell) or absence of live T. cruzi trypomastigote forms. The plates were incubated at 37ºC in an atmosphere enriched with 5% CO2. After fractionation, PBMC were washed three times by centrifugation and cultured under the same conditions as described above.
Nitrite measurement
The Griess method was used for the measurement of nitrite 15. First, nitrate was reduced to nitrite in buffer containing 1 unit/ml nitrate reductase. Next, Griess reagents were prepared by mixing 1% sulfanilamide (Sigma) and 1% naphthalenediamine (Sigma) at a proportion of 1:1 in 2.5% phosphoric acid (Merck, Brazil). For the reaction, 50µl of each dilution of the standard curve and of the supernatant were added to microplate wells and the reaction was read at 540nm.
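As an illustration of this readout, a sample's nitrite content can be quantified by fitting a linear standard curve to the absorbances of the known dilutions and inverting the fit. The sketch below is not part of the original protocol; all concentrations and absorbances are made-up illustrative values:

```python
# Hypothetical Griess quantification: fit A540 = slope*conc + intercept on the
# standard dilutions, then invert the fit for the culture-supernatant wells.
import numpy as np

std_conc = np.array([0.0, 12.5, 25.0, 50.0, 100.0])  # µM nitrite standards (assumed)
std_a540 = np.array([0.02, 0.11, 0.21, 0.40, 0.79])  # illustrative absorbances

slope, intercept = np.polyfit(std_conc, std_a540, 1)

def nitrite_conc(a540: float) -> float:
    """Nitrite concentration (µM) interpolated from a well's A540."""
    return (a540 - intercept) / slope

print(round(nitrite_conc(0.33), 1))  # e.g. one supernatant well
```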
Determination of cytokines and chemokines
Cytokines (IL-12, IL-10, TNF-α, and TGF-β) and chemokines (CCL5, CCL2, CCL3, and CXCL9) were determined by ELISA using commercially available monoclonal antibody pairs (BD OptEIA). The detection limit of the methods ranged from 2pg/ml to 20pg/ml. High-affinity plates were sensitized with the capture antibody in carbonate buffer and incubated overnight at 4°C. After washing with phosphate buffered saline (PBS) containing 0.05% Tween 20 (Sigma), the plates were blocked with PBS containing 2% bovine serum albumin (BSA) for 4h. Next, the supernatants diluted 1:2 in PBS-BSA were applied concomitantly with recombinant cytokine or chemokine standard (0 to 2,000pg/ml and 0 to 1,000pg/ml, respectively, or 250pg/ml for IL-12). The plates were incubated for 18h at 4°C. Next, the plates were washed with PBS containing 0.05% Tween 20 and incubated for 2h at room temperature with the biotinylated antibodies specific for each cytokine or chemokine. The plates were again washed and then incubated with peroxidase-conjugated streptavidin for 2h at room temperature. Finally, the plates were washed and the reaction was developed with orthophenylenediamine in buffer containing hydroxyurea (Sigma). After color development, the reaction was stopped by the addition of 20µl 2M H₂SO₄ and the plates were read at 450nm.
Statistical analysis
The results were analyzed by the Mann-Whitney and Wilcoxon tests using the StatView for Windows program (Abacus). A p value <0.05 was considered to indicate significant differences.
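For readers without StatView, the same two tests are available in open-source statistics packages. The following sketch (illustrative values only, not the study's data) pairs the Wilcoxon signed-rank test with the stimulated-versus-unstimulated comparison within donors, and the Mann-Whitney U test with the unpaired comparison between cell preparations:

```python
# Assumed layout: one value per donor, pg/ml; all numbers are invented.
from scipy.stats import mannwhitneyu, wilcoxon

il10_unstimulated = [110, 95, 130, 105, 120, 98, 115, 102]
il10_stimulated = [180, 160, 210, 170, 195, 150, 175, 165]
il10_pbmc = [60, 75, 55, 80, 70, 65, 72, 58]

# Paired comparison: effect of adding trypomastigotes within the same donors.
print(wilcoxon(il10_unstimulated, il10_stimulated))

# Unpaired comparison: whole blood cells versus PBMC.
print(mannwhitneyu(il10_unstimulated, il10_pbmc))
```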
Ethical considerations
The study protocol was approved by the Ethics Committee of Universidade Federal do Triângulo Mineiro (protocol 0905).
RESULTS

Cytokine levels
In the absence of live trypomastigote stimulus, whole blood cells produced higher levels of IL-10 than did PBMC. Addition of the stimulus to whole blood cell cultures induced a significant production of IL-10 by these cells (p=0.015) (Figure 1A). The high levels of IL-10 produced by whole blood cells may increase the susceptibility of these cells to infection with T. cruzi.
No significant difference in the production of TNF-α was observed between PBMC and whole blood cells. However, the addition of trypomastigotes to the two cell cultures (PBMC and whole blood cells) induced a significant increase in the production of this cytokine (p=0.0006 and p=0.0002, respectively) (Figure 1B). TNF-α is produced by cells such as monocytes, macrophages, B and T lymphocytes, and polymorphonuclear cells after adhesion and invasion of the microorganism, and is considered an important trigger of the innate immune response.
No differences in the levels of IL-12 produced by PBMC or whole blood cells were observed in the absence of the stimulus. However, there was a significant increase in IL-12 levels produced by PBMC after the addition of trypomastigotes (p=0.021), these cells being important producers of this cytokine (Figure 1C). The production of IL-12 is fundamental for the development of a Th1 immune response, which is associated with the induction of trypanocidal mechanisms.
FIGURE 1 - Comparison of the levels of IL-10 (A), TNF-α (B), and IL-12 (C) produced by whole blood cells and mononuclear cells of individuals seronegative for Chagas disease 24h after infection with trypomastigotes.
PBMC and whole blood cells did not produce significant levels of TGF-β in the presence or absence of the parasite in culture (data not shown).
Chemokine levels
The presence of the parasite in whole blood cell cultures induced the synthesis of high levels of CXCL9 (p=0.0009) (Figure 2). Whole blood cells also produced high levels of CCL3 and CCL2 (Figure 3A and Figure 3B, respectively), as well as CXCL3 and CCL5 (data not shown). Analysis of the production of CCL3 per subject showed that addition of the parasite to whole blood cell cultures induced an increase in the production of this chemokine by these cells (p=0.1092) (Figure 3A). A reduction in individual levels of CCL2 produced by whole blood cells (p=0.067) and PBMC was observed after the addition of trypomastigotes to these cultures (Figure 3B and Figure 3C, respectively).
Nitrite levels
Whole blood cells produced higher levels of nitrite than did PBMC in the absence of T. cruzi. Neither mononuclear cells nor whole blood cells produced significant levels of nitrite in the presence of trypomastigote forms. However, analysis of nitrite levels produced by PBMC per subject showed that addition of the parasite to the culture reduced nitrite production by these cells (p=0.180) (Figure 3D). The fact that the supernatant was collected 24h after infection may have contributed to the low production of NO.
DISCUSSION

Infection with T. cruzi can activate multiple pathways of the innate and adaptive immune system of the host. During the early stages of the disease, contact with T. cruzi trypomastigotes favors the synthesis of regulatory and effector molecules of the immune system, such as cytokines, chemokines, and nitrite 7,16-20. According to Guiñazú et al. 21, parasite surface antigens (tGPI-mucin surface molecules) interact with cells of the innate immune system, mainly macrophages, and thus contribute to the immunoregulatory processes observed in Chagas disease 4,17,19,22. Moreover, monocytes play an important role in the activation of the innate response during the early stages of infection by mediating the cross-talk between the innate and the adaptive immune responses 23.
In the present study, we analyzed the immune response of PBMC and leukocytes in the blood of healthy volunteers with negative serology for Chagas disease after exposure to live trypomastigote forms of T. cruzi strain Y. Supernatants from 24h cultures were used to limit the analysis to the early stages of the parasite-host interaction. In addition, live trypomastigote forms, which are the parasite forms involved in the natural infection of humans, were used. Various surface molecules can serve as receptors for parasite antigens, which induce the synthesis of different cytokines and chemokines 24,25.
The present study demonstrated the capacity of whole blood cells and PBMC to produce cytokines, chemokines, and nitrite when cultured in the presence of live trypomastigote forms of T. cruzi strain Y. The parasite was able to stimulate the synthesis of proinflammatory (TNF-α) and anti-inflammatory mediators (IL-10 and TGF-β) that are responsible for the modulation of nitrite synthesis, with consequent effects on trypanocidal capacity 26,27. The significant production of IL-12 and TNF-α highlights the importance of these two cytokines during the early stages of infection, with IL-12 inducing the differentiation of Th1 lymphocytes and TNF-α activating iNOS. However, a low production of nitrite by these cells was observed, a finding that can be explained by the fact that the cultures basically consisted of monocytes, which are poor producers of nitrite in short-term culture 28. Furthermore, studies using experimental models have shown that nitrite levels induced by TNF-α alone are not sufficient for an efficient trypanocidal action 18.
In the present study, significantly higher IL-10 levels were observed in whole blood cultures after the addition of live trypomastigotes. IL-10 is known to exert an important anti-inflammatory effect, inhibiting the synthesis of nitrite by macrophages and the production of IFN-γ by CD4+ T lymphocytes. The effects of this cytokine include the inhibition of a protective immune response and escape of the parasite, thus contributing to the establishment of infection 16,29,30. The significant increase in the production of IL-10 by whole blood cells after addition of the parasite to the culture suggests that this cytokine indeed contributes to parasite escape. The ability of the parasite to induce IL-10 production suggests the involvement of this cytokine in the early stages of infection.
We also analyzed the production of chemokines, molecules that play an important role in the early events of the immune response and that may also contribute to the nature of the adaptive immune response to be established. Since chemokines play a crucial role in the development of the immune response 31-33, this study also investigated the modulation of these mediators by live trypomastigotes of T. cruzi. A significant increase in CXCL9 and CCL3 and a decrease in CCL2 levels were observed after the addition of live trypomastigotes. The presence of both the parasite and other immune system mediators, including cytokines, might influence the expression of these chemokines 34,35. In experimental models, infection with T. cruzi has been shown to induce the expression of beta chemokines such as CCL3 and CCL2, which are direct inducers of iNOS in macrophages and are involved in parasite control 36. Studies using experimental models have shown that CXCL9 is correlated with the expression of IFN-γ and TNF-α and is involved in the recruitment of inflammatory cells in chagasic myocarditis. Since CXCL9 can induce the migration of IFN-γ-producing T lymphocytes through CXCR3 receptor signaling 37,38, the production of this chemokine stimulated by the presence of the parasite may contribute to the activation of a local immune response that is able to control parasite growth. Using an experimental model, Hardison et al. 39 demonstrated that the chemokines CXCL9 and CCL5 predominate during the acute and chronic phases of experimental Chagas disease and that CXCL9, together with CXCL10, is responsible for the control of replication of T. cruzi.
The present study suggests that the interaction of cells of the innate immune system with live trypomastigote forms of T. cruzi triggers the production of a set of mediators involved in immune response regulation.The synthesis of cytokines and chemokines induced by T. cruzi is decisive for the development of the adaptive immune response involved in parasite control and in the formation of lesions characteristic of the chronic phase of Chagas disease.
The authors declare that there is no conflict of interest.

This study was supported by Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) and Fundação de Amparo a Pesquisa de Minas Gerais (FAPEMIG).
"year": 2012,
"sha1": "17e5e24bfd4a7250023ba655272204de107191b6",
"oa_license": "CCBY",
"oa_url": "https://www.scielo.br/j/rsbmt/a/f8GPsQppd6kTmtKkRKL68hs/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "17e5e24bfd4a7250023ba655272204de107191b6",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
The Complexity of Routing with Few Collisions
We study the computational complexity of routing multiple objects through a network in such a way that only few collisions occur: Given a graph $G$ with two distinct terminal vertices and two positive integers $p$ and $k$, the question is whether one can connect the terminals by at least $p$ routes (e.g. paths) such that at most $k$ edges are time-wise shared among them. We study three types of routes: traverse each vertex at most once (paths), each edge at most once (trails), or no such restrictions (walks). We prove that for paths and trails the problem is NP-complete on undirected and directed graphs even if $k$ is constant or the maximum vertex degree in the input graph is constant. For walks, however, it is solvable in polynomial time on undirected graphs for arbitrary $k$ and on directed graphs if $k$ is constant. We additionally study for all route types a variant of the problem where the maximum length of a route is restricted by some given upper bound. We prove that this length-restricted variant has the same complexity classification with respect to paths and trails, but for walks it becomes NP-complete on undirected graphs.
Introduction
We study the computational complexity of determining bottlenecks in networks. Consider a network in which each link has a certain capacity. We want to send a set of objects from point s to point t in this network, each object moving at a constant rate of one link per time step. We want to determine whether it is possible to send our (predefined number of) objects without congestion and, if not, which links in the network we have to replace by larger-capacity links to make it possible.
Apart from determining bottlenecks, the above-described task arises when securely routing very important persons [15] or packages in a network [2], routing container-transporting vehicles [18], and generally may give useful insights into the structure and robustness of a network. A further motivation is congestion avoidance in routing fleets of vehicles, a problem treated by recent commercial software products (e.g. http://nunav.net/) and poised to become more important as passenger cars and freight cars become more and more connected. Assume that we have many requests on computing a route for a set of vehicles from a source location to a target location, as it happens in daily commuting traffic. Then the idea is to centrally compute these routes, taking into account the positions in space and time of all other vehicles. To avoid congestion, we try to avoid the same street appearing on two of the routes at the same time.
A first approximation to determine such bottlenecks would be to compute the set of minimum cuts between s and t. However, by daisy chaining our objects, we may avoid such "bottlenecks" and, hence, save on costs for improving the capacity of our links. Apart from the (static) routes we have to take into account the traversals in time that our objects take.
Formally, we are given an undirected or directed graph with marked source and sink vertex. We ask whether we can construct routes between the source and the sink in such a way that these routes share as few edges as possible. By routes herein we mean either paths, trails, or walks, modeling different restrictions on the routes: A walk is a sequence of vertices such that for each consecutive pair of vertices in the sequence there is an edge in the graph. A trail is a walk where each edge of the graph appears at most once. A path is a trail that contains each vertex at most once. We say that an edge is shared by two routes if the edge appears at the same position in the sequence of the two routes. The sequence of a route can be interpreted as the description of where the object taking this route is at which time. So we arrive at the following core problem:

Routing with Collision Avoidance (RCA)
Input: A graph G = (V, E), two distinct vertices s, t ∈ V, and two integers p ≥ 1 and k ≥ 0.
Question: Are there p s-t routes that share at most k edges?

This definition is inspired by the Minimum Shared Edges (MSE) problem [6,15,20], in which an edge is already shared if it occurs in two routes, regardless of the time of traversal. Finally, note that finding routes from s to t also models the general case of finding routes between a set of sources and a set of sinks.
Considering our introductory motivating scenarios, it is reasonable to restrict the maximal length of the routes. For instance, when routing vehicles in daily commuting traffic while avoiding congestion, the routes should be reasonably short. Motivated by this, we study the following variant of RCA.
Fast Routing with Collision Avoidance (FRCA) Input: A graph G = (V, E), two distinct vertices s, t ∈ V , and three integers p, α ≥ 1 and k ≥ 0. Question: Are there p s-t routes each of length at most α that share at most k edges?
In the problem variants Path-RCA, Trail-RCA, and Walk-RCA, the routes are restricted to be paths, trails, or walks, respectively (analogously for FRCA).
[Table 1: Overview of our results for Path-(F)RCA, Trail-(F)RCA, Walk-RCA, and Walk-FRCA on undirected graphs, directed graphs, and DAGs; see the theorems and corollaries referenced below.]

Our Contributions. We give a full computational complexity classification of RCA and FRCA (except Walk-FRCA) with respect to the three mentioned route types; with respect to undirected, directed, and directed acyclic input graphs; and distinguishing between constant and arbitrary budget. Table 1 summarizes our results. To our surprise, there is no difference between paths and trails in our classification. Both Path-RCA (Section 4) and Trail-RCA (Section 5) are NP-complete in all of our cases except on directed acyclic graphs when k ≥ 0 is constant (Section 3). We show that the problems remain NP-complete on undirected and directed graphs even if k ≥ 0 is constant or the maximum degree is constant. We note that the Minimum Shared Edges problem is solvable in polynomial time when the number of shared edges is constant, highlighting the difference to its time-variant Path-RCA.
The computational complexity of the length-restricted variant FRCA for paths and trails equals the one of the variant without length restrictions. The variant concerning walks (Section 6) however differs from the other two variants as it is tractable in more cases, in particular on undirected graphs. (We note that almost all of our tractability results rely on flow computations in time-expanded networks (see, e.g., Skutella [19]).) Remarkably, the tractability does not transfer to the length-restricted variant Walk-FRCA, as it becomes NP-complete on undirected graphs. This is the only case where RCA and FRCA differ with respect to their computational complexity.
Related Work. As mentioned, Minimum Shared Edges inspired the definition of RCA. MSE is NP-hard on directed [15] and undirected [5,6] graphs. In contrast to RCA, if the number of shared edges equals zero, then MSE is solvable in polynomial time. Moreover, MSE is W[2]-hard with respect to the number of shared edges and fixed-parameter tractable with respect to the number of paths [6]. MSE is polynomial-time solvable on graphs of bounded treewidth [20,1].
There are various tractability and hardness results for problems related to RCA with k = 0 in temporal graphs, in which edges are only available at predefined time steps [3,10,14,13]. The goal herein is to find a number of edge or vertex-disjoint time-respecting paths connecting two fixed terminal vertices. Time-respecting means that the time steps of the edges in the paths are nondecreasing. Apart from the fact that all graphs that we study are static, the crucial difference is in the type of routes: vehicles moving along time-respecting paths may wait an arbitrary number of time steps at each vertex, while we require them to move at least one edge per time step (unless they already arrived at the target vertex).
Our work is related to flows over time, a concept already introduced by Ford and Fulkerson [7] to measure the maximal throughput in a network over a fixed time period. This and similar problems were studied continually, see Skutella [19] and Köhler et al. [12] for surveys. In contrast, our throughput is fixed, our flow may not stand still or go in circles arbitrarily, and we want to augment the network to allow for our throughput.
Preliminaries
We define [n] := {1, . . . , n} for every n ∈ N. Let G = (V, E) be an undirected (directed) graph. Let the sequence P = (v_1, . . . , v_ℓ) of vertices in G be a walk, trail, or path. We call v_1 and v_ℓ the start and end of P. For i ∈ [ℓ], we denote by P[i] the vertex v_i at position i in P. Moreover, for i, j ∈ [ℓ], i < j, we denote by P[i, j] the subsequence (v_i, . . . , v_j) of P. By definition, P has an alternative representation as a sequence of edges (arcs) P = (e_1, . . . , e_{ℓ−1}) with e_i = {v_i, v_{i+1}} (e_i = (v_i, v_{i+1})) for each i ∈ [ℓ − 1]. Along this representation, we say that P contains/uses edge (arc) e at time step i if edge (arc) e appears at the ith position in P represented as a sequence of edges (arcs) (analogously for vertices). We call an edge/arc shared if two routes use the edge/arc at the same time step. We say that a walk/trail/path Q is an s-t walk/trail/path if s is the start and t is the end of Q. The length of a walk/trail/path is the number of edges (arcs) contained, where we also count multiple occurrences of an edge (arc) (we refer to a path of length m as an m-chain). We denote by ∆ the maximum vertex degree in G; for directed graphs, we define the maximum over in- and outdegrees in G by ∆_{i/o} := max_{v ∈ V} max{indeg(v), outdeg(v)}.

A parameterized problem P is a set of tuples (x, ℓ) ∈ Σ* × N, where Σ denotes a finite alphabet. A parameterized problem P is fixed-parameter tractable if it admits an algorithm that decides every input (x, ℓ) in f(ℓ) · |x|^{O(1)} time (FPT-time), where f is a computable function. The class FPT is the class of fixed-parameter tractable problems. The classes W[q], q ≥ 1, contain parameterized problems that are presumably not fixed-parameter tractable. For two parameterized problems P and P′, a parameterized reduction from P to P′ is an algorithm that maps each input (x, ℓ) to (x′, ℓ′) in FPT-time such that (x, ℓ) ∈ P if and only if (x′, ℓ′) ∈ P′, and ℓ′ ≤ g(ℓ) for some function g. A parameterized problem P is W[q]-hard if for every problem contained in W[q] there is a parameterized reduction to P.
Preliminary Observations on RCA and FRCA. If there is a shortest path between the terminals s and t of length at most k, then routing any number of paths along the shortest path introduces at most k shared edges. Hence, every instance of RCA with dist(s, t) ≤ k is a yes-instance.
If we consider walks, the length of an s-t walk in a graph can be arbitrarily large. We prove, however, that for paths, trails, and walks, RCA and FRCA are contained in NP, that is, each variant allows for a certificate of size polynomial in the input size that can be verified in time polynomial in the input size.

Lemma 1. Path-, Trail-, and Walk-RCA as well as their length-restricted variants are contained in NP, both on undirected and on directed graphs.

Proof. Given an instance (G, s, t, p, k) of Path-RCA and a set of p s-t paths, we can check in polynomial time whether they share at most k edges. The same holds for Trail-RCA and Walk-RCA (the latter follows from Theorem 7). This is still true for all variants on directed graphs (for walks we refer to Lemma 12). Moreover, we can additionally check in linear time whether the length of each path/trail/walk is at most some given α ∈ N. Hence, the statement follows.
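The certificate check itself is elementary. The following sketch (our illustration, not code from the paper) counts the shared edges of a given set of routes over an undirected graph, where each route is a vertex sequence:

```python
# Count edges that are traversed by at least two routes at the same time step.
# An edge is counted once, even if it is shared at several time steps.
def count_shared_edges(routes):
    usage = {}  # (time step, edge) -> number of routes using it
    for route in routes:
        for i in range(len(route) - 1):
            edge = frozenset((route[i], route[i + 1]))  # undirected edge
            usage[(i, edge)] = usage.get((i, edge), 0) + 1
    return len({edge for (i, edge), c in usage.items() if c >= 2})

# Two routes meeting on edge {b, t} at the third time step: one shared edge.
assert count_shared_edges([["s", "a", "b", "t"], ["s", "c", "b", "t"]]) == 1
```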
Everything is Equal on DAGs
Note that on directed acyclic graphs, every walk contains each edge and each vertex at most once. Hence, every walk is a path in DAGs, implying that all three types of routes are equivalent in DAGs.
We prove that RCA is solvable in polynomial time if the number k of shared arcs is constant, but NP-complete if k is part of the input. Moreover, we prove that the same holds for the length-restricted variant FRCA. We start the section with the case of constant k ≥ 0.

Theorem 1. For every constant k ≥ 0, RCA and FRCA on DAGs are solvable in polynomial time.

We prove Theorem 1 as follows: We first show that RCA and FRCA on DAGs are solvable in polynomial time if k = 0 (Theorem 2 below). We then show that an instance of RCA and FRCA on directed graphs is equivalent to deciding, for all k-sized subsets K of arcs, the instance with k = 0 and a modified input graph in which each arc in K has been copied p times:
Constant Number of Shared Arcs
We need the notion of time-expanded graphs: for a directed graph G = (V, A) and τ ∈ N, the τ-time-expanded graph of G is the directed graph H with vertex set {v^i | v ∈ V, 0 ≤ i ≤ τ} and arc set {(v^{i−1}, w^i) | (v, w) ∈ A, i ∈ [τ]}.

Note that for every directed n-vertex m-arc graph the τ-time-expanded graph can be constructed in O(τ · (n + m)) time. We prove that we can decide RCA and FRCA by flow computation in the time-expanded graph of the input graph:

Lemma 2. Let G = (V, A) be a directed graph with two distinct vertices s, t ∈ V, let τ ∈ N, and let H be the τ-time-expanded graph of G with p additional arcs (t^{i−1}, t^i) for each i ∈ [τ]. Then G allows for p s-t walks, each of length at most τ, that share no arc if and only if H allows for an s^0-t^τ flow of value p.

Proof. (⇒) Let G allow for p s-t walks W_1, . . . , W_p not sharing any arc. We construct an s^0-t^τ flow of value p in H as follows. Observe that W_i = (v_0, . . . , v_ℓ) corresponds to a path P = (v_0^0, v_1^1, . . . , v_ℓ^ℓ) in H. If ℓ < τ, then extend this path to the path P = (v_0^0, . . . , v_ℓ^ℓ, t^{ℓ+1}, . . . , t^τ) (observe that v_ℓ^ℓ = t^ℓ). Set the flow on the arcs of P to one. Since W_1, . . . , W_p do not share any arc in G, the corresponding paths in H are arc-disjoint, and extending the flow as described above for each walk by one yields an s^0-t^τ flow of value p.
(⇐) Let H allow for an s^0-t^τ flow of value p. It is well known that any s^0-t^τ flow of value p in H can be turned into p arc-disjoint s^0-t^τ paths in H [11].
Observe that every s^0-t^τ path (v_0^0, v_1^1, . . . , v_τ^τ) in H corresponds to a vertex sequence (v_0, . . . , v_τ) in G which, after truncating all trailing repetitions of t, is an s-t walk in G. Let P be the set of p s^0-t^τ paths in H obtained from an s^0-t^τ flow of value p, and let W be the set of p s-t walks in G obtained from P as described above. As every pair of paths in P is arc-disjoint, no pair of walks in W shares any arc in G.
Theorem 2. Walk-RCA on DAGs is solvable in polynomial time if k = 0.

Proof of Theorem 2. Let (G = (V, A), s, t, p, 0) be an instance of Walk-RCA with G being a directed acyclic graph. Let n := |V|. We construct the directed n-time-expanded graph H of G with p additional arcs (t^{i−1}, t^i) for each i ∈ [n]. Note that any s-t path in G is of length at most n − 1 due to G being directed and acyclic. The statement then follows from Lemma 2.
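To make the flow formulation concrete, here is a minimal sketch of the algorithm behind Theorem 2 (our illustration; networkx and all identifiers are our assumptions, not the authors' code):

```python
# Build the n-time-expanded graph of the DAG, give the sink "waiting" arcs,
# and test for an s^0-t^tau flow of value p (Lemma 2 / Theorem 2).
import networkx as nx

def walk_rca_dag_k0(G: nx.DiGraph, s, t, p: int) -> bool:
    tau = G.number_of_nodes()  # any s-t path in a DAG has fewer than n arcs
    H = nx.DiGraph()
    for i in range(1, tau + 1):
        for (v, w) in G.edges():
            H.add_edge((v, i - 1), (w, i), capacity=1)
        # p parallel arcs (t^{i-1}, t^i), modeled as one arc of capacity p
        H.add_edge((t, i - 1), (t, i), capacity=p)
    return nx.maximum_flow_value(H, (s, 0), (t, tau)) >= p
```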
Lemma 2 is directly applicable to FRCA by constructing an α-time-expanded graph. Hence, we obtain the following.

Corollary 1. Walk-FRCA on DAGs is solvable in polynomial time if k = 0.
Let G = (V, A) be a directed graph and let K ⊆ A and x ∈ N. We denote by G(K, x) the graph obtained from G by replacing each arc (v, w) ∈ K by x parallel copies (v, w)^1, . . . , (v, w)^x.

Lemma 3. Let G = (V, A) be a directed graph with two distinct vertices s, t ∈ V and let p ≥ 1 and k ≥ 0. Then G allows for p s-t walks sharing at most k arcs if and only if there is a set K ⊆ A with |K| ≤ k such that G(K, p) allows for p s-t walks with no shared arc.

Proof. (⇒) Let G allow for a set of p s-t walks W = {W_1, . . . , W_p} sharing at most k arcs. Let K ⊆ A denote the set of at most k arcs shared by the walks in W. We construct a set of p s-t walks W′ = {W′_1, . . . , W′_p} in G(K, p) as follows: whenever an arc (v, w) ∈ K appears in W_i, we replace the arc by its copy (v, w)^i to obtain W′_i. Observe that (i) W′_i forms an s-t walk in G(K, p), (ii) the positions of the arcs in A \ K in the walks remain unchanged, and (iii) for each arc (v, w) ∈ K, no two walks contain the same copy of the arc. As the arcs in K are the only shared arcs of the walks in W, the walks in W′ do not share any arc in G(K, p).

(⇐) Let K ⊆ A be a subset of arcs in G with |K| ≤ k such that G(K, p) allows for a set of p s-t walks W′ = {W′_1, . . . , W′_p} with no shared arc. We construct a set of p s-t walks W = {W_1, . . . , W_p} in G as follows: whenever a copy (v, w)^j of an arc (v, w) ∈ K appears in W′_i, we replace the arc by its original (v, w) to obtain W_i. Observe that (i) W_i forms an s-t walk in G and (ii) the positions of the arcs in A \ K in the walks remain unchanged. As only the arcs in the set K of at most |K| ≤ k arcs can appear at the same positions in any pair of two walks in W, the s-t walks in W share at most k arcs in G.
Note that, as the lengths of the walks do not change in the proof, the statement of the lemma also holds for Walk-FRCA.
Proof of Theorem 1. Let (G = (V, A), s, t, p, k) be an instance of Walk-RCA with G being a directed acyclic graph. For each k-sized subset K ⊆ A of arcs in G, we decide the instance (G(K, p), s, t, p, 0). The statement for RCA then follows from Lemma 3 and Theorem 2. We remark that the value of a maximum flow between two terminals in an n-vertex m-arc graph can be computed in O(n · m) time [16]. The running time of the algorithm is in O(|A|^k · |V|^3 · |A|). The statement for FRCA follows analogously with Lemma 3 and Corollary 1.
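Combining Lemma 3 with the flow test gives the brute-force procedure of Theorem 1; a sketch (again our illustration under the same assumptions as above, with the p parallel copies of an arc in K modeled as one arc of per-time-step capacity p) could look as follows:

```python
# For every k-sized arc subset K, build the time-expanded graph of G(K, p)
# implicitly via capacities, and test the k = 0 case by a max-flow computation.
from itertools import combinations
import networkx as nx

def walk_rca_dag(G: nx.DiGraph, s, t, p: int, k: int) -> bool:
    tau = G.number_of_nodes()
    for K in map(set, combinations(G.edges(), k)):
        H = nx.DiGraph()
        for i in range(1, tau + 1):
            for (v, w) in G.edges():
                # an arc in K may be used by all p walks simultaneously
                H.add_edge((v, i - 1), (w, i),
                           capacity=p if (v, w) in K else 1)
            H.add_edge((t, i - 1), (t, i), capacity=p)
        if nx.maximum_flow_value(H, (s, 0), (t, tau)) >= p:
            return True
    return False
```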
Arbitrary Number of Shared Arcs
If the number k of shared arcs is part of the input, then both RCA and FRCA are NP-complete and W[2]-hard with respect to k.

Theorem 3. RCA on DAGs is NP-complete and W[2]-hard with respect to the number k of shared arcs.

The construction in the reduction for Theorem 3 is similar to the one used by Omran et al. [15, Theorem 2]. Herein, we give a (parameterized) many-one reduction from the NP-complete [9] Set Cover problem: given a set U = {u_1, . . . , u_n}, a set F = {F_1, . . . , F_m} of subsets of U, and an integer ℓ, the question is whether there is a subset F′ ⊆ F with |F′| ≤ ℓ such that every element of U is contained in at least one set in F′. We say that F′ is a set cover and we say that the elements in F ∈ F are covered by F. Note that Set Cover is W[2]-complete with respect to the solution size ℓ in question [4]. In the following Construction 1, given a Set Cover instance, we construct the DAG in an equivalent RCA or FRCA instance.
Construction 1. Let a set U = {u_1, . . . , u_n}, a set F = {F_1, . . . , F_m} of subsets of U, and an integer ℓ ≤ m be given. Construct a directed acyclic graph G = (V, A) as follows. Initially, let G be the empty graph. Add the vertex sets V_U = {v_1, . . . , v_n} and V_F = {w_1, . . . , w_m}, corresponding to U and F, respectively. Add the arc (v_i, w_j) to G if and only if u_i ∈ F_j. Next, add the vertex s to G. For each w ∈ V_F, add an (ℓ + 2)-chain to G connecting s with w, and direct all edges in the chain from s towards w. For each v ∈ V_U, add an (ℓ + 1)-chain to G connecting s with v, and direct all edges in the chain from s towards v. Finally, add the vertex t to G and add the arcs (w, t) for all w ∈ V_F.
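For concreteness, Construction 1 is easy to realize programmatically; the sketch below (our illustration with made-up helper names) builds the reduction DAG with networkx:

```python
# Build the DAG of Construction 1 from a Set Cover instance (U, F, ell).
import networkx as nx

def construction1(U, F, ell):
    G = nx.DiGraph()

    def add_chain(a, b, length):
        # a directed path with `length` arcs from a to b, via fresh vertices
        prev = a
        for j in range(length - 1):
            nxt = ('chain', a, b, j)
            G.add_edge(prev, nxt)
            prev = nxt
        G.add_edge(prev, b)

    for u in U:                      # element vertices V_U
        add_chain('s', ('v', u), ell + 1)
    for j, Fj in enumerate(F):       # set vertices V_F
        add_chain('s', ('w', j), ell + 2)
        G.add_edge(('w', j), 't')
        for u in Fj:
            G.add_edge(('v', u), ('w', j))
    return G

# e.g. construction1({1, 2, 3}, [{1, 2}, {2, 3}], ell=2)
```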
Lemma 4. Let U, F, ℓ, and G be as in Construction 1. Then there are at most ℓ sets in F such that their union is U if and only if G admits n + m s-t walks sharing at most ℓ arcs in G.

Proof. (⇒) Let F′ ⊆ F be a set cover of size at most ℓ, and let {w′_1, . . . , w′_ℓ} ⊆ V_F denote the vertices corresponding to the sets in F′. We construct n + m s-t walks as follows. Each outgoing chain on s corresponds to exactly one s-t walk. Those walks that start with the chains connecting s with a w ∈ V_F are extended directly to t (there is no other choice). For all the other walks, as F′ is a set cover, each vertex in V_U has an out-neighbor in {w′_1, . . . , w′_ℓ}. Route the walks arbitrarily towards one out of {w′_1, . . . , w′_ℓ} and then forward to t. Observe that all walks contain exactly one vertex in V_F at time step ℓ + 3. Moreover, only the arcs (w′_1, t), . . . , (w′_ℓ, t) are contained in more than one walk. As they are at most ℓ, the claim follows.

(⇐) Suppose G admits a set W of n + m s-t walks sharing at most ℓ arcs in G. Observe first that the arcs of the form (w, t), w ∈ V_F, are the only arcs that can be shared whenever at most ℓ arcs are shared, due to the fact that each outgoing chain on s is of length larger than ℓ. Moreover, each arc (w, t), w ∈ V_F, is contained in at least one s-t walk in W, because no two walks in W can share a chain outgoing from s, and for each w ∈ V_F the only outgoing arc on w has endpoint t. Denote by W ⊆ V_F the set of vertices such that the set {(w, t) | w ∈ W} is exactly the set of shared arcs of the n + m s-t walks in W. Observe that |W| ≤ ℓ. We claim that the set of sets F′, containing the sets that by construction correspond to the vertices in W, forms a set cover for U. We show that for each element u ∈ U there is a w ∈ W such that the set corresponding to w contains u.
Let u ∈ U be an arbitrary element of U. Consider the walk P ∈ W containing the vertex v ∈ V_U corresponding to element u. As P forms an s-t walk in G, walk P contains a vertex w′ ∈ V_F. As discussed before, there is a walk P′ containing the chain from s to w′ not containing any vertex in V_U. By construction, w′ is at time step ℓ + 3 in both P and P′. As the only outgoing arc on w′ is (w′, t), both P and P′ use the arc (w′, t) at time step ℓ + 3, and hence (w′, t) is shared by P and P′. It follows that w′ ∈ W, and hence u is covered by the set in F′ corresponding to w′.
Proof of Theorem 3. We give a (parameterized) many-one reduction from Set Cover to RCA. Let (U, F , ℓ) be an instance of Set Cover. We construct the instance (G, s, t, p, k), where G is obtained by applying Construction 1, p = |U | + |F |, and k = ℓ. The correctness of the reduction then follows from Lemma 4. Finally, note that as k = ℓ and Set Cover is W[2]-hard with respect to the size ℓ of the set cover, it follows that RCA is W[2]-hard with respect to the number k of shared arcs.
Observe that each s-t walk in the graph obtained from Construction 1 is of length at most ℓ + 3.
Hence, in the proof of Theorem 3, we can instead reduce to an instance (G, s, t, p, k, α) of FRCA, where G is obtained by applying Construction 1, p = |U| + |F|, k = ℓ, and α = ℓ + 3. Therewith, we obtain the following.
Corollary 2. FRCA on DAGs is NP-complete and W[2]-hard with respect to k + α.
Path-RCA
In this section, we prove the following theorem.
Theorem 4. Path-RCA both on undirected planar and directed planar graphs is NP-complete, even if k ≥ 0 is constant or the maximum degree is constant.
In the proof of Theorem 4, we reduce from the following NP-complete [8] problem (a cubic graph is a graph where every vertex has degree exactly three):
Planar Cubic Hamiltonian Cycle (PCHC)
Input: An undirected, planar, cubic graph G = (V, E).
Question: Is there a cycle in G that visits each vertex exactly once?
Roughly, the instance of Path-RCA obtained in the reduction consists of the original graph G connected to the terminals s, t via a bridge (see Figure 1). We ask for constructing roughly n paths connecting the terminals, where n is the number of vertices in the input graph of PCHC. All but one of these paths will use the bridge to t in the constructed graph for n time steps in total, each in a different time step. Thus, this bridge is occupied for roughly n time steps, and the final path is forced to stay in the input graph of PCHC for n time steps. For a path, this is only possible by visiting each of the n vertices in the graph exactly once, and hence it corresponds to a Hamiltonian cycle. The reduction to prove Theorem 4 uses the following Construction 2.
Construction 2. Let G = (V, E) be an undirected, planar, cubic graph with n = |V|. Construct in time polynomial in the size of G an undirected planar graph G′ as follows (refer to Figure 1 for an illustration of the constructed graph). Let initially G′ be the empty graph. Add a copy of G to G′. Denote the copy of G in G′ by H. Next, add the new vertices s, t, v, w to G′. Connect s with v, and w with t, by an edge each. For each m ∈ {4, 5, . . . , n + 1}, add an m-chain connecting s with w. Next, consider a fixed plane embedding φ(G) of G. Let x_1 denote a vertex incident to the outer face in φ(G). Then, there are two neighbors x_2 and x_3 of x_1 also incident to the outer face in φ(G). Add the edges {v, x_1}, {x_2, w}, and {x_3, w} to G′, completing the construction of G′. We remark that G′ is planar as it allows a plane embedding (see Figure 1) using φ as an embedding of H.

Lemma 5. Let G and G′ be as in Construction 2. Then G admits a Hamiltonian cycle if and only if G′ admits n − 1 s-t paths with no shared edge.

Proof. (⇐) Let P denote a set of n − 1 s-t paths in G′ with no shared edge. Note that the degree of s is equal to n − 1. As no two paths in P share any edge in G′, each path in P uses a different edge incident to s. This implies that n − 2 paths in P uniquely contain each of the chains connecting s with w, and one path P ∈ P contains the edge {s, v}. Note that each of the n − 2 paths contains the vertex w at most once, and since they contain the chains connecting s with w, the edge {w, t} appears at the time steps {5, 6, . . . , n + 2} in these n − 2 paths. Hence, the path P has to contain the edge {w, t} at a time step smaller than five or larger than n + 2. Observe that, by construction, the shortest path between s and w is of length 4 and, thus, P cannot contain the edge {w, t} at any time step smaller than five. Hence, P has to contain the edge at time step at least n + 3. Since the distance between s and x_1 is two, and the distance from x_2, x_3 to w is one, P has to visit each vertex in H exactly once, starting at x_1, and ending at one of the two neighbors x_2 or x_3 of x_1. Hence, P restricted to H describes a Hamiltonian path in H, which can be extended to a Hamiltonian cycle by adding the edge {x_1, x_2} in the first or {x_1, x_3} in the second case.

(⇒) Let G admit a Hamiltonian cycle C. Since C contains every vertex in G exactly once, it contains x_1 and its neighbors x_2 and x_3. Since C forms a cycle in G and G is cubic, at least one of the edges {x_1, x_2} or {x_1, x_3} appears in C. Let C′ denote an ordering of the vertices in C such that x_1 appears first and the neighbor x ∈ {x_2, x_3} of x_1 with {x_1, x} contained in C appears last. We construct n − 1 s-t paths without sharing an edge. First, we construct n − 2 s-t paths, each containing a different chain connecting s with w and the edge {w, t}. Observe that since the length of each chain is unique, no edge (in particular, not {w, t}) is shared. Finally, we construct the one remaining s-t path P as follows. We lead P from s to x_1 via v, then following C′ in H to x, and then from x to t via w. Observe that P has length n + 3 and contains the edge {w, t} at time step n + 3. Hence, no edge is shared, as the path containing the (n + 1)-chain contains the edge {w, t} at time step n + 2. We constructed n − 1 s-t paths in G′ with no shared edge.
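The gadget of Construction 2 can be assembled mechanically once the outer-face vertices x_1, x_2, x_3 of a plane embedding are known; the following sketch (our illustration, taking x1, x2, x3 as given) builds G′ with networkx:

```python
# Attach s, v, w, t and the m-chains for m = 4, ..., n+1 to a copy of the
# planar cubic input graph G, as in Construction 2 (undirected case only).
import networkx as nx

def construction2(G: nx.Graph, x1, x2, x3):
    n = G.number_of_nodes()
    Gp = G.copy()  # the copy H of G
    Gp.add_edges_from([('s', 'v'), ('w', 't'), ('v', x1), (x2, 'w'), (x3, 'w')])
    for m in range(4, n + 2):  # one m-chain from s to w per length m
        prev = 's'
        for j in range(m - 1):
            nxt = ('chain', m, j)
            Gp.add_edge(prev, nxt)
            prev = nxt
        Gp.add_edge(prev, 'w')
    return Gp
```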
Note that the maximum degree of the graph obtained in Construction 2 depends on the number of vertices in the input graph. In what follows, we give a second construction where the obtained graph has constant maximum degree ∆ = 4.

Construction 3. Let G = (V, E) be an undirected, planar, cubic graph with n = |V|. Construct in time polynomial in the size of G an undirected planar graph G′ as follows (refer to Figure 2 for an illustration of the constructed graph). Let initially G′ be the graph obtained from Construction 2. Remove s, w, and the chains connecting s with w from G′. Add a vertex u to G′ and add the edges {x_2, u} and {x_3, u} to G′. Let η be the smallest power of two larger than n (note that n ≤ η ≤ 2n − 2). Add a complete binary tree T_s with η leaves to G′, and denote its root by s. Denote the leaves by a_1, . . . , a_η, ordered by a post-order traversal on T_s. Next, add a copy of T_s to G′, and denote the copy by T_w and its root by w. If a_i is a leaf of T_s, denote by a′_i its copy in T_w. Next, for each i ∈ [η], connect a_i and a′_i via an (η + i)-chain. Finally, connect s with v via an (η + log(η) + (η − n))-chain, connect u with w via a log(η)-chain, and add the edge {w, t} to G′, which completes the construction of G′. Note that T_s and T_w allow plane drawings, and the chains connecting the leaves can be aligned as illustrated in Figure 2. It follows that G′ allows for a plane embedding.

Lemma 6. Let G and G′ be as in Construction 3. Then G admits a Hamiltonian cycle if and only if G′ allows for at least η + 1 s-t paths with at most η − 2 shared edges.
Proof. (⇒) Let G admit a Hamiltonian cycle C. As discussed in the proof of Lemma 5, there is an ordering C′ of the vertices in C such that x_1 appears first and x ∈ {x_2, x_3} appears last in C′. We construct η s-t paths as follows. We route them from s in T_s to the leaves of T_s in such a way that each path contains a different leaf of T_s. Herein, η − 2 edges are shared. Next, route each of them via the chain connecting the leaf to the corresponding leaf in T_w, then to w, and finally to t. In this part, no edge is shared, as the lengths of the chains connecting the leaves of T_s and T_w are pairwise different. Hence, the η s-t paths contain the edge {w, t} at the time steps 2 log(η) + η + i + 1 for each i ∈ [η]. We construct the one remaining s-t path P as follows. The path P contains the chain connecting s with v and the edge {v, x_1}. Then P follows C′ in H to x ∈ {x_2, x_3}, via the edge {x, u} to u, via the chain connecting u with w to w, and finally to t via the edge {w, t}. Observe that P contains the edge {w, t} at the time step 2η + 2 log(η) + 2, and hence does not share any further edge in G′.
(⇐) Let P be a set of η + 1 s-t paths in G′ sharing at most k := η − 2 edges. First observe that no two paths contain the chain connecting s with v, as otherwise more than k edges are shared. Hence, at most one s-t path leaves s via the chain to v. It follows that at least η paths leave s via the edges in T_s. By the definition of paths, each of these s-t paths arrives at a leaf of T_s at the same time step. Suppose at least two s-t paths contain the same leaf of T_s. As each leaf is of degree two, the s-t paths follow the chain towards a leaf of T_w simultaneously. This introduces at least η + 1 > k shared edges, contradicting the choice of P. It follows that exactly η s-t paths leave s via T_s (denote the set by P′), and they each arrive at a different leaf of T_s at time step log(η). Moreover, by construction, each path in P′ arrives at a different leaf of T_w at the time steps log(η) + η + i + 1 for every i ∈ [η].
We next discuss why no path in P′ contains more than one chain connecting a pair (a, a′) of leaves, where a and a′ are leaves of T_s and T_w, respectively. Assume that there is a path P′ ∈ P′ containing at least two chains connecting the pairs (a, a′) and (b, b′) of leaves, where a, b and a′, b′ are leaves of T_s and T_w, respectively, and vertex a appears at the smallest time step over all such leaves of T_s in P′ and b′ appears at the smallest time step over all such leaves of T_w in P′ (recall that P′ must contain a leaf of T_s at a smaller time step than every leaf in T_w). By construction, a′ and b′ are the copies of a and b in T_w. Let r denote the vertex in T_s such that r is the root of the subtree of minimum height in T_s containing a and b as leaves. Let r′ denote its copy in T_w. Observe that, by construction, r′ is the root of the subtree of minimum height in T_w containing a′ and b′ as leaves. As P′ starts at vertex s, and the path from s to a in T_s is unique, P′ contains the vertex r at the smallest time step among the vertices in X := {r, a, a′, r′, b′, b}. As vertex a appears at the smallest time step over all leaves of T_s and a is of degree two, P′ contains the vertices a and a′ at the second and third smallest time steps, respectively, among the vertices in X. As in any tree the unique path between every two leaves contains the root of the subtree of minimum height containing the leaves, P′ contains the vertex r′ at the fourth smallest time step among the vertices in X. Finally, as vertex b′ appears at the smallest time step over all leaves of T_w and b′ is of degree two, P′ contains the vertices b′ and b at the fifth and sixth smallest time steps, respectively, among the vertices in X. In summary, the vertices in X appear in P′ in the order (r, a, a′, r′, b′, b). Now, observe that {r, r′} forms a b-t separator in G′, that is, there is no b-t path in G′ − {r, r′}. As P′ contains r and r′ at smaller time steps than b, P′ contains a vertex different from t at its last time step. This contradicts the fact that P′ is an s-t path in G′. It follows that no path in P′ contains more than one chain connecting a pair consisting of a leaf of T_s and a leaf of T_w.
It follows that the paths in P′ contain the edge {w, t} at the time steps 2 log(η) + η + i + 1 for each i ∈ [η]. Hence, the remaining s-t path containing the chain connecting s with v, denoted by P, has to contain the edge {w, t} at time step at most 2 log(η) + η + 1 or at least 2 log(η) + 2η + 2. P can contain the edge {w, t} at time step 2 log(η) + η + (η − n) + 3 > 2 log(η) + η + 1 at the earliest, and thus, path P has to contain edge {w, t} at time step at least 2 log(η) + 2η + 2. This is only possible if P forms a path in H that visits each vertex in H, starting at x_1 and ending at vertex x ∈ {x_2, x_3}. As the edge {x_1, x} is contained in H, it follows that P restricted to H forms a Hamiltonian cycle in G.
Proof of Theorem 4. We provide a many-one reduction from PCHC to Path-RCA on undirected graphs via Construction 2 (for a constant number k of shared edges) on the one hand, and Construction 3 (for constant maximum degree ∆) on the other. Let G be an instance of PCHC with n := |V(G)|.
Via Construction 2. Let (G ′ , s, t, p, 0) be an instance of Path-RCA where G ′ is obtained from G by applying Construction 2 and p = n − 1. Note that (G ′ , s, t, p, 0) is constructed in polynomial time and, by Lemma 5, G is a yes-instance of PCHC if and only if (G ′ , s, t, p, 0) is a yes-instance of Path-RCA.
The case of constant k > 0. Reduce (G ′ , s, t, p, 0) to an equivalent instance (G ′ k , s ′ , t, p, k) of Path-RCA with k > 0 as follows. Let G ′ k denote the graph obtained from G ′ by the following modification: Add a chain of length k to G ′ , and identify one endpoint with s and denote by s ′ the other endpoint. Set s ′ as the new source. Observe that any s ′ -t path in G ′ k contains the k-chain appended on s, and hence, any solution introduces exactly k shared edges.
The directed case. Direct the edges in G′ as follows. Direct each chain connecting s with w from s towards w. (In the case of k > 0, also direct the chain from s′ towards s.) Direct the edges {s, v}, {v, x_1}, {x_2, w}, {x_3, w}, and {w, t} as (s, v), (v, x_1), (x_2, w), (x_3, w), and (w, t). Finally, replace each edge {a, b} in H by two (anti-parallel) arcs (a, b), (b, a) to obtain the directed variant of H. The correctness follows from the fact that we consider paths, which are not allowed to contain vertices more than once. Note that the planarity is not destroyed.
Via Construction 3. Let (G ′ , s, t, p, k) be an instance of Path-RCA where G ′ is obtained from G by applying Construction 3, p = η + 1, and k = η − 2. Note that (G ′ , s, t, p, k) is constructed in polynomial time and, by Lemma 6, G is a yesinstance of PCHC if and only if (G ′ , s, t, p, k) is a yes-instance of Path-RCA.
The directed case. Direct the edges in G ′ as follows. Direct the edges in T s from s towards the leaves, and the edges in T w from the leaves towards w. Direct each chain connecting T s with T w from T s towards T w . Direct the edges {v, x 1 }, {x 2 , u}, {x 3 , u}, and {w, t} as (v, x 1 ), (x 2 , u), (x 3 , u), and (w, t). Direct the chain connecting s with v from s towards v, and the chain connecting u with w from u towards w. Finally, replace each edge {a, b} in H by two (anti-parallel) arcs (a, b), (b, a) to obtain the directed variant of H. The correctness follows from the fact that we consider paths that are not allowed to contain vertices more than once. Note that the planarity is not destroyed.
As the length of every s-t path is upper bounded by the number of vertices in the graph, we immediately obtain the following.

Corollary 3. Path-FRCA both on undirected planar and directed planar graphs is NP-complete, even if k ≥ 0 is constant or the maximum degree is constant.
Trail-RCA
We now show that Trail-RCA has the same computational complexity fingerprint as Path-RCA. That is, Trail-RCA (Trail-FRCA) is NP-complete on undirected and directed planar graphs, even if the number k ≥ 0 of shared edges (arcs) or the maximum degree ∆ ≥ 5 (∆_{i/o} ≥ 3) is constant. The reductions are slightly more involved, because it is harder to force trails to follow a certain route.
On Undirected Graphs
In this section, we prove the following.
Theorem 5. Trail-RCA on undirected planar graphs is NP-complete, even if k ≥ 0 is constant or ∆ ≥ 5 is constant.
We provide two constructions supporting the two subresults for constant k and constant ∆. The reductions are again from Planar Cubic Hamiltonian Cycle (PCHC).

Construction 4. Let G = (V, E) be an undirected, planar, cubic graph with n = |V|. Construct an undirected planar graph G′ as follows (refer to Figure 3 for an illustration of the constructed graph). Initially, let G′ be the empty graph. Add a copy of G to G′ and denote the copy by H. Subdivide each edge in H and denote the resulting graph by H′. Note that H′ is still planar. Consider a plane embedding φ(H′) of H′ and let x ∈ V(H′) be a vertex corresponding to a vertex in H and incident to the outer face in the embedding. Next, add the vertex set {s, v, w, t} to G′. Add the edges {s, x}, {s, v}, {v, w}, {x, w}, and {w, t} to G′. Finally, add n − 1 vertices B = {b_1, . . . , b_{n−1}} to G′ and connect each of them with s by two edges (in the following, we distinguish these edges as {s, b_i}_1 and {s, b_i}_2, for each i ∈ [n − 1]). Note that the graph is planar (see Figure 3 for an embedding, where H′ is embedded as φ(H′)) but not simple.
Lemma 7. Let G and G ′ as in Construction 4. Then G admits a Hamiltonian cycle if and only if G ′ admits 2n s-t trails with no shared edge.
Proof. (⇒) Let G admit a Hamiltonian cycle C. Observe that H ′ allows for a cycle C ′ in H ′ that contains each vertex corresponding to a vertex in H exactly once. We construct 2n trails in G ′ as follows.
We group the trails in two groups. The trails of the first group first visit some of the vertices b_1, . . . , b_{n−1}, entering each visited b_i via the edge {s, b_i}_1 and leaving via {s, b_i}_2, i ∈ [n − 1], and then proceed to t via v. The trails of the second group first visit some of the vertices b_1, . . . , b_{n−1}, entering each visited b_i via the edge {s, b_i}_2 and leaving via {s, b_i}_1, i ∈ [n − 1], and then proceed via x, then follow the cycle C′, and finally again via x towards t. Let T^i_1, . . . , T^i_n denote the trails of group i ∈ {1, 2}. For each j < n, the trail T^i_j first visits the vertices b_j, . . . , b_{n−1} in that order before proceeding as described above. The trails T^i_n, i ∈ {1, 2}, do not contain any of the vertices b_1, . . . , b_{n−1}, and directly approach t as described above.
Observe that, within each of the two groups, no two trails share an edge. Between trails of different groups, only edge {w, t} can possibly be shared. Note that any cycle in H ′ is of even length. Hence, the trails of group 1 contain the edge {w, t} at each of the time steps 2j + 1 for every j ∈ [n]. The trails of group 2 contain the edge {w, t} at each of the time steps 2n + 2j + 1 for every j ∈ [n]. Hence, no two trails share an edge.
(⇐) Let G′ admit 2n s-t trails with no shared edge. First note that s has exactly 2n incident edges. Observe that, for each β = 0, . . . , |B|, no more than two trails contain exactly β vertices of B, as otherwise one of the edges {s, v} or {s, x} would be shared. By the pigeonhole principle it follows that, for each β = 0, . . . , |B|, there are exactly two trails that contain exactly β vertices of B. Hence, at each even time step, there are two trails leaving s via the edges {s, v} and {s, x}, respectively. Observe that those trails that proceed towards t via v use the edge {w, t} exactly at the time steps 2j + 1 for every j ∈ [n]. Because each trail in H′ that starts and ends at the same vertex has even length, those trails that proceed towards t via x can use {w, t} only at odd time steps. Hence, since the edge {w, t} is not shared, the trails proceeding towards t via x need to stay in H′ for 2n time steps. As H′ − x has maximum degree three, no vertex in H′ besides x is contained more than once in any of these trails. As every closed trail in H′ alternates between subdivision vertices and vertices corresponding to vertices in H, a closed trail of length 2n through x visits n vertices corresponding to vertices in H, that is, all of them. It follows that each of these trails forms a Hamiltonian cycle C′ in H′. As C′ can easily be turned into a Hamiltonian cycle C in G (consider the sequence obtained when deleting all vertices that do not correspond to a vertex in H), the statement follows.
To deal with the parallel edges in the graph G′ of Construction 4, we now subdivide edges, maintaining an equivalent statement to Lemma 7.

Lemma 8. Let G be a graph with two distinct vertices s and t, and let G′ be the graph obtained from G by replacing each edge e = {v, w} by a path P(e) of length three with endpoints v and w. Then G admits p s-t trails with no shared edge if and only if G′ does.

Proof. (⇒) Let P = {P_i | i ∈ [p]} be a set of p s-t trails in G with no shared edge, where P_i = (e^i_1, . . . , e^i_{ℓ_i}) is represented as a sequence of edges. For each i ∈ [p], let P′_i = (P(e^i_1), . . . , P(e^i_{ℓ_i})) denote the corresponding s-t trail in G′, and let P′ = {P′_i | i ∈ [p]}. Suppose that two trails P′_i and P′_j share an edge. Since each edge of G is replaced by exactly three edges, the shared edge is contained in the subpaths P(e^i_x) and P(e^j_x) for some position x. As P(e^i_x) and P(e^j_x) are not edge-disjoint (as they share an edge), it follows that e^i_x = e^j_x, and hence P_i and P_j share the edge e^i_x in G. This contradicts the fact that P = {P_i | i ∈ [p]} is a set of p s-t trails in G with no shared edge. It follows that P′ is a set of p s-t trails in G′ with no shared edge.
(⇐) Let P′ = {P′_i | i ∈ [p]} be a set of p s-t trails in G′ with no shared edge. Observe that, by the construction of G′, each s-t trail in G′ is composed of paths of length three with endpoints corresponding to vertices in G. Hence, for each i ∈ [p], let P′_i be represented as P′_i = (P(e^i_1), . . . , P(e^i_{ℓ_i})), where 3 · ℓ_i is the number of edges in P′_i. For each P′_i, consider the corresponding trail P_i = (e^i_1, . . . , e^i_{ℓ_i}) in G, and the set P = {P_i | i ∈ [p]}. Suppose that two trails P_i and P_j share an edge, that is, there is an index x such that e^i_x = e^j_x. It follows that P(e^i_x) = P(e^j_x). Let e^i_x = {v, w} =: e. If both trails P′_i and P′_j traverse P(e) in the same "direction", i.e., either from v to w or from w to v, then P′_i and P′_j share at least three edges (all edges in P(e)). This contradicts the definition of P′. Consider the case that the trails P′_i and P′_j traverse P(e) in opposite "directions", i.e., one from v to w and the other from w to v. As P(e) is of length three, the edge in P(e) with no endpoint in {v, w} is then used by P′_i and P′_j at the same time step, yielding that the edge is shared. This contradicts the definition of P′. It follows that P = {P_i | i ∈ [p]} is a set of p s-t trails in G with no shared edge.
We now show how to modify Construction 4 for maximum degree five, giving up, however, a constant upper bound on the number of shared edges.
Construction 5. Let G = (V, E) be an undirected planar cubic graph with n = |V|. Construct an undirected planar graph G′ as follows (see Figure 4 for an illustration of the constructed graph). Let initially G′ be the graph obtained from Construction 4. Subdivide each edge in H′ and denote the resulting graph by H′′. Observe that the distance in H′′ between any two vertices corresponding to vertices in H is divisible by four. Next, delete all edges incident with vertex s. Connect s with v via a 2n-chain, and connect s with x via a 2n-chain. Connect s with b_1 via two P_2's. Denote the two vertices on the P_2's by ℓ_1 and u_1. Finally, for each i ∈ [n − 2], connect b_i with b_{i+1} via two P_2's. For each i ∈ [n − 2], denote the two vertices on the P_2's between b_i and b_{i+1} by ℓ_{i+1} and u_{i+1}. For easier notation, we denote vertex s also by b_0.

Lemma 9. Let G and G′ be as in Construction 5. Then G admits a Hamiltonian cycle if and only if G′ admits 2n s-t trails sharing at most 2n − 4 edges.

Proof. (⇐) Let G′ admit a set P of 2n s-t trails with at most 2n − 4 shared edges. At each time step, at most two trails leave s towards v and x. Otherwise, all the edges in at least one of the 2n-chains connecting s with v and s with x are shared, contradicting the fact that the trails in P share at most 2n − 4 edges. Note that every s-t trail contains vertex s at the first time step and at most once more at time step 4j + 1, for some j ∈ N (indeed, we will show that j ∈ [n − 1]). This follows on the one hand from the fact that s has degree four and hence every trail can contain s at most twice, and on the other hand from the fact that for each i ∈ [n − 1], every s-b_i path is of even length.
We show that at each time step 4j + 1, 0 ≤ j ≤ n − 1, exactly one s-t trail leaves s towards v and exactly one s-t trail leaves s towards x. First, observe that |{b_i, u_i, ℓ_i | i ∈ [n − 1]}| = 3(n − 1) and each trail can contain each vertex in {u_i, ℓ_i | i ∈ [n − 1]} ∪ {b_{n−1}} at most once (as each vertex in this set is of degree two) and each vertex b_i, i ∈ [n − 2], at most twice (as they are of degree four). Hence, any trail starting at s and returning to s after visiting vertices in B contains at most 3(n − 1) + (n − 2) + 2 = 4(n − 1) + 1 vertices. It follows that every s-t trail contains s at the first time step and at most once more at time step 4j + 1 for some j ∈ [n − 1]. As there are 2n s-t trails and at each time step at most two trails leave s towards v and x, together with the pigeonhole principle it follows that exactly one s-t trail leaves s towards v and exactly one s-t trail leaves s towards x at each time step 4j + 1, 0 ≤ j ≤ n − 1. Moreover, note that each b_i, i ∈ [n − 1], appears in at least two s-t trails.
We claim that there are exactly 2n − 4 shared edges and that every shared edge is incident with a vertex in {u i , ℓ i | i ∈ [n − 1]}. This follows from the fact that at least three trails going at the same time from b i to b i+1 , 0 ≤ i ≤ n − 3, share at least two edges. As four trails contain b n−2 (those four which leave s towards v and x at the time steps 4j + 1 with j ∈ {n − 2, n − 1}), it follows that at least 2(n − 2) edges are shared. Hence, no two trails share an edge after they have left s for v or x.
There is a trail P ∈ P that contains the vertex x and that contains vertex s only once, at the first time step, because at each time step 4j + 1 two trails leave s for v or x. Observe that vertex w is contained in the trails containing v at the time steps 4j + 2n + 2 for all j ∈ [n − 1] ∪ {0}, whence edge {w, t} is occupied at time steps 4j + 2n + 3 for all j ∈ [n − 1] ∪ {0}. Hence, the edge {w, t} is contained at time step 2n + 3 in a trail different from P and, thus, trail P contains at least one vertex in H′′. Furthermore, P can contain x a second time only at time steps of the form 4j + 2n + 1, 3 ≤ j ≤ n, because each path in H′′ between two vertices that correspond to vertices in G has length four. However, as mentioned, {w, t} is occupied at time steps 4j + 2n + 3, j ∈ [n − 1]. Hence, P has to stay in H′′ for 4n time steps. Recall that G is cubic, and hence no vertex in H′′ − {x} corresponding to a vertex in G appears more than once in any trail. That is, P follows a cycle in H′′ containing each vertex corresponding to a vertex in G exactly once. It follows that G admits a Hamiltonian cycle.

(⇒) Let G admit a Hamiltonian cycle C. Let C′ be C, ordered such that x is the first and last vertex in C′. Let C′′ denote the cycle in H′′ following the order of the vertices in C′. We construct 2n s-t trails in G′ sharing at most 2n − 4 edges as follows. We denote the trails by P^x_i, 0 ≤ i ≤ n − 1, x ∈ {u, ℓ}. The trails are divided into two groups according to their superscript x ∈ {u, ℓ}. For i ≥ 1, the trails P^u_i and P^ℓ_i start with sequences of length 4i that visit b_1, . . . , b_i and return to s, entering each b_j via one of the two parallel P_2's and leaving via the other, where P^u_i and P^ℓ_i make opposite choices. Trails P^u_0 and P^ℓ_0 do not visit any vertex in B and simply start at s. Then, for each i = 0, . . . , n − 1, trail P^u_i follows the chain to v, the edge to w, and then to t. For each i = 0, . . . , n − 1, trail P^ℓ_i follows the chain to x, then the cycle C′′ in H′′, then the edge from x to w, then to t. Observe that trail P^x_i, x ∈ {u, ℓ}, contains s at time step one and 4i + 1. Hence, P^u_i contains the vertex w at time step 4i + 2n + 2. Moreover, P^ℓ_i contains the vertex x at time steps 4i + 2n + 1 and 4i + 6n + 1. From the latter it follows that P^ℓ_i contains the vertex w at time step 4i + 6n + 2. Altogether, the edge {w, t} is not shared by any pair of trails.
Next, we count the number of edges shared between the two visits of s. Denote by X the set of these shared edges, and observe that |X| = (n − 2) + (n − 2) = 2(n − 2). We claim that the edges in X are the only edges shared by the trails P^x_i, 0 ≤ i ≤ n − 1, x ∈ {u, ℓ}. As P^ℓ_{n−1} and P^u_{n−1} contain the set X at the same time steps, every edge in X is shared. For each i ≥ 1, the edges {u_i, b_{i−1}} and {u_i, b_i} are only contained in the trails P^x_i, x ∈ {u, ℓ}. Recall that P^u_i and P^ℓ_i contain b_i exactly once and at the same time step. The subsequences around b_i of P^ℓ_i and P^u_i traverse these edges in different orders; it follows that both edges {u_i, b_{i−1}} and {u_i, b_i} appear at two different time steps in P^u_i and P^ℓ_i. The same argument holds for the edges {b_{n−2}, ℓ_{n−1}} and {b_{n−1}, ℓ_{n−1}}, as P^u_{n−1} and P^ℓ_{n−1} are the only trails containing these two edges. Altogether, it follows that X is the set of shared edges of the s-t trails P^x_i, x ∈ {u, ℓ}, and the claim follows. Finally, as |X| = 2n − 4, the statement follows.
Proof of Theorem 5. We provide a many-one reduction from Planar Cubic Hamilton Circuit (PCHC) to Trail-RCA on undirected graphs via Construction 4 on the one hand, and via Construction 5 on the other. Let G = (V, E) be an instance of PCHC and let n := |V|.
Via Construction 4. Let (G′, s, t, p, 0) be an instance of Trail-RCA where G′ is obtained from G by applying Construction 4 and p = 2n. Note that (G′, s, t, p, 0) can be constructed in polynomial time and, by Lemma 7, G is a yes-instance of PCHC if and only if (G′, s, t, p, 0) is a yes-instance of Trail-RCA. However, G′ is not simple in general. Hence, replace each edge {u, v} ∈ E(G′) in G′ by a path of length three and identify its endpoints with u and v. Denote by G′′ the obtained graph. Due to Lemma 8, (G′′, s, t, p, 0) is a yes-instance of Trail-RCA if and only if (G′, s, t, p, 0) is a yes-instance of Trail-RCA.
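This edge-subdivision step is mechanical; the following minimal sketch (assuming a networkx-style graph; the helper name and the fresh inner-vertex labels are ours) illustrates it:

```python
import networkx as nx

def subdivide_edges(G):
    """Replace every edge {u, v} of the (multi)graph G by a u-v path of
    length three; the result is simple even if G had parallel edges."""
    H = nx.Graph()
    H.add_nodes_from(G.nodes)
    for i, (u, v) in enumerate(G.edges()):
        a, b = ("sub", i, 0), ("sub", i, 1)  # two fresh inner vertices
        H.add_edges_from([(u, a), (a, b), (b, v)])
    return H
```

Since every parallel edge receives its own pair of inner vertices, two formerly parallel edges become internally disjoint paths, which is exactly what Lemma 8 requires.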
The case of constant k > 0 works analogously to the proof of Theorem 4.
Via Construction 5.
Let (G′, s, t, p, k) be an instance of Trail-RCA where G′ is obtained from G by applying Construction 5 and p = 2n. Instance (G′, s, t, p, k) can be constructed in polynomial time.

As the length of each s-t trail is upper bounded by the number of edges in the graph, we immediately obtain the following.
On Directed Graphs
We know that Trail-RCA and Trail-FRCA are NP-complete on undirected graphs, even if the number of shared edges or the maximum degree is constant. In what follows, we show that this is also the case for Trail-RCA and Trail-FRCA on directed graphs.

Theorem 6. Trail-RCA on directed planar graphs is NP-complete, even if k ≥ 0 is constant or ∆_{i/o} ≥ 3 is constant.
To prove Theorem 6, we reduce from the following NP-complete [17] problem.
Directed Planar 2/3-In-Out Hamiltonian Circuit (DP2/3HC)
Input: A directed, planar graph G = (V, A) such that, for each v ∈ V, max{outdeg(v), indeg(v)} ≤ 2 and outdeg(v) + indeg(v) ≤ 3.
Question: Is there a directed Hamiltonian cycle in G?

Construction 6. Let G = (V, A) be a directed, planar graph where for each vertex v ∈ V it holds that max{outdeg(v), indeg(v)} ≤ 2 and outdeg(v) + indeg(v) ≤ 3, and let n = |V|. Construct a directed graph G′ as follows (refer to Figure 5 for an illustration of the constructed graph). Initially, let G′ be the empty graph. Add a copy of the graph G to G′ and denote the copy by H. Add the vertex set {s, t, v, w} to G′. Consider a plane embedding φ(G) and choose a vertex x ∈ V(H) incident to the outer face. Add the arcs (s, v), (v, x), (x, w), and (w, t) to G′. Moreover, add n chains connecting s with w, of lengths 3, 4, . . . , n + 2, respectively, to G′, and direct their edges from s towards w. Note that G′ is planar (see Figure 5 for an embedding where H is embedded as φ(H)).

Proof. (⇒) Let G admit a Hamiltonian cycle C. We construct n + 1 s-t trails in G′ as follows. First, n trails each use a chain connecting s with w, where no two use the same chain. By construction, these n s-t trails do not introduce any shared arc. Moreover, they contain the arc (w, t) at every time step in {4, . . . , n + 3}. The remaining trail contains no chain, but the vertices v, x, w as well as C. As C is a Hamiltonian cycle, this trail uses the arc (w, t) at time step n + 4.
(⇐) Let G′ admit a set P of n + 1 s-t trails with no shared arc. As s has outdegree n + 1, n trails in P contain a chain connecting s with w, where no two contain the same chain. As there is no shared arc, the remaining trail cannot use the arc (w, t) before time step n + 4. As the shortest s-w path containing v is of length three, the remaining trail has to contain n arcs in the copy H of G. As for each vertex v ∈ V(G) it holds that max{outdeg(v), indeg(v)} ≤ 2 and outdeg(v) + indeg(v) ≤ 3, no vertex except x is visited twice by the trail. Hence, the trail restricted to the copy H of G forms a Hamiltonian cycle in G.
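To make the gadget concrete, here is a minimal sketch of Construction 6 (assuming networkx; the vertex labels "s", "t", "v", "w" and the chain-vertex names are ours and are assumed not to collide with the vertices of G):

```python
import networkx as nx

def construction_6(G, x):
    """Build G' from a directed planar graph G (with the DP2/3HC degree
    bounds) and a vertex x of G chosen on the outer face."""
    n = G.number_of_nodes()
    Gp = nx.DiGraph()
    Gp.add_nodes_from(G.nodes)            # the copy H of G ...
    Gp.add_edges_from(G.edges)            # ... with all of its arcs
    Gp.add_edges_from([("s", "v"), ("v", x), (x, "w"), ("w", "t")])
    # n chains from s to w, of lengths 3, 4, ..., n + 2
    for c, length in enumerate(range(3, n + 3)):
        prev = "s"
        for j in range(length - 1):       # length - 1 interior vertices
            node = ("chain", c, j)
            Gp.add_edge(prev, node)
            prev = node
        Gp.add_edge(prev, "w")
    return Gp
```

The n chains of pairwise different lengths are what forces the arc (w, t) to be occupied at the time steps {4, . . . , n + 3}.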
We provide another construction in which the obtained graph has constant maximum in- and out-degree.
Construction 7. Let G = (V, A) be a directed, planar graph where for each vertex v ∈ V it holds that max{outdeg(v), indeg(v)} ≤ 2 and outdeg(v) + indeg(v) ≤ 3, and let n := |V|. Construct a directed graph G′ as follows (refer to Figure 6 for an illustration of the constructed graph). Initially, let G′ be the graph obtained from Construction 6. Remove s, w, and the directed chains connecting s with w from G′. Let η be the smallest power of two larger than n (note that n ≤ η ≤ 2n − 2). Add a complete binary tree T_s with η leaves to G′, and denote its root by s. Denote the leaves by a_1, . . . , a_η, ordered by a post-order traversal on T_s. Next add a copy of T_s to G′, and denote the copy by T_w and its root by w. For each leaf a_i of T_s, denote by a′_i its copy in T_w. Next, for each i ∈ [η], connect a_i and a′_i via an (η + i)-chain. Direct all edges in T_s away from s towards the leaves of T_s. Direct all edges in T_w away from the leaves of T_w towards w. Next, direct all chains connecting the leaves of T_s and T_w from the leaves of T_s towards the leaves of T_w. To complete the construction of G′, connect s with v via a (log(η) + η + (η − n))-chain, and connect x with w via a log(η)-chain. Note that T_s and T_w allow plane drawings, and the chains connecting the leaves can be aligned as illustrated in Figure 2. It follows that G′ allows for a plane embedding.

Proof. (⇒) Let G admit a Hamiltonian cycle C. First, we construct η s-t trails in G′ as follows. For each leaf of T_s, there is a trail containing the unique path from s to the leaf in T_s. This part introduces η − 2 shared arcs. Next, each trail follows the chain connecting the leaf of T_s with a leaf of T_w, then the unique path from the leaf of T_w to w, and finally the arc (w, t). Observe that the trails contain the leaves of T_w at different time steps log(η) + η + i + 1, i ∈ [η], and hence no shared arc is introduced in this part. Moreover, the arc (w, t) appears at the time steps 2 log(η) + η + i + 1, i ∈ [η]. The remaining trail P contains the chain connecting s with v and the arc (v, x), then follows the cycle C in H, starting and ending at vertex x. Trail P then contains the chain connecting x with w, and finally the arc (w, t). Observe that the arc (w, t) appears in P at time step 2 log(η) + 2η + 2 (recall that C is a Hamiltonian cycle in H), and hence (w, t) is not shared.

(⇐) Let G′ admit η + 1 s-t trails with at most η − 2 shared arcs.
First observe that the chain connecting s with v is not contained in more than one s-t trail. Hence, at least η trails leave s through T_s. Note that no chain connecting the leaves of T_s with the leaves of T_w is contained in more than one s-t trail. It follows that exactly η s-t trails (denote the set by P′) leave s via T_s and each contains a different leaf of T_s. Herein, η − 2 arcs are shared by the trails in P′. As the path from a leaf of T_s to t is unique, each trail in P′ follows this unique path to t. The arc (w, t) appears in the trails in P′ at the time steps 2 log(η) + η + i + 1 for every i ∈ [η].
The remaining s-t trail P ∈ P contains the chain connecting s with v. Note that the arc (w, t) is not shared, as all shared arcs are contained in T_s. As the shortest s-t path via x is of length 2 log(η) + η + (η − n) + 2 ≥ 2 log(η) + η + 2, trail P has to contain a cycle C in H. As the arc (w, t) is not shared and appears in the trails in P′ at the time steps 2 log(η) + η + i + 1, for every i ∈ [η], the cycle C must be of length n. As for each vertex v ∈ V(G) it holds that max{outdeg(v), indeg(v)} ≤ 2 and outdeg(v) + indeg(v) ≤ 3, no vertex in H except x is visited twice by the trail P. Hence, trail P restricted to the copy H of G forms a Hamiltonian cycle in G.
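A sketch of Construction 7 along the same lines (assuming networkx and n ≥ 2; we read "smallest power of two larger than n" as the smallest power of two that is at least n, matching the note n ≤ η ≤ 2n − 2, and we pair leaves in breadth-first rather than post-order, which only permutes the chain lengths):

```python
import math
import networkx as nx

def add_chain(G, u, v, length, tag):
    """Directed u -> v chain with `length` arcs (length - 1 inner vertices)."""
    prev = u
    for j in range(length - 1):
        node = (tag, j)
        G.add_edge(prev, node)
        prev = node
    G.add_edge(prev, v)

def grow_tree(G, root, depth, tag, towards_leaves):
    """Complete binary tree of the given depth; arcs point away from the
    root if towards_leaves is True, towards it otherwise. Returns leaves."""
    level = [root]
    for d in range(1, depth + 1):
        nxt = []
        for i, parent in enumerate(level):
            for side in (0, 1):
                child = (tag, d, 2 * i + side)
                if towards_leaves:
                    G.add_edge(parent, child)
                else:
                    G.add_edge(child, parent)
                nxt.append(child)
        level = nxt
    return level

def construction_7(G, x):
    n = G.number_of_nodes()
    eta = 1 << (n - 1).bit_length()       # smallest power of two >= n
    lg = int(math.log2(eta))
    Gp = nx.DiGraph(G.edges)              # the copy H of G
    Gp.add_edges_from([("v", x), ("w", "t")])
    leaves_s = grow_tree(Gp, "s", lg, "Ts", towards_leaves=True)
    leaves_w = grow_tree(Gp, "w", lg, "Tw", towards_leaves=False)
    for i, (a, ap) in enumerate(zip(leaves_s, leaves_w), start=1):
        add_chain(Gp, a, ap, eta + i, ("leafchain", i))   # (eta + i)-chain
    add_chain(Gp, "s", "v", lg + eta + (eta - n), "sv")
    add_chain(Gp, x, "w", lg, "xw")
    return Gp
```

Note how the two trees contribute exactly the η − 2 internal arcs that may be shared, while all chains are used by at most one trail each.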
Proof of Theorem 6. We provide a many-one reduction from DP2/3HC to Trail-RCA on directed graphs via Construction 6 on the one hand, and via Construction 7 on the other. Let G be an instance of DP2/3HC where G consists of n vertices.
Via Construction 6. Let (G′, s, t, p, 0) be an instance of Trail-RCA where G′ is obtained from G by applying Construction 6 and p = n + 1. Note that (G′, s, t, p, 0) is constructed in polynomial time and, by Lemma 10, G is a yes-instance of DP2/3HC if and only if (G′, s, t, p, 0) is a yes-instance of Trail-RCA.
The case of constant k > 0 works analogously to the proof of Theorem 4.
As the length of each s-t trail is upper bounded by the number of edges in the graph, we immediately obtain the following.
6 Walk-RCA

Regarding their computational complexity fingerprint, Path-RCA and Trail-RCA are equal. In this section, we show that Walk-RCA differs in this aspect. We prove that the problem is solvable in polynomial time on undirected graphs (Section 6.1) and on directed graphs if k ≥ 0 is constant (Section 6.2).
On Undirected Graphs
On a high level, the tractability on undirected graphs stems from the fact that a walk can alternate arbitrarily often between two vertices. Hence, we can model a queue on the source vertex s, where at distinct time steps the walks leave s via a shortest path towards t. However, if the time spent in the queue is upper bounded, that is, if the length-restricted variant Walk-FRCA is considered, the problem becomes NP-complete.
Theorem 7. Walk-RCA on undirected graphs is solvable in linear time.
Proof. Let I := (G, s, t, p, k) be an instance of Walk-RCA with G being connected. Let P be a shortest s-t path in G. We assume that p ≥ 2, since otherwise P witnesses that I is a yes-instance. We can assume that the length of P is at least k + 1, since otherwise we can output that I is a yes-instance. Let {s, v} be the edge in P incident to the endpoint s. We distinguish the two cases of whether k is positive or k = 0. In this proof, we represent a walk as a sequence of edges.
Case k > 0: We can construct p s-t walks P_1, . . . , P_p sharing at most one edge as follows. We set P_1 := P and, for i ∈ {2, . . . , p}, P_i := ({s, v}, . . . , {s, v} (2i times), P); that is, the s-t walk P_i alternates between s and v i times before following P. We show that the walks in the set P := {P_1, . . . , P_p} share exactly the edge {s, v}. It is easy to see that {s, v} is shared by all of the walks. Let us consider an arbitrary edge e = {x, y} ≠ {s, v} in P, which appears in P at time step ℓ > 1 (ordered from s to t). By construction, e appears in P_i at position 2i + ℓ. Thus, no two walks in P contain an edge of P that is at time step ℓ > 1 in P at the same time step. Since each walk in P only contains edges of P, it follows that {s, v} is the only edge shared by the walks in P. As k ≥ 1, it follows that we can output that I is a yes-instance.
Case k = 0: Let v_1, . . . , v_ℓ be the neighbors of s, and suppose that v_1 = v (that is, the vertex incident to s appearing in P). If ℓ < p, then we can immediately output that I is a no-instance, as by the pigeon hole principle at least one edge has to appear in at least two walks at time step one in any set of p s-t walks in G. If ℓ ≥ p, we construct p s-t walks P_1, . . . , P_p that do not share any edge in G as follows. We set P_1 = P and, for i ∈ {2, . . . , p}, P_i := ({s, v_i}, . . . , {s, v_i} (2i times), P); that is, the s-t walk P_i alternates between s and v_i i times before following P. Following the same argumentation as in the preceding case, it follows that no edge is shared by the constructed walks P_1, . . . , P_p.
In summary, if k > 0, then we can output that I is a yes-instance. If k = 0, then we first check the degree of s in linear time, and then output that I is a yes-instance if deg(s) ≥ p, and that I is a no-instance otherwise.
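The resulting decision procedure is short; the following sketch (the input and walk representations are ours; walks are edge sequences, as in the proof) mirrors the two cases:

```python
def walk_rca_undirected(adj, s, t, p, k):
    """Decide Walk-RCA on a connected undirected graph following the case
    distinction in the proof of Theorem 7; adj maps each vertex to its
    set of neighbours."""
    if p < 2 or k > 0:
        return True            # Case k > 0: sharing only {s, v} suffices
    return len(adj[s]) >= p    # Case k = 0: p distinct first edges needed

def build_walks_case_k_positive(path_edges, p):
    """The walks from Case k > 0, as edge sequences: P_1 = P and, for
    i >= 2, P_i bounces across {s, v} 2i times before following P."""
    sv = path_edges[0]                       # the edge {s, v}
    walks = [list(path_edges)]
    for i in range(2, p + 1):
        walks.append([sv] * (2 * i) + list(path_edges))
    return walks
```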
The situation changes for Walk-FRCA, that is, when restricting the length of the walks.
Theorem 8. Walk-FRCA on undirected graphs is NP-complete and W[2]-hard with respect to k + α.
Given a directed graph G, we call an undirected graph H the undirected version of G if H is obtained from G by replacing each arc with an undirected edge.
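A minimal sketch of this operation (assuming networkx; note that networkx's built-in G.to_undirected() does the same):

```python
import networkx as nx

def undirected_version(G):
    """The undirected version of digraph G: each arc becomes an edge,
    and antiparallel arc pairs collapse to a single edge."""
    H = nx.Graph()
    H.add_nodes_from(G.nodes)
    H.add_edges_from(G.edges)
    return H
```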
Proof. We give a (parameterized) many-one reduction from Set Cover. Let (U, F, ℓ) be an instance of Set Cover. Let G′ be the graph obtained from applying Construction 1 to (U, F, ℓ). Moreover, let G be the undirected version of G′. Let (G, s, t, p, k, α) be the instance of Walk-FRCA, where p = n + m, k = ℓ, and α = ℓ + 3. Note that any shortest s-t path in G is of length α, and hence every s-t walk of length at most α behaves as in the directed acyclic case. That is, each walk contains a vertex of V_F at time step k + 2. The correctness then follows analogously to the proof of Lemma 4.

It remains open whether Walk-FRCA is NP-complete when k is constant.
On Directed Graphs
Due to Theorems 2 and 3, we know that Walk-RCA is NP-complete on directed graphs and is solvable in polynomial time on directed acyclic graphs when k = 0, respectively. In this section, we prove that if k ≥ 0 is constant, then Walk-RCA remains tractable on directed graphs (this also holds true for Walk-FRCA). Note that for Path-RCA and Trail-RCA the situation is different, as both become NP-complete on directed graphs, even if k ≥ 0 is constant.

Theorem 9. Walk-RCA and Walk-FRCA on directed n-vertex m-arc graphs are solvable in O(m^{k+1} · n · (p · n)^2) time and O(m^{k+1} · n · α^2) time, respectively.
Our proof of Theorem 9 follows the same strategy as our proof of Theorem 1. That is, we guess the shared arcs, assign them infinite capacity in a suitable way, and then solve the problem with zero shared arcs via a network-flow formulation in the time-expanded graph. The crucial difference is that here we do not initially have an upper bound on the length of the walks in the solution.
Theorem 10. If k = 0, then Walk-RCA on directed n-vertex m-arc graphs is solvable in O(n · m · (p · n)^2) time.
Lemma 12. Every yes-instance (G, s, t, p, k) of Walk-RCA on directed graphs admits a solution in which the longest walk is of length at most p · d_t, where d_t = max_{v ∈ V : dist_G(v, t) < ∞} dist_G(v, t).
Proof of Lemma 12. Let P be a solution to (G, s, t, p, k) with |P| = p where the sum of the lengths of the walks in P is minimum among all solutions to (G, s, t, p, k). Suppose towards a contradiction that the longest walk P* ∈ P is of length |P*| > p · d_t. Then, by the pigeon hole principle, there is an i ∈ [p] such that there is no walk in P of length ℓ with (i − 1) · d_t < ℓ ≤ i · d_t.
Let v = P*[(i − 1) · d_t + 1], that is, v is the ((i − 1) · d_t + 1)-th vertex on P*, and let S be a shortest v-t path. Observe that the length of S is at most d_t. Consider the walk P′ := P*[1, (i − 1) · d_t + 1] • S, that is, we concatenate the length-((i − 1) · d_t) initial subpath of P* with S to obtain P′. Observe that (i − 1) · d_t < |P′| ≤ i · d_t. If P \ {P*} ∪ {P′} forms a solution to (G, s, t, p, k), then, since |P′| < |P*|, it is a solution of smaller sum of the lengths of the walks, contradicting the choice of P. Otherwise, P′ introduces additional shared arcs; let A′ ⊆ A(G) denote the corresponding set. Observe that A′ is a subset of the arcs of S. Let a = (x, y) ∈ A′ be the shared arc such that dist_S(y, t) is minimum among all shared arcs in A′, and let P′[j] = y. Let P ∈ P be a walk sharing the arc a with P′. Then P′′ := P[1, j] • P′[j + 1, |P′|] is a walk of shorter length than P. Moreover, P \ {P} ∪ {P′′} is a solution to (G, s, t, p, k). As |P′′| < |P|, P \ {P} ∪ {P′′} is a solution of smaller sum of the lengths of the walks, again contradicting the choice of P. As either case yields a contradiction, it follows that |P*| ≤ p · d_t.
The subsequent proof of Theorem 10 relies on time-expanded graphs. Due to Lemma 12, we know that the time-horizon is bounded polynomially in the input size.
Proof of Theorem 10. Let (G, s, t, p, 0) be an instance of Walk-RCA where G = (V, A) is a directed graph. We first compute d_t in linear time. Let τ := p · d_t. Next, we compute the τ-time-expanded (directed) graph H = (V′, A′) of G with p additional arcs (t_{i−1}, t_i) for each i ∈ [τ]. We compute in O(τ^2 · |V| · |A|) ⊆ O(p^2 · |V|^3 · |A|) time the value of a maximum s_0-t_τ flow in H. Due to Lemma 12 together with Lemma 2, the theorem follows.
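A minimal sketch of this algorithm (assuming networkx and s ≠ t; unit capacities on the timed arc copies encode "no shared arcs", and the p parallel waiting arcs (t_{i−1}, t_i) are modelled as a single arc of capacity p):

```python
import networkx as nx

def walk_rca_k0_directed(G, s, t, p):
    """Decide Walk-RCA with k = 0 on a digraph via a maximum flow in the
    tau-time-expanded graph, following the proof of Theorem 10."""
    # d_t = max over vertices that can reach t of dist_G(v, t),
    # computed by a BFS in the reversed graph.
    dist = nx.single_source_shortest_path_length(G.reverse(copy=False), t)
    if s not in dist:
        return False                       # t is unreachable from s
    d_t = max(dist.values())
    tau = p * d_t                          # time horizon from Lemma 12
    H = nx.DiGraph()
    for i in range(1, tau + 1):
        for u, v in G.edges:
            H.add_edge((u, i - 1), (v, i), capacity=1)
        H.add_edge((t, i - 1), (t, i), capacity=p)   # waiting arcs at t
    value, _ = nx.maximum_flow(H, (s, 0), (t, tau))
    return value >= p
```

By Lemma 2 (as used in the text), an integral flow of value p in this network decomposes into p walks no two of which use the same arc at the same time step.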
Restricting to α-time-expanded graphs yields the following.

Corollary 6. If k = 0, Walk-FRCA on directed n-vertex m-arc graphs is solvable in O(n · m · α^2) time.
Proof of Theorem 9. Let (G = (V, A), s, t, p, k) be an instance of Walk-RCA with G being a directed graph. For each k-sized subset K ⊆ A of arcs in G, we decide the instance (G(K, p), s, t, p, 0). The statement for Walk-RCA then follows from Lemma 3 and Theorem 10. The running time of the algorithm is in O(|A|^k · p^2 · |V|^3 · |A|). The statement for Walk-FRCA then follows from Lemma 3 and Corollary 6.
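The outer guessing loop can be sketched as follows (reusing the time-expanded flow idea from the previous sketch; modelling G(K, p) by raising the capacity of the timed copies of the guessed arcs from 1 to p is our reading of "infinite capacity in some way"):

```python
from itertools import combinations
import networkx as nx

def walk_rca_directed(G, s, t, p, k):
    """Decide Walk-RCA for constant k (Theorem 9): for every k-subset K
    of arcs, solve the zero-shared-arcs instance in which the arcs of K
    may be used by all p walks simultaneously."""
    dist = nx.single_source_shortest_path_length(G.reverse(copy=False), t)
    if s not in dist:
        return False
    tau = p * max(dist.values())          # time horizon from Lemma 12

    def feasible(K):
        H = nx.DiGraph()
        for i in range(1, tau + 1):
            for u, v in G.edges:
                H.add_edge((u, i - 1), (v, i),
                           capacity=p if (u, v) in K else 1)
            H.add_edge((t, i - 1), (t, i), capacity=p)
        value, _ = nx.maximum_flow(H, (s, 0), (t, tau))
        return value >= p

    return any(feasible(set(K)) for K in combinations(G.edges, k))
```

For k = 0 the loop degenerates to a single call with K = ∅, recovering the algorithm of Theorem 10.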
Conclusion and Outlook
Some of our results can be seen as a parameterized complexity study of RCA focusing on the number k of shared edges. It is interesting to study the problem with respect to other parameters. Herein, the first natural parameterization is the number of routes. Recall that the Minimum Shared Edges problem is fixed-parameter tractable with respect to the number of paths [6]. A second parameterization we consider interesting is the combined parameter maximum degree plus k. In our NP-completeness results for Path-RCA and Trail-RCA it seemed difficult to achieve constant k and constant maximum degree at the same time. | 2017-05-10T09:38:03.000Z | 2017-05-10T00:00:00.000 | {
"year": 2017,
"sha1": "bd329f3a79b004a0009714a74e73ae7f3641a30e",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1705.03673",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "bd329f3a79b004a0009714a74e73ae7f3641a30e",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Computer Science"
]
} |
14890538 | pes2o/s2orc | v3-fos-license | Optimized In Vivo Transfer of Small Interfering RNA Targeting Dermal Tissue Using In Vivo Surface Electroporation
Electroporation (EP) of mammalian tissue is a technique that has been used successfully in the clinic for the delivery of genetic-based vaccines in the form of DNA plasmids. There is great interest in platforms which efficiently deliver RNA molecules such as messenger RNA and small interfering RNA (siRNA) to mammalian tissue. However, the in vivo delivery of RNA enhanced by EP has not been extensively characterized. This paper details the optimization of electrical parameters for a novel low-voltage EP method to deliver oligonucleotides (both DNA and RNA) to dermal tissue in vivo. Initially, the electrical parameters were optimized for dermal delivery of plasmid DNA encoding green fluorescent protein (GFP) using this novel surface dermal EP device. While all investigated parameters resulted in visible transfection, voltage parameters in the 10 V range elicited the most robust signal. The parameters optimized for DNA were then assessed for translation to successful electrotransfer of siRNA into dermal tissue. Robust tagged-siRNA transfection in skin was detected. We then assessed whether these parameters translated to successful transfer of siRNA resulting in gene knockdown in vivo. Using a reporter gene construct encoding GFP and tagged siRNA targeting the GFP message, we show simultaneous transfection of the siRNA to the skin via EP and the concomitant knockdown of the reporter gene signal. The siRNA delivery was accomplished with no evidence of injection site inflammation or local tissue damage. The minimally invasive low-voltage EP method is thus capable of efficiently delivering both DNA and RNA molecules to dermal tissue in a tolerable manner.
Introduction
Small interfering RNAs (siRNAs) have recently demonstrated their potential as novel therapeutics due to their ability to induce robust, sequence-specific gene silencing in cells. 1,2 However, several challenges surrounding the use of siRNA in vivo, particularly related to delivery, still need to be addressed before the full potential of therapeutic RNA interference is realized. Due to the polyanionic nature of the molecule, the cellular uptake of naked siRNA on its own is extremely low. These molecules are also highly susceptible to degradation by enzymes, both in tissue and intracellularly as well as systemically in the blood. Therefore, for siRNA to become an effective clinical tool, an efficient, targeted, and tolerable in vivo delivery method must be identified. Using siRNA to induce RNA interference could become a promising therapeutic approach for the treatment of many disorders, such as some cancers and many viral and genetic diseases. Although some trials involving siRNA have entered the clinic, delivery of siRNA, particularly locally for dermal applications, remains a key challenge. Indeed, the potential for clinical success of siRNA therapeutics hinges on an efficient and targetable delivery system.
In addition to improvements in the stability of the molecule which can be achieved through chemical modification, multiple delivery strategies for siRNA have also been attempted. These include both physical and carrier-mediated methods. Examples of drug delivery through carrier-mediated methods include the traditional liposomal methods such as lipid nanoparticles, 3 as well as ultrasound and microbubble delivery. 4 Electroporation (EP) represents a physical method to temporarily increase the permeability of a target tissue to macromolecules. EP involves the application of brief electrical pulses that result in the creation of aqueous pathways within the lipid bilayer membranes of mammalian cells. This transient perturbation of the lipid bilayer allows the passage of large molecules, including nucleic acids, across the cell membrane which otherwise is less permeable. As such, EP increases the uptake of macromolecules delivered to their target tissue. EP has now entered the clinic as an enabling technology for the delivery of DNA vaccines and immune therapies. The technology has proven to be a safe and effective DNA delivery method in multiple trials delivering an array of genetic vaccines. [5][6][7] We have previously reported on a subcutaneous DNA EP delivery device with a penetration depth of 3 mm (CEL-LECTRA-3P) 8 as well as three intramuscular DNA delivery devices, the Medpulser, 9 ELGEN, 5 and CELLECTRA-5P [10][11][12] devices (Inovio Pharmaceuticals, San Diego, CA). These
devices cover injection depths between 3 and 25 mm, typically suited for subcutaneous and/or intramuscular vaccine delivery using distinct EP array configurations. Extensive studies have highlighted the critical requirement for optimization of both the EP device and the parameters used for successful delivery, which additionally are dependent on both the target tissue and the DNA construct to be delivered. 8 Combining optimized EP delivery methods with optimized constructs has led to the successful translation of DNA vaccination into a clinical setting. 5,[13][14][15] While the body of evidence for both preclinical and clinical success of enhanced DNA delivery by EP is strong, the characterization of EP-enhanced RNA delivery is not as clear. However, electrically assisted delivery of siRNA has been documented in a variety of tissues including the cornea, 16 solid tumors, [17][18][19] joints, 20 muscle, 21 brain tissue, 22 and skin. 23 Certainly DNA and RNA differ both physically and electrochemically. These differences affect the dielectric properties of the molecules and as such could impact their ability to be successfully transfected in vivo using EP. The basis for EP is the application of an external electrical field which results in a significant increase in the permeability of the cell plasma membrane. Therefore the size, structure, and charge of the molecule to be transfected could significantly affect the ability of that molecule to permeate and/or interact with the membrane. Plasmid DNA, generally ranging in size from 3-5 kb, exists in solution primarily in supercoiled form (75-80%).
Supercoiled or covalently closed-circular DNA is a relatively large (3-5 kb; 2-4 MDa), contorted, double-stranded molecule adopting a highly compacted structure. 24 The structure of siRNA, on the other hand, is a short (usually 21-nt) double-stranded RNA with 2-nt 3′ overhangs on either end. Clearly, the optimal transfer dynamics of these two nucleotide molecules could be widely disparate.
Here we investigated the delivery of siRNA to skin and optimized the EP conditions for delivery using a surface EP (SEP) device with a novel 4 × 4 minimally invasive needle array. 25 This SEP device differs from other EP devices in that the electrodes are minimally invasive (scratch the skin surface but do not penetrate the skin).
Initially, the electrical parameters were optimized using plasmid DNA encoding green fluorescent protein (GFP). These parameters were then assessed for translation to electrotransfer of siRNA using a fluorescently tagged siRNA for detection. We then investigated the delivery of tagged siRNA to skin through histological methods and additionally assessed the ability of EP to enhance targeted gene knockdown in dermal tissue.
In summary, this study sought to answer two fundamental questions: first, can EP be used to efficiently deliver siRNA in vivo in a functionally relevant manner; and second, can parameters optimized for DNA transfection also be used for RNA delivery, especially at low-voltage settings?
Results
A range of voltage parameters results in successful expression of GFP in dermal tissue. We have previously reported on the development of the 4 × 4 SEP 25 where we optimized critical parameters such as electrical pulse pattern, needle array geometry and electrode configuration, depth of penetration, and operating conditions under constant-current or constant-voltage configurations. With the early development accomplished, we sought to further explore effective voltage parameters for the SEP device, and to this end conducted reporter gene (plasmid expressing GFP) expression and localization studies in guinea pig skin following EP with the SEP device. Separate skin sites on the flank of a hairless guinea pig were injected with 50 µl of GFP plasmid (at 1 mg/ml concentration) and immediately pulsed (single 100 ms pulse) using the SEP set at a series of voltage parameters ranging between 10 and 200 V (Figure 1a). Robust GFP transfection was seen using the 10- and 50-V parameters, with the 10-V treatment appearing stronger and more reproducible than the 50-V treatment. The skin area transfected following the 10-V treatment corresponded in size (~4 mm²) to the surface area of the SEP device and the injection bubble size (4 mm diameter). Although GFP transfection was detected following the 100- and 200-V parameters, unexpectedly, the signal was significantly weaker than with the 10- and 50-V treatments and appeared far less reproducible over multiple treatment sites. GFP transfection was visible 8 hours following treatment and peaked at 3 days. In contrast, minimal or no GFP transfection was detectable following GFP plasmid injection alone.

Direct contact between the skin and electrical devices could result in tissue damage. To assess the effect of voltage parameters on dermal integrity, the guinea pig skin was assessed for signs of tissue damage, including redness, swelling, and/or burning following the treatments detailed in Figure 1a across several voltage settings (Figure 1b). At 48 hours post-treatment, skin sites pulsed with 200, 100, and 50 V all showed signs of redness and swelling, which reduced in severity with a reduction in voltage. Not surprisingly, the 200- and 100-V treatments showed significant surface burning of the skin resulting in scabbing and inflammation. Only the no-EP, injection-only control group and the 10-V treatments showed no visible signs of tissue damage. Thus, the lower voltage parameters not only resulted in a more efficient delivery of DNA but also led to a more tolerable EP procedure.
Lower voltage parameters result in higher antibody titers to NP protein.
Although establishing the expression patterns for a plasmid reporter gene allowed us to gain insight into parameter optimization, a key determinant for assessing efficient EP is induction of immunogenicity. As such, we then sought to establish the optimal parameters leading to a functional immune response. Since significant tissue damage resulted from pulsing the SEP at 200 and 100 V for intradermal delivery, the immune study was conducted with the 50- and 10-V parameters. Hartley guinea pigs were immunized with plasmid DNA encoding the influenza NP antigen. Matched NP antigen from the Puerto Rico/39 strain was optimized, synthesized, and then cloned into the backbone of a mammalian expression vector, pMB76.5, which has been used in previous human clinical trials. Animals were immunized intradermally with 100 µg of DNA at week 0 and were boosted with the same amount of DNA at week 3. Groups of guinea pigs (four/group EP or three/group injection only) were electroporated either with the SEP device at the 50- or 10-V setting or left as injection only without EP. As seen in Figure 2, at week 7, robust antibody titers were generated under both EP conditions, although the titers in the 10-V treatment group (~60,000) were approximately double those in the 50-V group (~30,000). The titers in the injection-only group were approximately tenfold lower than those in the EP groups. Interestingly, the reproducibility of the titers generated was consistently better for the 10-V group than for the 50-V group, as determined by the scatter of the plotted titers.
A range of voltage parameters results in successful delivery of tagged siRNA to dermal tissue. In a bid to establish effective voltage parameters for siRNA delivery by the SEP device, we carried out a localization study using a fluorescently tagged siRNA in guinea pig skin following SEP at defined parameters. First, we investigated whether the parameters that resulted in effective plasmid delivery were similar for optimal siRNA delivery. Separate skin sites on the flank of Hartley guinea pigs were injected with 50 µl of siRNA tagged with Alexa-488 (at 2 mg/ml concentration) and immediately pulsed (single 100 ms pulse) using the SEP set at a series of voltage parameters ranging between 10 and 200 V (Figure 3). Two days post-treatment, the animals were euthanized and 8 mm skin biopsies were removed and visualized under a fluorescent microscope. Strong Alexa signal was detected using all voltage parameters, with the 10-V treatment appearing to elicit the strongest signal. Minimal or no Alexa signal was detectable following injection of the tagged siRNA with no EP.
Dermal EP results in robust siRNA-Cy3 signal present 48 hours post-treatment. We next continued the investigation of siRNA delivery enhanced by EP using the 10-V parameters, since this parameter set appeared to have both the optimal delivery efficiency and the least associated tissue damage. Skin biopsies were taken and histological analysis was carried out to assess the delivery efficiency of EP for siRNA at the cellular level. Separate skin sites on the flank of Hartley guinea pigs were injected with 50 µl of siRNA tagged with Cy3 (at 2 mg/ml concentration) and either immediately pulsed (three 100 ms pulses) using the SEP set at 10 V or left as injection-only controls (Figure 4). Two days post-treatment, the animals were euthanized and 8 mm skin biopsies were removed, fixed, and prepared for paraffin sectioning. Sections were stained with 4′,6-diamidino-2-phenylindole (DAPI) and visualized under a microscope for positive Cy3 signal. At 48 hours, little or no Cy3 signal was detected in the injection-only controls. However, robust Cy3 signal was detected in the EP group.

Surface EP results in siRNA-Cy3 signal, with confirmation by siRNA-specific FISH, and GFP expression colocalized to the stratum corneum and epithelial cells. We further assessed the localization of the signal to specific layers of the skin using histological techniques (Figure 5). Separate skin sites on the flank of Hartley guinea pigs were injected with 50 µl of siRNA tagged with Cy3 (at 2 mg/ml concentration) and immediately pulsed (three 100 ms pulses) using the SEP set at 10 V (Figure 5a). Two days post-treatment, the animals were euthanized and 8 mm skin biopsies were removed, fixed, and prepared for paraffin sectioning. Sections were stained with DAPI and visualized under a high-power microscope for positive Cy3 signal. The majority of the signal was localized to the upper layers of the epidermis (stratum corneum). However, Cy3 signal was also detected in epidermal cells throughout the stratified layers, from the basement membrane to the surface. No Cy3 signal was detected in the dermis, which is in keeping with the mode of action of this delivery device. Furthermore, siRNA localization as detected by Cy3 signal was confirmed by fluorescent in situ hybridization (FISH) using probes specific to the antisense strand of the siRNA (Figure 5b). Skin sections serial to those analyzed for Cy3 signal were hybridized to a 5′ DIG-labeled, locked nucleic acid (LNA)-enhanced anti-antisense siRNA probe and treated with a fluorescein tyramide signal amplifier. Sections were visualized for both positive Cy3 and fluorescein signal. As shown in Figure 5c, there was significant accumulation and clear colocalization of the signals to the stratum corneum. In addition, fluorescein signal was detected to a lesser extent in the epidermal layers, as was seen with Cy3 signal.
To address whether plasmid DNA and siRNA can colocalize when codelivered, plasmid DNA expressing GFP (100 µg) was mixed with siRNA tagged with Cy3 (100 µg) and simultaneously injected into the skin of guinea pigs, followed by EP using the 10-V settings. The biopsies were prepared as described above but this time visualized for both positive Cy3 signal and positive GFP signal. There was clear colocalization of both signals to the stratum corneum. Both signals were also detected in cells in the epidermis. Although colocalization within single cells is highly likely, we were nevertheless unable to definitively determine visually whether any single cell contained both signals.
Dermal EP results in targeted reporter gene knockdown in dermal tissue. Having demonstrated successful delivery of tagged siRNA to skin through histological methods, we wanted to assess whether EP-enhanced siRNA delivery would result in targeted gene knockdown in dermal tissue. Separate skin sites on the flank of Hartley guinea pigs were injected with 20 µg siRNA-GFP mixed with 25 µg plasmid expressing GFP and immediately pulsed (three 100 ms pulses) using the SEP device set at 10 V (Figure 6a). The expression of GFP in this group was compared to the plasmid-only group. Negative controls of plasmid injection only and plasmid injection plus non-matched (luciferase) siRNA were also included. Representative examples of biopsies are shown here. Twenty-four hours post-treatment, the animals were euthanized and 8 mm skin biopsies were removed. The biopsies (four for each siRNA group and eight from the plasmid-only groups) were visualized under fluorescent microscopy, and pixel-counting software was used to quantify the signal (Figure 6b). Visually, there was a clear reduction in the GFP signal in the group where siRNA-GFP was codelivered with the plasmid. The plasmid-only and plasmid plus siRNA-Luc (non-matched control) groups appeared similar. The pixel density quantification (determined from the average of multiple samples) demonstrated an approximately 50% reduction in GFP signal. Consistent with previous experiments, little or no GFP signal was detected in the plasmid-injection-alone group.
Discussion
Due to their ability to induce robust, sequence-specific gene silencing in cells, siRNAs 1,2 are an exciting and novel therapeutic option for disease targets such as macular degeneration, 26 solid tumors, 27 and melanomas. Indeed, the safety, tolerability, and pharmacokinetics of targeted siRNAs have now been assessed in patients in clinical programs. 26,27 However, RNA-based drugs in the absence of an appropriate delivery vehicle or method are hampered in their applicability due to their low cellular uptake and susceptibility to degradation. As such, to become an effective clinical tool, it is important to identify a targeted delivery system that can achieve effective transfection of siRNAs. Our group at Inovio has focused on the design and development of novel, innovative solutions to key oligonucleotide delivery problems. 23,28,29 Since a need exists for more effective platforms for localized in vivo siRNA delivery, we assessed whether our EP platform was a viable solution for local siRNA delivery.
The primary interest for this study was to establish whether EP parameters optimized to deliver plasmid DNA would translate to effective delivery of RNA in vivo. If successful, the ability to deliver distinct nucleic acids would give the EP platform far-reaching therapeutic potential. New EP devices and parameters optimized for efficacy, tolerability, and safety could expand treatment and drug delivery options, especially in the field of RNA therapeutics. Both preclinical and clinical studies have demonstrated that EP, as an effective physical delivery method, can improve both the expression and immunogenicity of DNA vaccines by 100- to 1,000-fold.
Having previously developed a novel EP device that effectively targets dermal tissue, we were also keen to establish optimal EP parameters for oligonucleotide transfer which could offer both efficient delivery and reduced tissue damage.
Initially, using GFP expression as a readout for intracellular delivery, we determined parameter ranges for plasmid DNA transfection using this EP device, and were able to observe transfection of epidermal tissue over a spectrum of voltage parameters, ranging from 200 V down to 10 V (Figure 1a). Interestingly, GFP transfection at the lowest voltage parameter (10 V) appeared more reproducible and robust in comparison to the 50-200 V parameters. Although 10 V is considered a low-voltage parameter, the actual field strength (a function of the electrode distance) is still 67 V/cm, well within the acceptable range for in vivo EP. 30,31 At the higher voltage settings, this field strength increases substantially (200 applied volts = 1,334 V/cm), resulting in significant cell death as observed through histological analysis. This may explain the reduced levels of GFP transfection seen when the applied voltage is increased for the SEP device. We note that other, earlier EP devices operate in the 100-200 V range for effective gene delivery. However, for those devices there is typically greater separation between electrodes (0.5 to >1.0 cm), thereby maintaining reasonable field strengths to achieve efficient gene transfer without significant tissue damage. Importantly, efficient transfection was achieved at voltage settings as low as 10 V without any observable tissue damage with the SEP device. As seen in Figure 1, transfection of the reporter gene plasmid may be occurring at the higher voltages; however, the cells may be unable to recover from the EP treatment and thus are unable to express the plasmid. In support of this theory is the steady increase in tissue damage occurring following EP with the higher voltage parameters. Visible redness, inflammation, and, in some cases, scabbing were identified following treatments between 50 and 200 V with the SEP device (Figure 1b). We also note that the presence of hemoglobin quenches the GFP signal. 32

Although exploring plasmid reporter gene expression across a wide range of voltage parameters was informative, of greater interest was assessing the voltage parameters which resulted in an immune response. It was clear that the 100- and 200-V parameters caused significant tissue damage and, as a result, reduced plasmid expression upon treatment with the SEP device. As such, we limited our assessment to the 50- and 10-V parameters to induce humoral immune responses, as compared to intradermal injection alone (Figure 2). Both the 50- and 10-V parameters elicited responses of significant magnitude against the administered antigen. However, the spread of the titers was wider in the 50-V group. Indeed, one animal's response was similar in magnitude to the injection-only control. The reduction in expression due to increased cell damage/death may also explain the attenuated responses and titers. While only one time point is shown here (week 7), the trend towards achieving higher titers with lower voltage parameters was demonstrated at all time points, through study completion (week 12).
Ultimately, our primary question for this study was to establish whether EP would be able to enhance the delivery of siRNA (Figure 3). Again, initial range-finding experiments were conducted using the original DNA-EP parameter set (200-10 V) used in Figure 1. Since siRNAs are considerably smaller than plasmids, we were unsure whether EP parameters optimized for a large circular DNA molecule (~2 kb) would effectively deliver a smaller double-stranded RNA (20-25 nt in length). Using Alexa-488-tagged siRNA, we were able to demonstrate that a large range of voltage parameters (200-10 V) successfully delivered the siRNA to dermal tissue. There appeared to be a trend towards higher delivery efficiency at the lower voltage settings, thus supporting the pattern observed with plasmid delivery. Since the siRNA is tagged and therefore does not require expression for visualization (unlike the plasmid DNA), the effect of the tissue damage at higher voltages seemed less pronounced in these samples. Since the 10-V parameter appeared to induce the most effective delivery of siRNA, we wanted to further investigate the localization of the delivered siRNA in the dermal tissue by histological techniques. Forty-eight hours following injection-only delivery, no Cy3 siRNA was detectable in the skin biopsy. However, robust Cy3 signal was present in the epidermis of the skin following EP-enhanced delivery (Figure 4). At an increased magnification, it was clear that the majority of the Cy3 signal was localized to the stratum corneum (Figure 5a), which is the uppermost stratified layer in the epidermis. However, clear localization of Cy3 could also be seen in epithelial cells in the epidermis, from the basement membrane up. As cells in the epidermis differentiate, they move from the lowest layer (basement membrane) to the surface (stratum corneum). The concentrated signal in the stratum corneum could potentially be attributed to cells transfected with Cy3 siRNA that are moving through the natural regenerative cycle. The Cy3 signal remained strong at 48 hours, although it was unclear whether the tag was still attached to the siRNA.
In order to confirm the localization of siRNA as detected by Cy3 signal, we employed FISH, visualized by a fluorescein-tagged tyramide signal amplifier, to detect the antisense strand of the siRNA. Analyzing serial sections from the same dermal tissue that was used for Cy3 detection allowed us to determine the extent of signal colocalization. As seen in Figure 5b, there was significant accumulation and clear colocalization of the Cy3 and fluorescein signals to the stratum corneum. Fluorescein signal was detected in the epidermal layers, but to a lesser extent than in the stratum corneum, similar to the observations with direct detection of Cy3 signal. These data confirm that the distribution pattern of Cy3 in the skin sections reflects that of siRNA localization and is not simply detection of free fluorophore.
We were keen to understand whether simultaneous codelivery of fluorescent DNA and RNA molecules would result in colocalization of signals. To investigate this, plasmid expressing GFP was mixed with Cy3-labeled siRNA and delivered simultaneously to skin using the SEP (Figure 5c). There was clear colocalization of both signals to the stratum corneum, and there was also signal in epithelial cells in the epidermis; however, it was difficult to determine whether any single cell contained both GFP and Cy3-siRNA signals. It also appeared that, at the 48-hour time point, more cells were Cy3-positive than GFP-positive. This may reflect the kinetics of expression or the stability of GFP in the cell.
While the tagged siRNA allowed us to establish the feasibility of siRNA delivery and optimize delivery parameters, the true application would be to demonstrate targeted gene knockdown. Using the plasmid expression of GFP, we demonstrated that simultaneously delivered matched siRNA significantly knocked down reporter gene expression in the skin (Figure 6). As hypothesized, unmatched siRNA (siRNA-Luc) had no effect on GFP expression. We believe this successful demonstration of siRNA delivery to dermal tissue shows that EP-mediated delivery of siRNA is a viable option to enhance gene knockdown in the skin and could offer a clinical platform for therapeutic options in the future. We considered the use of sequential administration of GFP plasmid followed by the administration of siRNA at a subsequent time point (a day or two later, to better model a therapeutic effect). However, skin delivery of DNA and siRNA to the animal at the same site presents a significant technical challenge and cannot be adequately controlled. This is because, over time, the injected liquid (and the plasmid or siRNA) dissipates away from the injection site, the surface of the skin and the underlying dermal layer slide relative to each other, and it is difficult to adequately ensure that the same cells that receive the reporter plasmid also receive the siRNA molecules. Towards this end, subsequent studies will address the effects of improved EP-mediated siRNA delivery in surface tumor models and investigate the impact on tumor growth. Using a visible tumor model allows us to accurately target siRNA delivery directly to the tumor.
EP has been previously used in several studies to deliver siRNA to target tissues. Like the Inoue et al. study, 23 we also demonstrate successful delivery of siRNA to skin. A distinct difference between the two studies is the animal model used. Mouse skin was used in the Inoue study, which presents a distinct challenge: because mouse skin is very thin, it is difficult to confine transfection and gene silencing to the skin; instead, it is likely that both the skin and the underlying muscle are impacted. In this study, we investigated siRNA transfer in guinea pig skin, which represents a more relevant dermatological model due to its similarities in thickness and physiology to human skin. A major advancement in the siRNA delivery protocol that we describe here is related to the device and the optimal EP parameters. Here we determined that low-voltage parameters (10 applied volts, or 67 V/cm), which generate minimal current, were optimal for siRNA transfer with our SEP device. Indeed, such parameters result in no discernible tissue damage and, as previously reported for delivery of DNA molecules, are highly tolerable. 25 The specific design of the device also lends itself to optimized delivery. Due to the grind of the electrodes, no additional conductive gel needs to be applied, and the device makes contact only with the surface of the skin. Therefore, from a patient's perspective, the benefit of combining the SEP device with low-voltage parameters for RNA delivery is increased tolerability. Achieving efficacy and target specificity are crucial elements of this delivery platform.
We not only established that we could enhance RNA delivery to skin by EP and demonstrate reporter gene knockdown, but also established that our low-voltage, tolerable SEP parameters appeared to induce the most effective delivery. The EP platform detailed in this study is, at present, particularly suited to topical RNA delivery applications.
In summary, these data support the idea that RNA delivery can be facilitated by EP, much in the same way that DNA delivery is, and as such, this platform could pave the way for development of targeted RNA-based therapies for local applications.
Materials and methods
SEP minimally invasive device design. Electrode arrays consisting of 4 × 4 gold-plated trocar needles of 0.0175-inch diameter at 1.5-mm spacing were constructed to be used in conjunction with either the ELGEN1000 (Inovio Pharmaceuticals, Blue Bell, PA) pulse generator or a battery-powered low-voltage circuit.
Plasmid preparation. The gWiz GFP plasmid was purchased from Aldevron (Fargo, ND). The NP plasmid encodes the full-length NP derived from the Puerto Rico 8 (H1N1) strain of influenza (accession number: ADY00024.1). The construct had the nuclear targeting signals mutated and was optimized and synthesized by GeneArt (Grand Island, NY), then cloned into the backbone of a mammalian expression vector, pMB76.5. All plasmids were diluted in 1× phosphate-buffered saline before injection.
The GFP siRNA and Luc siRNA were purchased from Bioneer (Daejeon, Korea). The sequence of the GFP sense strand was CGAAGGUUAUGUACAGGAA(dTdT) and the sequence of the luciferase sense strand was UUGUUUUGGAGCACGGAAA(dTdT).
Animals. Female Hartley guinea pigs (strain code 051) and female IAF hairless guinea pigs (strain code 161) were purchased from Charles River Laboratories, Wilmington, MA. Guinea pigs (four animals per group for the immune study) were housed at BioQuant (San Diego, CA).
All animals were housed and handled according to the standards of the Institutional Animal Care and Use Committee (IACUC).
Preparation of animals. The GFP reporter results observed on the Hartley guinea pigs after hair removal were the same as the results observed on the IAF hairless guinea pigs in the initial plasmid localization experiments. Since hair removal appeared to have no effect on the resulting transfection, and due to cost considerations, we chose to carry out the immune study in Hartley guinea pigs. Hartley guinea pigs were shaved, and stubble was removed with depilatory cream (Veet) 24 hours before treatment.
DNA/siRNA injections. Guinea pigs were injected intradermally (Mantoux method, needle parallel to skin; 29-gauge insulin needle) with 50 µl of 1× phosphate-buffered saline containing the desired dose of plasmid or siRNA. For the immune study, animals were vaccinated twice, 3 weeks apart, with 100 µg plasmid at each immunization.
Dermal device EP. Immediately following injection of DNA or siRNA, the dermal device was applied to the site of dermal injection. The array was "wiggled" at the injection site to ensure good contact, and electrotransfer was achieved through pulse generation from either the Elgen 1000 or a low-voltage battery circuit.
Visualization of GFP reporter gene signal. Skin samples or biopsies were removed from animals postmortem and stored on ice until imaged under an OV 100 imaging microscope (AntiCancer, San Diego, CA) at 480 nm.
Pixel count method. The images were processed using Adobe Photoshop CS5. A "gated region" of electrode contact for pixel analysis was established on the presumption that transfection occurs only where the electric field is applied and that the electric field is formed only where the electrodes are in direct contact with the skin. The distance between the first and fourth electrode in the SEP device is 4.5 mm. The "ruler tool" in Photoshop was used to isolate a 4.5-mm square region, which was defined as ~95 pixels in length. The images obtained from the microscopy were 8-bit RGB files. Photoshop is able to recognize pixel intensities ranging from 0 to 255 (darkest to brightest) in three different channels (red, green, blue). Intuitively, positive GFP signal would predominate in the green channel; therefore, pixel analysis was restricted to this channel. The CS5 version of Photoshop is able to automatically calculate the mean and median pixel intensity of a selected region. Since the distribution of pixel intensity was not symmetrical in most cases, the median gave a better representation of central tendency for the histogram.
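A scripted equivalent of this manual workflow might look as follows (assuming Pillow and NumPy; the function name and the pixel-box convention are ours):

```python
import numpy as np
from PIL import Image

def median_green_intensity(path, box):
    """Median green-channel intensity inside a gated region, mirroring
    the Photoshop procedure described above. `box` is the
    (left, upper, right, lower) pixel box of the electrode contact
    area (~95 px across for the 4.5 mm array span)."""
    img = Image.open(path).convert("RGB")
    region = np.asarray(img.crop(box))      # H x W x 3, values 0-255
    green = region[:, :, 1]                 # GFP signal lives in green
    return float(np.median(green))
```

As in the manual procedure, the median is used because the intensity histograms are typically skewed.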
Histological analysis. Skin samples or biopsies were removed from animals postmortem and immediately preserved in 10% neutral buffered formalin and processed for histopathological analysis. Appropriate tissues were trimmed, processed, embedded in paraffin, sectioned at ~5 µm, and stained with DAPI. All images were captured on the Zeiss Axiovision Microscope (Carl Zeiss, Goettingen, Germany) using a ×20 objective.
FISH. Skin sections serial to those used for histological analysis were processed by FISH using a 5′ DIG-labeled, LNA-enhanced probe (Exiqon, Woburn, MA) designed to detect the antisense strand of the Cy3-siRNA. The sequence of the probe is 5′-AACTTACGCTGAGTACTTC-3′.
Sections were washed, blocked, and incubated with an anti-DIG-HRP antibody (Perkin Elmer, Waltham, MA). A fluorescein-fluorophore Tyramide Signal Amplification Kit (Perkin Elmer) was used to enhance the HRP-generated signal. Skin sections were then stained with DAPI and visualized for both Cy3 signal (directly tagged to the siRNA) and fluorescein signal (detection of the AS strand of the Cy3-siRNA). Images were overlaid to determine the extent of colocalization.

The remaining authors are employees of Inovio Pharmaceuticals and as such have financial interest (in the form of salary compensation, stock options and/or stock ownership) in the work described in this manuscript. T.S.Z., A.C., and J.A. are employees of Alnylam Pharmaceuticals and as such have financial interest (in the form of salary compensation, stock options and/or stock ownership) in the work described in this manuscript. | 2018-04-03T05:30:10.634Z | 2012-02-01T00:00:00.000 | {
"year": 2012,
"sha1": "5a7806ce5fa497bf9952f1c2892cabb8bf12cac1",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1038/mtna.2012.1",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "5a7806ce5fa497bf9952f1c2892cabb8bf12cac1",
"s2fieldsofstudy": [
"Biology",
"Engineering"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
20569148 | pes2o/s2orc | v3-fos-license | A SPECTRAL APPROACH INTEGRATING FUNCTIONAL GENOMIC ANNOTATIONS FOR CODING AND NONCODING VARIANTS
Over the past few years, substantial effort has been put into the functional annotation of variation in human genome sequence. Such annotations can play a critical role in identifying putatively causal variants among the abundant natural variation that occurs at a locus of interest. The main challenges in using these various annotations include their large numbers, and their diversity. Here we develop an unsupervised approach to integrate these different annotations into one measure of functional importance (Eigen), that, unlike most existing methods, is not based on any labeled training data. We show that the resulting meta-score has better discriminatory ability using disease associated and putatively benign variants from published studies (in both coding and noncoding regions) compared with the recently proposed CADD score. Across varied scenarios, the Eigen score performs generally better than any single individual annotation, representing a powerful single functional score that can be incorporated in fine-mapping studies.
Suppose there are $k_1$ annotations in the first block, $k_2$ for the second block, and $k_3$ for the third block, with $k_1 + k_2 + k_3 = M$.

For clarity, we rename the variables for the second block as $s_1, \ldots, s_{k_2}$, and those for the third block as $u_1, \ldots, u_{k_3}$. Then we have the following system of equations:

$$\begin{pmatrix} t_1 + s_1 & t_1 + s_2 & \cdots & t_1 + s_{k_2} \\ t_2 + s_1 & t_2 + s_2 & \cdots & t_2 + s_{k_2} \\ \vdots & \vdots & \ddots & \vdots \\ t_{k_1} + s_1 & t_{k_1} + s_2 & \cdots & t_{k_1} + s_{k_2} \end{pmatrix} = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1k_2} \\ a_{21} & a_{22} & \cdots & a_{2k_2} \\ \vdots & \vdots & \ddots & \vdots \\ a_{k_1 1} & a_{k_1 2} & \cdots & a_{k_1 k_2} \end{pmatrix} \qquad (1)$$

Adding up the elements of this matrix we get

$$k_2 (t_1 + \cdots + t_{k_1}) + k_1 (s_1 + \cdots + s_{k_2}) = \sum_{i,j} a_{ij}.$$

Similarly, we can get

$$k_3 (t_1 + \cdots + t_{k_1}) + k_1 (u_1 + \cdots + u_{k_3}) = \sum_{i,j} b_{ij}$$

and

$$k_3 (s_1 + \cdots + s_{k_2}) + k_2 (u_1 + \cdots + u_{k_3}) = \sum_{i,j} c_{ij}.$$

The matrices $a$, $b$, $c$ represent the corresponding sub-matrices of the $Q$ matrix. From this system of equations we can solve for $t_1 + \cdots + t_{k_1}$, $s_1 + \cdots + s_{k_2}$, and $u_1 + \cdots + u_{k_3}$. We also have, by summing the elements of column $j$, that $(t_1 + \cdots + t_{k_1}) + k_1 s_j = \sum_i a_{ij}$ for $j = 1, \ldots, k_2$, and we can then get $s_1, \ldots, s_{k_2}$. Furthermore, from the equations we can get the solution for $t_i$ with $i = 1, \ldots, k_1$. We can then easily get the solution for $u_1, \ldots, u_{k_3}$.

Since there are more equations than unknowns, the only remaining issue is to require that the systems of equations are compatible. Since the entrywise exponential of the matrix on the left in eq. (1) is of rank 1, a necessary (and sufficient) condition is that the entrywise exponential of the matrix on the right in eq. (1) is of rank 1. The exponentials of the entries on the right-hand side are covariances for pairs of conditionally independent random variables, so by (5) in the main text we can write them as

$$\begin{pmatrix} \mu_{1,0}\lambda_{1,0} & \mu_{1,0}\lambda_{2,0} & \cdots & \mu_{1,0}\lambda_{k_2,0} \\ \mu_{2,0}\lambda_{1,0} & \mu_{2,0}\lambda_{2,0} & \cdots & \mu_{2,0}\lambda_{k_2,0} \\ \vdots & \vdots & \ddots & \vdots \\ \mu_{k_1,0}\lambda_{1,0} & \mu_{k_1,0}\lambda_{2,0} & \cdots & \mu_{k_1,0}\lambda_{k_2,0} \end{pmatrix},$$

where $\mu_{i,0}$ is the conditional mean of functional annotation $i$ in the first block given component 0, and $\lambda_{j,0}$ is the conditional mean of functional score $j$ in the second block. Therefore the matrix can be written as $\frac{1-\pi}{\pi}\,\mu_0 (\lambda_0)^T$, where $\mu_0$ and $\lambda_0$ have dimension $k_1 \times 1$ and $k_2 \times 1$, respectively. Therefore the exponential of the matrix is of rank 1 and the proof is complete.
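As a small numeric sanity check of the first step of this argument (made-up block sizes and values, not from the paper), the three summed equations can be solved for the block totals with NumPy:

```python
import numpy as np

# Hypothetical block sizes and ground-truth block values.
k1, k2, k3 = 3, 4, 2
rng = np.random.default_rng(0)
t = rng.normal(size=k1)          # block-1 values
s = rng.normal(size=k2)          # block-2 values
u = rng.normal(size=k3)          # block-3 values
a = t[:, None] + s[None, :]      # k1 x k2 matrix with entries t_i + s_j
b = t[:, None] + u[None, :]      # k1 x k3
c = s[:, None] + u[None, :]      # k2 x k3

# Summing all entries of a, b, c gives three linear equations in
# T = sum(t), S = sum(s), U = sum(u):
#   k2*T + k1*S = sum(a);  k3*T + k1*U = sum(b);  k3*S + k2*U = sum(c)
A = np.array([[k2, k1, 0],
              [k3, 0, k1],
              [0, k3, k2]], dtype=float)
rhs = np.array([a.sum(), b.sum(), c.sum()])
T, S, U = np.linalg.solve(A, rhs)
assert np.allclose([T, S, U], [t.sum(), s.sum(), u.sum()])

# Column sums of a then recover each s_j: sum_i a[i, j] = T + k1 * s_j.
s_hat = (a.sum(axis=0) - T) / k1
assert np.allclose(s_hat, s)
```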
S2. Determining the Functional Class of a Variant
The functional class for a variant is retrieved from the CADD database (see Web-based Resources). These were originally produced using the Ensembl Variant Effect Predictor (VEP) with the per-gene option. When a variant matches multiple functional categories (for example, a variant that is synonymous in one splice variant and non-synonymous in another), this option causes VEP to return only the most severe effect for each gene. In most cases this results in a single annotation per variant. The exception is when more than one gene overlaps the variant. If this occurs, CADD will return multiple lines for the annotation, one per gene. In this case the first annotation listed in the CADD output is used here. The severity ranking used by VEP is given in the documentation (see Web-based Resources).
Variants that are classified in the CADD annotations as "Non Synonymous", resulting in an amino acid substitution, "Stop Lost", removing the stop codon, "Stop Gained", producing a premature stop codon, or "Splice Site", or "Canonical Splice", altering the splice junction between exons, are considered to be non-synonymous coding changes. The noncoding annotations are "Regulatory", referring to variants in a sequence with a known regulatory function, "Intronic", referring to variants occurring in introns but not part of a splice site, "Downstream" and "Upstream", referring to variants in genic region either after the last exon or before the first exon, "Noncoding change", referring to variants in noncoding RNAs, "3prime UTR" and "5prime UTR", referring to variants in the untranslated portions of a spliced RNA, "Intergenic", referring to variants outside a known gene region. "Synonymous" refers to variants in a coding region that do not result in an amino acid substitution. This last category is not included in the non-synonymous group since it has no potential to alter the amino acid sequence, and so is not covered by any of the protein function scores.
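As a concrete illustration, a minimal sketch of this grouping, assuming the category strings match the CADD labels exactly as quoted above:

```python
# Hypothetical mapping of CADD/VEP consequence labels to the groups
# described above; the exact label spellings are assumptions.
NONSYNONYMOUS = {"Non Synonymous", "Stop Lost", "Stop Gained",
                 "Splice Site", "Canonical Splice"}
NONCODING = {"Regulatory", "Intronic", "Downstream", "Upstream",
             "Noncoding change", "3prime UTR", "5prime UTR", "Intergenic"}

def classify_variant(cadd_consequences):
    """Group a variant by its first listed CADD consequence.

    When CADD reports one line per overlapping gene, the first
    annotation listed is used, mirroring the convention above.
    """
    label = cadd_consequences[0]
    if label in NONSYNONYMOUS:
        return "non-synonymous coding"
    if label == "Synonymous":
        return "synonymous coding"
    if label in NONCODING:
        return "noncoding"
    return "unclassified"

print(classify_variant(["Stop Gained", "Intronic"]))  # non-synonymous coding
```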
Violin plots for Eigen scores for non-synonymous variants in several gene sets. The horizontal line corresponds to the median Eigen score for variants in random genes.
For the proposed Eigen score, variants in intolerant, essential, and GWAS genes have the highest scores, followed by random, LoF, tolerant, and olfactory genes. Variants in these gene sets had lower scores than pathogenic variants reported in the ClinVar database, but higher scores than benign variants in the ClinVar database. These results are similar to the ones obtained using the CADD-score (see Figure below), with one difference.
Violin plots for CADD-scores (v1.1) for non-synonymous variants in several gene sets.
Namely, variants in tolerant and olfactory genes tend to have lower CADD-scores than benign variants in ClinVar. As shown in the Figure below, variants in tolerant and olfactory genes tend to have higher protein function scores (e.g. PolyPhenDiv) compared to benign variants, but lower conservation scores (e.g. PhyloVer). Since the CADD-score focuses on evolutionary selection (it quantifies negative selection at a position), these lower conservation scores for variants in tolerant and olfactory genes are reflected in the lower CADD-scores for these gene sets compared to benign variants in ClinVar.
S4. Hierarchical model to combine sequencing data with functional annotations
The Eigen, Eigen-PC and other aggregate scores are expected to be most useful when combined with population level genetic data for fine-mapping purposes at loci of interest. Therefore, we have performed simulation studies to investigate the improvement in discriminatory ability by combining sequencing data from a case-control dataset with the Eigen score compared to using the Eigen score alone.
Violin plots for PolyPhenDiv and PhyloVer scores for variants in several gene sets.
We based our simulations on data for one gene, the Vacuolar Protein Sorting 13 homolog B (VPS13B, also known as COH1, MIM #607817), from a whole-exome sequencing autism spectrum disorders (ASD) case/control dataset with 860 individuals. VPS13B is a gene associated with Cohen syndrome (CS, OMIM #216550), a rare autosomal recessive neurodevelopmental disorder, and mutations in this gene have also been reported in individuals with autism and non-syndromic intellectual disability. We simulated the truly causal variants in this gene based on a logistic regression model with the Eigen score as the sole predictor, assuming an association between the causal status of a variant and the Eigen score of magnitude (relative risk or RR) 1.1, 2, or 4 and assuming the proportion of truly causal variants to be 10%. Only non-synonymous variants were used in the simulations. The case-control status was generated as follows. For carriers of causal variants we generated a continuous phenotype from a normal distribution with a mean of 0.5 and a standard deviation of 0.2, while for non-carriers we used a standard normal distribution (corresponding to a Cohen's d effect size [4] of 0.53, a moderate effect). Cases were defined as the individuals with phenotype values above the median, while the remaining individuals were classified as controls. We compare the discriminative performance of the hierarchical model [5,6,7] including the Eigen score as the functional predictor with that of using the Eigen score alone. As shown, combining the Eigen score with the case-control frequencies improves the power to identify true causal variants, especially when the RR for the association between the Eigen score and the causal status is low (1.1-2), as seems to be the case in many of the examples we looked at.
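For concreteness, a minimal sketch of this simulation design (the Eigen scores, genotype matrix, and carrier frequency here are simulated stand-ins rather than the actual VPS13B data):

```python
import numpy as np

rng = np.random.default_rng(1)

# --- variant level -------------------------------------------------------
n_variants = 200                      # assumed number of variants
rr = 2.0                              # RR linking Eigen score to causality
eigen = rng.normal(size=n_variants)   # stand-in Eigen scores

# Logistic model: P(causal) increases with the Eigen score; the intercept
# is tuned so that ~10% of variants are causal on average.
intercept = np.log(0.1 / 0.9)
p_causal = 1 / (1 + np.exp(-(intercept + np.log(rr) * eigen)))
causal = rng.random(n_variants) < p_causal

# --- individual level ----------------------------------------------------
n_ind = 860
# Assumed genotype matrix: rare variants, ~1% carrier frequency each.
geno = rng.random((n_ind, n_variants)) < 0.01
carries_causal = (geno & causal).any(axis=1)

# Phenotype: N(0.5, 0.2) for carriers of a causal variant, N(0, 1) otherwise.
pheno = np.where(carries_causal,
                 rng.normal(0.5, 0.2, n_ind),
                 rng.normal(0.0, 1.0, n_ind))
case = pheno > np.median(pheno)       # top half are cases
print(causal.sum(), "causal variants;", case.sum(), "cases")
```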
S5. CADD v1.0 vs. v1.1

In v1.1, Kircher et al. [8] make two changes to the original (v1.0) score. First, the authors add several functional scores not included in v1.0. Second, they use a logistic regression model rather than an SVM as the learner. In the release notes for v1.1 the authors compare the two versions on results from tests used in the original paper. For example, they compare the correlation between the CADD-scores and the change in expression level associated with variants in regulatory regions for three genes. They find that v1.1 has a higher correlation in one of the genes, and in the data pooled from all three; however, v1.0 has a higher correlation for the other two genes. They also look at the AUC in four comparisons between ClinVar pathogenic and ESP likely benign variants. In three of the four comparisons, v1.1 has a small advantage over v1.0; in the fourth they are essentially equal.
These results suggest that v1.1 may be an improvement on balance, but that it does not always dominate v1.0. This is in line with our findings, in which v1.0 sometimes performs worse than v1.1 and sometimes better.
S6. Comparisons with CADD-score with reduced set of annotations
We have performed comparisons with the CADD-score trained on the same set of annotations we considered in the construction of the Eigen score. Specifically, we first re-trained CADD using the exact same set of annotations we used for our own score, Eigen. Based on the new CADD model, we calculated new scores for variants in our example datasets (see the Results section for more details on these datasets), and the results are summarized in Supplementary Tables 14, 15, and 16. Overall, the performance of the new CADD-score is consistent with that of the full CADD-scores (v1.0 and v1.1), and these results show that the Eigen score outperforms the CADD-score not simply because of the set of annotations used by Eigen (which is a proper subset of the set used by CADD), but rather because of more fundamental differences in the methodologies used to construct the two types of scores, as we explain in the main text. In Supplementary Table 14 we show results for the comparisons of the different aggregate scores on missense variants in four Mendelian genes. Note that we have excluded the nonsense mutations in these four genes because the new version of CADD did not work properly for nonsense mutations due to the exclusion of functional consequence from the annotation set. In Supplementary Table 15 we show results on non-synonymous de novo variants in various neuropsychiatric diseases. In Supplementary Table 16 we report results on noncoding variants identified in GWAS and eQTL studies.
S7. Including CADD into the construction of the Eigen score
We have performed several experiments with the CADD-score included as one of the component annotations, despite the fact that including the CADD-score violates our main assumption of conditional independence for annotations in different blocks. For both the coding and noncoding settings, we included the CADD-score v1.0 in the evolutionary conservation block. As can be seen from the correlation plots (see Figures below), the CADD-score correlates strongly with the conservation scores (as expected), but also with annotations in the other blocks (owing to the way the CADD-score is constructed).
Correlation among different functional annotations with the CADD-score (v1.0) included (noncoding and synonymous coding variants).

Because of these correlations with annotations in other blocks, the CADD-score gets assigned a fairly high weight, especially in the noncoding case (Supplementary Tables 17 and 18). However, this high weight is not necessarily reflective of the predictive accuracy of CADD, but reflects the natural correlation CADD has with annotations in the other blocks.
S8. Data artifacts
Data artifacts can impact the accuracy of meta-scores as discussed here, especially for Eigen-PC. In particular, correlations between annotations that are due to something other than the functional/non-functional mixture can skew the results. However, the block structure we have in the Eigen score may help to minimize this problem. The blocks are chosen in such a way that functional annotations derived using the same or similar (experimental) data are grouped together.
Since the weights used by the Eigen score depend on the R matrix, which is derived using between-block correlations, artifactual correlations among annotations within the same block should have limited influence on the weights.

Table 8. P values (Wilcoxon rank-sum test) for somatic mutations (recurrent vs. non-recurrent) in the COSMIC database, for individual functional scores and several meta-scores. Comparisons are done for variants in different functional categories. n-rec is the number of recurrent somatic mutations, and n-nonrec is the number of nonrecurrent somatic mutations. The best single annotation is highlighted for each dataset.
"year": 2015,
"sha1": "4ed8f5a13e267e3ff88e1e034620bd19c18cdd7a",
"oa_license": "implied-oa",
"oa_url": "https://europepmc.org/articles/pmc4731313?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "5166c5c94ada299f9494b7fa189f106ddc71ee7a",
"s2fieldsofstudy": [
"Biology",
"Computer Science"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
Accurate identification of circRNA landscape and complexity reveals their pivotal roles in human oligodendroglia differentiation
Circular RNAs (circRNAs), a novel class of poorly conserved non-coding RNAs that regulate gene expression, are highly enriched in the human brain. Despite increasing discoveries of circRNA function in human neurons, the circRNA landscape and function in developing human oligodendroglia, the myelinating cells that govern neuronal conductance, remains unexplored. Meanwhile, improved experimental and computational tools for the accurate identification of circRNAs are needed. We adopt a published experimental approach for circRNA enrichment and develop CARP (CircRNA identification using A-tailing RNase R approach and Pseudo-reference alignment), a comprehensive 21-module computational framework for accurate circRNA identification and quantification. Using CARP, we identify developmentally programmed human oligodendroglia circRNA landscapes in the HOG oligodendroglioma cell line, distinct from neuronal circRNA landscapes. Numerous circRNAs display oligodendroglia-specific regulation upon differentiation, among which a subclass is regulated independently from their parental mRNAs. We find that circRNA flanking introns often contain cis-regulatory elements for RNA editing and are predicted to bind differentiation-regulated splicing factors. In addition, we discover novel oligodendroglia-specific circRNAs that are predicted to sponge microRNAs, which co-operatively promote oligodendroglia development. Furthermore, we identify circRNA clusters derived from differentiation-regulated alternative circularization events within the same gene, each containing a common circular exon, achieving additive sponging effects that promote human oligodendroglia differentiation. Our results reveal dynamic regulation of human oligodendroglia circRNA landscapes during early differentiation and suggest critical roles of the circRNA-miRNA-mRNA axis in advancing human oligodendroglia development.
Background
Circular RNAs (circRNAs) are a large class of single-stranded, stable, functional RNAs in mammalian cells that have a closed-loop structure [1][2][3][4][5]. CircRNAs are derived from the covalent joining of a downstream 5′ splice donor to an upstream 3′ splice acceptor via a previously underappreciated pre-mRNA splicing mechanism known as "back-splicing." Compelling evidence shows that circRNAs play sophisticated biological roles, including regulation of pre-mRNA splicing, miRNA sponging, RNA binding protein (RBP) sequestration, and IRES-mediated cap-independent translation to produce short peptides [6][7][8][9]. The molecular mechanisms underlying circRNA biogenesis are attributed to cis-regulatory elements, such as repetitive Alu sequences, and trans-acting RBPs that flank the circular exon in the pre-mRNA. Both mechanisms could help bring back-splicing junction (BSJ) sites into proximity for efficient splicing [10][11][12]. Recent studies also indicated that adenosine-to-inosine (A-to-I) RNA editing located within Alu sequences can interfere with inverted repeated Alu pairs and thereby influence circRNA biogenesis [13,14].
While circRNAs are widespread in metazoans with generally low levels of expression, many circRNAs are highly enriched in specific tissues, such as the brain, and exhibit cell type-specific expression and function [14]. Specifically, abundant circRNAs are expressed in brain neurons and dynamically regulated during differentiation [14,15]. Of note, numerous circRNAs are specifically expressed in the human brain [16]. Despite the well-documented neuronal and synaptic circRNAs that display abnormalities in brain diseases [17,18], a comprehensive and precise understanding of the circRNA landscape and its downstream biological pathways in the human brain is still missing. Moreover, circRNAs were recently found in glia cells isolated from the post-mortem adult brains [19], including oligodendroglia (OL) that are responsible for myelinating neuronal axons to achieve rapid information flow in the brain [20]. Furthermore, alternative splicing is extensive in OLs that govern myelin development [21], and OL defects underlie neurodevelopmental and neurological diseases represented by schizophrenia and multiple sclerosis [22][23][24]. Therefore, the regulation and function of circRNAs in human OL development warrant rigorous investigation. Nonetheless, due to the difficulty in obtaining human OLs in culture, understanding circRNA biology in human OL development is a prevailing challenge.
Accurate and precise identification of the circRNA landscape relies on RNA-seq data but faces technical challenges. Reads spanning the circRNA-specific BSJ sites are the only means of distinguishing circRNAs from their parental transcripts [10,15,[25][26][27][28]. Because BSJ reads account for only a small portion of RNA-seq reads and do not map to the genomic index, the identification and quantification of circRNAs suffer from low power and sensitivity and are thus prone to high false discovery rates [4,29]. RNase R treatments have been widely used to degrade linear RNA for circRNA enrichment to identify circRNAs of low abundance [11,30,31]. However, some transcripts are resistant to RNase R, owing to their lack of a single-strand 3′ overhang or their possession of secondary structures such as the G-quadruplex (G4) [32]. One recent study employed the addition of poly-A tails in vitro followed by RNase R treatment in optimized buffer conditions, replacing K+ with Li+ (referred to as the A-tailing approach hereafter), and achieved the best linear transcript removal in an experimental setting to date [32].
On the other hand, the comparison of BSJ read-based computational methods resulted in only modest overlap, again suggesting a high false discovery rate of circRNA identification when relying solely on BSJ reads [29]. Although constructing a circRNA pseudo-reference for re-aligning RNA-seq reads achieved more accurate and sensitive circRNA identification [33], pseudo-reference mapping requires a more thorough removal of reads from linear transcripts to reduce the false-positive rate [33]. Furthermore, because circRNAs that harbor the same BSJ can contain multiple exons and even retained introns, BSJ read-based methods cannot parse circRNA full-length information and internal structural variations, which are critical for circRNA function [34]. One in silico approach used split alignments of the read pairs containing BSJ reads to reconstruct circRNA full length from RNA-seq data, despite the challenge posed for longer circRNAs by the short read lengths of Illumina sequencing [35]. Recently, Nanopore long-read sequencing was performed to determine circRNA full length and alternative splicing events within the circRNA body but was restricted by low read depth [36,37]. Also, alternative circularization can generate multiple circRNAs within a single gene that share the same BSJ site, illustrating the complicated diversity of circRNA biology [10,38]. These "clustered" circRNAs share partial common sequences and may function additively in sponging miRNAs or RBPs, yet their potential coordinating roles are often overlooked. A multi-functional computational framework optimized for A-tailing datasets is needed to identify and quantify circRNAs accurately and cost-effectively.
We developed CARP (CircRNA identification using A-tailing RNase R approach and Pseudo-reference alignment), a comprehensive computational framework for circRNA identification and quantification using A-tailing RNase R RNA-seq data. Using CARP, we systematically interrogated the circRNA landscape in a human OL cell line called HOG and identified circRNA dynamic regulation specifically during human OL differentiation. Some circRNAs appeared to be regulated independently of their parental mRNAs during differentiation, possibly through flanking intron-associated RBPs or adenosine-to-inosine (A-to-I) RNA editing within Alu repetitive elements. Multiple circRNAs regulated upon HOG differentiation could potentially advance OL differentiation by influencing miRNA activities and downstream gene expression.
Effective and accurate circRNA identification and quantification by CARP
In order to enrich circRNAs in RNA-seq data, we first adopted a recently published method with the addition of poly-A tails in vitro followed by RNase R treatment in Li+ buffer (A-tailing approach hereafter) to remove linear RNAs [32]. Total RNA extracted from HEK293T and SH-SY5Y cells was used to test efficiency. The majority of linear mRNAs were degraded by RNase R treatment with traditional K+ buffer (Fig. 1a; Additional file 1: Figure S1A). Also, linear RNAs that harbor G4 structures, and are thus RNase R-resistant in K+ buffer, were efficiently degraded when switching to Li+ buffer (Fig. 1b; Additional file 1: Fig. S1B). Moreover, RNAs that lack 3′ poly-A tails but harbor unique 3′ end structures, such as histone mRNAs, could only be degraded by combined A-tailing and RNase R treatments (Fig. 1c; Additional file 1: Fig. S1C). In contrast, an example circRNA, circSMARCA5 (hsa_circ_0001445) [39], was not affected by the A-tailing approach (Fig. 1d; Additional file 1: Fig. S1D). Taken together, A-tailing coupled with RNase R in Li+ buffer improved the efficiency of linear RNA removal from total RNA samples without affecting circRNA stability (t-test, p-value = 2.3 × 10⁻²²) (Fig. 1d; Additional file 1: Fig. S1D).

Fig. 1 CARP effectively and accurately identifies full-length circRNAs from A-tailing data. a Substantial linear mRNAs were degraded by RNase R treatment in K+ buffer in HEK293T cells. b mRNAs with G-quadruplex (G4) structures were further degraded by RNase R treatment in Li+ buffer in HEK293T cells. c Linear RNAs with short poly-A tails were resistant to RNase R treatment but could be degraded after adding a poly-A tail in HEK293T cells. d The A-tailing approach achieved the best linear RNA removal (scatter plot) without affecting circRNA stability (IGV view) in HEK293T cells. e Workflow of CARP to identify confident, full-length circRNAs from A-tailing data. f Density plot showing confident circRNA identification by removing false-positive circRNAs sensitive to A-tailing and RNase R treatment in HOG cells. The ratio of RNA levels between the A-tailing treatment and the control was calculated and shown on the x-axis. The cutoff that defined resistant vs. sensitive upon A-tailing treatment is shown with the dashed line (FDR < 0.05). g Most of the mapped reads in the A-tailing library were located in a predicted circRNA body sequence rather than a non-circRNA-forming sequence in HOG cells. Student's t-test (two-tailed and unpaired) was used for gene expression comparison between different libraries.

We then applied A-tailing to explore the dynamic regulation of human OL circRNA landscapes using the human oligodendroglioma cell line HOG. The human neuroblastoma cell line BE(2)-M17 (referred to as M17 hereafter) was used in parallel experiments for cell-specificity comparisons. Both HOG and M17 cells can be induced to undergo robust differentiation that recapitulates the early morphologic characteristics of OL and neuron development (Additional file 1: Fig. S2A) [40,41]. In addition, QKI5, an RNA-binding protein enriched in human OLs over neurons (Additional file 1: Fig. S2B) [42], was abundantly expressed in HOG cells but only negligibly expressed in M17 cells (Additional file 1: Fig. S2C). Moreover, global transcriptomic principal component analysis (PCA) demonstrated a close correlation between HOG cells and iPSC-derived OLs (Additional file 1: Fig. S2D-E) [43,44], further supporting HOG cells as an in vitro model for human OL that provides sufficient materials for reliable detection of low abundance circRNAs.
In addition, to improve computational methods for reliable identification of circRNAs, we developed a computational framework, CARP (CircRNA identification using A-tailing RNase R approach and Pseudo-reference alignment), designed to handle A-tailing datasets. Four well-established algorithms, including CIRCexplorer2, CIRIquant, find_circ, and MapSplice, were first applied for independent putative circRNA identification. However, only a subset of circRNAs was shared by the outputs of these methods (Additional file 1: Fig. S3A), indicating relatively low power using BSJ reads alone and potentially false-positive circRNAs identified by any single software program [29]. Therefore, to overcome these problems, all circRNAs identified by any one of the four BSJ-based algorithms were pooled, and a pseudo-reference for each candidate circRNA was constructed using the sequence ± 149 bp flanking the BSJ sites (Fig. 1e). Reads that align directly against the pseudo-reference should be derived explicitly from circRNA BSJ sites.
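As an illustration of the pseudo-reference idea, a minimal sketch under simplifying assumptions (a plain dict genome, 0-based half-open coordinates, and intronic gaps ignored):

```python
def bsj_pseudo_reference(genome, chrom, circ_start, circ_end, flank=149):
    """Build a BSJ-spanning pseudo-reference for one candidate circRNA.

    In a back-spliced circle, the sequence just upstream of the circRNA
    3' end (the donor) is joined to the sequence at the circRNA 5' end
    (the acceptor), so the pseudo-reference is:
        last `flank` bases of the circle + first `flank` bases of the circle.
    `genome` is assumed to be a dict of chromosome name -> sequence string.
    """
    seq = genome[chrom][circ_start:circ_end]
    return seq[-flank:] + seq[:flank]

# Toy example: a 400-bp "circle" on a made-up chromosome.
genome = {"chrTest": "ACGT" * 1000}
ref = bsj_pseudo_reference(genome, "chrTest", 100, 500)
assert len(ref) == 298  # 149 bp on each side of the junction
```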
The stringency of reads that map to the pseudo-reference, such as how many nucleotides surrounding the BSJ sites must be included in the sequencing reads, plays a critical role in distinguishing reads derived from circRNAs from those derived from their linear parental genes. To achieve the lowest possible false discovery rate (FDR) without compromising the actual circRNA reads from A-tailing libraries, CARP can perform a series of optimizations to define a suitable stringency. In addition, CARP also re-aligned these circRNA reads to the genome and transcriptome to eliminate false-positive reads. We determined that requiring reads to exactly match an 8-bp seed spanning the BSJ (± 4 bp on each side of the junction) achieved an FDR < 0.05. Under these criteria, most false-positive reads were removed, and the remaining reads were considered bona fide circRNA reads (Additional file 1: Fig. S3B).
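A sketch of one way such a junction-spanning filter could look (hypothetical function; we assume the BSJ sits at the midpoint of the pseudo-reference and read positions are alignment offsets on it):

```python
def spans_bsj(read_seq, read_start, pseudo_ref, min_side=4):
    """Check that a read crosses the BSJ with an exact seed match.

    The BSJ is assumed to sit at the midpoint of the pseudo-reference.
    The read must cover at least `min_side` bases on each side of the
    junction, and the 2 * min_side seed centred on the junction must
    match the reference exactly.
    """
    junction = len(pseudo_ref) // 2
    read_end = read_start + len(read_seq)
    if read_start > junction - min_side or read_end < junction + min_side:
        return False
    seed_ref = pseudo_ref[junction - min_side:junction + min_side]
    seed_read = read_seq[junction - min_side - read_start:
                         junction + min_side - read_start]
    return seed_read == seed_ref

ref = "A" * 145 + "ACGTTGCA" + "A" * 145   # toy 298-bp pseudo-reference
print(spans_bsj("ACGTTGCA", 145, ref))      # True: exactly covers the seed
```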
We further validated and refined the results based on the direct comparison of the circRNA species identified by A-tailing samples to the untreated libraries. A linear reference was constructed using the sequence of the last exon to quantify the linear host transcript of each candidate circRNA because the last exons barely form circRNAs [2,32]. By calculating the linear mRNA ratio between A-tailing and untreated libraries, CARP determined the RNA pool sensitive or resistant to A-tailing/RNase R treatment using FDR < 0.05 as a cutoff. Compared with 95% of linear RNA, only 4% of circRNAs identified by CARP were significantly sensitive to A-tailing/ RNase R treatment ( Fig. 1f; Additional file 1: Fig. S3C). Furthermore, circRNAs that were sensitive to RNase R treatment were subsequently removed, resulting in a substantive true positive circRNA pool for downstream study. Taken together, our data suggest a much-improved circRNA identification both experimentally and computationally in this study.
Compared to untreated libraries, A-tailing allowed the identification of an additional 37,950 de novo circRNAs by CARP in HOG cells (Additional file 1: Fig. S3D). Notably, circRNAs identified by CARP in A-tailing libraries were highly correlated with circRNAs in untreated RNA-seq libraries (Pearson correlation, R² = 0.99, p-value < 2.2 × 10⁻¹⁶), with the majority of circRNAs displaying enrichment after A-tailing (Additional file 1: Fig. S3E). Meanwhile, the circRNA expression levels quantified by CARP using pseudo-references displayed a high correlation with circRNA quantification by CIRCexplorer2 (Pearson correlation, R² = 0.90) and CIRIquant (Pearson correlation, R² = 0.95) using BSJ reads (Additional file 1: Fig. S3F, G). In addition, since A-tailing/RNase R degraded most linear transcripts, the mapped reads spanning discontinuous regions in the pre-mRNA, referred to as "split reads," can be used to determine full-length circRNA sequences (Fig. 1e). The split reads show the exact regions in the circRNA host genes that could be included or excluded in circRNAs. CARP supplied accurate circRNA full-length annotation, evidenced by the fact that reads from A-tailing data mapped to the predicted circRNA body sequences rather than non-circRNA-forming sequences (Fig. 1g). For example, by using split reads, a circRNA from GLRX3 was annotated to contain 3 exons (Additional file 1: Figure S3H, blue color) with one linear exon (Additional file 1: Fig. S3H, red color) excluded from the circRNA, and a detailed inspection in IGV confirmed no A-tailing reads in the excluded exon on its host gene (Additional file 1: Fig. S3H, red color). The circGLRX3 full length constructed by CARP was consistent with the recently published long-read sequencing data (Additional file 1: Fig. S3H, lower box) [36]. These data suggested that CARP demonstrated better sensitivity in circRNA identification with complete length information.
Identification of human OL progenitor circRNA landscape by CARP
Using CARP, we identified an average of 38,561 confident circRNAs in HOG cells. A similar number of confident circRNA species were identified in M17 cells (Additional file 2). The majority were derived from annotated gene regions, including circRNA derived from exons and introns. Most exon-derived circRNAs bear multiple exons, while a few are from single exon and intron lariats in HOG cells (Additional file 1: Fig. S3I, Additional file 3). The lengths of most circRNAs ranged from 200 to 2000 nt (Additional file 1: Figure S4A). The median exon length of multi-exon circRNAs was comparable to the length of randomly chosen exons that did not form circRNAs, while the exon length of single exon circRNA was much longer (Additional file 1: Fig. S4B) [13]. Both upstream and downstream circRNA flanking introns were much longer than random introns in the human genome (Additional file 1: Fig. S4C). The numbers of exons in various circRNAs ranged from 1 to over 20 (Additional file 1: Figure S4D). Consistent with earlier reports, very few circRNAs contained the first exon or the last exon of the host transcripts due to the lack of splice donor or acceptor sequences to support backsplicing (Additional file 1: Fig. S4E). Using our recently developed algorithm, circMeta [45], we calculated the Alu score, which reflects the likelihood of circRNA formation by IRAlus formed within or across flanking introns [10]. The confident circRNAs identified by CARP showed a higher Alu score compared to false-positive circRNAs and randomly selected intron pairs (Additional file 1: Figure S4F), once again indicating the improvement in bona fide circRNA identification.
In order to identify the OL-specific circRNA landscape, A-tailing samples obtained from HOG and M17 cells were subjected to DE analyses by CARP using integrated DESeq2 [46], which revealed 2468 and 2660 DE circRNAs distinctly enriched in HOG and M17 cells, respectively (Fig. 2a). Among the DE circRNAs, circSLC45A4, a negative regulator of neuronal differentiation [47], was highly expressed in HOG cells. In contrast, the synaptoneurosomal circRIMS2 [14] was found enriched in M17 cells. Of note, only 346 and 427 significant DE circRNAs were identified in HOG and M17 cells using RNA-seq libraries without A-tailing treatment, further indicating the improved sensitivity afforded by A-tailing treatment (Additional file 1: Fig. S4G).

Fig. 2 CARP identified a distinct circRNA landscape in M17 and HOG cells more efficiently using A-tailing data. a The volcano plot showed significant DE circRNAs in M17 and HOG cells using A-tailing data. Blue and red dots indicate significant M17 and HOG cell-enriched circRNAs (DESeq2, FDR < 0.05). b Overlap of DE circRNAs identified by CARP using A-tailing data and control data without A-tailing. c A scatter plot showing the log2 fold change of DE circRNAs in HOG and M17 cells and their expression (counts per million) in HOG cells. Red dots indicate DE circRNAs that were explicitly identified by A-tailing data. Cyan dots show DE circRNAs identified by both A-tailing data and control data. A-tailing libraries were sensitive in identifying circRNAs with relatively low expression levels (red dots). d The density plot showed a high correlation of log2 fold changes for common circRNAs identified in the A-tailing library and the untreated library. Density colors show circRNA numbers at specific log2 fold changes. Red dots represent significantly DE circRNAs identified by A-tailing data (DESeq2, FDR < 0.05). e A scatter plot showed a high correlation of circRNA expression quantified by A-tailing data and qPCR for 4 randomly selected circRNAs with different expression levels.

With the improved sensitivity, CARP detected 6.63 times more de novo DE circRNAs between M17 and HOG cells than traditional untreated RNA-seq (Fig. 2b). The majority of the de novo DE circRNAs are of low abundance (Fig. 2c), demonstrating the ability of CARP to detect low abundance circRNAs. Importantly, the log2 fold changes of each circRNA also showed a high correlation between A-tailing and untreated libraries (Pearson correlation, R² = 0.82, p-value < 2.2 × 10⁻¹⁶), indicating that CARP is accurate for circRNA DE analysis (Fig. 2d). To further validate circRNA expression levels quantified by CARP, the expression of four DE circRNAs representing high, medium, and low-level expression in HOG cells was evaluated by qPCR with divergent primers (Additional file 4), which showed a high correlation with our RNA-seq data (Pearson correlation, R² = 0.99, Fig. 2e).
CircRNA biogenesis and sequence composition were regulated upon HOG differentiation
HOG cells were induced to differentiate for 13 days before being subjected to A-tailing and CARP analysis in order to delineate whether and how human OL development regulates circRNA landscapes. In addition to confirming morphological differentiation, differentiated M17 cells (Additional file 1: Figure S5B) were processed in parallel to achieve cell-specificity comparisons.
CARP identified 204 circRNA isoforms that carry different sequence compositions but share the same BSJ sites and that underwent significant isoform switching during HOG cell differentiation (t-test, p-value < 0.05 and inclusion level difference over 0.2, Fig. 3a). To annotate exons excluded or included in the 204 circRNA isoforms, we compared the sequences in these circRNA isoforms with gene annotation downloaded from the UCSC Table Browser and cassette exon information downloaded from HEXEvent [56]. We found that 17.89% of the alternative circular exons are from previously annotated cassette exons, 45.83% are derived from previously annotated constitutive exons, and 36.27% are from novel unannotated exons. For example, two isoforms of circCFAP299 exist due to the alternative inclusion of a 100-nt cassette exon (Fig. 3a, b). The expression level of the short isoform switched from 55% of total circCFAP299 to 76% upon HOG differentiation (Fig. 3c). During differentiation, the isoform switch could have functional consequences, as the unique exon in the long isoform may sponge miRNAs and sequester RNA-binding proteins. The cassette exon was circRNA-specific and is not annotated in any linear mRNA produced by the CFAP299 host gene in the UCSC genome browser. The circCFAP299 full length constructed by CARP, particularly the unique circRNA-specific exon, was consistent with the long-read sequencing data (Fig. 3c, lower box) [36].
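For clarity, a minimal sketch of the inclusion-level computation and switch test described above (hypothetical junction read counts; SciPy's t-test stands in for the full replicate handling):

```python
from scipy import stats

def inclusion_level(inc_reads, exc_reads):
    """Fraction of junction reads supporting exon inclusion."""
    return inc_reads / (inc_reads + exc_reads)

# Hypothetical junction read counts per replicate (undiff vs. diff).
undiff = [inclusion_level(i, e) for i, e in [(55, 45), (60, 40), (52, 48)]]
diff_ = [inclusion_level(i, e) for i, e in [(24, 76), (30, 70), (22, 78)]]

delta = sum(diff_) / len(diff_) - sum(undiff) / len(undiff)
t, p = stats.ttest_ind(undiff, diff_)
if p < 0.05 and abs(delta) > 0.2:
    print(f"isoform switch: inclusion difference {delta:+.2f}, p = {p:.3g}")
```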
In addition to isoform switches, CARP also identified 189 upregulated circRNAs and 181 downregulated circRNAs in differentiated HOG cells (Fig. 3d, FDR < 0.05 as cutoff) (Additional file 1: Fig. S6A). Few circRNAs were commonly regulated by differentiation of HOG and M17 cells, suggesting cell type-specific roles and regulation of these circRNAs in OL differentiation (Additional file 1: Fig. S6B). Most significant DE circRNAs during HOG differentiation are positively correlated with the developmental regulation of their host genes (Fig. 4a, group 1 shown in grey dots). However, a subclass of differentiation-regulated circRNAs showed changes distinct from the linear RNAs derived from their host genes (Fig. 4a, group 2 shown in orange and blue dots). For example, the gene encoding the vacuolar protein sorting-associated protein 13C (VPS13C), which belongs to the GO term of protein retention in the Golgi apparatus, was upregulated in differentiated HOG cells. Conversely, a circRNA derived from the VPS13C locus was downregulated, suggesting circVPS13C could be subject to post-transcriptional regulation independent of its host gene expression (Fig. 4b).
To elucidate the molecular mechanisms that regulate circRNA biogenesis, we first explored the potential roles of RBPs in circRNA biogenesis, focusing on RBP-encoding mRNAs that are significantly regulated during HOG differentiation. CARP was used to systematically survey eCLIP data for 150 available RBPs to search for their potential binding sites within the flanking introns of each selected circRNA (group 2 in Fig. 4a). Among them, 34 RBPs were regulated upon HOG differentiation and their binding sites were significantly enriched in the flanking introns of the group 2 circRNAs (Fig. 4c, bar plot) [57]. Interestingly, 8 of the 34 RBPs are known splicing factors (23.53%), which showed significant enrichment compared to splicing factors in the RBP database (7.76%) (chi-squared test, p-value = 0.02). This is consistent with the reported roles of RBPs in regulating back-splicing [58]. Specifically, a top-ranked RBP, KHSRP, was recently reported to regulate the biogenesis of a large number of circRNAs, including circVPS13C, in HepG2 and K562 cells (Additional file 1: Figure S6C) [59]. Our RNA-seq analysis revealed downregulation of KHSRP during HOG differentiation (Fig. 4c, heatmap) accompanied by reduced circVPS13C, which is consistent with a function of KHSRP decline in attenuating circVPS13C biogenesis.

Fig. 3 CircRNA internal structure and expression were regulated upon HOG differentiation. a CARP identified circRNA isoform switching events upon HOG differentiation. Blue and red dots represent short-to-long and long-to-short isoform switches, respectively (Student's t-test, two-tailed and unpaired, P < 0.05). The "inclusion difference" is the difference in the "inclusion level," calculated from junction read counts, between undifferentiated and differentiated HOG cells. b A 100-bp exon (green) was excluded in circCFAP299 in differentiated HOG cells. c IGV view shows that the cassette exon of circCFAP299 was supported by mapped reads. The novel exon in circCFAP299 was also confirmed by a recent study using a Nanopore-based long-read sequencing method. d DE analysis of circRNAs in undifferentiated and differentiated HOG cells. Orange and blue dots show upregulated and downregulated circRNAs upon HOG differentiation (DESeq2, FDR < 0.05).

Fig. 4 Grey dots represent circRNAs positively correlated with their host genes (group 1), while blue and orange dots stand for circRNAs inversely correlated with their host gene (group 2). b IGV view showed circVPS13C was downregulated upon HOG differentiation. Bar plot indicated VPS13C mRNA was upregulated upon HOG differentiation. c Bar plot indicated the number of circRNAs from group 2 in Fig. 4a whose flanking introns a given RNA-binding protein could bind. Heatmap showed RPKM levels of those RBPs in parental and differentiated HOG cells. d IGV view showed circPRH1 expression was upregulated during HOG differentiation. CircPRH1 full-length sequences are in green. The left bar plot shows that the A-to-I editing in the Alu sequence of circPRH1 flanking introns was depleted upon HOG differentiation. The right bar plot shows that PRH1 mRNA was downregulated upon HOG differentiation. Student's t-test (two-tailed and unpaired) was used for the A-to-I editing change. **P < 0.01.
We next questioned whether A-to-I editing of a cis-regulatory element might contribute to circRNA biogenesis upon HOG differentiation. CARP integrated a published algorithm, Software for Accurately Identifying Locations Of RNA-editing (SAILOR) [60][61][62], and focused on significantly changed A-to-I editing sites within complementary Alu sequences in the flanking intron of DE circRNAs upon HOG differentiation based on RNA-seq data from samples without RNase R treatment. As a result, 71 significant A-to-I editing changes occurred within DE circRNA flanking introns during HOG differentiation, suggesting A-to-I editing is robust in the cis-regulatory element of DE circRNA (Binomial test, p-value < 2.2 × 10 −16 ). Among these, circPRH1 was significantly upregulated upon HOG differentiation and inversely correlated with its host gene expression change (Fig. 4d). Interestingly, several inverted and repeated Alu pairs (IRAlus) were found in the flanking introns of circPRH1 (Fig. 4d, green and red arrows). In addition, CARP found a significant reduction of A-to-I editing within one Alu loci during HOG differentiation, which could contribute to the upregulation of circPRH1 (Fig. 4d).
Identification of circRNAs that may contribute to HOG differentiation via modulating the activity of miRNAs to regulate mRNA targets
Because circRNAs are best known as miRNA sponges, we hypothesized that differentiation-regulated circRNAs in HOG cells may modulate miRNA activity to regulate OL development, and we therefore investigated circRNA-miRNA interactions for DE circRNAs during OL differentiation. We searched for miRNAs whose predicted mRNA targets showed significant expression changes upon HOG differentiation and predicted circRNAs that contain potential binding sites for these miRNAs. Small RNA-seq was also performed in HOG cells with or without differentiation; miRNA abundance was quantified using the published algorithm miRge 2.0 and correlated with the expression of target mRNAs predicted by TargetScan [63,64]. Using an equal number of random mRNAs without miRNA binding sites as negative controls, CARP identified 45 miRNAs whose target mRNAs were significantly altered during HOG differentiation (Fig. 5a), among which the mRNA targets of miR-760 showed the most significant downregulation during HOG differentiation (t-test, p-value = 4.35 × 10⁻¹⁰) (Fig. 5b). The downregulation of miR-760 target mRNAs likely occurs at post-transcriptional steps, as the pre-mRNA levels of these genes, quantified by analyzing intron coverage from RNA-seq data, did not show significant changes (t-test, p-value = 0.06) (Fig. 5c) [65].
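A sketch of the target-shift test described above (hypothetical inputs: per-gene log2 fold changes and an assumed target list; random non-targets serve as the negative control, as in the text):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical per-gene log2 fold changes (differentiated vs. control).
log2fc = rng.normal(0.0, 1.0, 1000)

# Assumed TargetScan targets of one miRNA: simulate a modest downshift.
target_idx = np.arange(100)
target_fc = log2fc[target_idx] - 0.4

# Equal number of random non-target genes as the negative control.
control_idx = rng.choice(np.arange(100, 1000), size=target_idx.size,
                         replace=False)
control_fc = log2fc[control_idx]

# Flag the miRNA when its targets shift significantly vs. controls.
t, p = stats.ttest_ind(target_fc, control_fc)
print(f"mean shift {target_fc.mean() - control_fc.mean():+.2f}, p = {p:.2g}")
```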
Interestingly, miR-760 levels did not change during HOG differentiation (Fig. 5d), raising the question of whether a circRNA may sponge and regulate miR-760 activity, hence affecting the downstream mRNA targets. Indeed, we identified one circRNA, circSPATA13 (hsa_circ_0004865), which harbors seven predicted miR-760 binding sites and was markedly downregulated during HOG cell differentiation (Fig. 5e, Additional file 1: Fig. S6D). Of note, circSPATA13 is not expressed in M17 cells and the circular exon in human circSPATA13 is poorly conserved in mouse (blastn identity = 66%), suggesting a preferential function of circSPATA13 in human OLs. Each undifferentiated HOG cell was estimated to harbor 2000 copies of circSPATA13 based on a real-time PCR standard curve generated with a known amount of circSPATA13 PCR product (Pearson correlation, R² = 0.99, Fig. 5f). Compared with the well-studied functional circCDR1as, which efficiently sponges miRNAs when expressed at 200-300 copies per HEK293T cell, the amount of circSPATA13 expressed in HOG cells should be sufficient to sponge miR-760 and thus may regulate HOG differentiation.

Fig. 5 CircSPATA13 regulated oligodendroglioma differentiation via sponging miR-760. a Scatter plot showed miRNA expression level (x-axis) and significant expression change of their target genes (y-axis) during HOG differentiation. An equal number of randomly selected non-miRNA target genes were used as a negative control. b Cumulative plot showed miR-760 targets were downregulated in HOG differentiation compared with randomly selected non-miRNA targets. The green line stands for the log2 fold change of random non-miRNA target genes. The orange line represents the log2 fold change of target genes according to the TargetScanHuman database. The blue line represents the log2 fold change of the most confident targets (top 90% according to context++ score) from the TargetScanHuman database. Student's t-test (two-tailed and unpaired) compared top targets versus random targets, with p-values indicated. c Cumulative plot showed pre-mRNA of miR-760 target genes was not affected in HOG differentiation. d Volcano plot showed miRNA expression change upon HOG differentiation. Blue and red dots represent significantly downregulated and upregulated miRNAs during HOG differentiation (DESeq2, FDR < 0.05). Expression of miR-760 was not significantly changed upon HOG differentiation. e Bar plot showed circSPATA13 was downregulated while circPEX6 was upregulated in differentiated HOG cells. Cuffdiff was used for gene expression comparison. **P < 0.01. f A standard curve was constructed to calculate the copy number of circSPATA13, where the x-axis stands for log2 copy number of circSPATA13 and the y-axis stands for the Ct value from qPCR. g Expression change of circSPATA13, MYC, HIST1H2BM, linear SPATA13, and HERC6 upon si-circSPATA13 in HOG cells. The t-test (two-tailed and unpaired) was used for gene expression comparison. n = 7, ***P < 0.001, *P < 0.05, "NS" indicating no significant change. h CircRNA-miRNA-mRNA network regulating HOG differentiation. Blue and orange represent down- and upregulated circRNA/miRNA/mRNA upon HOG differentiation. Dashed lines represent inactivation of an upstream regulator promoting a downstream target or biological process.
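As an illustration of the standard-curve estimate, a minimal sketch with made-up Ct values (the curve is linear in log2 copy number, as in Fig. 5f):

```python
import numpy as np

# Hypothetical standard curve: Ct measured for known copy-number inputs.
copies = np.array([1e3, 1e4, 1e5, 1e6, 1e7])
ct = np.array([30.1, 26.8, 23.4, 20.1, 16.7])        # assumed measurements

# Fit Ct = slope * log2(copies) + intercept.
slope, intercept = np.polyfit(np.log2(copies), ct, 1)

def copies_from_ct(sample_ct):
    """Invert the standard curve to estimate input copy number."""
    return 2 ** ((sample_ct - intercept) / slope)

# Estimate copies per cell from a sample Ct and the number of cells lysed.
n_cells = 1e4                                        # assumed cell input
sample_ct = 26.0
print(f"~{copies_from_ct(sample_ct) / n_cells:.0f} copies per cell")
```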
One reported direct target of miR-760 is MYC, which suppresses the transition from proliferating OPC to differentiated OLs by binding to the promoter of genes involved in cell cycle regulation and/or chromosome organization [66][67][68][69][70][71]. The significant reduction of circSPATA13 during HOG cell differentiation is expected to release the sequestered miR-760, which in turn suppresses MYC. Indeed, our RNA-seq data revealed downregulation of MYC mRNA (Fig. 5b, Additional file 1: Fig. S7A) along with the decline of circSPATA13 (Fig. 5e) in differentiated HOG cells. To directly validate the function of circSPATA13 in modulating the miR-760 pathway, we conducted circSPATA13 knockdown in HOG cells using an siRNA that targets the BSJ sequence of circSPATA13. The level of circSPATA13 was significantly reduced (P-value = 2.74 × 10 −6 ), without affecting its linear mRNA (Fig. 5g). Importantly, several previously reported or predicted miR-760 targets, including MYC, HIST1H2BM, HIST1H3D, and HIST3H2A, were downregulated upon depletion of circSPATA13 whereas the non-miR-760 target HERC6 was unaffected ( Fig. 5g; Additional file 1: Fig. S7E). These data support the model that the developmentally programmed decline of circSPATA13 may turn on the miR-760 pathway independent of altering miR-760 biogenesis to advance human OL differentiation (Fig. 5h).
Identification of novel circRNA clusters that may exert additive effects in regulating miRNA activity during HOG differentiation
Although many circRNAs were reported to function individually, alternative circularization of circRNAs that share one common BSJ site could form "circRNA clusters," including alternative 3′ back-splicing (A3BS) and alternative 5′ back-splicing (A5BS) (Fig. 6a). In HOG cells, CARP found 15,221 and 11,387 clusters containing more than one circRNA with a common 3′ or 5′ BSJ site, respectively (Fig. 6a). For example, one circRNA cluster, FIP1L1, contained nine circRNAs identified by our A-tailing datasets, while untreated RNA-seq only identified one circRNA (circFIP1L1 form #4, Fig. 6b). Thus, our data provided better sensitivity and additional information compared with previous methods. Interestingly, a clear switch of dominant circRNAs within the circRNA cluster FIP1L1 can be detected upon HOG differentiation. Specifically, the circFIP1L1 form #4 level accounted for 19.79% of total circRNAs produced within the circFIP1L1 cluster in HOG cells but elevated to 41.62% in differentiated HOG cells (Fig. 6b). The alternatively circularized circRNAs within this cluster appeared to undergo independent regulation, as circFIP1L1 forms #1 and #2 were downregulated in contrast to the rest during HOG differentiation, suggesting that a circRNA cluster could provide diverse functions from the same locus.

Fig. 6 CircRNA alternative circularization generated clustered circRNAs with potential additive functions. a Distribution of circRNA numbers in circRNA clusters defined by alternative circularization events. b Nine circRNA isoforms in circRNA cluster FIP1L1 identified by A-tailing data display distinct expression patterns during HOG differentiation. c Scatter plot shows circRNA cluster complexity and expression changes during HOG differentiation. All dots represent significantly changed circRNA clusters during HOG differentiation. Insignificant circRNA changes are in red. Dot size represents the change in circRNA number upon HOG differentiation within each cluster. d Cumulative bar plot showed the expression change of each circRNA in circRNA cluster ARHGEF28. IGV view showed a common region where circRNA cluster ARHGEF28 accumulated and was significantly enriched during HOG differentiation. DESeq2 was used for circRNA cluster expression comparisons. *P < 0.05. e Schematic diagram showed that a common sequence of circRNA cluster ARHGEF28 (blue) could potentially regulate OL differentiation by sponging miR-454-3p. f Expression of ERBB4 upon HOG differentiation. g Expression of ERBB3 upon HOG differentiation. Cuffdiff was used for circRNA expression comparison. **P < 0.01.
Importantly, all circRNAs within one cluster contain a common sequence due to the nature of the shared 5′ or 3′ back-splicing sites, which could act in an additive manner in sponging miRNAs and/or RBPs. The function of clustered circRNAs was often overlooked by previous methods when none of the individual circRNAs changed significantly, even though the common sequence, owing to the additive effect, showed a significant expression change during differentiation. By comparing control and differentiated HOG cells, we identified 533 DE circRNA clusters, 123 of which did not contain individually significant DE circRNAs and would be missed by individual DE circRNA calling (Fig. 6c, red dots). One of the DE circRNA clusters, ARHGEF28, contains six alternatively circularized circRNAs, none of which showed significant alteration during HOG differentiation. However, the common sequence shared by all the alternative circRNAs derived from this cluster showed significant upregulation (Fig. 6d, e, the sequence in blue). Noticeably, the common sequence was predicted to sequester miR-454-3p, and the overall miR-454-3p target mRNAs were upregulated in differentiated HOG cells post-transcriptionally (Additional file 1: Fig. S8A, B). The top targets of miR-454-3p were subjected to KEGG pathway analysis and found enriched in critical biological pathways involved in OL development, including the mTOR signaling pathway, Wnt signaling pathway, and ErbB signaling pathway (Additional file 1: Fig. S8C). Several miR-454-3p top targets, including ERBB4 and ERBB3, were indeed upregulated during HOG differentiation (Fig. 6f, g), suggesting a potentially novel mechanism by which clustered circRNAs could play additive roles in regulating OL development (Fig. 6e).
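To illustrate the cluster-level idea, a minimal sketch (entirely hypothetical BSJ keys and read counts) of grouping isoforms by their shared BSJ and summing counts before DE testing:

```python
from collections import defaultdict

# Hypothetical circRNA records: (cluster key = shared 5' or 3' BSJ site,
# isoform id, read counts in undifferentiated and differentiated HOG).
isoforms = [
    ("ARHGEF28_3pBSJ", "form1", 12, 18),
    ("ARHGEF28_3pBSJ", "form2", 9, 15),
    ("ARHGEF28_3pBSJ", "form3", 7, 14),
    ("FIP1L1_5pBSJ",   "form4", 40, 95),
]

cluster_counts = defaultdict(lambda: [0, 0])
for key, _iso, undiff, diff_ in isoforms:
    # Summing isoforms that share a BSJ quantifies their common sequence,
    # capturing additive changes no single isoform shows on its own.
    cluster_counts[key][0] += undiff
    cluster_counts[key][1] += diff_

for key, (undiff, diff_) in cluster_counts.items():
    print(f"{key}: {undiff} -> {diff_} summed BSJ reads")
# The summed counts per cluster would then be fed to DESeq2-style DE testing.
```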
Discussion
This study provides a 21-module computational framework, CARP, optimized for the A-tailing approach to identify and quantify full-length circRNAs. By applying the A-tailing approach and CARP to the human OL cell line HOG, we identified hundreds of human OL-specific circRNAs that regulate OL early differentiation. Furthermore, multiple circRNAs and circRNA clusters were found to form a complicated network with miRNAs and genes in advancing OL differentiation. Thus, to our knowledge, this study represents the first circRNA profiling in early human OL development.
Current methods for circRNA identification and quantification based on BSJ reads suffer from insufficient power and sensitivity and high FDR owing to the relatively low expression levels of many circRNAs [29]. To bypass these hurdles, we adopted a published A-tailing method coupled with RNase R treatment in Li+ buffer, and our results suggest the A-tailing method can effectively remove linear RNAs and enrich circRNAs [32]. We also developed a comprehensive computational framework, CARP, optimized for A-tailing or other RNase R-based experimental data. CARP offers a full range of customizable, flexible cutoffs for the number of bases that must match the back-splice junction flanking sites during pseudo-reference alignment, allowing the best balance between FDR and sensitivity to be achieved for different datasets; for our dataset, CARP settled on an 8-bp seed sequence matching stringency for pseudo-reference mapping and subsequently filtered false-positive reads that could map to the genome or the transcriptome after pseudo-reference mapping. Furthermore, by comparing against a linear reference built from the last exon, CARP could further remove A-tailing-sensitive circRNAs, which are likely false positives. Without compromising quality or FDR, CARP reported more circRNAs than most BSJ-based algorithms, including CIRCexplorer2, find_circ, and MapSplice, with higher accuracy.
The quantification of circRNAs is highly dependent on each individual algorithm, often making it difficult to cross-compare their output results. This issue, however, can be overcome by a pseudo-reference-based approach such as CARP, which universally merges potential circRNA junctions identified by different approaches. Therefore, CARP offers a streamlined computational approach that maximizes the sensitivity of circRNA detection and standardizes the quantified output for better cross-reference comparison. The advantages of using pseudo-reference alignment have also been confirmed by recent publications [33,72]. Furthermore, CARP provided accurate circRNA quantification and sensitive DE analysis compared with RNase R treatment alone, which may be biased because of the uneven efficiency of the RNase R treatment [33].
Importantly, obtaining circRNA full-length information is critical for determining their functions, a task that most BSJ-based circRNA identification software cannot fulfill [4]. Given the effective removal of linear RNAs, CARP could precisely pinpoint circRNA sequence composition using split reads. Despite cell-type specificity, we were still able to identify 12,242 circRNAs shared between the 44,705 circRNAs identified in HOG cells by CARP and the 35,801 circRNAs identified in the brain by isoCirc, and 10,472 (85.54%) of them displayed identical internal structure between the two datasets, suggesting a high level of consistency [36]. Compared with recent efforts that determine circRNA full-length information by long-read sequencing strategies [35,36], CARP can take advantage of the better coverage and cost-effectiveness of Illumina sequencing to identify low-expression circRNAs and quantify circRNA expression more accurately. Meanwhile, the circRNA sequence composition from CARP was critical for downstream circRNA functional investigation and the detection of isoform switch events. Consequently, CARP provides a framework to predict circRNA-miRNA interplay, considering circRNA expression level, miRNA binding sites, miRNA expression level, and target mRNA expression changes. Together, CARP is a highly integrated, multi-functional, and comprehensive framework covering multifaceted circRNA biology.
To date, most efforts to study the functions of individual circRNAs have been based solely on the BSJ [6,47,73,74]. However, our study revealed that many circRNAs sharing BSJ sites in the same parental linear RNAs could undergo alternative circularization. Furthermore, these "clustered" circRNAs of various lengths share pieces of common sequence whose effects could be additive in soaking up miRNAs or RBPs. Thus, compared with individual circRNAs, the common and specific functions of independently regulated clustered circRNAs provide more flexible and complex mechanisms for fine-tuning gene expression post-transcriptionally. Importantly, using the A-tailing approach, CARP was able to identify many more circRNA clusters than untreated RNA-seq data, making the functions of clustered circRNAs, which are often overlooked by individual circRNA studies, more appreciable. In addition, it has been increasingly acknowledged that stoichiometry must be considered before proposing a sponging model for circRNAs of relatively low expression [4]. Thus, the additive effects of multiple circRNAs derived from the same cluster could be crucial for allowing low-expression circRNAs to collectively fine-tune gene expression.
CircRNAs that are independently regulated within the same cluster demonstrate post-transcriptional regulation of circRNAs, which is also supported by circRNAs whose expression changes were inversely correlated with their host genes. Indeed, some circRNAs have been reported to undergo post-transcriptional regulation in cis or in trans, independent of their host genes [13,75-77]. Given the universal presence of cis-regulatory elements in all cell types, trans-factors could well account for circRNA tissue specificity. In addition, our data also suggested that A-to-I editing is a potential regulatory mechanism acting on cis-elements, given its dynamics during OL development. As A-to-I editing is well known to be regulated by ADAR, how circRNA biogenesis is coordinated with ADAR activity should be a future challenge.
CircRNAs have been reported as critical regulators of many biological processes, including neurodevelopment and neural functions, although they are less conserved among species [14,47,78]. Mounting evidence has also demonstrated that dysregulation of circRNAs is involved in human neurological disorders, including Alzheimer's disease, Parkinson's disease, and schizophrenia [17,18,75]. Despite the recent circRNA profiling in OLs from the human post-mortem brain, the circRNA landscape and functions during early human OL development, which is difficult to access yet crucial for myelin developmental disorders and lesion repair, remain unknown. In this study, we applied CARP to explore the circRNA landscape and function in HOG cells and identified, for the first time, dynamic circRNA profiles during human OL development. A significant overlap of circRNAs was found between HOG cells and OLs in the human post-mortem brain [19], supporting HOG as an in vitro system for early human OL development. In addition, many more circRNAs were identified from HOG cells, which represent an early OL progenitor cell stage, reflecting either the better sensitivity of our method in detecting circRNAs or a novel biological clue that many more circRNAs are expressed at the early OL stage, which may or may not be retained in mature OL stages.
Among the various mechanisms by which circRNAs regulate biological processes, the best defined is to "sponge" miRNAs and interfere with miRNA silencing activities on target mRNAs [4,9]. In this study, using CARP's multi-step framework for circRNA functional annotation, we found a sophisticated network that integrates the functions of multiple circRNA-miRNA-mRNA pathways to advance OL differentiation. Numerous repressor and enhancer genes have been shown to impact OL differentiation, represented by MYC and STAT3. MYC-induced repressive histone methylation and premature peripheral nuclear chromatin compaction were previously shown to suppress OPC differentiation [66], whereas STAT3 was thought to advance OPC differentiation and myelin repair [71]. However, the functional cooperation between differentiation repressors and enhancers remains poorly understood. The reciprocal regulation of the circSPATA13-miR-760-MYC pathway and the circPEX6-miR-17-5p/106-5p-STAT3 pathway during early differentiation of human OLs revealed by our studies provides the first example that developmental regulation of circRNAs may facilitate functional integration of differentiation repressors and enhancers to advance neural development. It should be noted that circRNAs can sponge multiple miRNAs, which could subsequently regulate hundreds of genes to form a complicated network. Moreover, the dynamic regulation of numerous circRNAs and circRNA clusters during OL differentiation identified here argues for the importance of further delineating the sophisticated cooperation of multiple circRNA-miRNA-mRNA axes, a prevailing challenge for future studies.
Conclusions
Our studies provide a robust platform for sensitive and reliable identification of the circRNA landscape by combining an improved experimental condition for circRNA enrichment with our computational algorithm CARP. Using this method, we identified the first circRNA landscape in human OLs, which contains hundreds of novel circRNAs undergoing dynamic regulation during early OL differentiation. The precise mapping of full-length circRNA sequences by CARP allows reliable computational prediction of sponged miRNAs and revealed novel circRNA clusters that may achieve additive sponging effects. Importantly, we identified circRNA-miRNA-mRNA pathways that are reciprocally regulated during human OL differentiation to achieve functional cooperation and drive human OL differentiation. Together, our studies established improved methods for circRNA landscape identification, discovered novel circRNA sequence features, and drew direct mechanistic connections between circRNAs and the downstream miRNA-mRNA pathways, providing important new insights into circRNA biology.
Western blot
M17 and HOG cells were lysed in 1× Laemmli Sample Buffer and heated at 95 °C for 5 min. Equal quantities of protein were separated on SDS-PAGE gels and transferred to PVDF membranes (Immobilon-P, Millipore). PVDF membranes were incubated with 5% milk for 1 h at room temperature and probed with primary antibody at 4 °C overnight. Membranes were rinsed three times in Tris-buffered saline (TBS) with 0.1% Tween (TBST) and then probed with horseradish peroxidase-conjugated secondary antibodies (Promega) for 1 h at room temperature. After rinsing in TBST, membranes were visualized by enhanced chemiluminescence (Pierce Biotechnology, Rockford, IL) and imaged using the Chemidoc MP imaging system (BioRad, Hercules, CA, USA). The following primary antibodies were used: anti-QKI5 (A300-183A-1, Bethyl Laboratories, Inc) and anti-EIF5 (SC-282, Santa Cruz Biotechnology, Inc).
RNA isolation
Cultured cells were harvested and centrifuged at 1500g, and cell pellets were used for RNA isolation. Cell pellets were homogenized in TRIzol using a hand-held pestle homogenizer and incubated in TRIzol for at least 5 min. Chloroform (1:5 ratio) was added, mixed well, and incubated at room temperature for 15 min. Samples were centrifuged at 12,000g for 15 min at 4 °C. The top aqueous layer was transferred to a clean tube, and the RNA was precipitated with 3 M NaAc pH 5.2 (10:1 ratio), 4 μl of glycogen (5 mg/ml), and 100% isopropanol (1:1 ratio) overnight at −80 °C. The next day, the samples were centrifuged at 20,000g for 20 min at 4 °C. The resulting RNA pellet was washed in 75% ethanol and centrifuged at 7500g for 10 min at 4 °C. The washed RNA pellet was dissolved in nuclease-free water, quantified by NanoDrop, and the quality was confirmed by agarose gel electrophoresis.
A-tailing RNase R treatment
Total RNA from HOG and M17 cells was treated for poly(A) tailing and with RNase R as described in the published method, with modifications [32]. In brief, 3 μg of total RNA was subjected to poly(A) tailing in a 50 μL reaction using the Poly(A) Tailing Kit (Thermo Fisher AM1350) following the manufacturer's instructions; 2 μL E-PAP and 40 U RNase inhibitor (Thermo Fisher Scientific N8080119) were also added to the reaction, which was incubated at 37 °C for 1 h. The RNAs were first purified with the RNA Clean & Concentrator-25 kit (Zymo R1018) and eluted in 25 μL nuclease-free water. The RNAs were then treated with 5 U RNase R in a 30 μL reaction containing the 25 μL of A-tailed RNA, 3 μL 10× RNase R buffer (0.2 M Tris-HCl (pH 8.0), 1 mM MgCl2, and 1 M LiCl), and 1 μL RiboLock RNase Inhibitor (40 U/μL) (Thermo Fisher Scientific EO0381). Reactions were purified with the RNA Clean & Concentrator-25 kit (Zymo Research R1018) according to the manufacturer's instructions, and the RNA was eluted in 30 μL nuclease-free water. The eluted RNA (in 30 μL nuclease-free water) was then used to prepare an rRNA-depleted RNA-seq library with the KAPA RNA HyperPrep Kit with RiboErase (HMR).
Library preparation and high-throughput sequencing
For the rRNA-depleted RNA-seq library, sample quality was assessed by Bioanalyzer 2100 Eukaryote Total RNA Pico (Agilent Technologies, CA, USA) and quantified by Qubit RNA HS assay (Thermo Fisher). Ribosomal RNA depletion was performed with the Ribo-Zero rRNA Removal Kit (Illumina Inc., San Diego, CA), followed by the NEBNext Ultra II Non-directional RNA Library Prep Kit for Illumina per the manufacturer's recommendation. Library concentration was measured by qPCR, and library quality was evaluated by TapeStation High Sensitivity D1000 ScreenTapes (Agilent Technologies, CA, USA). Equimolar pooling of libraries was performed based on qPCR values. Libraries were sequenced on a HiSeq with a 150-bp paired-end read configuration, targeting 80 M total reads per sample (40 M in each direction).
For the small RNA-seq library, total RNA sample quality was assessed by RNA ScreenTape.
Linear RNA quantification in HEK293T and SH-SY5Y cells
Paired-end reads from rRNA-depleted RNA-seq for the untreated, RNase R (K+)-treated, RNase R (Li+)-treated, and A-tailing RNase R (Li+)-treated libraries were mapped to the human genome assembly (GRCh38/hg38) using TopHat2 version 2.1.1 with default parameters. Bam files were sorted by genome coordinate with "samtools sort" and converted to bed files by "bedtools bamtobed" with the flag "-split". Bed files were sorted by "sort -k 1,1 -k2,2n" according to the bedtools instruction manual. The genome coordinates of the last exon of each gene were extracted from the hg38 gene structure annotation downloaded from the UCSC table using an in-house Perl script. Read counts for each last exon were obtained with "bedtools coverage" with the flags "-sorted -counts -split" from the sorted bed file of each sample. Read counts for the last exon were then normalized to total sequencing reads and exon length as RPKM to represent the linear RNA expression level, according to the following equation:

RPKM = (read count × 10^9) / (total mapped reads × last exon length in bp)
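For illustration, a minimal Python sketch of this normalization step is given below. The five-column layout of the counts file and the total-read figure are assumptions for the example, not the exact output format of the pipeline above.

```python
# Sketch: RPKM for last-exon read counts (assumed input layout:
# gene, chrom, start, end, count per line).

def rpkm(read_count: int, exon_length_bp: int, total_mapped_reads: int) -> float:
    """RPKM = reads * 1e9 / (exon length in bp * total mapped reads)."""
    return read_count * 1e9 / (exon_length_bp * total_mapped_reads)

total_reads = 80_000_000  # total sequencing reads for the sample (assumption)
with open("last_exon_counts.txt") as fh:
    for line in fh:
        gene, chrom, start, end, count = line.split()
        length = int(end) - int(start)
        print(gene, rpkm(int(count), length, total_reads))
```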
Confident circRNA identification and quantification
Candidate circRNAs identified by CIRCexplorer2, CIRIquant, find_circ, and MapSplice from all A-tailing samples were pooled together and annotated to their host transcripts. Using "CARP PseudoRef", a 248-bp "pseudo-reference" was constructed for each circRNA flanking its back-splicing junction site (±149 bp), with an 8-bp center sequence as the "seed sequence". A reference for linear isoform quantification was also constructed from the last exon of the host gene (referred to hereafter as "the last reference") by "CARP PseudoRef". Using "CARP Mapping", reads from the A-tailing and untreated libraries were mapped to the pseudo-reference and the last reference by Bowtie 2 with default parameters. Reads mapped to the pseudo-reference were compared with the seed sequence by "CARP BSJreads" and mapped to the genome and transcriptome by Bowtie 2 and TopHat2 using "CARP Remap", respectively. Reads that mapped to the genome or transcriptome, or that did not precisely match the 8-bp seed sequence, were considered linear isoform-derived reads and removed from downstream analysis. Using "CARP ReadsCount", the remaining reads were used for circRNA identification and quantification, while reads mapped to the last reference were used for linear RNA quantification. CircRNAs with read counts less than 2 were excluded, and the ratio of read counts in the A-tailing RNase R library to those in the untreated library was calculated to differentiate RNase R-sensitive from RNase R-resistant reads. Since linear RNAs, not circRNAs, are degraded in the A-tailing library, the ratio distributions for linear RNAs and circRNAs should display a clear difference, as shown in Fig. 1f. We defined a cutoff for the A-tailing/control ratio according to the ratio distribution of linear RNAs, such that > 95% of linear RNAs, which should be sensitive to A-tailing RNase R treatment, fall below this cutoff (Fig. 1f, dashed line). CircRNAs with a ratio higher than this cutoff were considered confident circRNAs resistant to A-tailing RNase R treatment. This cutoff therefore controls the false discovery rate of confident circRNAs at 0.05.
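The ratio-based cutoff described above can be sketched as follows; the input structures (a dict of circRNA ratios and a list of linear RNA ratios) are assumed interfaces for illustration, not CARP's actual API.

```python
# Sketch of the A-tailing/Control ratio cutoff: choose the cutoff so that
# >95% of linear RNAs (RNase R-sensitive) fall below it, i.e. FDR ~ 0.05.
import numpy as np

def confident_circrnas(circ_ratio: dict, linear_ratios: list, fdr: float = 0.05) -> dict:
    cutoff = np.percentile(linear_ratios, 100 * (1 - fdr))  # e.g. 95th percentile
    return {circ: r for circ, r in circ_ratio.items() if r > cutoff}
```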
CircRNA full-length construction and isoform switch detection
Read pairs that mapped to the circRNA "body structure" of the pseudo-reference were extracted to determine circRNA internal structure. Mapped reads spanning discontinuous regions in a circRNA were regarded as "split reads" and were used to identify candidate junction sites in full-length circRNAs. Junction sites supported by more than two split reads were considered actual splicing sites in circRNA bodies and reported by CARP. The maximum read count among each junction and the back-splice junction (BSJ) was taken as the total expression level of the specific circRNA isoform, and the proportion of each junction site was calculated by the following equation:

Proportion of junction site = (read count supporting this junction site) / (maximum read count among all junction sites and the BSJ)

For downstream functional prediction, the proportions of the junction sites of each circRNA were compared, and the dominant isoform of the circRNA was reported. For circRNA alternative splicing analysis, the proportion of each junction site was compared across samples by t-test, and a p-value < 0.05 was considered a significant alternative splicing event causing a circRNA isoform switch. CircRNA full-length construction and isoform switch prediction were conducted by "CARP CircAS" and "CARP CircIsoformSwitch".
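A minimal sketch of the junction-proportion calculation and the per-junction t-test follows, assuming simple in-memory count structures rather than CARP's actual data model.

```python
# Sketch: junction proportion and isoform-switch test (scipy).
from scipy import stats

def junction_proportions(junction_counts: dict) -> dict:
    """junction_counts maps junction -> read count; the maximum count (over
    all junctions and the BSJ) is the isoform's total expression level."""
    denom = max(junction_counts.values())
    return {j: c / denom for j, c in junction_counts.items()}

def isoform_switch(props_a: list, props_b: list, alpha: float = 0.05) -> bool:
    """props_a/props_b: proportions of one junction across replicate samples."""
    _, p = stats.ttest_ind(props_a, props_b)
    return p < alpha
```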
Expression analysis for individual circRNA and circRNA cluster
CircRNA expression levels were normalized before differential expression analysis. Back-splicing junction (BSJ) read counts (RC) for each circRNA were first calculated by counting reads mapped directly to the pseudo-reference. To normalize circRNA expression in the A-tailing library, RCs were normalized by the following equation:

Normalized RC = (BSJ RC in A-tailing library / total BSJ RC in A-tailing library) × (total BSJ RC in control library / total mapped reads in control library)

Differential analyses for individual circRNAs were conducted with the well-established algorithm DESeq2 using normalized read counts, integrated as "CARP DEcirc". CircRNAs sharing a common 5′ donor site or 3′ acceptor site were defined as one circRNA cluster. Using "CARP CircCluster", the expression of a circRNA cluster was calculated as the total expression of all circRNAs in the cluster. DE analysis of circRNA clusters was also conducted by DESeq2, and FDR < 0.05 was considered significant circRNA cluster differential expression.
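The normalization equation above translates directly into code; the following one-function sketch assumes the four per-library totals are supplied by the caller.

```python
# Sketch: direct transcription of the Normalized RC equation above.
def normalized_rc(bsj_rc_atail: float, total_bsj_atail: float,
                  total_bsj_ctrl: float, total_mapped_ctrl: float) -> float:
    return (bsj_rc_atail / total_bsj_atail) * (total_bsj_ctrl / total_mapped_ctrl)
```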
Expression analysis of miRNA, mRNA, and pre-mRNA
Small RNA-seq data were mapped to the human miRNA database for miRNA quantification by miRge 2.0 with the following parameters: "-sp human -ad AAC TGT AGG CAC CAT CAA T -ai -gff -trf" [63]. Untreated rRNA-depleted RNA-seq data were mapped to hg38 by TopHat2 with default parameters, followed by gene expression quantification and differential expression analysis using Cuffdiff [76]. Pre-mRNA expression was quantified with the iRNA-seq package from reads mapped to intron sequences with the flags "-g hg38 -count intron" [65]. Differential expression analysis for miRNAs and pre-mRNAs was performed by DESeq2, and FDR < 0.05 was considered a significant expression change.
CircRNA-miRNA-mRNA network construction
Using "CARP CircNetwork, " miRNA binding sites in circRNA were predicted by targets-can_70 using full-length circRNA sequence [77]. The mRNA targets of these miRNAs were obtained from TargetScanHuman and ranked based on context++ scores [64]. Top
Proportion of junction site =
Reads count support this junction site Maximum Reads count of each junction site and BSJ Normalized RC = BSJ RC in A − tailing Library Total BSJ RC in A − tailing Library × Total BSJ RC in Control Library Total Mapped Reads in Control Library targets for a specific miRNA were defined by weighted context++ score ranking higher than 90 percentile. Using "CARP miRTarget, " the log 2 fold changes upon differentiation between top targets and random non-target genes were compared by t-test. Expression changes having a p-value < 0.05 were considered significant changes of miRNAs influences on their target mRNAs. To test whether these miRNA targets were regulated at transcription or post-transcription levels, the pre-mRNA log 2 fold changes of top target and random non-target genes obtained by iRNA-seq package were also compared by t-test, and p-value > 0.05 showed no significant differences at the transcription level. CircRNA-miRNA-mRNA networks were constructed according to the predicted binding site and positive interplay among the circRNA-miRNA-mRNA axis.
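As a rough sketch of the top-target comparison, the following assumes per-gene context++ scores and log2 fold changes held in dictionaries. Note that the direction of the percentile cutoff depends on how the scores are encoded (TargetScan scores are conventionally negative), so treat this as illustrative only.

```python
# Sketch: compare log2 fold changes of top miRNA targets vs random non-targets.
import numpy as np
from scipy import stats

def mirna_effect(scores: dict, lfc: dict, n_random: int = 200, seed: int = 0) -> float:
    """scores: gene -> weighted context++ score for one miRNA (assumed oriented
    so that larger = stronger target); lfc: gene -> log2 fold change."""
    cutoff = np.percentile(list(scores.values()), 90)
    top = [g for g in scores if scores[g] >= cutoff and g in lfc]
    pool = [g for g in lfc if g not in scores]      # non-target background
    rng = np.random.default_rng(seed)
    non = rng.choice(pool, size=min(n_random, len(pool)), replace=False)
    _, p = stats.ttest_ind([lfc[g] for g in top], [lfc[g] for g in non])
    return p  # p < 0.05 -> the miRNA's top targets shift significantly
```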
Absolute circRNA copy number determination

PCR products spanning the back-spliced junction region of circSPATA13 were first obtained using a divergent primer (Additional file 4) and purified by the E.Z.N.A. Gel Extraction Kit. The purified PCR product was then serially diluted from 1 ng/μL to 10 pg/μL, and 1 μL aliquots were used for another round of qPCR using the divergent primer. Finally, the copy number of the template used for qPCR was calculated by the following equation:

Copy number = (c × V × Na) / M

where c stands for the concentration of the template used for qPCR, V represents the volume used for qPCR (1 μL), M represents the molecular weight of the template calculated from its sequence, and Na represents Avogadro's constant. A standard curve relating copy number and Ct value was generated from the copy numbers used for qPCR. To measure circSPATA13 copies per HOG cell, total RNA was extracted from 2.8 × 10^6 HOG cells and quantified by NanoDrop. An aliquot of 500 ng RNA was reverse transcribed into cDNA for real-time qPCR. The copy number of circSPATA13 per HOG cell was calculated from the standard curve and Ct value and then divided by the total cell number. Further, using the copy number and average normalized read count of circSPATA13, together with the normalized read counts of all detected circRNAs in the HOG cell RNA-seq datasets as references, we calculated the copy number of each detected circRNA in HOG cells (Additional file 5).
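The copy-number equation can be computed as below; the 660 g/mol-per-bp molecular weight rule for double-stranded DNA and the example amplicon length are assumptions for illustration, not values from the study.

```python
# Sketch: copy number = c * V * Na / M, with M approximated from length.
AVOGADRO = 6.022e23  # molecules per mole

def copy_number(conc_g_per_ul: float, volume_ul: float, length_bp: int) -> float:
    mw = length_bp * 660.0  # g/mol, rule of thumb for double-stranded DNA
    return conc_g_per_ul * volume_ul * AVOGADRO / mw

# e.g. 1 uL of a 10 pg/uL dilution of a hypothetical 150-bp amplicon
print(f"{copy_number(10e-12, 1.0, 150):.2e} copies")  # ~6.1e7
```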
CircSPATA13 knockdown in HOG
HOG cells were transfected with 200 pmol of an siRNA targeting the junction site of circSPATA13 (AAG GAG AAG GAG GAG CCC GUG) or a negative control siRNA (Thermo Fisher Scientific, AM4611) for 48 h using Lipofectamine 2000 (Invitrogen) following the manufacturer's instructions. The pDsRed2-C1 plasmid was co-transfected into HOG cells to assess transfection efficiency. Expression of circSPATA13 and linear SPATA13 was quantified by RT-qPCR. Expression of miR-760 targets, including MYC, HIST1H2BM, HIST1H3D, and HIST3H2A, was also quantified by RT-qPCR, with the non-miR-760 target HERC6 as a negative control. Primer sequences are listed in Additional file 4.
qPCR for mRNA and circRNA expression

Five hundred nanograms of RNA from M17 and HOG cells was used to quantify mRNA expression in cells undergoing differentiation by qPCR analysis using SuperScript III (Invitrogen). Neuron differentiation and OL differentiation markers were quantified using specific primers for M17 differentiation and HOG differentiation, respectively (Additional file 4). In addition, we quantified circRNA expression by qPCR using 500 ng RNA from HOG cells reverse transcribed with SuperScript III (Invitrogen). Randomly selected HOG-enriched circRNAs with different expression levels were used for qPCR validation with divergent primers (Additional file 4). Correlation of Ct values from qPCR with normalized read counts from RNA-seq data was computed with the cor function in R.
A-to-I editing and RNA binding protein prediction
The dynamic regulation of A-to-I editing during HOG cell differentiation was calculated by the CARP-integrated Software for Accurately Identifying Locations Of RNA-editing (SAILOR) using untreated RNA-seq data, and significant A-to-I editing loci were obtained by t-test with p-value < 0.05 using "CARP CircAtoI" [60]. Regulated A-to-I editing events were overlapped with Alu elements downloaded from the UCSC table. RNA-binding protein binding sites were taken from CLIP-seq peak files in K562 and HepG2 cells [57]. Common CLIP-seq peak regions from two replicates were used as confident binding sites and then overlapped with the flanking introns of differentially expressed circRNAs. Using "CARP CircRBP", circRNAs with RBP binding sites in both the upstream and downstream introns were considered regulated by the specific RNA-binding protein.
"year": 2022,
"sha1": "a6c762f28918136ad352962f26a2a1489ce6a84c",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/s13059-022-02621-1",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "a6c762f28918136ad352962f26a2a1489ce6a84c",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": []
} |
Continuous Measurement of Cerebral Oxygenation with Near-Infrared Spectroscopy after Spontaneous Subarachnoid Hemorrhage
Objective. The aim of our prospective study was to investigate the applicability and diagnostic value of near-infrared spectroscopy (NIRS) in SAH patients using the cerebral oximeter INVOS 5100C. Methods. Cerebral oximetry was measured continuously after spontaneous SAH. Decreases of regional oxygen saturation (rSO2) were analyzed and interpreted in view of defined intrinsic and extrinsic factors. Changes in rSO2 values were matched with ICP, tipO2, and TCD values and the results of additional neuroimaging. Results. Continuous measurement of rSO2 was performed in nine patients with SAH (7 females and 2 males). Mean measurement time was 8.6 days (range 2-12 days). The clinical course was uneventful in 7 patients without occurrence of CVS. In these patients, NIRS measured constant and stable rSO2 values without relevant alterations. Special findings are demonstrated in 3 cases. Conclusion. Measurement of rSO2 with NIRS is a safe, easy-to-use, noninvasive additional measurement tool for cerebral oxygenation, which is used routinely during vascular and cardiac surgical procedures. NIRS is applicable over a long time period after SAH, especially in alert patients without invasive probes. Our observations were promising, although larger studies are needed to answer the open questions.
Introduction
Delayed cerebral ischemia (DCI) is the major cause of morbidity and mortality in patients suffering from spontaneous subarachnoid hemorrhage (SAH). Despite the increasing importance of newer aspects such as brain injury, inflammation, and microthrombosis and their influence on DCI, cerebral vasospasm (CVS) remains an important therapeutic target. Therefore, monitoring of cerebral blood flow (CBF) and oxygenation is a substantial and still-evolving component of multimodal intensive care [1-11]. Diverse therapeutic approaches have focused on bedside brain monitoring for early detection of cerebral hypoperfusion. These are primarily invasive methods: measurement of intracranial pressure (ICP), partial tissue oxygenation (tipO2), regional cerebral blood flow using thermal diffusion flowmetry, and microdialysis. As a noninvasive measurement tool, one- or two-channel adhesive EEG electrodes are used to monitor sedation depth [2,11-13]. Transcranial Doppler (TCD) is also available as a noninvasive but discontinuous bedside measurement method, with well-known deficits such as investigator dependence and weak correlation between elevated blood flow velocity and symptomatic CVS [14,15].
Near-infrared spectroscopy (NIRS) enables continuous and noninvasive measurement of regional cerebral oxygen saturation (rSO2) via absorption of near-infrared light by oxyhemoglobin (HbO), deoxyhemoglobin (Hb), and cytochrome oxidase [1,16-18]. NIRS is frequently used in cardiac and vascular surgery and during neuroendovascular procedures as well, achieving good and reliable results, though its application is confined to the periprocedural and short postprocedural stage [16,19-21]. Long-term measurement of rSO2 for early detection of cerebral hypoperfusion after SAH is not yet widespread, so the aim of our prospective study was to examine the applicability and diagnostic value of NIRS used continuously in SAH patients over the estimated CVS period.
INVOS-System.
In this study we used the INVOS 5100C oximeter (Somanetics), which provides continuous, noninvasive, real-time measurement of cerebral oxygenation (Figure 1(a)).
The near-infrared wavelengths are generated by a light source in the sensor and penetrate the skin and the bone. Within the brain tissue at 3 cm depth, the light is either absorbed or reflected to the two detectors of the sensor (Figure 1(b)). Since hemoglobin within the detection field consists of about 75% venous, 20% arterial, and 5% capillary blood, clinical interpretation of the values can be considered a venous measurement. Oxygenated hemoglobin (HbO), with its red color, displays the highest absorption of the wavelengths used; hence, the red shade of each hemoglobin molecule reflects its oxygen content. The reflected portion of the signal yields the relative concentration of deoxyhemoglobin and the overall hemoglobin level, from which the regional oxygen saturation (% rSO2) is calculated. The continuously measured data are displayed on the monitor with an update every five seconds.
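The INVOS algorithm itself is proprietary, but the quantity it reports can be illustrated with the textbook oxyhemoglobin fraction of total hemoglobin; the numbers below are invented.

```python
# Sketch only: rSO2 as the oxyhemoglobin fraction of total hemoglobin
# estimated from the two-detector NIRS signal (not the device's algorithm).
def rso2(hbo: float, hb: float) -> float:
    return 100.0 * hbo / (hbo + hb)

print(f"{rso2(hbo=52.0, hb=23.0):.1f} %")  # ~69.3 %
```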
Study Protocol.

This study was approved by the Ethics Committee of the University of Schleswig-Holstein, Campus Kiel.
We included in our study patients with spontaneous SAH diagnosed by CT scanning or by analysis of cerebrospinal fluid via lumbar puncture within 24 h after symptom onset. After diagnosis, digital subtraction angiography (DSA) or CT angiography was performed to detect the bleeding source. After treatment of the bleeding source (clip occlusion or coil embolization), all patients were treated in the neurosurgical ICU with continuous measurement of arterial blood pressure, pulse rate, and oxygen saturation and continuous recording of the electrocardiogram. The neurological condition in the continuing course was assessed closely by the ICU physician using the Hunt and Hess scale (H&H). The appearance of blood on CT was scored using the Fisher classification.
TCD was performed daily to measure the flow velocity of the intracranial arteries. CVS was defined as an acceleration of blood flow velocity > 120 cm/second. Sedated and intubated patients with mechanical ventilation additionally received intracranial probes for continuous measurement of ICP and tipO2. External ventricular drainage was performed in cases of occlusive hydrocephalus or a hydrocephalus-like ventricular system. In cases of elevated ICP due to insufficient sedation, one-channel EEG electrodes (BIS) were additionally applied to measure sedation depth.
After admission of the patients to the ICU, the forehead was cleaned with alcohol pads, and self-adhesive oximetry strips were applied bilaterally to measure the baseline rSO2 value at 3 cm depth (Figure 1(c)).
After setting the baseline value, we started continuous measurement of rSO2 over the estimated CVS period, up to the 12th day after onset. Measured data were saved in the memory device of the INVOS 5100C and additionally on a memory stick.
In cases of a decrease of rSO2 with an obvious cause (for example, removal of the strips for personal hygiene, dysfunction of the strips, or removal for neuroimaging), an event-mark button was pressed to flag the event and allow correct correlation during data analysis.
The oximetry strips could be removed during nursing activities and were reapplied at the same position. Once the self-adhesive behaviour of the strips diminished, fixation was done with a regular wound plaster to allow further measurement (Figure 1(d)).
Threshold Values and Data Interpretation.
Desaturation below 50% or a decrease of rSO2 by 20% from the baseline value was considered critical; rSO2 below 40% or a decrease by 25% was assumed to be associated with DCI and neurological deficits.
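Applied to a sampled rSO2 trace, the stated thresholds amount to a simple classification rule; the following sketch assumes one value per display update and is for illustration only.

```python
# Sketch of the stated alarm rules: <50% or >=20% drop = critical;
# <40% or >=25% drop = DCI-associated.
def classify(rso2: float, baseline: float) -> str:
    if rso2 < 40 or rso2 <= 0.75 * baseline:
        return "DCI-associated"
    if rso2 < 50 or rso2 <= 0.80 * baseline:
        return "critical"
    return "normal"

baseline = 68.0  # hypothetical baseline value
for v in [67, 55, 52, 41, 38]:
    print(v, classify(v, baseline))
```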
For correct analysis of the measured values, intrinsic and extrinsic factors were considered carefully. Intrinsic factors were defined as mean arterial blood pressure (MAP), hemoglobin (Hb), peripheral oxygen saturation (SaO2), partial carbon dioxide pressure (pCO2), and core temperature (t) [22]. Extrinsic factors were head positioning, correct position and adhesion of the strips, and correct connection of the strips to the INVOS device. In case of a decrease of rSO2, intrinsic and extrinsic factors were checked and corrected before terming the rSO2 value pathologic and performing further diagnostic evaluations.
For analysis, the recorded data were transferred to and represented graphically in the Microsoft Office Excel program. Furthermore, changes in rSO2 values were matched with ICP, tipO2, and TCD values and the results of additional neuroimaging.
Results
Continuous measurement of rSO2 was performed in nine patients (7 females, 2 males). DSA revealed an aneurysm as the source of hemorrhage in 7 patients (three aneurysms of the anterior communicating artery (ACoA), one middle cerebral artery (MCA) aneurysm, one aneurysm of the posterior cerebral artery (PCA), and one aneurysm of the pericallosal artery). The bleeding source was unclear in one case after DSA (SAH of unknown origin). In the last case, DSA was not performed due to poor clinical condition, with cessation of CBF diagnosed by perfusion-weighted CT scanning (PW-CT) and brain death two days after admission.
Mean measurement time of rSO2 was 8.6 days (range 2-12 days). The clinical course was uneventful in 7 patients without occurrence of CVS or ischemic stroke. In these patients, NIRS measured constant and stable rSO2 values without any significant alterations. Special findings and characteristics of NIRS application are illustrated in the three cases presented below.
Case 1.
A 70-year-old female patient presented with SAH H&H grade 5, Fisher grade 4, due to a ruptured left-sided PCA aneurysm with intracerebral and intraventricular hemorrhage. The aneurysm was treated surgically, and the patient remained sedated and intubated, receiving mechanical ventilation postoperatively. In the continuing course, TCD showed elevated blood flow velocities of both MCA and ACA arteries of up to 200 cm/second. Despite triple-H therapy and nifedipine application, NIRS showed a left-sided decrease of rSO2 below 40% on day 5 after onset (Figure 2). Intrinsic and extrinsic factors were normal at that time (MAP 127 mmHg, SaO2 99%, FiO2 60%, t 38.3 °C, pCO2 35%, and Hb 11.3 g/dL). The left frontal ICP probe showed no significant changes at the same time (ICP 11 mmHg, CPP 118 mmHg). Subsequently performed native CT and PW-CT scans showed neither perfusion deficits nor ischemic stroke (Figures 3(a) and 3(b)).
Left-sided rSO2 values remained at a low level with further decrease. Two days later, ICP increased slowly and reached a maximum of 39 mmHg on day twelve after onset. In parallel, right-sided rSO2 values decreased as well. A newly performed CT scan showed a marked left hemispheric ischemic stroke with shift of the midline structures and signs of brain herniation (Figure 3(c)). In consideration of the poor clinical condition, the age, and the occurrence of a distinct ischemic stroke, we decided to limit the therapy. The patient died on day twelve after onset.
Case 2.
A 42-year-old male patient presented with SAH H&H grade 2 and Fisher grade 3 due to a ruptured aneurysm of the ACoA (Figure 4(a)). The aneurysm was treated via coil embolization. In the continuing course, the patient suffered from headaches but was alert without neurological deficits at all times. NIRS showed normal and stable rSO2 values (Figure 5). Later, TCD showed elevated blood flow velocities of the left ACA and MCA of up to 220 cm/second. Although magnetic resonance angiography (MRA) showed radiological spasm of the left ICA and ACA (Figure 4(b)), the clinical condition of the patient remained stable without deterioration. NIRS continued to show stable rSO2 values without significant desaturation. With mild triple-H therapy and oral nifedipine application, TCD values normalized in the continuing course, and the patient was discharged without any neurological complaints. At the six-month follow-up examination, the patient was still in good condition and was working again. MRA showed normalized vascular patterns (Figure 4(c)).
Case 3.
This case concerned a 14-year-old girl who was found comatose at home with dilated and nonresponsive pupils. CT scan showed massive SAH with extensive cerebral edema. PW-CT showed cessation of CBF, and TCD showed an abnormal reverberating flow pattern, indicating markedly increased ICP. NIRS strips were applied for two days and, interestingly, measured rSO2 values of 63%. Over the measurement period of about 48 hours, rSO2 values were very stable within the range of 60-70% (Figure 6), without the common fluctuations seen in other patients. Two days after onset, the patient was pronounced brain dead.
Discussion
NIRS is applied routinely during vascular and cardiac surgical procedures. The results of several studies have shown that monitoring of rSO2 decreases the risk of procedure-related cerebral desaturation and improves outcome in these patient groups [23-27]. However, continuous measurement of cerebral oxygenation with NIRS after spontaneous SAH has not been sufficiently investigated. Only a few studies have focused on the application of NIRS after SAH [1,17,18]. While Mutoh, Yokose, and Zweifel used the NIRO 200 monitor (Hamamatsu) in their studies, comparable studies with the INVOS system (Somanetics) are, to the best of our knowledge, not available. The mentioned studies drew the conclusion that application of NIRS can provide continuous, long-term measurement of rSO2 and can give relevant information about cerebral autoregulation and the effectiveness of CVS therapy.
Concluding the results of our study, we postulate that measurement of cerebral oxygenation with NIRS is a safe and easy-to-use noninvasive additional measurement tool, applicable for long-term measurements in SAH patients.
It seems to be useful especially in alert patients, in whom invasive probes are not usually used. In general, rSO2 measurement was performed without major problems in our study, although the nursing personnel required repeated, thorough instruction to ensure a smooth workflow.
Although our examined patient group is relatively small, the presented cases illustrate the diagnostic relevance of the measured values.
In Case 1, NIRS detected relevant hypoperfusion of the ACA territory early, well ahead of increased ICP. PW-CT also failed to detect perfusion deficits at the same time. The interpretation of elevated blood flow velocities measured by TCD is controversial concerning radiological versus symptomatic CVS, as already discussed in other studies [14,15]. This case pointedly illustrates the diagnostic value of NIRS for early detection of DCI in a sedated and intubated patient.
Vice versa, rSO2 values remained consistently stable in Case 2 despite CVS proven by TCD and MRA. The patient developed no neurological impairments, so the CVS could be termed radiological. In this case, NIRS was a very important additional measurement tool for cross-checking the gathered findings with regard to DCI in an alert patient. Even though neurological examination by a physician probably remains the easiest and most reliable procedure for detecting CVS-related deterioration in alert patients, early demonstration of changes in cerebral oxygenation before clinical manifestation is desirable and might be provided by NIRS.
Case 3 can be discussed controversially: we measured rSO2 values above 60% in a brain-dead patient. Similar observations were made by Kyttä and coworkers [28]. These investigators examined six brain-dead patients with NIRS and measured relatively normal rSO2 values, with brain desaturation dependent on the level of mechanical ventilation. Hence, the authors suggested an additional extracranial contribution to the measured rSO2 values and restricted the diagnostic relevance of NIRS in the diagnostic protocol of brain death [2]. Although new generations of NIRS devices are claimed to exclude the extracranial component effectively, and the tissue oxygenation index was shown to be independent of extracranial contamination, one explanation could be that NIRS measured the remaining deoxygenated haemoglobin in the brain, which no longer underlies the blood circulation. This attempted explanation could account for the relatively constant rSO2 values without alterations. In light of this particular case, it is arguable that application of NIRS is unsuitable in brain-dead patients. Different aspects should be considered that could limit its applicability and interpretation. First of all, the measurement of cerebral oxygenation mainly involves the ACA territory. The important MCA territory is only partly captured if the strips are applied to the forehead. It remains to be proven whether application of the strips to the temples after head shaving can provide reliable rSO2 values of the MCA territory.
Another problem is the restricted applicability due to limited space on the forehead when sedation depth must be measured with EEG electrodes in patients with increased ICP. A solution to this problem could be a combination of NIRS strips and EEG electrodes, which is under development.
Fever and skin sweating influence the measurement. This could be a problem for continuous, long-term measurement, since infections and fever occur relatively frequently in sedated and intubated ICU patients. Hence, the measured data could become unsuitable for decision making towards further imaging and therapy.
Furthermore, application of NIRS strips requires a compliant patient. Measurement and data interpretation in an agitated patient are nearly impossible if the patient repeatedly dislocates and tears off the strips.
Finally, there is a lack of common consensus on the threshold values that necessitate intervention [2,22,30,31]. However, it has already been postulated, and it is also our opinion, that NIRS is suitable for recording trends. Our selected threshold values were based on the manufacturer's information, which is the result of a large number of other studies.
A noninvasive and continuous measurement method of cerebral oxygenation with reliable values is desirable. Despite the promising course of our study, further large prospective studies are required to answer the open questions and to test our observations.
Conclusion
Measurement of rSO2 with NIRS is a safe, easy-to-use, noninvasive additional measurement tool for cerebral oxygenation, which is used routinely during vascular and cardiac surgical procedures. NIRS is applicable over a long time period after SAH, especially in alert patients without invasive probes. Extrinsic and intrinsic factors have to be considered carefully to enable correct data interpretation. Given our relatively small patient group, the measured values seem relevant and reliable. However, further large studies are needed to test our observations and to derive recommendations for its application in SAH patients.
"year": 2012,
"sha1": "8404acdb41b8adcd508a94a699e77e202acb1579",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.5402/2012/907187",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8404acdb41b8adcd508a94a699e77e202acb1579",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Sustainable energy planning decision using the intuitionistic fuzzy analytic hierarchy process: choosing energy technology in Malaysia
Energy consumption in developing countries is increasing sharply owing to higher economic growth driven by industrialisation, population growth, and urbanisation. The increasing demand for energy contributes to the global energy crisis. Selecting the best energy technology and conservation strategy requires both quantitative and qualitative evaluation criteria. The fuzzy set-based approach is one of the well-known theories for handling fuzziness, uncertainty in decision-making, and vagueness of information. This paper proposes a new method of intuitionistic fuzzy analytic hierarchy process (IF-AHP) to deal with uncertainty in decision-making. The new IF-AHP is applied to establish a preference in a sustainable energy planning decision-making problem. Three decision-makers attached to Malaysian government agencies were interviewed to provide linguistic judgements prior to analysis with the new IF-AHP. Nuclear energy was determined to be the best alternative in energy planning, receiving the highest weight among the seven alternatives.
Introduction
Sustainable energy planning refers to long-range, man-made policies that use resources to meet mankind's needs while preserving the environment, supporting the foreseeable future of local, national, regional, and global energy systems. The main aims of sustainable energy planning are to optimise energy efficiency, achieve low- or no-carbon energy emissions, and provide equitable energy services to users. The worldwide energy crisis moves in step with the threats of climate change, greenhouse effects, toxic pollutants, and air pollutants, especially for developing countries. Usually, developing countries must cope with increasing demand driven mainly by industrialisation, population growth, and urbanisation, which leads to an increased demand for energy (UNDP 2006). Globally, energy demand appears to increase every year owing to anticipated higher gross domestic product growth. Moreover, the transportation and industrial sectors were projected to remain the major energy consumers, at 41.1% and 38.8% of total energy demand, respectively, by 2010 in Malaysia (UNDP 2006). Primary energy sources such as crude oil, natural gas, and conventional fuels are extremely limited resources, formed through solar energy accumulation over millions of years, and subject to fluctuations in reserves and prices due to the increased costs of power stations (Al-Mofleh et al. 2009). The increase in energy costs has made energy planning and efficiency a prime option for curbing energy demand and costs. Thus, sustainable energy planning must be carefully maintained to ensure efficient utilisation of energy sources, diversification of sources, and minimisation of wastage.
However, owing to the complexity of choosing among the various alternative energy sources and technologies in energy planning decision problems, multi-criteria decision-making (MCDM) is used as a solution tool, and it has been attracting increasing attention for a long time. MCDM includes decision support and evaluation for addressing complex problems featuring high uncertainty, conflicting objectives, and multiple interests and perspectives (Kaya and Kahraman 2011). Various applications of sustainable energy planning have been addressed by MCDM approaches (Buchholz et al. 2009; Doukas, Andreas, and Psarras 2007; Jing, Bai, and Wang 2012) such as the technique for order performance by similarity to ideal solution, the weighted sum method, the weighted product method, the preference ranking organisation method for enrichment evaluation (PROMETHEE), the elimination and choice translating reality method (ELECTRE), multi-attribute utility theory, and the analytic hierarchy process (AHP). The MCDM technique most frequently used in the energy planning area is the AHP, followed by PROMETHEE and ELECTRE (Pohekar and Ramachandran 2004). The main advantages of the AHP are its inherent ability to handle intangibles, its less cumbersome mathematical calculations, and its easier comprehensibility in comparison with other methods. However, in most AHP procedures, the decision-makers are usually unable to explicitly express the uncertainty due to the fuzzy nature of the decision-making process. This motivates extensions of the AHP that allow decision-makers to give interval or fuzzy judgements instead of crisp numbers (Bozbura and Beskese 2007).
The theory of intuitionistic fuzzy sets (IFS) was developed by Atanassov in 1986 to represent vague and incomplete information. It considers both the degree to which an element belongs to a set (membership function) and the degree to which it does not (non-membership function). The IFS is an extension of the fuzzy set theory of Zadeh (1965) and handles uncertainty and fuzziness more effectively than fuzzy sets. Similar to fuzzy sets, IFS have also been applied to model the human decision-making process (Szmidt and Kacprzyk 1996) in various fields such as medicine (De, Biswas, and Roy 2001), topology (Turanli and Coker 2000), economics (Chen and Li 2011), environmental science (Xu 2007a), and the social sciences (Wang 2009). The IFS concept can be viewed as an alternative approach to defining a fuzzy set in cases where the available information is not sufficient for the definition of an imprecise concept by means of a conventional fuzzy set (Li 2005). Atanassov (1986) provided an algorithm for solving MCDM problems in which the criteria weights are given by crisp numbers and the values are interpreted as intuitionistic fuzzy numbers.
Research advancing the AHP method with IFS theory has been explored by several researchers, and many attempts have been made in the literature to incorporate different types of IFS into the formal AHP framework. For example, Rehan and Solomon (2009) introduced an intuitionistic fuzzy AHP (IF-AHP) in environmental decision-making that expresses vagueness using a fuzzification factor to generalise the fuzzy pair-wise comparison matrix: the three vertices of the crisp numbers of Saaty's AHP preference scale are generalised into IFS notation. Thus, the pair-wise comparison can be generalised with the minimum and maximum values taking a membership value of zero and the most likely value a membership of one. Abdullah, Sunadia, and Imran (2009) proposed a new AHP using the two IFS notations of membership and non-membership functions, without considering the hesitation degree as a component of the IFS preference measurement. Very recently, Hai, Gang, and Xiangqian (2011) proposed an IF-AHP that synthesises the eigenvectors of the intuitionistic fuzzy comparison matrix, with all decision information represented by intuitionistic fuzzy values.
In this study, a new IF-AHP with a new preference scale for the pair-wise comparison matrix judgement is proposed, considering the membership function, the non-membership function, and the hesitation degree concurrently, and it is tested on alternative selection for sustainable energy planning. By considering the hesitation degree, the preference scale of the judgement matrix becomes more comprehensive. This new preference scale also leads us to propose a new consistency test for the judgement matrix using the hesitation degree. The new preference scale with hesitation degree could spare the DMs from repeating the overall IF-AHP process, and the outcome of the decision process would better fit the problem. To demonstrate the potential of this methodology, an application in the energy planning area is presented.
Sustainable energy planning in Malaysia
The Malaysian government has implemented policies and strategies to address the issues of security, energy efficiency, and environmental effects to meet the increasing energy demand of development sectors. Currently, Malaysia is strongly focused on developing sustainable energy planning policies in order to reduce the dependency on fossil fuels and mitigate climate change effects (Hashim and Ho 2011). The policies recently implemented for energy planning are the National Energy Planning and National Green Technology Policy 2009 (Ministry of Energy 2010), the National Biofuel Policy 2006, and the National Renewable Energy Policy 2010 (Ministry of Energy 2010). These energy-related policies ensure long-term sustainable energy planning that minimises energy usage while increasing economic growth, as befits a developing country.
The energy system plays a very important role in the economic growth and social development of a country and in people's quality of life. For rapid economic growth, Malaysia needs plenty of energy resources to support the industrial sector and to enhance productivity. MCDM procedures have been applied in much energy-related research: energy optimisation modelling (Dong et al. 2012), energy planning and selection (Buchholz et al. 2009; Kaya and Kahraman 2011; Tsoutsos et al. 2009; Wang, Xu, and Song 2011), energy review and analysis (Al-Mofleh et al. 2009; Rahman Mohamed and Lee 2006; Shafie et al. 2011; Wang et al. 2009), and energy alternatives (Jing, Bai, and Wang 2012; Scott et al. 2012; Solangi et al. 2011), to name a few. In the decision-making process, the most common energy alternatives are solar energy (also known as the photovoltaic system), wind energy, hydraulic energy, biomass, combined heat and power (CHP), and wave or ocean energy (Tsoutsos et al. 2009). Besides these, nuclear energy and conventional energy resources such as coal, oil, and natural gas may be included in the list of alternative energy technologies (Tan and Foo 2007). The selection of criteria requires parameters related to reliability, appropriateness, practicality, and limitations of measurement.
Among the listed energy sources, Malaysia is well known for its dependence on a mix of energy resources such as conventional energy (oil, natural gas, and coal), biomass, solar, and hydro. In the electricity sector, almost 94.5% of electricity is generated from natural gas, coal, and oil, while the remainder is generated by hydroelectric power (Shafie et al. 2011). Figure 1 shows the increasing usage of conventional energy compared with hydroelectric power, owing to the huge and abundant amount of these energy resources. According to Rahman Mohamed and Lee (2006), the natural gas reserves in Malaysia are the largest in Southeast Asia and the 12th largest in the world, while the coal reserve currently stands at about 1712 million tons, ranging from lignite to anthracite. However, the contribution of oil sources declined from 46.8% in 2005 to 44.7% by 2010 (UNDP 2006). Moreover, the climate and geographic conditions of Malaysia favour the development of solar energy (the photovoltaic system) owing to abundant sunlight throughout the year. Currently, solar energy offers the easiest and fastest production, even without considering maintenance and environmental impacts, since the source can be used directly. The largest solar installations are solar water heating systems in hotels, small food and beverage industries, and upper-middle-class urban homes. Next, biomass energy is an organic non-fossil material of biological origin that may be used as fuel for heat production or electricity generation. Electricity generated from biomass is based on steam turbine technology. Many regions of the world still have large unexploited supplies of biomass residues that can be converted into competitively priced electricity. One problem with biomass is that the material directly combusted in cooking stoves produces pollutants, leading to severe health and environmental consequences (UNDP 2007).
Malaysia has no nuclear power generation and previously had no plan to initiate a nuclear power programme. Recently, however, Malaysia and several Southeast Asian countries such as Thailand, Vietnam, and Indonesia announced plans to implement nuclear energy. Nevertheless, concerns long discussed by many scholars remain about waste disposal and the high cost of maintaining and decommissioning nuclear power plant operations (Oh, Pang, and Chua 2010). In developed countries such as France and South Korea, almost 75% of electricity needs are supplied by 59 nuclear reactors and 40% by 20 nuclear power plants, respectively (Oh, Pang, and Chua 2010). Other potential energy sources identified in Malaysia are hydropower, wind energy, and CHP. Hydropower converts the potential and kinetic energy of water into electricity in hydroelectric plants ranging in size from very small to huge operations, while wind energy exploits the kinetic energy of wind for electricity generation by wind turbines. Both of these energy supplies also strongly support the electricity needs of the country. Lastly, CHP is particularly useful for installations that incur high heating or cooling loads, such as factories, hotels, hospitals, and commercial buildings, since the CHP system can be used for heating or cooling in the industrial sector. CHP has high thermal efficiency ratios compared with conventional thermal generation techniques: efficiencies of up to 90% are possible, unlike conventional thermal generation (40%) and combined-cycle generation plants (55%).
In conclusion, the energy alternatives have their own advantages and disadvantages for Malaysia's industrial sectors and, in turn, for economic growth; genuinely useful and beneficial energy is available in the country. However, given the increasing range of global energy alternatives, researchers have seldom worked on identifying the most appropriate energy sources that would be most advantageous to the country. Hence, it is important to seek out and recognise the best energy resources for future generations while maintaining sustainable energy planning and managing environmental impacts in Malaysia. Towards finding the best alternatives, the common aspects and criteria for evaluating the energy resources are technical, economic, environmental, and social (Wang et al. 2009), which will be used in this framework.
Preliminaries
This section introduces the basic definitions relating to fuzzy sets, IFS, triangular fuzzy numbers (TFNs), and triangular intuitionistic fuzzy numbers (TIFNs), which are needed to clarify the proposed method.
Definition of fuzzy sets and IFS
To deal with the uncertainty of human thought arising from imprecision and vagueness, Zadeh (1965) introduced fuzzy sets and fuzzy logic theory, which are among the most potent mathematical tools for modelling uncertainty measurements in a system.
Definition 1 A fuzzy set A in the universe of discourse $X = \{x_1, x_2, \ldots, x_n\}$ is defined as

$$A = \{(x, \mu_A(x)) \mid x \in X\},$$

which is characterised by the membership function $\mu_A : X \to [0,1]$, where $\mu_A(x)$ indicates the membership degree of the element x in the set A.
The extension of fuzzy set to IFS is defined by Atanassov (1986). The IFS concept is defined as follows:
Definition 2 Let X be an ordinary finite non-empty set. An IFS A in X is an expression given by

$$A = \{\langle x, \mu_A(x), v_A(x)\rangle \mid x \in X\},$$

where $\mu_A(x)$ and $v_A(x)$ denote, respectively, the degree of membership and the degree of non-membership of the element x in the set A, with $\mu_A : X \to [0,1]$, $v_A : X \to [0,1]$, and $0 \le \mu_A(x) + v_A(x) \le 1$ for all $x \in X$. For the IFS A, the quantity

$$\pi_A(x) = 1 - \mu_A(x) - v_A(x)$$

represents the degree of hesitation (intuitionistic index or non-determinacy) of x in A. Therefore, for ordinary fuzzy sets the degree of hesitation is $\pi_A(x) = 0$ for every $x \in X$.
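For illustration, Definition 2 can be encoded directly; the class below is a sketch, with the hesitation degree derived from the constraint rather than stored.

```python
# Sketch of an intuitionistic fuzzy value: mu + v <= 1, pi completes the triple.
from dataclasses import dataclass

@dataclass
class IFN:
    mu: float   # membership degree
    v: float    # non-membership degree

    def __post_init__(self):
        assert 0.0 <= self.mu <= 1.0 and 0.0 <= self.v <= 1.0
        assert self.mu + self.v <= 1.0

    @property
    def pi(self) -> float:  # hesitation degree
        return 1.0 - self.mu - self.v

print(IFN(0.6, 0.3).pi)  # 0.1
```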
Definition of TFN and TIFNs
Fuzzy set theory [11] is designed to extract the most plausible outcome from a multiplicity of vague and imprecise information, treating vague data as possibility distributions in terms of set memberships. The TFN is one kind of fuzzy number, defined by its membership function. According to the definition of Laarhoven and Pedrycz (1983), a TFN should possess the following basic features.
Definition 3 A fuzzy number Ã on ℝ is said to be a TFN if its membership function μ_Ã : ℝ → [0, 1] is given by

μ_Ã(x) = (x − l)/(m − l) for l ≤ x ≤ m; (u − x)/(u − m) for m ≤ x ≤ u; and 0 otherwise,
where l and u represent the lower and upper bounds of the fuzzy number Ã, respectively, and m is the median value. The TFN is denoted as Ã = (l, m, u). In a similar way to the concept of a TFN introduced by Dubois and Prade (1980), the concept of TIFNs is defined as follows:

Definition 4 A TIFN ã = ⟨(a̲, a, ā); wã, uã⟩ is a special intuitionistic fuzzy (IF) set on the real number set ℝ whose membership function and non-membership function are defined as follows:

μã(x) = wã(x − a̲)/(a − a̲) for a̲ ≤ x < a; wã for x = a; wã(ā − x)/(ā − a) for a < x ≤ ā; and 0 otherwise,

and

vã(x) = [a − x + uã(x − a̲)]/(a − a̲) for a̲ ≤ x < a; uã for x = a; [x − a + uã(ā − x)]/(ā − a) for a < x ≤ ā; and 1 otherwise.

The values of wã and uã represent the maximum degree of membership and the minimum degree of non-membership, respectively, such that the notation satisfies the conditions 0 ≤ wã ≤ 1, 0 ≤ uã ≤ 1 and 0 ≤ wã + uã ≤ 1. If a̲ ≥ 0 and one of the three values a̲, a and ā is not equal to 0, then the TIFN ã = ⟨(a̲, a, ā); wã, uã⟩ is called a positive TIFN, denoted by ã > 0. Likewise, if ā ≤ 0 and one of the three values a̲, a and ā is not equal to 0, then ã = ⟨(a̲, a, ā); wã, uã⟩ is called a negative TIFN, denoted by ã < 0.
A new preference scale of IF-AHP
A linguistic variable is a variable whose values are words or sentences in a natural language. It deals with complex or ill-defined situations where conventional quantitative expressions fail (Zhang and Liu 2011). The crisp AHP scale is converted, by averaging data scaling, into membership grades μ(x) that are consistent with the linguistic expression of the membership. Based on the table from Hersh (2006), the conversion is proposed in Table 2.
Then, the TIFNs can be calculated by the following equations, where x_ij is the crisp AHP scale value and m is the total number of scale measurements of preference: the membership degree, the non-membership degree and the degree of hesitation are calculated using Equations (8)-(10), with μ(x_ij) = x_ij/m, v(x_ij) = 1 − μ(x_ij) − π(x_ij), and π(x_ij) obtained from the consistency table. The following example illustrates the conversion. Let x_ij = 1 and m = 9; then μ(x) = 1/9 = 0.1111. According to Table 1, if the value of μ(x) is 0.1111, then the hesitation degree π(x) is 0.8. The values of the membership function, μ(x), and of the non-membership function, v(x), can then be obtained using Equations (8) and (9). In this manner the AHP linguistic preferences of Saaty (1980) are converted, and the new conversion of AHP linguistic preferences into TIFNs is proposed in Table 3.
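The conversion is easy to implement. Below is a hypothetical Python sketch of Equations (8)-(10) under the reading suggested by the worked example; the hesitation lookup table is an assumed stand-in for the Hersh (2006)-based Table 1:

```python
# Hypothetical sketch of Equations (8)-(10): convert an AHP crisp scale
# value x_ij into an intuitionistic triple (mu, v, pi).
HESITATION = {0.1111: 0.8}  # assumed excerpt of the Table 1 consistency lookup

def ahp_to_ifn(x_ij, m=9):
    mu = round(x_ij / m, 4)       # Eq. (8): membership degree x_ij / m
    pi = HESITATION.get(mu, 0.0)  # Eq. (10): hesitation degree from the table
    v = round(1.0 - mu - pi, 4)   # Eq. (9): non-membership degree
    return mu, v, pi

print(ahp_to_ifn(1))  # -> (0.1111, 0.0889, 0.8), matching the worked example
```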
The proposed IF-AHP procedure with the new preference scale
The IFS framework extends Saaty's AHP method with the IFS theory. Similar to the fuzzy AHP and AHP methods, the proposed IF-AHP can also deal with the relative strengths among the criteria and alternatives of MCDM problems. The new preference scale converting AHP crisp data to TIFNs is used as the measurement in the pair-wise comparison judgement matrix. The proposed IF-AHP method is described below.

Step 1 Construct the hierarchy structure of the MCDM problem.
Data for criterion and alternatives must be identified as part of an MCDM problem.
Step 2 Scale the pair-wise comparisons of IF-AHP with the new preference scale of the TIFN judgement matrix. In MCDM problems, the responses from the DMs mainly concern their opinions when rating the criteria of the problem. The DMs were asked to specify ratings using the nine AHP linguistic scales, varying from 'just equal' to 'absolutely more important', over the factors associated with the MCDM problem. The new preference scale of IF-AHP is used to capture the DMs' measurements of each criterion and alternative of the MCDM problem. The new conversion of AHP linguistic preferences into TIFNs is proposed in Table 3.
Step 3 Determine the weights of DMs.
The importance of the DMs is expressed through linguistic variables. The TIFNs defined for these linguistic variables are given in Table 4.
Let D_k = (μ_k, v_k, π_k) be an intuitionistic fuzzy number for the rating of the kth decision-maker. Based on Boran et al. (2009), the weight of the kth decision-maker can be obtained using Equation (11).
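A minimal sketch of Equation (11), assuming the weight formula as it is usually stated in Boran et al. (2009):

```python
# Weight of each decision-maker from the linguistic TIFN rating (mu, v, pi),
# normalised so that the weights sum to one.
def dm_weights(ratings):
    # ratings: one (mu, v, pi) tuple per decision-maker (values from Table 4)
    scores = [mu + pi * (mu / (mu + v)) for mu, v, pi in ratings]
    total = sum(scores)
    return [s / total for s in scores]

# Example with three assumed linguistic ratings:
print(dm_weights([(0.90, 0.05, 0.05), (0.75, 0.20, 0.05), (0.50, 0.40, 0.10)]))
```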
Step 4 Construct the aggregated intuitionistic fuzzy judgement matrix based on the DMs. Let R^(k) = (r_ij^(k))_{m×n} be the intuitionistic fuzzy decision matrix of the kth decision-maker and let λ = {λ₁, λ₂, ..., λ_t} be the weights of all the decision-makers, with λ_k ∈ [0, 1] and Σ_{k=1}^t λ_k = 1. In the group decision-making process, all the individual decision opinions need to be fused into a group opinion to construct an aggregated intuitionistic fuzzy decision matrix (Zhang and Liu 2011). This is done by applying the intuitionistic fuzzy weighted averaging (IFWA) operator proposed by Xu (2007b), given in Equation (12).
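A hedged sketch of the aggregation of Equation (12), using the IFWA operator as commonly stated in Xu (2007b); the example inputs are illustrative only:

```python
from math import prod

# Fuse one matrix cell across t decision-makers: entries is a list of
# (mu_k, v_k) pairs and lam the decision-maker weights summing to one.
def ifwa(entries, lam):
    mu = 1.0 - prod((1.0 - mu_k) ** l for (mu_k, _), l in zip(entries, lam))
    v = prod(v_k ** l for (_, v_k), l in zip(entries, lam))
    return mu, v, 1.0 - mu - v  # hesitation recovered as the residual

print(ifwa([(0.62, 0.18), (0.50, 0.40), (0.70, 0.20)], [0.4, 0.3, 0.3]))
```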
Step 5 Calculate the consistency ratio (CR) of the aggregated intuitionistic fuzzy judgement matrix.
Since the aggregated IF matrix contains the hesitation value π(x), expressed in the consistency grades of the TIFNs, we introduce a new method to calculate the overall consistency value. The values of the random index (RI) are retrieved from Saaty (1980) and shown in Table 5.
Then the new CR is given in Equation (13).
Consistency ratio: CR = (λ_max − n)/RI, where (λ_max − n) is taken to be the average hesitation value π(x) of the aggregated IF matrix of the criteria and of each alternative, and n is the size of the matrix. The CR is acceptable if it does not exceed 0.10 (Saaty 1980). If the CR is greater than 0.10, the judgement matrix should be considered inconsistent; in order to ensure that consistency is met, the judgement should be redone.
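A small sketch of the consistency check; the exact averaging convention of Equation (13) is an assumption based on the ((0.52 + 0.44 + ··· + 0.48)/8)/1.45 example in the Implementation section, and the RI values are Saaty's standard table:

```python
# Saaty's random indices for matrices of size n.
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def consistency_ratio(hesitations, n):
    avg_pi = sum(hesitations) / (n - 1)  # assumed reading of (lambda_max - n)
    return avg_pi / RI[n]

cr = consistency_ratio([0.04] * 9, 9)  # toy hesitation values
print(cr, "acceptable" if cr <= 0.10 else "inconsistent: redo the judgements")
```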
Step 6 Calculate the intuitionistic fuzzy weight of the aggregated intuitionistic fuzzy judgement matrix.
The weights of the criteria are obtained using the preference weights index of the most important DM, while the IF matrices of the alternatives use the weight indices of the decision-makers in sequence. In order to obtain the weights, we modify the intuitionistic fuzzy entropy introduced by Vlachos and Sergiadis (2007): the summation in their definition is removed so as to obtain an aggregated value for each row of the IF matrix. The intuitionistic fuzzy entropy of the aggregate of each row of the IF matrix is defined in Equation (14). The final entropy weights of each IF matrix are then redefined in Equation (15).
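A hedged sketch of Equations (14) and (15): the Vlachos-Sergiadis entropy with its summation removed, applied to the (μ, v, π) aggregate of each row; the final (1 − E) normalisation in the second function is an assumed convention for entropy weights:

```python
from math import log

def if_entropy(mu, v, pi):
    # Per-row intuitionistic fuzzy entropy (summation-free Eq. (14)).
    def xlogx(x):
        return x * log(x) if x > 0 else 0.0
    return -(xlogx(mu) + xlogx(v) - xlogx(1.0 - pi) - pi * log(2)) / log(2)

def entropy_weights(rows):
    # Assumed Eq. (15): normalise (1 - E_i) over all rows.
    e = [if_entropy(*r) for r in rows]
    total = sum(1.0 - ei for ei in e)
    return [(1.0 - ei) / total for ei in e]

print(entropy_weights([(0.62, 0.18, 0.20), (0.45, 0.35, 0.20), (0.30, 0.50, 0.20)]))
```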
Step 7 Rank all the alternatives.
Compute the relative weight and rank the alternatives.
The overall rating is computed as w_i = Σ_j w_j A_ij (Equation (16)), where w_i is the overall relative rating of alternative i, w_j is the average normalised weight of criterion j, and A_ij is the average normalised weight of the aggregated matrix for criterion j with respect to alternative i.
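A one-screen sketch of Equation (16) and the final ranking step:

```python
# w: normalised criteria weights; A[i][j]: normalised weight of alternative i
# under criterion j. Returns the overall ratings and the descending ranking.
def rank_alternatives(w, A):
    scores = [sum(wj * aij for wj, aij in zip(w, row)) for row in A]
    order = sorted(range(len(scores)), key=scores.__getitem__, reverse=True)
    return scores, order

scores, order = rank_alternatives([0.5, 0.3, 0.2],
                                  [[0.2, 0.1, 0.3], [0.4, 0.5, 0.1]])
print(scores, order)  # the first index in `order` is the best alternative
```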
In this IF-AHP framework, we introduced a new equation to calculate the CR of each pair-wise comparison matrix by applying the values of the hesitation degree in the TIFN notation. One of the benefits of considering the hesitation degree is to strengthen the TIFN notation with respect to the intuitionistic fuzzy matrix. Besides, we also modified the intuitionistic fuzzy entropy by removing the summation element in order to obtain the intuitionistic fuzzy entropy of each aggregated row of the IF matrix. Since the matrix is built by aggregating each row of the intuitionistic fuzzy matrix, it is feasible to calculate the intuitionistic fuzzy entropy of each row instead of a summation over the whole intuitionistic fuzzy matrix.
Implementation
The AHP questionnaire was designed to evaluate the sustainable energy planning decision problem. The questionnaire was used as a guideline in personal interviews with experts in sustainable energy. Within the MCDM framework, this group of experts is referred to as the decision-makers. Three decision-makers were sought to provide linguistic judgement data based on the IF-AHP questionnaire. The three experts comprise two academicians, from the Department of Electrical Engineering and the Department of Environmental Sciences at a public university, and an engineer.
Figure 2. The hierarchical structure for energy planning selection: the focus (the sustainable energy planning problem) is evaluated against the criteria C1-C9 and the alternatives A1 (conventional), A2 (nuclear), A3 (solar), A4 (wind), A5 (hydraulic), A6 (biomass) and A7 (CHP).
Step 1 Construct a hierarchical diagram of the MCDM problem. The hierarchical structure of the energy planning problem is illustrated in Figure 2.
Step 2 Scale the pair-wise comparisons with the new preference scale of TIFNs. The compilation of the three experts' (λ₁, λ₂, λ₃) linguistic variables for the criteria is constructed in Table 6. The letter 'R' represents the reciprocal scale of the pair-wise comparisons. The shaded boxes can be filled in directly from the reciprocals of the pair-wise comparisons, since the decision-maker only has to fill in the other boxes of the AHP questionnaire. As an example, λ₁ states that C₁ is 'very strong more important (VSMI)' than C₂, which is represented by (0.62, 0.18, 0.20). Then, the reciprocal 'very strong more important (RVSMI)' of C₂-C₁ is (0.18, 0.62, 0.20). The abbreviations and preference scale of IF-AHP are shown in Table 3.
Step 3 Determine the weights of DMs.
Step 4 Construct the aggregated intuitionistic fuzzy judgement matrix based on DMs.
Equation (12) is used to aggregate all the conversions of the intuitionistic fuzzy decision matrices of the criteria and alternatives. Table 6 gives the pair-wise comparison of the criteria, and Table 7 shows an example of the resulting aggregated judgement matrix. The same calculation is then applied to determine the aggregated matrices for C₂, C₃, ..., C₉. Equation (12) is also used to compute the aggregated matrix of each criterion with respect to the alternatives; Table 8 shows the aggregated matrix of each criterion with respect to the alternatives.
Step 5 Calculate the CR of the aggregated intuitionistic fuzzy judgement matrices of the criteria and alternatives. The calculation of the CR is based on Equation (13) and tests the consistency of the pair-wise comparisons of TIFNs. The calculation of the CR of the aggregated judgement matrix of the criteria (Table 7) is shown in the following example:

CR = ((0.52 + 0.44 + ··· + 0.48)/8)/1.45 = 0.03.

From the calculation, the consistency test of the aggregated intuitionistic fuzzy judgement for the criteria gives 0.03, so the matrix is consistent.
Step 6 Calculate the intuitionistic fuzzy weight of the aggregated intuitionistic fuzzy judgment matrix.
Obtain the entropy weights and final entropy weights of each criterion and alternative by using Equations (14) and (15).
The entropy weights and final entropy weight for all criteria are shown in Table 9.
Step 7 Rank all the alternatives.
Compute the relative weights and rank the alternatives using Equation (16). The relative weights of the other alternatives are calculated in a similar manner. The final priority weights are shown in Table 10. The overall weights and ranks of the sustainable energy planning alternatives are obtained as the arithmetic mean of the experts' final weights of the alternatives with respect to each criterion. Table 11 summarises the experts' final weights and ranks of the alternatives.
Based on Table 11, the ranking of the alternatives in descending order is A₂ ≻ A₃ ≻ A₅ ≻ A₄ ≻ A₆ ≻ A₇ ≻ A₁. According to the IF-AHP framework, the best alternative is A₂ (nuclear energy). The remaining alternatives, in order, are solar, hydraulic, wind, biomass, CHP and conventional power.
Results and discussion
The ranks and weights of the sustainable energy planning alternatives were obtained by applying the new preference scale within the IF-AHP procedure. The new preference scale of IF-AHP was successfully implemented as a decision tool. The proposed procedure considers the hesitation degree as the third parameter in the IFS notation. The MCDM method managed to cope with the pair-wise comparisons in AHP despite the introduction of the third parameter and the dual memberships of IFS. The weights obtained were then used to rank the best alternatives in energy planning. The best alternative, A₂ (nuclear energy), has the highest weight among the alternatives. The decision recognises that nuclear energy ranks first, followed by solar, hydraulic, wind, biomass, CHP and conventional power. The pattern of preferences among the three experts is illustrated in Figure 3.
Expert 1 (E1) and Expert 3 (E3) decided that nuclear energy is the preferred choice in energy planning, despite a substantial difference in weight. It is worth noting that all three experts are consistent in deciding the ranks of five alternatives out of seven.
Although Malaysia has established the Nuclear Agency and has periodically reviewed the nuclear option, there is currently no nuclear power generation plant, nor a plan to embark on a nuclear power programme in the foreseeable future. Generally, it will take years to prepare the nuclear workforce for this technology in all aspects, from planning to decommissioning, including the final waste disposal of the nuclear power plant. This study reveals the importance of implementing nuclear energy in Malaysia. Nuclear energy has managed to supply almost 70% of the electricity in developed countries such as France and South Korea (Oh, Pang, and Chua 2010). As an alternative energy source, nuclear energy is one of the most potent options for future sustainable energy planning. Moreover, supporters of nuclear plants advocate that nuclear energy is a stable and reliable source of energy. The power generated is cleaner because it emits significantly less carbon into the environment compared with coal- and gas-driven generators (Khor and Lalchand 2014).
Conclusions
The aim of this paper was to propose a new preference scale for the IF-AHP procedure and to test its feasibility in a real case experiment: solving a sustainable energy planning decision problem. Energy planning is a complicated issue in which both qualitative and quantitative criteria must be considered. Thus, IF-AHP with the new preference scale was used to investigate the decision problem and find suitable ways to deal with uncertainty. This new preference scale of measurement includes the degree of hesitation for every single triangular intuitionistic fuzzy number. The CR of the judgement matrices was calculated using the hesitation degree. Besides the inclusion of the hesitation degree, the proposed IF-AHP also makes use of the modified intuitionistic fuzzy entropy, obtained by removing the summation over the column matrix. The aggregated intuitionistic fuzzy matrix was utilised to obtain the final relative weights and ranks. To investigate the feasibility of the proposed method, the energy planning problem was used as a platform for this new preference scale of matrix comparison. The alternative nuclear energy, with a weight of 0.1426, was the most preferred choice among all the alternatives. The second preferred choice in sustainable energy planning was solar, with a weight of 0.1424, followed by hydraulic with a weight of 0.1410. The alternatives wind, biomass, CHP and conventional power were ranked as the last four on this preference scale. The proposed IF-AHP was successfully tested in energy source selection. However, this proposed framework warrants further investigation, especially of its validity and reliability. Further research on other real case experiments would further enhance the robustness of the method. Comparison studies and sensitivity analysis are some of the possible validation tools that can be explored to strengthen the proposed IF-AHP framework. | 2019-04-13T13:02:19.443Z | 2016-04-20T00:00:00.000 | {
"year": 2016,
"sha1": "83c02941cd6f88b3860037e097c1c94b1a3e4ece",
"oa_license": "CCBYNC",
"oa_url": "https://repositorio.unal.edu.co/bitstream/unal/79679/5/1081594025.2021.pdf",
"oa_status": "GREEN",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "78d2ee58b075585d1370a44bdf4660821c3bf5d1",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
119676861 | pes2o/s2orc | v3-fos-license | First moment of Rankin-Selberg central L-values and subconvexity in the level aspect
Let $1\le N<M$ with $N$ and $M$ coprime and square-free. Through classical analytic methods we estimate the first moment of central $L$-values $ L(1/2,f\times g) $ where $f\in S^*_k(N)$ runs over primitive holomorphic forms of level $N$ and trivial nebentypus and $g$ is a given form of level $M$. As a result, we recover the bound $ L(1/2,f\times g) \ll_\varepsilon (N + \sqrt{M}) N^\varepsilon M^\varepsilon $ when $g$ is dihedral. The first moment method also applies to the special derivative $L'(1/2,f\times g)$ under the assumption that it is non-negative for all $f\in S^*_k(N)$.
Introduction
In this paper, we investigate the subconvexity problem for certain Rankin-Selberg L-functions in the level aspect. A special feature, compared to other works in the subject, is that the method is relatively straightforward, at least in the prime level case of §3. This seems to suggest that special restrictions on the automorphic cusp form π simplify the subconvexity problem for L( 1 2 , π). The restrictions on π are essentially of two kinds. The first assumption is that L( 1 2 , π) be non-negative. This should be the same as requiring π to be self-dual. For an unconditional result, we require further that π be symplectic, so that the theorem of Lapid-Rallis [17] applies. The second assumption is that π be a Rankin-Selberg convolution of two forms whose levels are coprime and of different size. These restrictions are non-generic in the sense that we capture a small portion of the whole set of automorphic forms. They are nevertheless interesting, because such forms π are easily constructed and often occur in applications to arithmetic problems.
The first occurrence of such a result is in the work of Michel-Ramakrishnan [23] and in the second-named author's PhD thesis [28]. The subconvexity bounds came for free, in some cases, as a direct result of an exact formula for the first moment of central L-values (see [23,Corollary 2]). The formula is based on the Fourier expansion of the kernel occurring in the first step of the proof of the Gross formula. It was rather striking that subconvexity followed as an immediate corollary to an exact formula for the first moment and therefore Michel and Ramakrishnan raised the question as to whether a purely analytic proof would be possible.
The results of Michel-Ramakrishnan [23] have been greatly generalized by Feigon-Whitehouse [3] using the relative trace formula and Waldspurger formula. Notably the subconvexity bounds have been extended to number fields with the same quality of exponents. The approach in [28] is based on Zhang's notion of geometric pairing of CM cycles [33] and the explicit computations in the original Gross-Zagier article [5]. The approach through Fourier expansions has been reinterpreted by Nelson [25] based on the computations by Goldfeld-Zhang of the kernels for the Rankin-Selberg convolution. Nelson's work is perhaps the closest to ours analytically, but focuses on stable averages rather than subconvexity.
Our present approach is conceptually simpler than in the previous works. We rely on the Rankin-Selberg convolution rather than explicit period formulas and use little input from spectral theory. The treatment of the first moment is indeed purely analytic, using well-known analytic tools: approximate functional equation, Petersson trace formula, Voronoï formula; it is therefore flexible enough to adapt to seemingly more complicated cases. We illustrate this aspect with a subconvex bound for the special derivative in the Gross-Zagier formula, in which case our result is new.
Let M ≥ 1 be a square-free integer and let g be a holomorphic cusp form of fixed weight κ ≥ 2 on the congruence subgroup Γ₀(M). In the present context we can assume indifferently that g is a newform because the L-functions associated to g depend only on the automorphic representation it generates. Let χ be the Dirichlet character modulo M which is the nebentypus of g. Now let N be a square-free integer coprime with M and let k ≥ 2 be a fixed even integer. We let S_k(N) be the vector space of holomorphic cusp forms of level N, weight k and trivial nebentypus. It is equipped with a Petersson inner product and an action of the Hecke operators T_n. We denote by S*_k(N) the set of primitive newforms and by ω_f = N^{1+o(1)} the associated spectral weights as in §2.5.
Let f ∈ S*_k(N) and denote by λ_f and λ_g the normalized Hecke eigenvalues of f and g as described in §2.2. The Rankin-Selberg L-function L(s, f × g) of degree four admits an analytic continuation to all of C and a functional equation

Λ(s, f × g) := Q^{s/2} L_∞(s, f × g) L(s, f × g) = ǫ(f × g) Λ(1 − s, f × g).

Here L_∞(s, f × g) is a product of two complex Γ factors, see §2.3 for details. The (arithmetic) conductor equals Q := (NM)². It happens often that the root number ǫ(f × g) depends only on N and g, such as in the present case where M and N are coprime. We now give a statement of our main theorem.
Theorem 1.1. Let N and M be coprime square-free integers with 1 ≤ N < M and let g ∈ S*_κ(M, χ). Then

Σ_{f ∈ S*_k(N)} L(1/2, f × g)/ω_f ≪_ε (1 + √M/N)(NM)^ε.

The same bound holds with L^{(j)}(1/2 + it, f × g) for any fixed integer j ≥ 0 and t ∈ ℝ.
If one is willing to apply Theorem 1.1 to the subconvexity problem, it is necessary to have an extra assumption. We would want each L-value L( 1 2 , f × g) to be non-negative. In the next subsection we proceed to discuss the significance of this assumption and the instances where it is known to hold. When one does have non-negativity, we obtain the following bound for an individual L-function.
Corollary 1.2. Assume that L(1/2, f × g) ≥ 0 for all f ∈ S*_k(N). Then:

L(1/2, f × g) ≪_ε (N + √M)(NM)^ε.   (1.5)

The same bounds hold for the first derivative under the analogous assumption that L′(1/2, f × g) ≥ 0 for all f ∈ S*_k(N). Note that (1.5) is better than the convexity bound when M^δ < N < M^{1−δ} for any δ > 0 and matches Corollary 2 in [23] and Theorem 1.4 in [3]. The above Theorem and Corollary are made possible by the following main estimate.
Lemma 1.3. Let k and κ be fixed positive integers. Let q, D, M be positive integers with M square-free, and let g ∈ S*_κ(M, χ) be a newform of weight κ, level M and nebentypus χ. Let h be a smooth function, compactly supported in [1/2, 5/2], with bounded derivatives. Then the sum (5.1) defined in §5, with P := √(DZ)/q, satisfies the bound established there.

Remark 1.1. One should compare this bound with the "trivial" bound √Z q P(1 + P)^{−3/2}(qDMZ)^ε obtained by an application of the Weil bound for individual Kloosterman sums along with (2.1) and (2.3). In particular, consider the case of the transition range for the Bessel function, i.e. P ∼ 1. Furthermore, one may slightly relax the conditions on the smooth function h and still obtain similar results.
1.1. Non-negativity of central values. We discuss in this subsection the above question of non-negativity of L( 1 2 , f × g) and L ′ ( 1 2 , f × g). We review what is known unconditionally and what is expected. Non-negativity of central values is a deep fact and there are several works that rely on using this property. Indeed, non-negativity allows for the study of moments of odd order in application to subconvexity. See notably Ivic [9], Conrey-Iwaniec [2], Li [20] and Blomer [1]. We begin by recalling what is known in the general case and then draw the consequences in our setting. This is related to the classification of automorphic forms and the Gross-Zagier formula.
For an irreducible L-function L(s, π) to have real Dirichlet coefficients we assume that π is self-dual. Then L(s, π) > 0 for all s ≥ 1. Assuming GRH it would follow that L(1/2, π) ≥ 0 and that if L(1/2, π) = 0 then L′(1/2, π) ≥ 0. We now recall in which cases these inequalities are known unconditionally. According to the Arthur classification, a cuspidal self-dual representation π is either symplectic or orthogonal. Namely, exactly one of the L-functions L(s, π, ∧²) (exterior square) or L(s, π, sym²) (symmetric square) has a (simple) pole at s = 1. We say that π is symplectic (resp. orthogonal) when the exterior square (resp. symmetric square) L-function has a pole. A general theorem of Lapid-Rallis [17] says that if π is symplectic then L(1/2, π) ≥ 0. If π is orthogonal it is not known unconditionally that L(1/2, π) ≥ 0, though we expect moreover that L(1/2, π) > 0. For example, when π is a GL(1) quadratic Dirichlet character this is an open question related to the effective class number problem [11, §4].
Let π = f ×g be the Rankin-Selberg convolution. The existence of π as an automorphic representation on GL(4) is established by Ramakrishnan [26]. It remains to investigate under which conditions on the forms f and g the representation π = f × g can be self-dual and under which conditions it can be symplectic.
We consider only the case when f and g are both self-dual which assures that π is self-dual. It is not difficult to determine whether a GL(2) form is self-dual because the contragredient of a GL(2) form is its twist by the inverse of the central character. Thus it is necessary that the central characters of f and g are quadratic (i.e. real and valued in {±1}). See also [10,Chap. 6] where, in the classical language, it is shown that if the nebentypus is a quadratic character then the GL(2) form is an eigenvector of the Fricke involution. Note that the central character of π should be trivial because it is the square of the product of the central characters of f and g.
For π to be symplectic it is necessary and sufficient that one of the forms f and g be orthogonal and the other be symplectic. A GL(2) form is symplectic if and only if its central character is trivial. It seems advantageous to average over a family of symplectic forms. This is why in Theorem 1.1 we average over the family f ∈ S * k (N), while the other form g will be assumed to be orthogonal. A GL(2) form g is orthogonal if and only if it is dihedral. The central character is always non-trivial since it equals the character of the quadratic extension it is associated with. The average over the dihedral forms g also can be considered, see [21,29] and the references there.
Let g ∈ S * κ (M, χ). We have arrived at the following conclusion concerning the non-negativity assumption for the special value.
• If g is self-dual (that is, if χ is quadratic) then we expect L(1/2, f × g) ≥ 0 for all f ∈ S*_k(N); • this is known unconditionally if g is dihedral, which is the case treated in [3, 23].
Next we need to analyse the situation at a finer level in order to take into account the sign of the root number. Assume that π is self-dual; then the root number ǫ(1/2, π) equals ±1. In fact, for f ∈ S*_k(N) and g ∈ S*_κ(M, χ) self-dual, we will see in (2.12) that the root number ǫ(f × g) depends only on N, χ and the weights k, κ. If ǫ(f × g) = −1 we have L(1/2, f × g) = 0, in which case it is interesting to focus on the derivative L′(1/2, f × g). Its non-negativity is not yet known in general (e.g. if one of the forms in the convolution were taken to be a Maass form, a case not considered here). However, non-negativity is known in an important special case through the celebrated Gross-Zagier formula [5]. In our context of Theorem 1.1 this is known by the recent work of Yuan-Zhang-Zhang [32] if f and g have weight k = κ = 2.
We should also mention the Waldspurger theorem [30] which gives a period formula for L( 1 2 , f × g) when f is symplectic and g is dihedral and also implies non-negativity. In the present paper we do not use the Waldspurger period formula in the proof of Theorem 1.1 which is an important difference with [3,23]. Nonnegativity is a robust property which can be established independently of period formulas as shown by the results of Lapid-Rallis [17,18].
1.2. Detailed outline of the first moment method. We illustrate the main ideas in this subsection by giving a detailed outline of the first moment method under simplifying assumptions.
First assume that we have two distinct prime levels N and M with 1 < N < M. We fix a newform g ∈ S*_κ(M, χ) and shall average L(1/2, f × g) over the collection of newforms f ∈ S*_k(N) with trivial nebentypus. We further assume that the space of cusp forms is equal to the space of newforms, as happens, for example, when the weight satisfies k ≤ 10 or k = 14 and the level N is prime, since for such k there are no oldforms of full level. After a standard approximate functional equation argument, we see that the problem reduces to obtaining estimates for sums of the shape

Σ_{f ∈ S*_k(N)} (1/ω_f) Σ_{n ≥ 1} (λ_f(n)λ_g(n)/√n) V(n/NM),

with V as in §2.4. Assuming non-negativity for each f, bounding the above by (M/N)^{1/2} would produce the convexity bound for any individual L-function. We note that such a bound is weaker than Lindelöf on average over f if M^δ < N < M^{1−δ} for any δ > 0.
Summing over f via Petersson's trace formula, we arrive at

(1.8)   Σ_{n ≥ 1} (λ_g(n)/√n) V(n/NM) (δ(n, 1) + 2πi^{−k} Σ_{c ≡ 0 (mod N)} (S(1, n; c)/c) J_{k−1}(4π√n/c)).

The "diagonal" delta term contributes V(1/NM), which is of size 1. This prevents one from proving better than Lindelöf on average over the family. To treat the "off-diagonal" sums, we proceed in a manner motivated by the earlier works of Kowalski-Michel-Vanderkam [16], Harcos-Michel [7] and Michel [22], and also seen recently in [8].
We start by breaking the n-sum into dyadic segments, of length Z say, through a smooth partition of unity with some nice compactly supported test function h. For the purpose of this outline, we focus on the case Z = NM. The Weil bound for individual Kloosterman sums and standard bounds for the Bessel functions (see §2.1) allow us to initially truncate the c-sum to length (NM)^A, with the tail bounded by (NM)^{−B} for some positive A and B. Furthermore, since M is prime, we can restrict to those c which are coprime with M by using the same bounds.
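For the reader's convenience, the classical estimates invoked here are, in a common normalization (these are standard facts, not results of this paper; τ denotes the divisor function):

```latex
|S(m,n;c)| \le \tau(c)\,(m,n,c)^{1/2}\,c^{1/2},
\qquad
J_{k-1}(x) \ll_k \min\bigl(x^{k-1},\, x^{-1/2}\bigr).
```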
We now proceed with our analysis of these off-diagonal terms by reducing to "shifted sums". To achieve this, we change the order of summation and consider the inner n-sum for each fixed c. What follows are the main ideas behind Lemma 1.3; the details appear in §5.
Opening the Kloosterman sum, an application of Voronoï summation in n changes the inner sum in (1.8), up to some bounded constant factor, into a sum of the shape

Σ_{n ≥ 1} λ_{g*}(n) r_c(M − n) I_c(n)

for some other newform g* of weight κ and level M. An analysis of the integral I_c(n), by Lemma 2.1 (whose proof is given in §2), restricts the n-sum to roughly those n satisfying |1 − n/M| ≪ (c/√(NM))(NM)^ε. The arithmetic advantage of Voronoï summation is that one now has Ramanujan sums instead of Kloosterman sums for each modulus c. Such an idea is well known and was already seen in a work of Goldfeld [4]. Writing the Ramanujan sums as Σ*_{α (mod ν)} e(α(M − n)/ν), the inner sums over α detect the congruence condition n ≡ M (mod ν), with a loss of ν = c/δ. Therefore, we have reduced (1.8) to bounding a sum of Hecke eigenvalues λ_{g*}(n) over n ranging over a short segment of this arithmetic progression. The Ramanujan-Petersson bound for Hecke eigenvalues then allows one to conclude that the off-diagonal contribution is admissible. Note that, unlike in the case of the second moment [8], the "zero shift" n = M did not need to be treated separately to obtain the above bound, i.e. one does not need to use the additional fact (see (2.8)) that |λ_{g*}(M)| = M^{−1/2}. Combining this with the contributions of the other dyadic segments in n gives Theorem 1.1 and finally Corollary 1.2 for any individual L-function in our family.
We have written the subconvexity bound in the form

L(1/2, f × g) ≪_ε (NM)^{1/2+ε} ((N/M)^{1/2} + (1/N)^{1/2})   (1.10)

in order to comment on the significance of each term appearing on the right-hand side. The first term comes from the trivial diagonal in Petersson's trace formula and represents Lindelöf on average. The ratio (N/M)^{1/2} provides a natural upper boundary for the size of N relative to M and shows that if N and M are of the same size, then one loses the advantage of having analytically distinguishable choices of which family to average over. The second term comes from the analysis of the off-diagonal in Petersson's trace formula. The ratio (1/N)^{1/2} provides the natural lower boundary and shows that N must have some significant size relative to M in order for our first moment average to be non-trivial.
Preliminaries
2.1. Bessel functions. We record here some standard facts about the J-Bessel functions, as can be found in [31], as well as several estimates for integrals involving Bessel functions which will be required for our application. One may write the J-Bessel function through the integral representation (2.1); when k is a positive integer, one has the standard bounds (2.2) and (2.3). Using the above facts leads us to the following results.

Lemma 2.1. Let k, κ ≥ 2 be fixed integers and let a, b > 0. Define the integral I(a, b) as an h-weighted integral of a product of two J-Bessel factors with parameters a and b, where h is a smooth function compactly supported on [1/2, 5/2] with bounded derivatives; the lemma records the decay of I(a, b) in a and b.

Proof. We see from (2.1) that I(a, b) may be written as the sum of four similar oscillatory terms. Repeated integration by parts and an application of (2.3) gives the desired result.
2.2. Automorphic forms. We fix two integers M ≥ 1 and κ ≥ 2 and a Dirichlet character χ of modulus M. Recall that we denote by S_κ(M, χ) the vector space of weight-κ holomorphic cusp forms of level M and nebentypus χ. We have the Fourier expansion

g(z) = Σ_{n ≥ 1} ψ_g(n) n^{(κ−1)/2} e(nz).

The space S_κ(M, χ) is equipped with the Petersson inner product

⟨g₁, g₂⟩ = ∫_{Γ₀(M)\H} g₁(z) g̅₂(z) y^κ dx dy/y².

We recall the Hecke operators T_n with (n, M) = 1. The adjoint of T_n with respect to the Petersson inner product is T*_n = χ̄(n)T_n; hence T_n is normal. There is an orthogonal basis of S_κ(M, χ) consisting of eigenvectors of all the Hecke operators T_n with (n, M) = 1.
The subspace of newforms of S κ (M, χ) is the orthogonal complement of the subspace generated by old forms of type g(dz) with g of level strictly dividing M. The set of primitive forms S * κ (M, χ) is an orthogonal basis of the subspace of newforms. A primitive form g is a newform which is an eigenfunction of all T n with (n, M) = 1 and such that ψ g (1) = 1.
A primitive form is actually an eigenfunction of all Hecke operators, and λ_g(n) = ψ_g(n) is the normalized eigenvalue for all n ≥ 1. We have the Hecke relation

λ_g(m)λ_g(n) = Σ_{d | (m,n)} χ(d) λ_g(mn/d²).

We also record here that for g primitive with trivial character one has λ_g(m)² = m^{−1} for m | M (see [13, (2.24)]). When the nebentypus is trivial we remove it from the notation. Our primitive form f of level N is an element of S*_k(N).

2.3. Rankin-Selberg L-functions. Let f ∈ S*_k(N) and g ∈ S*_κ(M, χ) with N and M square-free and (N, M) = 1. We recall the Rankin-Selberg L-function

L(s, f × g) = L(2s, χ) Σ_{n ≥ 1} λ_f(n)λ_g(n) n^{−s}.

It admits an analytic continuation to all of C and a functional equation of the form

Λ(s, f × g) := Q^{s/2} L_∞(s, f × g) L(s, f × g) = ǫ(f × g) Λ(1 − s, f × g).

The product of Gamma factors reads

(2.11)   L_∞(s, f × g) = Γ_C(s + (k + κ)/2 − 1) Γ_C(s + |k − κ|/2).

According to [16], the epsilon factor is given by (2.12), expressed in terms of the weights, χ and the quantity η_g(M). Here η_g(M) is the pseudo-eigenvalue of g for the Atkin-Lehner operator W_M. The formula is consistent if k = κ because k is even, which implies that χ(−1) = 1 in that case. References for the Fricke involution and related constructions include Chapter 6 of [10], Li [19] and Appendix A.1 in [16].
In particular ǫ(f × g) depends only on N and g. This enables us to perform the average over f ∈ S * k (N) in Theorem 1.1. If g is self-dual (equivalently if χ is real), then η g (M) = ±1 is the product of the root numbers of g at the primes dividing M; thus ǫ(f × g) = ±1 depends only on N, χ and k, κ as noted in the introduction.
The formulas for the gamma factor (2.11) and the epsilon factor (2.12) can be found in [16, §4], which quotes [19, Th. 2.2]. However, it is more satisfactory to verify the functional equation within the framework of automorphic representations, as may be found e.g. in Jacquet [14]. For the sake of completeness we provide some details on how to derive (2.11) and (2.12) in this way; we fix a (standard) additive character ψ and proceed place by place.
Real place.
The component of f (resp. g) at infinity is the discrete series representation of weight k (resp. κ). The central character is sgn^k = 1 (resp. sgn^κ). Let W_R = C^× ∪ jC^× be the Weil group.
Under the local Langlands correspondence the discrete series representation of weight k corresponds to the two-dimensional representation of W_R induced from the character z ↦ (z/|z|)^{k−1} of C^×. The gamma factor is Γ_C(s + (k−1)/2) and the epsilon factor is i^k. A small computation shows that the tensor product of the representation of weight k and the representation of weight κ decomposes as the direct sum of two representations, of weight k + κ − 1 and max(k − κ, κ − k) + 1, respectively. This implies the formula (2.11) for the gamma factor, while the epsilon factor at infinity is given by (2.14).

Primes p | N. The component of f (resp. g) at p is the Steinberg representation (resp. an unramified principal series representation with central character χ_p). Using standard formulas for the epsilon factors (tensor product with an unramified representation [27, (3.4.6)]), we obtain the local epsilon factor at p.

Primes p | M. The component of f (resp. g) at p is an unramified principal series representation with trivial central character (resp. a ramified principal series representation with central character χ_p). Using standard formulas for the epsilon factors, we obtain the local epsilon factor here as well. Here we used the fact that ǫ_p(g, ψ) = η_g(p) (the pseudo-eigenvalue at a prime p is the same as the local root number).
2.4. Approximate functional equation. The method to express or approximate values of L-functions inside the critical strip is standard (it actually goes back to Riemann); we shall briefly set it up for the Rankin-Selberg L-functions, see [6, 22] and [12, §5.2] for details.
We first treat the central value L(1/2, f × g) and assume that χ is non-trivial. We fix a meromorphic function G on C which satisfies the following: (i) G is odd, G(s) = −G(−s); (ii) G is holomorphic except at s = 0, where it has a simple pole with residue Res_{s=0} G(s) = 1; (iii) G is of moderate (polynomial) growth on vertical lines. Then we construct the smooth function

(2.17)   V(y) := (1/2πi) ∫_{(2)} G(s) (L_∞(1/2 + s, f × g)/L_∞(1/2, f × g)) y^{−s} ds,

together with its companion Ṽ of (2.18). The approximate functional equation method shows that the special value L(1/2, f × g) is given by the expression (2.19). We have the following uniform estimates (Lemma 2.2) for the functions V and Ṽ; they follow by shifting the contour of integration in (2.17).
In Theorem 1.1 we are also concerned with higher derivatives and general critical values. The approximation follows in the same manner (see also [15]). The functions V and Ṽ are defined by the same equations (2.17) and (2.18). We now let the meromorphic function G on C be such that (i) G is holomorphic except at s = it, where it has a pole of order j + 1, and (ii) G is of moderate (polynomial) growth on vertical lines.
It may be verified that such a function always exists and may be chosen independently of f ∈ S * k (N) (indeed it only depends on f through the gamma factors L ∞ (s, f × g) which are given in terms of k and κ).
The approximate functional equation method shows that L^{(j)}(1/2 + it, f × g) is again given by the same expression (2.19). Lemma 2.2 holds true as well, except that the term δ_{0,α} has to be replaced by the α-derivative of some polynomial in log y of degree at most j + 1.

2.5. Petersson's trace formula. Let N ≥ 1 be an integer and let B_k(N) be any Hecke eigenbasis for S_k(N). Let S*_k(N) denote the collection of newforms in B_k(N). For any m, n ≥ 1, set

(2.20)   Δ_{k,N}(m, n) := Σ_{f ∈ B_k(N)} λ_f(m)λ_f(n)/ω_f,

where the spectral weights ω_f are given by ω_f := ((4π)^{k−1}/Γ(k − 1)) ⟨f, f⟩_N. Note that the inner product is taken at the same level N over which we are averaging our family of forms. This convention differs slightly from [13], in which the inner product is always taken at the largest ambient level. We have the following standard tool (Petersson's trace formula) for averaging Fourier coefficients over a Hecke eigenbasis:

Δ_{k,N}(m, n) = δ(m, n) + 2πi^{−k} Σ_{c ≡ 0 (mod N)} (S(m, n; c)/c) J_{k−1}(4π√(mn)/c).

For the purpose of our application, we wish to write down a summation formula when the average is restricted to the family of newforms S*_k(N). Let v be the arithmetic factor defined in (2.21). One has the following renormalized version of a result of Iwaniec, Luo and Sarnak [13, Proposition 2.8]: under the assumptions that N is square-free, (m, N) = 1 and (n, N²) | N, the identity (2.22) holds. The identity (2.22) is not exactly sufficient for our purpose when we average over families S*_k(N) with N square-free in §4. Indeed, the condition (n, N²) | N is too restrictive unless N is prime (§3). Therefore, we shall use the following variant (Lemma 2.4) in §4. As will be clear from the proof, this variant is already present in the work of Iwaniec, Luo and Sarnak, as it is a particular case of [13, Eq. (2.51)]; it expresses the newform average through Kloosterman sums S(·, ·; c) with c ≡ 0 (mod R). The quality of the obtained bounds diminishes as (R, ℓ) or (R, ℓ₁) increases.
(3) For L > 1, one may truncate the ℓ-sum to ℓ ≤ L^A for any A > 0, up to an error of size O_ε((nm)^ε L^{−A}).
Proof. We begin with the formula in [13, Eq. (2.51)], and the Lemma follows by an application of Möbius inversion.
2.6. A bound on smooth numbers. In the proof for square-free N in §4 we shall need the following bound (Lemma 2.5) on the number of integers up to x all of whose prime factors divide L.

Proof. This is an elementary adaptation of the Rankin method. More precise estimates and asymptotics are discussed in [24, Chap. 7]. Let σ > 0; the number of integers we are estimating is at most

Σ_{n | L^∞} (x/n)^σ = x^σ Π_{p | L} (1 − p^{−σ})^{−1} ≪_{σ,ε} x^σ L^ε.

Choosing σ > 0 arbitrarily small concludes the claim.

2.7. Voronoï summation. We shall use the Voronoï summation formula for g in the following form (Lemma 2.6): there exist a complex number of modulus 1 (depending on a, c and g) and a newform g* ∈ S*_κ(M, χ*) of the same level M and the same archimedean parameter κ such that the additively twisted sum of Fourier coefficients of g transforms into the corresponding sum for g*, where x̄ denotes the multiplicative inverse of x modulo c.
(iii) Let η_g(M₂) be the pseudo-eigenvalue of g for the M₂ Atkin-Lehner operator; the constant of modulus 1 above is expressed in terms of η_g(M₂). (iv) The Hecke eigenvalues of g* are given in terms of those of g.
Proof of subconvexity when N is prime
Let 1 < N < M with N prime and M square-free such that (N, M) = 1. Let f ∈ S*_k(N) and let g ∈ S*_κ(M, χ) with g self-dual. As discussed in §1.1, this implies that χ is quadratic and that the root number is ǫ(f × g) = ±1. In the case that ǫ(f × g) = 1, the approximate functional equation argument reduces our L-function to the analysis of the sum

Σ_{n ≥ 1} (λ_f(n)λ_g(n)/√n) V(n/NM),

where V satisfies the properties of §2.4. Recall further that this quantity is known to be non-negative when f × g is self-dual symplectic, and therefore we assume in the end that g is dihedral in order to establish subconvexity as a corollary to our first moment bound. To set up for our application of (2.22), we trivially write the above as

Σ_{n ≥ 1} (λ_f(1)λ_f(n)λ_g(n)/√n) V(n/NM)

for any f ∈ S*_k(N). Therefore, a first moment average over central L-values reduces to the study of

S := Σ_{n ≥ 1} (λ_g(n)/√n) V(n/NM) Δ*_{k,N}(1, n),

up to an error of size ≪ √M N^{−1}.
3.1. From newforms to full bases of Hecke eigenforms. We start by changing the order of summation in S, which brings us to the averages Δ*_{k,N}(1, n).
Using the fact that N is prime, we convert the sum over newforms by (2.22) into sums over Hecke eigenbases for S_k(N) and S_k(1). By trivial estimates, one sees that the discarded terms satisfy the same bound and may be added back into S. Therefore, one can simply view the above as rewriting the spectral average over newforms of level N as a sum over all forms of level N minus the contribution of the old forms.
3.2. Old form contribution. Since the weights of our forms are fixed, we can think of Δ_{k,1}(1, n) in (3.2) as arising from a fixed form h of full level. We must then analyze the corresponding contour integral, where G(s) satisfies the properties of §2.4. Shifting the contour to the left, one picks up the contribution from the pole at s = 0 and then applies the convexity bound for the Rankin-Selberg L-function L(1/2, g × h) to obtain a contribution of size ≪ √M N^{−1+ε}. Therefore, the contribution from the old forms is absorbed into the error term.

Remark 3.1. We have arrived at the following: for N prime, the first moment average over the newforms S*_k(N) equals, up to an admissible error, the full average (3.3) over a Hecke eigenbasis of S_k(N).

3.3. The average over S_k(N). By equation (3.3), we are left with treating the full first moment average as we did in the outline of §1.2, with a few additional details. Petersson's trace formula produces a diagonal term and off-diagonal terms built from the Kloosterman sums S(1, n; c), where δ(1, 1) = 1 and δ(n, 1) = 0 otherwise. Therefore, the diagonal term contribution to our full first moment average is simply V(1/NM). We now turn our attention to the off-diagonal sums. We start by taking a smooth partition of unity for V and consider sums over dyadic segments, of different sizes N ≤ Z ≤ (NM)^{1+ε} say, controlled by some positive, smooth function h supported in [1/2, 5/2]. The standard Bessel function bounds in §2.1, along with an application of Weil's bound for Kloosterman sums, allow us to truncate our c-sum to one of length c ≤ Z^A, up to an error term of size Z^{−B}N^{−1} for some positive A and B (note that k ≥ 2). For the remaining sum over c, we change the order of summation and break apart the c-sum according to the value of (c, M). For each fixed c in the outer sums, an application of Lemma 1.3 to the n-sum with D = 1 and q = c bounds the inner sum; summing over all dyadic segments Z in n and using (3.3) and (3.1), we obtain the bound of Theorem 1.1. Therefore, assuming that g is also dihedral, so that positivity of all central L-values is known in the average over f ∈ S*_k(N), one has by Lemma 2.2 the subconvexity bound of Corollary 1.2.

Proof of subconvexity when N is square-free

Using the approximate functional equation in §2.4, we reduce the analysis of the average of central L-values to the analysis of the sum S of (4.1), where V satisfies the properties of §2.4. In this reduction we make crucial use of the fact that the root number ǫ(f × g) is independent of f ∈ S*_k(N) (see §2.3).
4.1. Averaging over newforms. The average S in (4.1) over newforms of level N can be written, using Lemma 2.4, in terms of Hecke eigenbases of levels R | N weighted by the factors v(·). Furthermore, by Remark 2.1 (3), one may restrict to considering only those ℓ | L^∞ with ℓ ≤ L^A for some large A > 0. We change the order of summation. Letting L₁ = L/ℓ₁, we note that v((nℓ₁², L)) = v(ℓ₁)v((n, L₁)). Thus it remains to focus on the inner sum over n.
4.2. Averaging over all forms of level R. We apply Petersson's trace formula. The diagonal terms contribute a quantity of size O((NM)^ε) to S. Ignoring the 2πi^{−k} factor, the off-diagonal terms in S involve the Kloosterman sums S(ℓ², n; c). It is convenient to apply Selberg's identity

S(m, n; c) = Σ_{d | (m,n,c)} d S(mn/d², 1; c/d)

to these sums. We then let q = c/ℓ₂ and pull out the new ℓ₂ factors from n and ℓ². To take care of the term v((nℓ₂, L₁)) = v((ℓ₂, L₁))v((n, L₂)), where L₂ := L₁/(ℓ₂, L₁), we set ℓ₃ = (n, L₂) and use Möbius inversion (which introduces a variable ℓ₄), using the fact that (ℓ₂, R) = 1. We then apply the Hecke relation to the eigenvalues λ_g (which introduces a variable ℓ₅), so that the inner n-sum may be rewritten accordingly. We let D = ℓ₂ℓ₃ℓ₄ℓ₅ℓ₂ and note that (D, R) = 1. Breaking the n-sum into dyadic segments of length Z ≤ (NM/(ℓ₁²ℓ₂ℓ₃ℓ₄ℓ₅))^{1+ε} through a smooth partition of unity, with h a smooth function compactly supported on [1/2, 5/2], it remains to estimate the inner sums over q and n, which we denote by σ_Z. As in the prime level case of §3, the Bessel function bounds in §2.1, along with an application of Weil's bound for Kloosterman sums, allow us to truncate our q-sum to one of length q ≤ (DZ)^B, up to an error term of size (DZ)^{−C}R^{−1} for some positive B and C. An application of Lemma 1.3 then gives the required bound, with q₂ = q/(D, q). Note that we have used that (M, D) = 1, which implies (M, q₂) = (M, q).
As in the prime level case, we now split the above inner q-sum into two parts based on the size of P relative to 1. Thus, the sum σ_Z is bounded as in Lemma 1.3. Furthermore, we recall that D = ℓ₂ℓ₃ℓ₄ℓ₅ℓ₂. Therefore, combining the above, it remains to treat the sum over the variables ℓ•. By trivial estimates and an application of Lemma 2.5, one bounds the remaining terms by N^ε for each L | N. Therefore, we see that our first moment average satisfies

S ≪ (1 + √M N^{−1})(NM)^ε.
Proof of Lemma 1.3
Let Z ≥ 1 and let k and κ be fixed positive integers. Let q, D, M be positive integers with M square-free. Let g ∈ S*_κ(M, χ) be a newform of weight κ, level M and nebentypus χ. In this section, we consider the sums (5.1) of Lemma 1.3, which involve the Kloosterman sums S(nD, 1; q). We are now in position to apply the Voronoï formula (Lemma 2.6) to the inner sum over n. Note that ᾱ was first chosen such that αᾱ ≡ 1 (mod q), so that we also have αᾱ ≡ 1 (mod q/D₁). Set M₂ := M/(q/D₁, M) (we do not assume (D, M) = 1, so it could be that (M₂, q) > 1, but this will not affect the argument). The inner n-sum becomes, up to some bounded multiplicative factors, the expression (5.2), where M₂M̄₂ ≡ 1 (mod q/D₁) and D₂D̄₂ ≡ 1 (mod q/D₁). The sum over α may be written as a Ramanujan sum. Therefore, (5.2) is reduced to (5.3). We see that necessarily D₁ is coprime with q/q₁, otherwise the congruence cannot be satisfied. This implies that D₁ | q₁ and D₁ ‖ q. Then D₂ is coprime with q/q₁ and, because of the Möbius function, D₁ is square-free. These conditions may also be seen by inspecting the Kloosterman sum S(nD, 1; q) we have at the beginning in (5.1).
We are left with bounding a weighted sum of Hecke eigenvalues over an arithmetic progression. A change of variables in the integral in (5.3) expresses it through I(a, b), with I(a, b) as in the statement of Lemma 2.1. Recall that P := √(DZ)/q. The inner sum over n may therefore be restricted, up to a negligible error term, to

(5.4)   D₁n ≡ D₂M₂ (mod q/q₁),   |1 − D₁n/(D₂M₂)| ≪ P^{−1}(qDMZ)^{2ε},

with I_q(n) bounded by √Z P²(1 + P)^{−3} in that range. Counting the number of n satisfying (5.4) and multiplying by this bound, one establishes the final bound (recall that D₁ | q₁), which may also be written as in the statement of Lemma 1.3. | 2012-10-16T14:56:09.000Z | 2012-07-14T00:00:00.000 | {
"year": 2014,
"sha1": "7999293757f81efb9f031ad8b3a759f07ffdadc8",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1207.3421",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "7999293757f81efb9f031ad8b3a759f07ffdadc8",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
4395447 | pes2o/s2orc | v3-fos-license | Motor execution detection based on autonomic nervous system responses
Triggered assistance has been shown to be a successful robotic strategy for provoking motor plasticity, probably because it requires neurologic patients' active participation to initiate a movement involving their impaired limb. Triggered assistance, however, requires sufficient residual motor control to activate the trigger and, thus, is not applicable to individuals with severe neurologic injuries. In these situations, brain and body–computer interfaces have emerged as promising solutions to control robotic devices. In this paper, we investigate the feasibility of a body–machine interface to detect motion execution by monitoring only the autonomic nervous system (ANS) response. Four physiological signals were measured (blood pressure, breathing rate, skin conductance response and heart rate) during an isometric pinching task and used to train a classifier based on hidden Markov models. We performed an experiment with six healthy subjects to test the effectiveness of the classifier in detecting rest and active pinching periods. The results showed that movement execution can be accurately classified based only on peripheral autonomic signals, with an accuracy level of 84.5%, sensitivity of 83.8% and specificity of 85.2%. These results encourage further research on the use of the ANS response in body–machine interfaces.
Introduction
There is increasing interest in using robotic devices to assist individuals who suffered neurologic injuries such as stroke and spinal cord injury (Marchal-Crespo and Reinkensmeyer 2009). Neurologic patients' active participation is thought to be essential for provoking motor plasticity (Lotze et al 2003, Perez et al 2004, and by assisting the movement that participants cannot achieve by themselves, active assist exercise provides novel somatosensory stimulation that can help induce brain plasticity (Rossini and Dal Forno 2004). Triggered assistance allows the participant to attempt a movement without any robotic assistance, only initiating the assistance when some performance variable (e.g. force generated by the participant, limb velocity, or muscle activity measured with surface EMG) reaches a threshold.
Triggered robotic assistance, however, requires sufficient residual motor ability or remaining muscle activity to activate the trigger, and hence is not applicable to individuals who have no functional motor ability left as a result of a severe neurologic injury. In these situations, brain-computer interfaces (BCI) have emerged as promising solutions (del R Millan et al 2010). Brain-computer interfaces could be used to control robotic devices to move the impaired limb when an intention to move is detected from cortical activity. Intention to move is defined as a supraspinal command that results in a physiological change, and eventually in a movement. Electroencephalography (EEG) and, more recently, functional nearinfrared spectroscopy (fNIRS) are the most widely used non-invasive techniques employed in BCIs. However, the burden of connecting sensors on the patients scalp and the relatively long training period required for the user to produce classifiable brain signals can be time consuming and frustrating. Additionally, the system performance can be severely affected by the interference caused by sensor location and, in the case of fNIRS, hair color and thickness. All these challenges can lead to user frustration and, ultimately, rehabilitation withdrawal (Coyle et al 2004, van Gerven et al 2009. More recently, studies have introduced the concept of body-machine interfaces (BMI), where physiological signals can be self controlled and used to detect functional intent (see Blain et al (2008) for a review). Responses of the autonomic nervous system (ANS), such as cardiorespiratory and electrodermal responses, can be measured with economical off-the-shelf instrumentation and are relatively fast to set up. Physiological signals such as skin conductance response, heart rate, respiration rate and skin temperature have been shown to have the potential of serving as inputs for the development of BMIs (Blain et al 2008). However, these previous studies in BMIs are mainly based on self-paced physiological signal changes, and thus the approach still requires the subject to perform a training phase to learn how to successfully control his/her physiological signals. Nevertheless, recent studies in psychophysiology showed that non-self-paced physiological signals can also provide a proper method to estimate a person's emotion and frustration level Andre 2008, Scheirer et al 2002), mental workload (Wilson and Russell 2003, Collet et al 2009 and activity engagement (Kushki et al 2012) without his/her active participation.
Physiological measurements have also been employed to increase the performance of BCIs. These so-called hybrid BCIs use brain recording technologies in conjunction to physiological signals (e.g. heart rate and blood pressure) to improve the classification performance (for a review, see Pfurtscheller et al (2010)). Most of the work on hybrid BCIs has made use of self-paced physiological signals, whereas there are only few studies that employed non-self-paced physiological data. An example of non-self-paced hybrid BCIs that outperformed classic BCIs included the respiration rate, heart rate, skin temperature and skin conductance response in an BCI based on music imagery (Falk et al 2011). We recently conducted an experiment, the results of which showed that the addition of blood pressure, respiration rate, heart rate, and skin conductance response significantly improved the accuracy of detecting motor execution of an fNIRS-based BCI (Zimmermann et al 2012). Interestingly, while hybrid BCIs have been proposed as an alternative to classic BCIs to improve accuracy, physiological signals have never been employed as stand-alone signals to detect motion execution. This paper suggests a paradigm shift into the use of the ANS responses in BCIs: the physiological signals are treated as the unique main source of information.
This paper investigates the feasibility of a BMI to detect motor execution, monitoring only changes in peripheral autonomic signals, without direct measurement of force, EMG activity and brain activation. The motivation behind our approach is to provide an interface for severely affected neurological patients who cannot rely on their neural circuitry to trigger assistance or control a robotic device. We hypothesize that such a BMI can achieve similar performance in detecting motor execution as BCIs directly based on signals from the central nervous system. This technology could improve not only robot-assisted rehabilitation, but also assist during activities of daily living: a mobile robot or a wearable exoskeleton in a home environment could provide support during any task based on subject's motion intention.
We performed an experiment with six healthy subjects. Four physiological signals were acquired (mean blood pressure, breathing rate, skin conductance response and heart rate) during an isometric pinching task. The physiological signals were used to train and evaluate an individually optimized classifier to detect rest and active pinching periods based on hidden Markov models (HMMs). The rationale behind an individually optimized classifier, rather than the one that generalizes to a wide range of users, is to study the feasibility of a classifier that could ultimately tune its parameters to different subjects, e.g., subjects with neurological injuries such as stroke or SCI.
Measurements of physiological responses
Based on previous research in the fields of BMI and psychophysiology (Blain et al 2008, Koenig et al 2011), four peripheral autonomic signals were recorded online: electrocardiogram (ECG), respiration, blood pressure, and skin conductance response (SCR). All physiological signals were acquired at 600 Hz using a biosignal amplifier (g.USBamp, g.tec, Austria; figure 1).
2.1.1. ECG. ECG was measured using the g.GAMMAsys active electrode system from g.tec. The electrodes (g.GAMMAclip, g.tec, Austria) were placed using sticky patches, with the ground on the left shoulder, the reference over the left clavicle, channel 1 over the right ribs and channel 2 over the left ribs. The skin area where the electrodes were placed was cleaned beforehand, although no further skin preparation was required (i.e. shaving).
The raw ECG signal was filtered with a fourth-order Butterworth bandpass filter with the frequency band 0.01-40 Hz. The heart rate (HR) was calculated online by detecting the R-wave peaks of the QRS complex using an adaptive threshold algorithm similar to the one described by Christov (2004). The HR was simultaneously calculated using a similar adaptive threshold on the raw blood pressure signal and compared to the HR calculated from the ECG in order to increase the robustness of HR detection. Time and frequency domain measures of heart rate variability were discarded as possible features, since the minimum time interval required to measure cardiovascular variability is typically 5 min (Task Force of the European Society of Cardiology and the North American Society of Pacing and Electrophysiology 1996).
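As a rough illustration of this pipeline, the sketch below combines the fourth-order Butterworth bandpass with a simple peak detector whose threshold adapts to recent R-wave amplitudes. It is a minimal stand-in for the Christov (2004) detector, not a reimplementation of it: the 600 Hz rate and the 0.01-40 Hz band come from the text, while the threshold gain and refractory period are assumed values.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 600  # sampling rate (Hz), as stated in the acquisition setup

def bandpass(x, lo=0.01, hi=40.0, fs=FS, order=4):
    # fourth-order Butterworth bandpass, applied zero-phase
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def heart_rate(ecg, fs=FS):
    """Detect R peaks with an adaptive threshold and return beat-to-beat HR."""
    x = bandpass(ecg)
    thr = 0.6 * np.max(x[:2 * fs])        # seed threshold from the first 2 s
    peaks, refractory, i = [], int(0.25 * fs), 1
    while i < len(x) - 1:
        if x[i] > thr and x[i] >= x[i - 1] and x[i] >= x[i + 1]:
            peaks.append(i)
            thr = 0.7 * thr + 0.3 * 0.6 * x[i]  # drift toward 60% of peak
            i += refractory               # skip a 250 ms refractory window
        else:
            i += 1
    rr = np.diff(peaks) / fs              # R-R intervals in seconds
    return 60.0 / rr                      # instantaneous heart rate (bpm)
```

The same adaptive-threshold idea can be reused on the raw blood pressure trace, as described in the text, and the two HR estimates cross-checked.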
Figure 1. Measurement setup. Four physiological signals were acquired: blood pressure, respiration rate, skin conductance response and electrocardiogram.

2.1.2. Respiration rate.
The respiration signal was acquired using a thermistor respiration flow sensor (SleepSense®, Scientific Laboratory Products, USA) placed at the entrance of the nostrils. The sensor was fixed on the skin using hypoallergenic adhesive tape. The raw respiration signal, measured as the difference in temperature between inhaled and exhaled air, was filtered with an eighth-order Butterworth bandpass filter with the frequency band 0.1-2.1 Hz. The breathing rate (BR) was calculated using an adaptive threshold algorithm similar to the one employed for the ECG. Time and frequency domain measures of breathing variability were not considered due to the short recording periods. Other respiration-related measurements, such as breathing amplitude, were excluded after preliminary testing, since no changes between rest and active periods were observed.
2.1.3. Blood pressure. The raw blood pressure was measured with a continuous non-invasive arterial pressure system (CNAP™ monitor 500, CNSystems, Austria). An inflatable cuff was placed around the left upper arm, and two size-adjustable finger cuffs were attached to the proximal phalanges of the left index and middle fingers. Subjects were requested to position the left arm on the chest over the heart. The arm cuff was employed only for a couple of minutes during system initialization, for scaling purposes. During the experiments, only the finger cuffs were used.
The raw blood pressure signal was detrended by subtracting a best-fit line (in the least-squares sense) from the raw signal in order to remove any possible signal drifts during sessions. It was further low-pass filtered with a first-order Butterworth filter with a cutoff frequency of 0.1 Hz, leaving only the low and very low frequency spectra of the signal. The low and very low spectra (referred to as mean blood pressure, BP, in subsequent sections) were selected after comparing which cardiovascular features showed the most significant changes between rest and activation periods. Thus, diastolic, systolic and raw blood pressure signals, although initially considered, were excluded after preliminary testing.
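For illustration, this processing chain maps onto a few lines of SciPy; treat it as a plausible reading rather than the authors' exact code (the text does not say, for instance, whether filtering was zero-phase). The same linear detrending pattern also applies to the SCR preprocessing in the next subsection.

```python
from scipy.signal import butter, detrend, filtfilt

FS = 600  # sampling rate (Hz)

def mean_blood_pressure(bp_raw, fs=FS):
    # remove session drift: subtract the least-squares best-fit line
    x = detrend(bp_raw, type="linear")
    # keep only the low/very-low frequency content (< 0.1 Hz)
    b, a = butter(1, 0.1 / (fs / 2), btype="low")
    return filtfilt(b, a, x)
```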
2.1.4. Skin conductance response. Skin conductance was measured by attaching two electrodes (g®.GSRsensor, g.tec, Austria) through Velcro® rings to the distal phalanges of the left index and middle fingers. Skin conductance is characterized by a slowly changing background level (tonic) and a rapid, time-varying (phasic) response (Malmivuo and Plonsey 1995). The tonic level is related to the general activity of the perspiratory glands influenced by external temperature. The phasic response is called the SCR and is usually related to the autonomic response to stimuli. The raw skin conductance signal was filtered with an eighth-order Butterworth low-pass filter with a cutoff frequency of 30 Hz. The SCR signal was linearly detrended over each rest-activity period to remove the tonic level and was further normalized.
Experimental protocol
All experiments were approved by the institutional ethics committee of ETH Zurich (application number EK 2010-N-49), and participants provided informed consent. Six healthy male subjects between 20 and 30 years of age were recruited from among ETH Zurich students and staff. Inclusion criteria were no history of neurological disorders or orthopedic problems affecting the right upper extremity.
The measurements were conducted in a silent, dark room. Subjects were requested to lie supine on a comfortable padded table. The task consisted of isometrically tracking a provided pinching force with the right index finger and thumb. Isometric pinching was chosen partly for convenience (i.e. it is a simple task that minimizes subject movement), but also because it allows for a systematic assessment of the subject's performance. The force applied by the subject during pinching was measured with a one-axis thick-film force sensor (CentoNewton 100N, LPM-EPFL, Switzerland) attached to the distal phalanges with Velcro® rings (figure 2(a)). Subjects were instructed to remain as motionless as possible during the experiment. The protocol was implemented in Simulink®. The force sensor was connected to the computer via a USB data acquisition card (NI USB-6008, National Instruments Inc., USA).
The experimental protocol was described in detail by Zimmermann et al (2011); here, only a brief summary is given for completeness. fNIRS was used to simultaneously record brain activity in motor areas; however, these signals are not used in the present analysis and are beyond the scope of this paper. Three visual commands were presented to subjects using video goggles (z920HR-VGA, Zetronix Corp., USA): (1) rest: the word rest was displayed on the screen (figure 2(b)), and subjects were instructed to remain as relaxed as possible; (2) preparation: the message get ready was displayed, and subjects were instructed to be aware that they would be asked to move in a few seconds. Subjects were not instructed to imagine the movement or to try to move as quickly as possible; (3) activity: the word squeeze was displayed (figure 2(c)). Subjects were requested to try to match their applied pinching force (visually represented by a dynamic horizontal white bar, figure 2(c)) with a reference force (rendered as a horizontal green bar underneath). In order to prevent subjects from learning the reference force, which could reduce their concentration level, a complex reference force profile between 1 and 4 N was generated from a truncated Fourier series with frequencies 0.5, 1.0 and 1.1 Hz. The force level and duration of the activity periods were small enough to avoid fatigue.
The protocol consisted of a random presentation of four different sequences of visual commands.
• S1 and S2: a rest command was followed by a preparation command (of 10 or 5 s), and then followed by an activity command that lasted for 20 s.
• S3: a rest command was followed by a preparation command of 10 s, and followed again by a rest command.
• S4: a rest command was followed by an activity command that lasted for 20 s.
The duration of the rest commands was randomized (from 15 to 24 s) in order to reduce learning effects that may decrease attention, and to prevent the autonomic system from synchronizing with the activity periods. The experimental protocol consisted of a total of 10 trials per sequence, presented in random order. The experiment was divided into two sessions of 20 trials each. Each session began with a baseline of 180 s and finished with a baseline of 120 s. The total time required to finish a session was approximately 20 min. Participants paused for 10 min between sessions.
Classifier
In a preliminary study (Zimmermann et al 2011), we found that none of the physiological signals showed a significant change between the rest and the preparation periods. Thus, only the activity periods and the rest periods that preceded them were considered in the classifier, independently of the sequences they were part of (i.e. 10 trials each for sequences S1, S2 and S4, and thus a total of 30 rest periods followed by 30 activity periods).
2.3.1. Data pre-processing. The four physiological signals were further decimated to 5 Hz in order to reduce the computational time required to train and evaluate the classifier. The training and testing data sets were generated as vectors of physiological values at each sample time. A 15 s window was selected, corresponding to the shortest possible rest command. Optimizing the window length for each subject would considerably increase the training time of the classifier, and thus the same conservative window length was fixed for all subjects. Responses of the ANS are rather slow (figure 3), and thus simply taking the last 15 s of the rest periods and the first 15 s of the active periods did not seem reasonable. Different physiological signals have different latency responses (i.e. the SCR generally shows a faster response than other systemic changes, figure 3), and thus different post-stimulus times could ideally be used for each signal to detect the active periods. However, in order to reduce the computational time that optimizing the latency time for each subject and each physiological signal would require, we fixed the latency time to 5 s, and the 15 s windows were therefore shifted ahead by 5 s.
A fourfold cross-validation was used to randomly distribute all pairs of associated rest-pinching trials into training and test data sets (i.e. 25 training trials and 5 testing trials).
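A compact sketch of this preparation step is given below. The 5 Hz target rate, the 15 s window, the 5 s latency shift and the fourfold partitioning come from the text; the staged decimation factors and the scikit-learn split are implementation choices of ours.

```python
import numpy as np
from scipy.signal import decimate
from sklearn.model_selection import KFold

def to_5hz(x600):
    # 600 Hz -> 5 Hz in two stages (a single factor of 120 would be unstable)
    return decimate(decimate(x600, 10), 12)

def observation_window(sig5, onset, latency_s=5, win_s=15, fs=5):
    # 15 s window shifted 5 s past the command onset, at 5 Hz
    start = onset + latency_s * fs
    return sig5[:, start:start + win_s * fs]   # (4 features, 75 samples)

pairs = np.arange(30)   # indices of the associated rest-pinching trial pairs
for train_idx, test_idx in KFold(n_splits=4, shuffle=True).split(pairs):
    pass  # fit on pairs[train_idx], evaluate on pairs[test_idx]
```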
Hidden Markov models.
HMMs are well known in temporal pattern recognition applications such as speech and gesture recognition. The main argument for HMMs over other classification techniques (e.g. support vector machines, linear discriminant analysis) is their ability to classify time-sequential data, such as the time-varying physiological signals presented. Here, only a brief introduction to HMMs is given. The reader is referred to Rabiner (1989) for a detailed tutorial.
A HMM is a finite-state machine containing N unobservable (hidden) states S = {S1, S2, ..., SN}. The probability of transitioning to other states depends only on the current state and is defined by a transition probability matrix A = [aij], where aij is the probability of moving from state Si to state Sj. At every time sample, a HMM emits an observation vector Ot = {O1, O2, ..., OF}, where F is the number of features, which depends only on the current state. Each state has an associated observation probability distribution B which determines the probability of generating an observation at a certain time step. The probability of starting in a specific state is modeled by the initial state distribution π.
A HMM is completely characterized by defining the number of states N, the initial and transition state probabilities (π and A), and the observation probability distributions B at each state (denoted in short as λ(π, A, B)). In this paper, a left-right Markov model topology was chosen, which allowed transitions only from each state to itself and to the state to its right (figure 4). The observation probability distributions were chosen to be mixtures of M Gaussians with full covariance matrices in order to account for possible observation correlations. To reduce the chance of overfitting the classifier, only HMMs with a maximum of five states and two mixtures were considered. The number of observations was set to the four physiological signals described in section 2.1 (figure 3).
The initial transition matrix A0 and initial state probability π0 were estimated by uniformly distributed random numbers. The observation probability distribution B0 was initialized using k-means clustering on the training observations. In order to find the optimal model parameters λ(π, A, B) given a fixed number of states and mixtures, the initial probability parameters were adjusted using the Baum-Welch algorithm (Rabiner 1989) on the training data. The freely distributed HMM toolbox for Matlab by Murphy (1998) was used.

Figure 4. Illustration of two four-state left-right HMMs for rest and active conditions. The four observations employed in this study were: heart rate (HR), respiratory frequency (RF), mean blood pressure (BP) and skin conductance response (SCR). The observation probability distributions at each state are represented as mixtures of two Gaussians.
Given a specific number of states and mixtures and a sequence of test observations, the likelihood that the observed sequence was produced by either of the two HMMs (one for rest, one for active) was computed using the forward-backward algorithm (Rabiner 1989). Each testing trial was then classified into one of the two models by selecting the model with the highest likelihood. Based on the different numbers of hidden states N = {1, 2, 3, 4, 5} and numbers of mixtures M = {1, 2}, a total of ten models were trained for each of the rest and active classes.
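The whole train-and-classify procedure can be sketched in Python with the hmmlearn package standing in for Murphy's Matlab toolbox (a substitution of convenience, not the authors' setup). The left-right topology is enforced by zero-initializing the disallowed transitions; Baum-Welch re-estimation never turns a zero transition probability positive, so the structure is preserved during training.

```python
import numpy as np
from hmmlearn.hmm import GMMHMM  # assumed stand-in for the Matlab toolbox

def left_right_gmmhmm(n_states, n_mix, seed=0):
    rng = np.random.default_rng(seed)
    m = GMMHMM(n_components=n_states, n_mix=n_mix,
               covariance_type="full", n_iter=50,
               init_params="mcw")        # pi and A are set manually below
    pi = rng.random(n_states)
    m.startprob_ = pi / pi.sum()         # random initial state distribution
    A = np.zeros((n_states, n_states))
    for i in range(n_states):
        A[i, i] = rng.random()           # stay in the current state
        if i + 1 < n_states:
            A[i, i + 1] = rng.random()   # or step one state to the right
        A[i] /= A[i].sum()
    m.transmat_ = A                      # zeros stay zero under Baum-Welch
    return m

def fit_class_model(trials, n_states, n_mix):
    # trials: list of (T, 4) observation arrays belonging to one class
    X, lengths = np.vstack(trials), [len(t) for t in trials]
    model = left_right_gmmhmm(n_states, n_mix)
    model.fit(X, lengths)                # Baum-Welch re-estimation
    return model

def classify(trial, m_rest, m_active):
    # forward-algorithm log-likelihood under each model; pick the larger
    return "active" if m_active.score(trial) > m_rest.score(trial) else "rest"
```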
Evaluation
The metrics used to quantify the classifier performance were accuracy, sensitivity and specificity:

accuracy = (TP + TN) / (TP + TN + FP + FN)
sensitivity = TP / (TP + FN)
specificity = TN / (TN + FP)

where TP is the number of true positives (correctly detected active periods), TN is the number of true negatives (correctly detected rest periods), FP is the number of false positives (rest periods classified erroneously as active) and FN is the number of false negatives (active periods classified as rest). The performance metrics were calculated for each k-fold partition and averaged over one complete cross-validation run. A well-known problem with HMMs is their lack of convergence to a global maximum: changing the initial model parameters λ(π0, A0, B0) results in a different optimized trained model. In order to reduce the performance variability due to random initial model parameters, we ran the evaluation procedure a total of 7 times, and report the mean and the standard deviation over these runs. The chance level in a two-class BCI is not exactly 50%, but 50% with a confidence interval at a certain level (95%) that depends on the number of training trials (Muller-Putz et al 2008). The confidence interval was calculated using a binomial distribution with p = 0.5, considering 30 trials per class (two sessions) and 15 trials per class (one session). This yielded upper confidence limits of 64.1% when considering the two sessions and 66.5% for one session. Thus, the obtained performance metrics were considered above the chance level when their means were significantly higher than the corresponding upper confidence limits. The significance level was set to p = 0.05.
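As a concrete counterpart to these definitions, the helper below computes the three metrics from a confusion count, together with an exact-binomial version of the chance threshold. Note that the paper follows Muller-Putz et al (2008), whose interval construction may differ from this simple exact-binomial approximation.

```python
from scipy.stats import binom

def metrics(tp, tn, fp, fn):
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # fraction of active periods detected
    specificity = tn / (tn + fp)   # fraction of rest periods detected
    return accuracy, sensitivity, specificity

def chance_upper_limit(n_trials, alpha=0.05):
    # smallest accuracy that a random (p = 0.5) classifier exceeds with
    # probability below alpha, via the exact binomial distribution
    k = int(binom.ppf(1 - alpha, n_trials, 0.5)) + 1
    return k / n_trials
```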
Results
Due to technical problems, the physiological signals of subject 6 were recorded only during the first session. Figure 5 reports the per-participant sensitivity, specificity and accuracy values obtained using the four features with the combination of number of states N and number of mixtures M that yielded the maximum accuracy. The optimum number of states and mixtures per subject, and the mean and SD of the sensitivity, specificity and accuracy for each subject, are reported in table 1. All subjects performed significantly above the a priori 64.1% chance threshold (66.5% for subject 6).
The average classifier accuracy over the six participants was 84.5%. The optimization of the HMM parameters (N and M) for each user required a calibration session with known active and rest intervals from the fourfold cross-validation training data set. The calibration phase, which iterates over each combination of number of states and mixtures, takes a relatively long time. In practical BCI situations, however, it would be desirable to reduce the time required for calibration, while still obtaining good accuracies. In order to reduce the calibration computation time during normal BCI investigations, it seems reasonable to use a fixed HMM structure. The effect that fixing the number of states and mixtures had on the overall accuracy was studied. It was found that the number of states and mixtures that maximizes the overall accuracy (N = 3 and M = 2) resulted in only a slight reduction of 0.6% in the average classifier accuracy (83.9%). The mean and SD of the sensitivity, specificity and accuracy for each subject using a fixed HMM structure are reported in table 1. The accuracies of all subjects remained above the chance level. Some subjects showed higher accuracy levels than others (i.e. subjects 3, 5 and 6; Mann-Whitney test, p = 0.05), probably due to intersubject differences in the ANS responses.

Table 1. Classification sensitivity, specificity and accuracy (mean ± SD) across seven complete cross-validation runs for each subject, for the best combination of number of states and number of mixtures (personalized HMM) and for a fixed HMM structure (N = 3, M = 2). The optimum number of states N and mixtures M for each subject in the personalized HMM are also reported.

Subject | Personalized HMM Sensitivity | Specificity | Accuracy | N | M | Fixed HMM Sensitivity | Specificity | Accuracy
s1 | 75.5 ± 7.2 | 78.8 ± 9.9 | 77.1 ± 5.7 | 3 | 2 | 75.5 ± 7.2 | 78.8 ± 9.9 | 77.1 ± 5.7
s2 | 75.6 ± 4.0 | 81.2 ± 8.5 | 78.4 ± 5.6 | 5 | 2 | 76.7 ± 5.5 | 79.5 ± 7.5 | 78.1 ± 5.0
s3 | 99.0 ± 3.7 | 92.2 ± 2.5 | 95.6 ± 1.6 | 2 | 2 | 98.0 ± 2.8 | 92.9 ± 3.0 | 95.4 ± 2.4
s4 | 78.4 ± 1.7 | 79.7 ± 7.8 | 79.1 ± 3.5 | 2 | 1 | 77.6 ± 4.7 | 78.3 ± 5.7 | 77.9 ± 2.8
s5 | 79.0 ± 4.3 | 87.5 ± 4.7 | 83.3 ± 1.7 | 3 | 2 | 79.0 ± 4.3 | 87.5 ± 4.7 | 83.3 ± 1.7
s6 | 95.2 ± 3.3 | 91.7 ± 2.9 | 93.5 ± 2.7 | 4 | 2 | 92.6 ± 2.4 | 90.5 ± 5.6 | 91.5 ± 3.0
Average | 83.8 ± 3.7 | 85.2 ± 6.1 | 84.5 ± 3.5 | - | - | 83.2 ± 4.5 | 84.6 ± 6.1 | 83.9 ± 3.4

In order to investigate the changes in the different physiological signals, the mean of the last 5 s of the rest periods was compared to the mean value from 5 s during the activity periods. Because different physiological signals have different latency responses (see, e.g., figure 3), different times after the onset of the activity period were used for each signal (3 s post-stimulus for the SCR and 5 s for all the others) (Zimmermann et al 2011). Paired t-tests were used to evaluate the presence of a significant change in each physiological signal for each subject. The significance level was set to 5%. The resulting p-values are listed in table 2.
A significant correlation between the accuracy and the number of significant features was found: subjects with a larger number of significant features (e.g. s3 with four significant features, table 2) showed higher accuracy (Pearson's correlation, R² = 0.78, p = 0.02). In order to investigate the effect that non-significant features had on the classifier accuracy, we performed feature reduction (i.e. selection of a subset of features for classification). Several methods for feature reduction exist. A common technique to rank individual features is through ANOVA, a statistical method used to rank the features that show the most significant difference between two classes; only the n most significant features are then used in the classifier (Wagner et al 2005). Analysis of variance was chosen to reduce the feature dimensionality because the final extracted features are a subset of the original features, while other popular methods (e.g. principal component analysis) create new transformed features. Furthermore, ANOVA is computationally less expensive than recursive feature reduction algorithms (e.g. sequential forward selection).
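A minimal version of this ranking step, assuming pooled per-class sample matrices of shape (samples, features), could look as follows; the classifier retraining loop is only indicated in the comments.

```python
import numpy as np
from scipy.stats import f_oneway

def rank_features(X_rest, X_active):
    # one-way ANOVA per feature (rest vs active); smaller p = more significant
    pvals = [f_oneway(X_rest[:, j], X_active[:, j]).pvalue
             for j in range(X_rest.shape[1])]
    return np.argsort(pvals)   # feature indices, most significant first

# order = rank_features(Xr, Xa)
# for n in range(1, 5):        # grow the feature list one feature at a time
#     retrain the dual HMM on observations[:, order[:n]], recompute accuracy
```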
Based on the values reported in table 2, the p-values were ranked from most to least significant (order shown in brackets). For each subject, the feature with the lowest p-value was iteratively added to the feature list, and the classifier performance was recalculated. The classification accuracies (mean ± SD) across the seven complete cross-validation runs for each subject, for the best combination of number of states and number of mixtures and for different numbers of features, are reported in figure 6. Even with only one feature, all subjects performed significantly better than chance. Some subjects showed a slight decrease in performance as fewer features were used, while some showed the opposite tendency. Although the overall performance of all subjects decreased when a limited number of features was used, the overall performance when using only one feature (80.9%) and when using all four features (84.5%) was not significantly different (paired t-test, p > 0.05).
Discussion
The goal of this study was to investigate the feasibility of detecting motor execution (specifically, isometric pinching in index finger/thumb opposition) with a BMI based only on measurements of physiological signals from the ANS. Four physiological features were measured (mean blood pressure, breathing rate, skin conductance response and heart rate) during an isometric pinching task. The acquired physiological signals were used to train a classifier based on a dual HMM. We hypothesized that activity in cortical areas can be detected by monitoring changes of the ANS, instead of measuring directly at the supraspinal level. We performed an experiment with six healthy subjects, the results of which showed that motor execution can be accurately classified based only on peripheral physiological signals.
We hypothesized that such a BMI based on ANS could achieve similar performance in detecting motor execution as a BCI directly based on signals from the central nervous system. This study showed that motor execution can be accurately classified based only on peripheral physiological signals with an accuracy of 84.5%. These results are in line with recent BCI studies that employed EEG to detect movement intention (Boye et al 2008) and motor imagery (Tsui et al 2009). Few non-invasive BCIs, mainly based on fMRI techniques, have shown higher accuracy levels (Lee et al (2010) achieved an accuracy above 90%). However, the infrastructural needs, electromagnetic compatibility limitations and high associated costs make fMRI-based BCIs inappropriate for standard robotic rehabilitation. On the other hand, our results slightly outperformed fNIRS-based BCIs employed to classify mental tasks such as music imagery and mental arithmetic (Falk et al 2011).
In this study, fNIRS was also employed to simultaneously record brain activity in motor areas (contralateral primary motor cortex and ventral premotor cortex). The brain hemodynamics recorded with fNIRS were employed to train a similar dual HMM classifier (Zimmermann et al 2012). The results showed that the classifier based only on the signals measured from the central nervous system with fNIRS achieved an average accuracy of 79.4%, i.e. a slightly lower performance than the classifier based on the ANS response presented here. On the other hand, when the four physiological features described in this paper (mean blood pressure, breathing rate, skin conductance response and heart rate) were added as auxiliary observations into the HMM, the classification accuracy increased significantly to 88.5%. This is in line with recent studies on hybrid BCIs that used brain imaging methods in conjunction with self-paced physiological signals to improve the classification performance (for a review see Pfurtscheller et al (2010)). While physiological measures have been successfully employed to improve the accuracy of hybrid BCIs (Falk et al 2011, Zimmermann et al 2012), the ANS responses have never been employed as the sole information source to detect motion execution. This paper aims at filling this gap and investigates the feasibility of a BMI to detect motor execution by monitoring only changes in peripheral autonomic signals, reaching accuracy levels similar to hybrid BCIs. Although recent studies have already introduced the concept of BMIs in which physiological signals can be self-controlled and used to detect functional intent (Blain et al 2008), these previous studies are fundamentally based on self-paced physiological signal changes, and thus subjects must be active agents in the changes of their ANS. In our approach, subjects were not requested to change their normal physiological responses based on the protocol stimuli. Most of the studies that worked with similar non-self-paced biosignal decoders are found in the field of psychophysiology, where the goal is to estimate subjects' emotions (Kim and Andre 2008), mental workload (Wilson and Russell 2003, Collet et al 2009) and activity engagement (Kushki et al 2012), instead of functional intent.
A recently completed study investigated the use of non-self-paced peripheral autonomic signals to detect music imagery (Falk et al 2010). Although the goal was to decode music imagery instead of motor execution, there are some relevant similarities between the two studies. First, both studies use only physiological autonomic signals (although they used skin temperature, while here mean blood pressure was used). Both studies optimized the number of states and mixtures of a dual HMM classifier and achieved similar accuracy levels (93% in Falk et al (2010), and 84.5% here). The lower accuracy level achieved in our work may be due to the fixed observation window length: they optimized the window lengths per subject, while we fixed them for all subjects in order to reduce the time required to train the classifier.
HMMs are well known in temporal pattern recognition applications. However, despite their greater ability to classify time-sequential data compared to discriminative approaches (Sitaram et al 2007, Obermaier et al 2001), they have barely been used in physiological signal classification (Kulic and Croft 2007, Falk et al 2010). In this paper, we showed that HMMs are a valuable tool to classify motor execution based on time-varying physiological signals. There are, however, some issues with HMMs that must be considered. The per-subject optimization of the number of states and mixtures requires a large computation time. Here, we studied the effect that fixing the number of states and mixtures for all subjects had on the overall classifier accuracy, and found that the optimal fixed model resulted in just a slight reduction of the average classifier accuracy (83.9%). Thus, in order to reduce calibration computation time during normal BCI investigations, it seems reasonable to try to find a reliable fixed HMM structure in future experiments. Some subjects performed significantly better than others. We also noted that the subset of physiological signals with significant changes was different for each participant. We found a significant correlation between the classifier accuracy and the number of significant features in each subject. We performed feature reduction using statistical tools to test how the reduction of observations affected the classifier performance. We did not find an increase in accuracy in subjects with a reduced number of significant features; thus, it was not the inclusion of non-significant features that decreased the classifier performance. Interestingly, we also did not find a clear accuracy decrease when we reduced the number of significant physiological signals used in the classifier. The effect of removing features was dependent on each subject's specific ANS responses.
This finding contradicts recent studies that found a clear monotonic increase in classification performance as more physiological signals were added to the decoder (Falk et al 2010, Kushki et al 2012). A possible explanation is that different autonomic systems may react in different ways while performing a movement, compared to a more homogeneous response to music imagery (Falk et al 2010) and activity engagement (Kushki et al 2012). Furthermore, the study reported by Kushki et al (2012) was performed with individuals with cerebral palsy and muscular dystrophy, who presented some physiological differences due to their disabilities (i.e. features related to respiration and the cardiovascular system may have been affected in some subjects). An a priori detection of the optimal number of features based on training data, as suggested by Kushki et al (2012), could improve the classifier performance for each subject. As an indicative value, selecting the optimum number of features based on all trial data resulted in an overall performance of 91.6%. However, such an optimization process could also increase the time required to train the classifier.
A major challenge in our research is the comparatively long time period needed before sufficient information is available to make a decision. As expressed by Blain et al (2008), while some EEG-based BCIs have achieved information transfer rates of up to 27.15 decisions min−1, to date BMIs that use only peripheral autonomic signals require at least 30 s to make an accurate detection. In this study, a very conservative observation window length of 15 s was fixed (chosen based on the minimum rest period length). Furthermore, a shift of 5 s was applied to account for physiological signal latencies. This led to a maximum detection delay of 20 s. Although 20 s may seem an unreasonable delay for BCI applications, for severely disabled individuals who rely on access technologies to move and communicate, speed may not be critical. In a survey of 17 patients in the final stage of ALS who were extensively informed about the possibilities and advantages of an invasive electrode-based BCI, only one agreed to implantation. The other patients refused the surgical procedure and preferred the slow non-invasive system, arguing that time is no issue if one is completely paralyzed (Birbaumer 2006). Priority will be given in further research to shortening this relatively long time delay. A possibility could be to select the window lengths optimally for each subject in order to reduce the delay in subjects with faster ANS responses.
The study reported here was conducted in healthy subjects without neurological lesions. We chose to first study healthy subjects in order to evaluate the normative responses of the non-injured ANS during motion execution. Results from this study provide an important starting point and a framework for comparison for future studies with subjects with neurological injury. As presented in this paper, physiological signals vary significantly between subjects. Neurological injuries, such as stroke or spinal cord injury, may affect the autonomic system, which may introduce further variations in the peripheral signals. For example, traumatic brain injury survivors are known to show abnormalities in the autonomic system (hypofunction or hyperfunction) and show asymmetric sweating with cold hemiplegic limbs that can affect the SCR signal (Korpelainen et al 1993, 1999). Patients with complete spinal cord injury showed no changes in electrodermal activity below the level of injury (Cariga et al 2002). SCR was shown to be significantly different in patients with multiple sclerosis (Yokota et al 1991). On the other hand, some recent studies have shown the feasibility of using some of the physiological signals presented here (i.e. heart rate, SCR and breathing rate) in stroke rehabilitation (Koenig et al 2011) and with individuals with severe physical disabilities, such as cerebral palsy and muscular dystrophy (Kushki et al 2012). Future work with subjects with neurological injuries will focus on determining whether the injured ANS can be consistently employed to control a body-computer interface. We speculate that a good classifier accuracy could still be achieved if an analysis of the physiological signals of patients is performed prior to training the classifier. Weak or absent physiological responses can be discarded by means of feature reduction algorithms (Kushki et al 2012), similar to the statistical approach used in this paper.
The experiment design also suffers from some limitations. It is well known that attention and mental load significantly affect ANS responses. It is therefore possible that the differences between 'activity' and 'rest' periods reported here are associated with mental load instead of motor execution. A well-designed control task (e.g. mental arithmetic, counting backwards) is needed to conclude that what is classified is in fact motor execution. Moreover, the proposed method was designed to detect motion execution using data from healthy participants who were actively pinching. Motor imagery has been proposed as a strategy to detect motion intention in BCI studies (Falk et al 2011, Tsui et al 2009). However, we chose to first study isometric pinching partly for convenience (i.e. it is a simple, well-controlled task that minimizes subject movement), but also because motor imagery does not allow for a systematic assessment of the subject's performance (i.e. motor imagery ability varies strongly among subjects (Sharma et al 2006)). It is important to establish the normative mechanisms of the ANS during motion execution, thereby providing a framework for comparison for future studies with motor imagery. Future work will focus on testing with a larger group of subjects to determine whether motor imagery yields similar results.
Finally, though physiological signals are easy to measure, they are also affected by different environmental disturbances (e.g. auditory or visual stimuli, external temperature) and by the amount of physical activity. In this study, all these disturbances were minimized by conducting the physiological measurements in a silent, dark room while subjects lay supine. However, such a setup is not realistic in a standard therapeutic environment. Ideally, the use of a wide range of different physiological features could account for these undesirable disturbances. Although physiological signals are prone to habituation, no signal degradation was observed during the experiment described here. A possible explanation is that the random presentation of sequences and the complex reference force profile constantly engaged subjects' active participation. In order to reduce the negative effects of physiological signal habituation, future experimental protocols will be designed to actively engage the subject in an assist-as-needed manner (Zimmerli et al 2012). The equipment employed to measure biosignals in this study was selected for convenience (it already existed in our laboratories), but other compact, wireless and easy-to-use solutions exist on the market (e.g. Bluetooth heart rate monitors).
Conclusion and outlook
This study showed the feasibility of a BMI to detect motor execution by monitoring only changes of the ANS. Motor execution was accurately classified using a dual HMM classifier based only on peripheral physiological signals, with an accuracy level of 84.5%. These results encourage further research on the use of the autonomic system in BMIs for the treatment of severely impaired neurological patients.
The long-term goal of this project is to develop novel human-oriented strategies that enhance the interaction between the robotic system and the user and to incorporate them into robotic systems (e.g. for upper extremity neuro-rehabilitative training). In particular, the robotic system should estimate intention in a continuous manner so that it can optimally assist a human in the anticipated reaching, grasping or manipulation movement. With this approach, participants will control their own movements, while the robotic device compensates for weakness. The use of physiological signals and binary classifiers may not be enough to achieve the ultimate goal of a continuous decoder. Hence, we plan to use sensor fusion, such that the most likely motor intention can be extracted from a pool of different information sources. These sources include not only physiological recordings, but also more sophisticated context analysis (task knowledge and motion history information), gaze and head movement recordings, and recordings of dynamic and kinematic movement components. As an ultimate goal, we plan to incorporate the brain into the loop, integrating measurements of cortical activation acquired through fNIRS. | 2018-04-03T06:10:37.251Z | 2013-01-01T00:00:00.000 | {
"year": 2013,
"sha1": "2fdb7029bb6babf39ba9bb107eea6625176e763d",
"oa_license": "CCBY",
"oa_url": "https://boris.unibe.ch/117040/1/Marchal-Crespo_2013_Physiol._Meas._34_35.pdf",
"oa_status": "GREEN",
"pdf_src": "IOP",
"pdf_hash": "65fbd89938210e21457dab4935868d2c9a619e5e",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Physics",
"Medicine",
"Computer Science"
]
} |
234601901 | pes2o/s2orc | v3-fos-license | Ethnobotanical study of medicinal plants from West Azerbaijan, Northwestern Iran
Background
This study identified the applications and the local (Azari) and Persian names of wild and cultivated plants collected for medical purposes by the Azerian people of the West Azerbaijan region in the Province of West Azerbaijan, Iran. The aim of this study was to collect information from the local population concerning the use of medicinal plants of Khoy and to determine the relative importance of the species: the use value (UV) of each species and the informant consensus factor (ICF) were surveyed and calculated in relation to medicinal plant use.
Methods
A field study was carried out over a period of approximately 2 years (2014-2015). A questionnaire was administered to the local people through face-to-face interviews. Demographic characteristics of the participants, names of the local plants, their utilized parts and the preparation methods were asked. The plant species were collected as herbarium specimens. The collected data were used to calculate the ICF and the plant use values. 82 plants were found to be used for medical purposes in the study area.
Results
The results showed that Thymus kotschyanus Boiss. exhibited the maximum use value (0.58), while the highest ICF was recorded for the cold, flu and fever category (0.61).
Conclusions
The results of this research showed that folk medicinal plants are still used in the studied area, and evaluation of the pharmacological activity of these indigenous medicinal plants is recommended.
Introduction
In spite of great progress in modern pharmacology and the introduction of several new synthetic medicines, plants and their natural derivatives are widely used for various pharmacological purposes by people in different regions [1-6]. These uses are due to the presence of different phytochemicals, which make plants a major source of natural products for various medicinal applications. However, for more than 1000 years people have used plants based on information passed from mouth to mouth across generations, without any detailed knowledge of their phytochemical constituents (Hayta et al., 2014). Uncontrolled harvest of medicinal plants by local people has increased the risk of extinction of many species and, subsequently, the loss of local knowledge, including instructions for using those plants. Documentation of indigenous knowledge through ethnobotanical studies is important for the conservation and utilization of biological resources; therefore, establishing the local names and the indigenous uses of plants will be beneficial (Bagcı, 2000). In general, ethnobotany is the scientific investigation of plants that are used in indigenous cultures for food, medicine, rituals, building, household implements, firewood, pesticides, clothing, shelter and other purposes (Ugulu, 2011). By revealing and recording the hidden folk medicinal uses of local plants, ethnobotany has become an important part of our world. Ethnobotanical surveys include interviewing local people, using available data in the literature and the folklore of each region. Iran has an admirable past regarding traditional medicines, especially plant-based medicines (Naghibi et al., 2005). Historical evidence proves that Iran is among the most ancient civilizations in the use of medicinal plants. Iran, with 8000 plant species and 1727 endemic species, is one of the ten important sources of speciation in the world (Yousofi, 2007). Ethnobotanical surveys have been conducted in parts of the country (Ghorbani, 2005). The northwest of Iran has a rich flora, due to its diverse climates and high number of ecological zones. This diversity in flora provides a rich source of medicinal plants, which has been utilized by Azerian people since the far past. Few studies have been undertaken to document and preserve medicinal plant knowledge in West Azerbaijan, especially in the Khoy region. We found only one study of West Azerbaijan medicinal plants, conducted by Miraldi et al. (2000); however, it did not evaluate the medicinal plants that grow in the Khoy region, and it is necessary to document the biodiversity of medicinal plants used in folk medicine in the study area. The city of Khoy possesses rich sources of different herbs, and people have ready access to them, especially in rural areas. The main objective of the current study has therefore been to compile and document information on the applications of plants by the people and the therapies offered by conventional healers in this area.
Methodology

Study area
Khoy is located in the northwest of Iran, in West Azerbaijan province, at 38°33′01″N 44°57′08″E, with an average altitude of 1139 m above sea level. It is nicknamed the Sunflower city of Iran. In the past, Khoy was the gateway of the Parthian Empire in the northwest. The Köppen-Geiger climate classification system classifies its climate as cold semi-arid. The Qotur river, which passes through the city, originates from the high altitudes along the Turkish border. Like Iran's other western highlands, the region is surrounded by mountains, and the area lies along Mount Ararat (Agri Dagh). The city is located in the vicinity of mountains such as Chelekhaneh Mount (north), Avrin Mount (southeast) and the Aladagh mountain range to the west of the city. It shares an international border with Turkey (Van) to the west. District Chaypare is located to the north, Salmas to the south, and district Marand to the west of Khoy. It is divided into four counties, including the Central District.

Interviews with local people

Interviews were done during the busy hours of well-known areas visited by the citizens of West Azerbaijan and its villages. A questionnaire was administered to the local people through face-to-face interviews. During the interviews, the demographic characteristics of the study participants, local names, utilized parts and preparation methods of the plants were recorded. The people who participated in the study were requested to indicate the wild plants they used.
Plant Materials
The field study was carried out over a period of approximately 2 years (2014-2015). During this period, information about the medicinal use of 72 wild and 20 cultivated plants was collected. The plants were pressed in the field and prepared for identification. Plants were identified using the standard texts Flora Iranica (Rechinger, 1965-2008) and Flora of Turkey and the East Aegean Islands (Davis, 1965-1985; Davis et al., 1988), and were compared with the specimens in the Tabriz University Herbarium. The names of plant families were listed in alphabetical order.
Data analysis
The data were analyzed through different quantitative techniques. For this purpose, different approaches are considered for quantitative as well as qualitative analysis of ethnobotanical data. These approaches depend on the objectives of the researcher and the nature of the study, and aim at an objective evaluation of the reliability of the conclusions based on the data (Hoft et al., 1999). The indigenous medicinal information on plant species was analyzed using two different indices: use value (UV) and informant consensus factor (ICF).
Informants consensus factor (ICF)
The importance of each species can be represented using different indices, including relative frequency of citation (FC), informant consensus factor (ICF) and cultural importance index (CI). Frequency of citation (FC) is defined as the number of informants who refer to a useful species. The informant consensus factor (ICF) index was used to determine the uniformity of the recorded information. First, the ailments were categorized, and then all citations were placed into the related categories that each plant was claimed to affect. This index was calculated using the following formula:

ICF = (Nur - Nt) / (Nur - 1)

where Nur and Nt denote the number of use reports in each use category and the number of taxa taken as medicine, respectively. The higher the value of the ICF, the more the informants agree on the use of the species in the use category.
Use-value (UV)
The UV, a quantitative method to determine the relative importance of indigenous plant species, was calculated using the following formula:

UV = ΣUi / n

where UV represents the use value of a species, Ui represents the number of uses mentioned by each informant for a given species, and n is the total number of informants interviewed for a given species (Phillips and Gentry, 1993).
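To make the two indices concrete, a direct Python transcription is sketched below; the example numbers are illustrative only, not values taken from the paper's tables.

```python
def icf(n_ur, n_t):
    """Informant consensus factor: n_ur use reports over n_t taxa."""
    return (n_ur - n_t) / (n_ur - 1)

def use_value(uses_per_informant):
    """UV: total uses mentioned across informants / number of informants."""
    return sum(uses_per_informant) / len(uses_per_informant)

print(icf(60, 24))                 # 0.61: 60 use reports spread over 24 taxa
print(use_value([1, 0, 2, 1, 0]))  # 0.80: five informants, four uses cited
```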
Results And Discussion
Demographic characteristics of study participants

Demographic characteristics of the respondents were determined and recorded through face-to-face interviews. Of the surveyed people who had knowledge of plants, 55.8% were male and the rest were female. We interviewed 120 persons, all over the age of 25. All female participants were housewives, whereas 41.5% of the males were farmers and 29.2% were unemployed, while the others had various occupations. The demographic characteristics of the interviewees are presented in Table 1. The most frequently used plant parts were aerial parts (32%), followed by leaves (20%), seeds (13%) and fruits (12%) (Figure). Decoction was the second most common preparation method, which involves boiling a specific part of the plant in water until it reaches half its initial volume. Among preparation methods, those which lead to orally consumable products are preferred (Mood, 2008; Brandao et al., 2012; Sadeghi et al., 2014). It was observed that native people of this region used indigenous plants after drying; this method is preferred because the dried medicinal plants can be stored at home and used all year round, whenever needed. In West Azerbaijan, preparing distillates at home is a widespread tradition named "Araghgiri", which produces aromatic waters with homemade instruments named "Neygazan". For example, the distillate obtained from Mentha is used to treat abdominal pains and is often used as a carminative. Most of the people in Khoy and the neighboring area use the aromatic water of Rosa damascena, "Golab", to treat stomachache, as a sedative, for treating burns and as a natural brightening cleanser. In addition, it is used as a flavor, for example in 'Doogh' and 'Fereni' (a traditional drink and food, respectively), and the essential oil of Rosa damascena is used in perfumery. Golab also has an important role in the funerals of Azerian people, who use it for washing graves and for cooking "Halva", a traditional pastry for mourning ceremonies. Different approaches used for medicinal preparations of plants are presented in Table 2. Consumption of plants in infusion form is the most frequent (40 species), followed by decoction (39 species), raw (20 species) and powder (8 species) (Fig. 4).
Sadat-Hosseini et al. [11] carried out ethnobotanical studies in the south of Kerman Province, collecting data from the native people. The majority of the remedies are used as decoctions, followed by liniments and infusions. As in our findings, most of the herbal preparations are consumed orally, whereas in a few cases the topical mode is used. The oral mode of application is the most preferred form of herbal preparation among the different ethnic groups of Iran. However, according to Vijayakumar et al., most plant preparations are used in the form of paste (32%), followed by powder (22%), and decoction and juice (20%). Similarly, oral use is the most frequently applied mode, followed by topical use. A perusal of the published studies reveals large differences between cultures in terms of the mode of herbal preparation and application, but most preparations are applied orally.
Data analysis (ICF and UV calculations)
The use of medicinal plants as conventional and modern drugs shows that they are acceptable. There may be some plants which are currently not used for medicinal purposes but may actually have medicinal effects (Kaya, 2006). In order to classify the major health problems of the interviewees, the reported ailments were grouped into 11 categories based on the information gathered from the interviewees. Native people of Khoy use plants for medical purposes mostly for the treatment of gastro-intestinal diseases (e.g. gastric pains, stomach disorders, carminative use, constipation, diarrhea, hemorrhoids and laxative use), cold, flu and fever (60 citations), diabetes (32 citations), skin diseases (28 citations), nervous system/sedative uses (20 citations), cardiovascular disease (16 citations), diuretic uses (14 citations), infections (13 citations), rheumatic pain (12 citations), and respiratory/throat diseases and cancer (6 citations) (Fig. 5).
Helichrysum arenarium (L.), Lepidium draba L., Mentha longifolia (L.) Huds., Thymus kotschyanus Boiss., Alcea kurdica, etc. were reported to be among the plant remedies used for treating cold, flu and fever, the category with the highest ICF score (0.61) (Table 3). They expressed that Gundelia tournefortii could be an appropriate adjunctive medicinal plant to help reduce the major risk factors of CAD, such as cholesterol, LDL-c and BMI.
In addition, local people of Khoy widely use an expensive truffle, named "Donbalan", as an anti-inflammatory, antioxidant and tonic. They believe that the truffles found in this region are an important source of protein and use them for their health. White truffles, Terfezia boudieri Chatin., are found in Khoy and the neighbouring area. Collecting this kind of truffle is a seasonal temporary job for farmers, which earns a proper income in some years. It is used to treat gastric cancer, hepatitis A, B and C, arthritis, bronchitis, asthma, stomach ulcers, blood pressure and cholesterol. Plant parts used for the treatment of various illnesses include aerial parts, leaves, stems, roots, bark, milky latex, oil seeds, flowers and fruits. Aerial parts were the most used plant part (34), followed by fruit and seed (24), leaves (21), flower (9), latex (4), root/rhizome/bulb (9) and style, respectively. These plants are used for the treatment of gastro-intestinal diseases, cardiovascular problems, diabetes, skin diseases, cold and respiratory tract problems, etc.
The relative importance values of the plant species and the informant consensus factor (ICF) were calculated. The ICF values ranged between 0.23 and 0.61. Diseases with high ICF values according to the reported disorders are cold, diabetes and gastrointestinal diseases. We observed that these plants are used in different parts of the world for the treatment of the same or similar diseases. Some plants mentioned in this paper may also be edible or have other applications.

Figure 4. Mode of preparations and their percentages.
Figure 5. Percentage of species and citation in each use category.
Figure 6. Medicinal plants with the most use | 2020-08-27T09:05:24.049Z | 2020-08-24T00:00:00.000 | {
"year": 2020,
"sha1": "e25eeea9f0da18a2b6a2d4f2e94d4728caf8fdd7",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-55496/v1.pdf",
"oa_status": "GREEN",
"pdf_src": "Adhoc",
"pdf_hash": "87957e2c9b5d7008c94ce362abcd872150b4518e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Geography"
]
} |
5777933 | pes2o/s2orc | v3-fos-license | BMC Complementary and Alternative Medicine
Background: Ginger (Zingiber officinale Rosc) is a natural dietary component with antioxidant and anticarcinogenic properties. The ginger component [6]-gingerol has been shown to exert anti-inflammatory effects through mediation of NF-κB. NF-κB can be constitutively activated in epithelial ovarian cancer cells and may contribute towards increased transcription and translation of angiogenic factors. In the present study, we investigated the effect of ginger on tumor cell growth and modulation of angiogenic factors in ovarian cancer cells in vitro.
Background
In the United States, ovarian cancer is the most lethal gynecologic malignancy and represents the fifth leading cause of cancer death among women [1]. Key goals in the management of this disease are prevention, early detection, and prolongation of disease-free intervals and overall survival upon development of the disease. Most primary ovarian cancers arise from malignant transformation of the surface epithelium. Although the specific molecular events responsible for this transformation remain unknown, two general theories have been proposed: incessant ovulation [2,3] and excess gonadotropin secretion [4]. Ovulation is essentially a natural inflammatory process; therefore, a pro-inflammatory state is felt to contribute to ovarian carcinogenesis [5,6]. There is ample evidence that inflammation is causally linked to carcinogenesis [7] in other tumor types, and targeting mediators of inflammation has been used as a strategy to both prevent and treat cancer.
Our understanding of ovarian cancer carcinogenesis is limited. Many of the genes that mediate inflammation and adaptive survival strategies in cancer cells, including self-sufficient growth, insensitivity to growth-inhibitory signals, evasion of apoptosis, limitless replicative potential, and sustained angiogenesis [8], are under the transcriptional control of NF-κB [9]. Constitutive activation of NF-κB has been described in many tumor types including ovarian cancer [9], suggesting that targeting NF-κB may have anti-inflammatory and anti-neoplastic effects in this tumor type.
Among the myriad of pro-angiogenic cytokines known to induce tumor angiogenesis, vascular endothelial growth factor (VEGF) is the best characterized. In vitro and in vivo studies have shown that VEGF is critically involved in various steps of ovarian cancer carcinogenesis, and recent studies indicate that serum VEGF is an independent prognostic factor for patients with all stages of ovarian cancer [31]. Interleukin-8 (IL-8) was originally found to function as a macrophage derived pro-angiogenic factor [32], and has since been shown to affect cancer progression through mitogenic, angiogenic and motogenic effects [33]. Increased blood levels of IL-8 have been found in ovarian cancer patients [34], and IL-8 has been shown to stimulate proliferative growth in ovarian cancer cells in vitro [35].
In the present study, we tested the hypothesis that ginger could exert inhibitory effects on cell growth, and modulate the production of angiogenic factors in epithelial ovarian cancer cells. Our data reveals that ginger significantly inhibits ovarian cancer cell growth, and that the major bio-active component of ginger is 6-shagoal. More-over, ginger inhibits NF-κB activation and subsequent secretion of the angiogenic factors IL-8 and VEGF in ovarian cancer cells.
Chemicals
Dried whole ginger root powder extract (1:1 extraction solvent, 50 percent ethanol/50 percent water), standardized to 5% gingerols, was obtained from Pure Encapsulations, Inc. (Sudbury, MA). All studies were conducted using a single batch of ginger root extract. The content of gingerols in the ginger root extract was independently verified using appropriate high performance liquid chromatography methods [36]. The total gingerol content in the ginger root extract (12.3 mg/250 mg; 4.9 percent) was confirmed at the end of the study at Integrated Biomolecule (Tucson, AZ). For in vitro studies, a stock solution was prepared by vortexing 50 mg of powder into 1 ml of aqueous dimethyl sulfoxide (DMSO). Insoluble particulates were centrifuged to the bottom of the eppendorf tube, and the supernatant was then further diluted into cell culture media at the concentrations described. Cisplatin was obtained from Bedford Laboratories (Bedford, OH). The ginger standards 6-gingerol, 8-gingerol, 10-gingerol and 6-shogaol were purchased from ChromaDex (Santa Ana, CA). Standards were solubilized in DMSO, and molarity was determined per supplier recommendations. Sulforhodamine B was obtained from Sigma-Aldrich, Inc. (St. Louis, MO).
In Vitro Growth Inhibition Assays
The sulforhodamine B assay was used according to the method of Skehan et al. [37]. Cells were plated in a 96-well format (3 × 10^3 cells/well) and, twenty-four hours after plating, were exposed to DMSO, ginger, or ginger component standards for the indicated time periods. At the end of drug exposure, cells were fixed with 50% trichloroacetic acid and stained with 0.4% sulforhodamine B (Sigma-Aldrich, St. Louis, MO) dissolved in 1% acetic acid (100 μl/well) for 30 minutes, and subsequently washed with 1% acetic acid. Protein-bound stain was solubilized with 150 μl of 10 mM unbuffered Tris base, and cell density was determined using a colorimetric plate reader (wavelength 570 nm). All samples were run in triplicate. Cell number and viability of treated cells were confirmed using the trypan blue dye exclusion assay.
Cell Lines, Plasmids and Immunoblotting
SKOV3 ovarian cancer cells were obtained from the American Type Culture Collection (Manassas, VA). Dr. K. Cho (University of Michigan) generously provided the A2780, CaOV3, and ES2 cell lines. SKOV3, CaOV3, and ES-2 cells were originally harvested from patients with recurrent ovarian cancer. Ovarian cancer cells were maintained in DMEM supplemented with 10% fetal bovine serum, 100 units/ml penicillin and 100 μg/ml streptomycin (Invitrogen Corporation, Grand Island, NY). Human ovarian surface epithelial cells were obtained, after Institutional Review Board approval, from patients undergoing surgery for non-ovarian-cancer gynecologic indications. Cells were initially cultured in Medium 199/105 (1:1) supplemented with 10% fetal bovine serum, 100 units/ml penicillin, 100 μg/ml streptomycin and 10 ng/ml EGF during primary culture. After establishing adequate growth, cells were cultured with the above media, excluding EGF, prior to use in assays [38]. CaOV3 and SKOV3 cell lines were transfected with the indicated expression plasmid using LipofectAMINE Plus or AMAXA electroporation, respectively.
NF-κB promoter-dependent Luciferase Reporter Gene Activation
CaOV3 and SKOV3 cells were plated in 12-well plates. Twenty-four hours after plating, cells were transfected with the reporter plasmid pBVIx-Luc. This plasmid contains six NF-κB recognition sites within the promoter sequence linked to the luciferase reporter gene, and was generously provided by Dr. Valerie Castle (University of Michigan, Ann Arbor, MI). Following transfection, cells were cultured overnight and then treated with DMSO vehicle control or ginger (75 μg/ml). Following incubation with ginger for 6 hours, cells were harvested, and luciferase activity was determined using a Monolight 2010 luminometer.
VEGF and IL-8 ELISA
Production of VEGF and IL-8 was determined in A2780, CaOV3, ES2, and SKOV3 cells. IL-8 concentrations were undetectable (<0.05 pg/ml) in the A2780 and CaOV3 cell lines (data not shown). Cells were cultured in a 96-well format overnight, and then treated with DMSO vehicle control or ginger (75 μg/ml). After 48 hours, cell supernatant was removed and assayed using a commercial ELISA kit from R&D Systems (Minneapolis, MN). Assays were performed in triplicate, and concentrations of VEGF and IL-8 (pg/ml) were compared with standard curves obtained with the human recombinant VEGF 165 and IL-8 provided with the kit.
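ELISA readouts are converted to concentrations by mapping sample optical densities onto the standard curve built from the kit's recombinant standards. The sketch below (Python; all numbers are hypothetical) shows the idea. Commercial kit software typically fits a four-parameter logistic curve; simple monotonic interpolation is used here for brevity.

```python
import numpy as np

# Hypothetical VEGF standard curve: known concentrations (pg/ml) and the
# mean OD each standard produced (both columns must be monotonic here).
std_conc = np.array([0.0, 31.25, 62.5, 125.0, 250.0, 500.0, 1000.0])
std_od = np.array([0.05, 0.11, 0.19, 0.34, 0.62, 1.15, 2.05])

def od_to_conc(od_samples):
    """Map sample ODs to pg/ml by interpolating along the standard curve."""
    return np.interp(od_samples, std_od, std_conc)

# Triplicate supernatant readings for one treated well (hypothetical).
sample_od = np.array([0.48, 0.51, 0.46])
print(od_to_conc(sample_od).round(1))  # pg/ml per replicate
```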
Statistical analysis
Standard analysis of variance techniques were used to compare between cell types or culture conditions, depending on the analysis of interest. An overall F-test was used to determine if there was at least one significant difference between the groups tested. Tukey's honestly significant difference (HSD) multiple comparison procedure was used to determine significant pairwise comparisons while ensuring that the overall type I error rate was 5% or less. When the comparisons of interest were between treatments and the control condition alone, Dunnett's multiple comparison technique for a single control was used.
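For illustration, the sketch below shows how this pipeline (overall F-test, Tukey HSD, Dunnett vs. a single control) might be run in Python with SciPy and statsmodels. The data are hypothetical and the original study does not state which software was used; `scipy.stats.dunnett` requires SciPy 1.11 or later.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical cell-growth readings (arbitrary units) for three conditions.
control = np.array([1.20, 1.15, 1.22, 1.18])
ginger = np.array([0.55, 0.60, 0.52, 0.58])
cisplatin = np.array([0.50, 0.48, 0.53, 0.51])

# Overall F-test: is there at least one difference among the groups?
f_stat, p_val = stats.f_oneway(control, ginger, cisplatin)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4g}")

# Tukey HSD for all pairwise comparisons at a 5% family-wise error rate.
values = np.concatenate([control, ginger, cisplatin])
labels = ["control"] * 4 + ["ginger"] * 4 + ["cisplatin"] * 4
print(pairwise_tukeyhsd(values, labels, alpha=0.05))

# Dunnett's test: each treatment compared against the control alone.
res = stats.dunnett(ginger, cisplatin, control=control)
print(res.pvalue)
```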
Ginger inhibits growth in ovarian cancer cells as compared to non-transformed ovarian epithelial cells
Continuous exposure to ginger extract resulted in a marked reduction in cell growth after 1-5 days of exposure in A2780 ovarian cancer cells (Figure 1A, p < .0001 at all doses and time points). We tested additional ovarian cancer cell lines to determine if this was an effect unique to the A2780 ovarian cancer cells. Ginger treatment resulted in similar effects in all cell lines tested, including the chemoresistant cell lines SKOV3 and ES-2 [39] (Figure 1B,C, p < .05 for all doses and time points). Untransformed human ovarian surface epithelial cells (HOSE) were minimally affected by ginger extract exposure at days 1 and 3, and showed some inhibition in growth by day 5 (Figure 1D, p > .05 for days 1 and 3, p < .05 for day 5). To confirm that ginger treatment inhibited cell growth, treated cells were analyzed by trypan blue exclusion as well. As expected, ginger treatment resulted in a profound inhibition of cell proliferation and growth at doses of 50 μg/ml and higher (Figure 2).
To determine whether lower doses of ginger could also inhibit cell growth, an extended range of concentrations was tested. In the A2780 and ES-2 cell lines, ginger concentrations of less than 50 μg/ml did not significantly impact cell growth, whereas in the SKOV3 cell line, some inhibition of cell growth was seen with ginger concentrations as low as 30 μg/ml (Figure 3 and data not shown).
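A common way to quantify such dose thresholds is to fit a Hill-type dose-response curve and read off the half-maximal inhibitory concentration. The sketch below (Python with SciPy) illustrates the fit; all response values are hypothetical, and the study itself reports threshold doses rather than fitted IC50 values.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(dose, top, bottom, ic50, n):
    """Four-parameter Hill equation for % growth vs. dose."""
    return bottom + (top - bottom) / (1.0 + (dose / ic50) ** n)

# Hypothetical % growth (relative to DMSO control) vs. ginger dose (ug/ml).
dose = np.array([10.0, 30.0, 50.0, 75.0, 100.0])
growth = np.array([98.0, 85.0, 55.0, 30.0, 18.0])

popt, _ = curve_fit(hill, dose, growth, p0=[100.0, 10.0, 50.0, 2.0])
top, bottom, ic50, n = popt
print(f"estimated IC50 ~ {ic50:.1f} ug/ml (Hill slope {n:.2f})")
```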
6-Shogaol is the most active of the individual ginger components tested in ovarian cancer cells
Previous investigators have shown bio-activity of various individual ginger components in several tumor types [30,[40][41][42][43]. To determine the relative bio-activity in ovarian cancer, A2780 ovarian cancer cells were treated with 6-, 8- and 10-gingerol as well as 6-shogaol. In contrast to other published findings, we found that 6-, 8- and 10-gingerol had no effect on the growth or viability of ovarian cancer cells (p > .05 at all time points). Treating cells with whole ginger extract or 6-shogaol resulted in profound growth inhibition (Figure 4A, p < .05 at all time points for both ginger- and 6-shogaol-treated cells). Morphologically, cells treated with ginger appeared markedly growth inhibited, similar to cisplatin-treated cells (Figure 5). Cells cultured with vehicle control (DMSO) continued to proliferate.
We next determined if continuous exposure to individual ginger components was necessary to cause the growth inhibitory effect seen in ovarian cancer cells. We treated cells with ginger and individual ginger components for 24 hours, after which the cells were washed and the media was changed. Similar to whole ginger root extract, continuous exposure to 6-shogaol was necessary to cause the growth inhibitory effect: once ginger or 6-shogaol was removed from the media, cell growth resumed (Figure 4B).
Ginger inhibits NF-κB in ovarian cancer cells
Because we found that ginger markedly suppressed ovarian cancer cell proliferation in vitro, and several genes that regulate proliferation are regulated by NF-κB, we hypothesized that ginger may mediate its anti-neoplastic activity in ovarian cancer cells through modulation of this pathway. Constitutive activation of NF-κB has been described in many tumor types including ovarian cancer [9], suggesting that targeting NF-κB may have an anti-neoplastic effect in this tumor type. Natural products such as ginger, or ginger components such as zerumbone, can inhibit NF-κB in other cell types [16,44,45]. We chose two chemoresistant ovarian cancer cell lines (CaOV3 and SKOV3) to evaluate the effect of ginger treatment on activation of NF-κB. As shown in Figure 6, treatment with ginger extract resulted in a significant inhibition of NF-κB activation in the CaOV3 and SKOV3 cell lines.
Ginger Inhibits IL-8 and VEGF Secretion in Ovarian Cancer Cells
IL-8 can function as a paracrine and/or autocrine growth factor in some tumor types, and the secretion of IL-8 protein from tumor cells themselves is thought to be crucial for these effects [46,47]. In ovarian cancer patients, elevated IL-8 expression has been found in ascites as well as in serum [33]. Furthermore, IL-8 has been shown to stimulate proliferative growth in ovarian cancer cells in vitro [35]. Because IL-8 secretion is thought to be regulated in part by NF-κB, and ginger can clearly inhibit NF-κB in ovarian cancer cells, we hypothesized that ginger could also inhibit IL-8 secretion. Using a representative panel of ovarian cancer cell lines, we found that A2780 and CaOV3 cells produced negligible amounts of IL-8 (<0.05 pg/ml), whereas the cell lines ES-2 and SKOV3 had high constitutive expression of IL-8 (Figure 7A). Treatment with ginger resulted in significant inhibition of IL-8 production in the ES-2 and SKOV3 cell lines (p < .05 for both cell lines).
[Figure caption: Continuous ginger exposure inhibits growth in ovarian cancer cells in vitro.]
VEGF, the most important inducer of angiogenesis, is also under transcriptional control of NF-κB [9]. Serum VEGF levels as well as tumor expression of VEGF are associated with poor prognosis in ovarian cancer patients [31], and inhibition of VEGF function using Avastin™ has shown promise in the treatment of ovarian cancer patients [48]. Because ginger treatment resulted in inhibition of NF-κB, we next sought to determine whether ginger could similarly inhibit VEGF in ovarian cancer cells. In all cell lines tested, there was high endogenous production of VEGF, and ginger treatment resulted in inhibition of VEGF secretion (Figure 7B).
Discussion
The analysis of epidemiologic data and disparities in the global incidence of ovarian cancer may provide clues to uncover environmental and biologic factors that contribute towards the development of ovarian cancer. The dietary prevalence of foods such as ginger, garlic, soy, curcumin, chilies and green tea is thought to contribute to the decreased incidence of colon, gastrointestinal, prostate, breast and other cancers in South East Asian countries [49]. Accumulating evidence suggests that many dietary factors may be used alone or in combination with traditional chemotherapeutic agents to prevent or treat cancer.
The potential advantage of many natural or dietary compounds lies in their potent anticancer activity combined with low toxicity and very few adverse side effects.
Epithelial ovarian carcinoma is the leading cause of death among patients with gynecologic cancers. Despite multiple modalities of treatment including surgery and chemotherapy, ovarian cancer patients continue to have one of the lowest 5-year survival rates [1]. The significant morbidity and limited success of surgery and chemotherapy for ovarian cancer have led to searches for alternative therapies. Recently, ginger root and its main polyphenolic constituents (gingerols and zerumbone) have been shown to exhibit anti-inflammatory [16][17][18][19] and anti-neoplastic activity [20][21][22][23][24] in several cell types through inhibition of the transcription factor NF-κB [25][26][27][28]. NF-κB plays an important role in tumorigenesis, given its ability to control the expression and function of numerous genes involved in cell proliferation, sustained angiogenesis, and evasion of apoptosis. Different tumor types, including ovarian cancer, have been shown to express high constitutive NF-κB activity [9]. In this study we show that ginger blocks NF-κB activation in ovarian cancer cells, resulting in inhibition of NF-κB-regulated gene products involved in cellular proliferation and angiogenesis.
Many of the pathways that mediate adaptive survival strategies in cancer cells are under the transcriptional control of NF-κB [9]. We have shown here that in ovarian cancer cells, NF-κB is constitutively activated, and that blocking NF-κB activation with ginger suppresses production of NF-κB-regulated angiogenic factors and selectively inhibits ovarian cancer cell growth, as compared to non-transformed ovarian epithelial cells.
Previous reports indicate that the ginger component 6-shogaol induces cell death in chemoresistant hepatoma cells [50], yet inhibits cell death in non-neoplastic spinal cord cells [51], suggesting that the effects of ginger and ginger components are cell-type specific. The apparently contradictory findings may be due to a differential effect of ginger on transformed cells (i.e., cancer cells) vs. untransformed cells. Phytochemicals such as ginger generally have multiple molecular targets. This pleiotropism may constitute an advantage in the treatment of ovarian cancer, where multiple factors contribute towards the carcinogenic process.
Conclusion
The results of this study indicate that ginger may exhibit anti-neoplastic effects through the inhibition of NF-κB. Further studies utilizing ginger in an in vivo model of ovarian cancer will provide a platform for the development of ginger as a therapeutic tool in this disease.
[Figure 6 caption: CaOV3 and SKOV3 cells were transfected with an NF-κB-dependent reporter plasmid (pBVIx-Luc). Cells were treated with DMSO (vehicle control) or ginger (75 μg/ml). NF-κB activation was determined by measuring relative luciferase activity 48 hours after treatment. Luciferase activity is reported as arbitrary relative light units (mean ± S.D.). Ginger treatment resulted in inhibition of NF-κB activation (p < .05 for both cell lines). Representative data are shown.] | 2014-10-01T00:00:00.000Z | 2007-01-01T00:00:00.000 | {
"year": 2007,
"sha1": "2599283726a54221c62575d3aacdc5f9b047261b",
"oa_license": "CCBY",
"oa_url": "https://bmccomplementalternmed.biomedcentral.com/track/pdf/10.1186/1472-6882-7-44",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2599283726a54221c62575d3aacdc5f9b047261b",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
2032436 | pes2o/s2orc | v3-fos-license | Helicobacter pylori Vacuolating Toxin and Gastric Cancer
Helicobacter pylori VacA is a channel-forming toxin unrelated to other known bacterial toxins. Most H. pylori strains contain a vacA gene, but there is marked variation among strains in VacA toxin activity. This variation is attributable to strain-specific variations in VacA amino acid sequences, as well as variations in the levels of VacA transcription and secretion. In this review, we discuss epidemiologic studies showing an association between specific vacA allelic types and gastric cancer, as well as studies that have used animal models to investigate VacA activities relevant to gastric cancer. We also discuss the mechanisms by which VacA-induced cellular alterations may contribute to the pathogenesis of gastric cancer.
Description of VacA
H. pylori VacA derives its name from the protein's ability to induce vacuolation in intoxicated cells. Vacuolation of epithelial cells was the first reported effect of VacA [1,2], but many other cellular effects have been reported subsequently, and many cell types are now known to be susceptible to the toxin [3][4][5][6]. The effects of VacA on gastric epithelial cells include cytoplasmic vacuolation [7,8], disrupted endocytic trafficking, mitochondrial perturbations, depolarization of the plasma membrane potential, efflux of various ions (including chloride, bicarbonate, and urea), activation of MAP kinases, modulation of autophagy, and potentially cell death [3][4][5][6]9]. VacA can inhibit the function and proliferation of a variety of immune cells, including T cells, B cells, eosinophils, macrophages, dendritic cells, and neutrophils [3][4][5][6]10,11].
The membranes of VacA-induced vacuoles contain markers of late endosomes and lysosomes [44,49,59,60], suggesting that VacA-induced vacuoles are derived from the endosome-lysosome pathway. It has been proposed that the formation of VacA anion channels in endosomal membranes, coupled with vacuolar ATPase activity, leads to the osmotic swelling of endosomal compartments and the formation of vacuoles visible by light microscopy [40, 61,62]. VacA-induced alterations in endocytic processes or intracellular trafficking result in inhibited intracellular degradation of epidermal growth factor (EGF), inhibited maturation of procathepsin D, perturbation of transferrin receptor localization, and inhibition of antigen presentation [63][64][65]. VacA's association with mitochondria can lead to decreased mitochondrial membrane potential, the activation of BAX and BAK, cytochrome c release, and mitochondrial fragmentation [45-48, [66][67][68]. Mitochondrial perturbation by VacA is dependent on VacA channel activity [46,47] and contributes to cell death through apoptosis or necrosis [48, [69][70][71][72]. VacA-induced cell death may also be a consequence of the reduced expression of pro-survival factors [73].
Heterogeneity among vacA Alleles
All H. pylori strains contain a vacA gene, but there is substantial variation among strains in VacA toxin activity. A lack of vacuolating toxin activity occasionally results from nonsense mutations or frameshift mutations in vacA [74], but this is a relatively uncommon phenomenon; most strains contain intact vacA ORFs. Among strains containing an intact vacA ORF, differences in VacA toxin activity are attributable to variations in VacA amino acid sequences [75][76][77][78][79], as well as differences among strains in the levels of VacA transcription or secretion [80]. The vacA alleles in different H. pylori strains have been categorized into several families, based on sequence heterogeneity in specific regions. The three most extensively studied regions of heterogeneity correspond to the signal or "s" region, the intermediate or "i" region, and the middle or "m" region [75,81]. The sequences in each of these regions can be classified into two main families (e.g., s1 and s2; i1 and i2; m1 and m2) ( Figure 1). vacA alleles have also been classified into two families (d1 and d2) based on the presence or absence of a segment ranging from about 60 to 100 nucleotides in length, designated the d-region [82], which encodes a region of VacA located at the junction of the p33 and p55 domains.
The "s" region of diversity corresponds to sequence differences within the amino-terminal signal peptide and the amino-terminal end of the secreted toxin. Compared with s1 VacA toxins, s2 forms of VacA contain a 12-amino-acid amino-terminal extension that alters the hydrophobicity of the amino-terminal end of the secreted protein [75][76][77][78]. In comparison to s1 VacA toxins, s2 VacA toxins are impaired in terms of their ability to form anion channels in planar-lipid bilayers and do not cause vacuolation of mammalian cells [75][76][77][78]. Type s2 forms of vacA are also transcribed at lower levels than type s1 forms, resulting in reduced levels of type s2 VacA protein production and secretion [80].
The "i" region of diversity is located within the p33 domain of VacA [81]. One study reported that the i-region is a determinant of vacuolating toxin activity in strains that produce type s1-m2 forms of VacA [81]. Type i1 VacA toxins are also more active than i2 VacA toxins in assays monitoring the inhibition of NFAT activation and IL-2 production by Jurkat T cells [83]. The "s" region of diversity corresponds to sequence differences within the amino-terminal signal peptide and the amino-terminal end of the secreted toxin. Compared with s1 VacA toxins, s2 forms of VacA contain a 12-amino-acid amino-terminal extension that alters the hydrophobicity of the amino-terminal end of the secreted protein [75][76][77][78]. In comparison to s1 VacA toxins, s2 VacA toxins are impaired in terms of their ability to form anion channels in planar-lipid bilayers and do not cause vacuolation of mammalian cells [75][76][77][78]. Type s2 forms of vacA are also transcribed at lower levels than type s1 forms, resulting in reduced levels of type s2 VacA protein production and secretion [80].
The "i" region of diversity is located within the p33 domain of VacA [81]. One study reported that the i-region is a determinant of vacuolating toxin activity in strains that produce type s1-m2 forms of VacA [81]. Type i1 VacA toxins are also more active than i2 VacA toxins in assays monitoring the inhibition of NFAT activation and IL-2 production by Jurkat T cells [83].
Finally, the "m" region of diversity is located within the p55 domain of VacA [75]. In comparison to type m2 VacA proteins, type m1 VacA proteins have greater vacuolating activity on HeLa cells, but m1 and m2 VacA proteins have similar vacuolating activity on RK13 cells [84][85][86][87]. A region responsible for cell type specificity is localized to a 148 amino-acid segment of the m region [85,86]. The difference in HeLa cell vacuolating activity when comparing m1 and m2 VacA proteins has been attributed to differences in channel-forming properties [88], as well as differences in cell-binding properties [84,86]. Type m1 VacA, but not m2 VacA, binds to the LRP1 receptor on host cells, resulting in decreased levels of intracellular glutathione, an accumulation of reactive oxygen species, autophagy, and apoptosis [89,90].
vacA Allelic Types and Gastric Cancer Risk
There has been considerable interest in the possibility that the VacA toxin activity of strains might be a determinant of gastric cancer risk [95][96][97]. To test this hypothesis, H. pylori strains cultured from individuals with gastric cancer or premalignant gastric pathology (such as atrophic gastritis, intestinal metaplasia, or dysplasia) have been compared to strains cultured from individuals with non-malignant gastric histology. Collectively, these studies have shown that strains containing type s1, i1, and m1 vacA alleles are associated with a higher risk of gastric cancer or premalignant conditions, compared to strains containing type s2, i2, or m2 vacA alleles, respectively [81,[98][99][100][101][102][103][104][105][106][107]. Strains containing type s1 and m1 vacA alleles have also been associated with an increased severity of gastric inflammation, epithelial damage, or ulceration, compared to strains containing type s2 or m2 vacA alleles (Table 1) [75,[108][109][110]. Thus, strains encoding forms of VacA with greater activity in cell culture models are associated with an increased risk of gastric cancer and premalignant histologic changes, as well as an increased risk of peptic ulceration, compared to strains encoding forms of VacA that lack activity or have relatively low levels of activity in cell culture models.
Association between vacA Allelic Types and Other Strain-Specific Virulence Determinants
In addition to allelic variation in vacA, H. pylori strains exhibit diversity in other genetic elements that are relevant for gastric cancer pathogenesis. One of the most prominent genetic variations among H. pylori strains is the presence or absence of a~40 kb chromosomal region known as the cag pathogenicity island (PAI). The cag PAI encodes an effector protein (CagA), as well as components of a type IV secretion system that delivers CagA into host cells [111][112][113]. Upon entry into epithelial cells, CagA interacts with multiple host cell proteins and causes alterations in cell signaling [114,115]. H. pylori strains also differ in the production of outer membrane proteins (OMPs), including adhesins that mediate adhesion to gastric epithelial cells. Examples of adhesins that are produced by some H. pylori strains but not others include BabA, SabA, and HopQ [116,117].
H. pylori cagA-positive strains (corresponding to strains that contain the cag PAI) are associated with a higher risk of gastric cancer or premalignant lesions than cagA-negative strains [98,118,119]. Similarly, H. pylori strains containing specific OMP-encoding genes (babA, homB, type I hopQ, in-frame hopH/oipA, or in-frame sabA alleles) are associated with an increased risk of gastric cancer or premalignant changes compared to strains that lack these genes or that harbor out-of-frame genes [120][121][122][123][124][125][126].
Determining the specific contribution of VacA to gastric cancer risk is challenging, since the strains associated with gastric cancer potentially contain multiple strain-specific features relevant for gastric cancer pathogenesis. Collectively, the epidemiologic studies suggest that the risk of gastric cancer is highest in persons infected with strains producing multiple host-interactive components (type s1-i1-m1 VacA, CagA, the cag T4SS, and certain strain-specific OMPs) [98,117,120]. Strains that do not produce these components are associated with a lower level of gastric cancer risk.
Multiple vacA allelic types (s1 or s2, i1 or i2, m1 or m2) are present in H. pylori isolates in Western countries [75,81], and both cag PAI-positive strains and cag PAI-negative strains are common in Western countries [110]. In contrast, nearly all H. pylori strains cultured in several regions of East Asia, including Japan and Korea, contain s1 vacA alleles [137,138], and nearly all H. pylori strains in Japan and Korea contain the cag PAI [110,137]. Strains containing type s2 vacA alleles and lacking the cag PAI are relatively uncommon in East Asia [110,137,138]. These characteristics of East Asian strains may be an important factor contributing to the high rate of gastric cancer in East Asia compared to many other parts of the world [139].
Impact of VacA on H. pylori Gastric Colonization of Animal Models
Nearly all H. pylori strains contain an intact vacA ORF, which suggests that VacA has an important role in H. pylori colonization of the stomach, persistence, or transmission to new hosts. Several studies have evaluated the role of VacA in H. pylori colonization of animal models by testing vacA null mutant strains. Such mutant strains are capable of colonizing the stomach in gnotobiotic piglet, mouse, and gerbil models [107,[140][141][142][143][144]. Moreover, several closely related H. pylori strains (strains B128, B8 and 7.13) capable of colonizing the Mongolian gerbil do not produce a detectable VacA protein due to the presence of a naturally occurring mutation in vacA [145][146][147]. Although VacA is not essential for H. pylori colonization of the stomach in animal models, vacA mutant strains do not colonize mice as well as VacA-producing strains, and the mutant strains exhibit a competitive disadvantage in mixed infections with VacA-producing strains [107,142,144].
H. pylori strain SS1, a strain commonly used for experiments in mouse models, contains a non-toxigenic vacA allele (s2/i2/m2). SS1 vacA null mutant strains exhibit a colonization defect when compared to the wild-type strain [107,142,144]. In one study, SS1 variants producing s1-i2 or s1-i1 forms of VacA exhibited reduced colonization rates compared to strains producing an s2-i2 form of VacA [107]. Thus, despite the lack of detectable activity in vitro, type s2 VacA proteins appear to have an important activity in vivo that contributes to colonization or persistence.
The mechanisms by which VacA contributes to H. pylori colonization are not yet well understood, but several hypotheses are plausible. VacA proteins tethered to the surface of H. pylori might act as adhesins to promote bacterial adherence to gastric cells, and thereby enhance colonization [148]. VacA-induced alterations of gastric epithelial cells could potentially modify the gastric environment to promote colonization and bacterial replication [65]. VacA-induced inhibition of parietal cell function might facilitate H. pylori colonization of the stomach [149,150]. Finally, VacA can attenuate the functions of many types of immune cells [3][4][5]10,11,[151][152][153][154], so immunomodulatory actions of VacA might facilitate colonization.
Role of VacA in Gastric Cancer and Gastric Pathology in Animal Models
Mouse models, gnotobiotic piglets, and the Mongolian gerbil model of H. pylori infection have been used to evaluate a potential role of VacA in gastric pathology and carcinogenesis. Mice, piglets, and gerbils each develop a gastric mucosal inflammatory response in response to H. pylori. H. pylori-induced gastric inflammation is relatively mild in wild-type mice, and H. pylori-infected wild-type mice do not develop gastric cancer. H. pylori-infected gerbils develop more extensive gastric pathology than mice, including severe gastric inflammation, parietal cell loss and hypochlorhydria, dysplasia, and gastric adenocarcinoma [147,155,156]. The carcinomas in gerbils exhibit some characteristics similar to gastric adenocarcinoma in humans, such as penetration through the muscularis mucosa into the submucosa, but in contrast to gastric cancer in humans, the lesions in gerbils remain relatively small in size and are not known to metastasize. H. pylori-infected gerbils do not develop intestinal metaplasia or gastric atrophy (two common precursors of gastric cancer in humans). Thus, the gerbil model of H. pylori infection recapitulates several features of gastric carcinogenesis in humans, but some features of the gerbil model differ from features of gastric adenocarcinoma in humans.
One approach for studying the effects of VacA in vivo has been to administer the purified VacA protein or VacA-containing H. pylori extracts directly into the stomach of animal models. These studies concluded that VacA can damage the gastric mucosa of mice and stimulate the recruitment of inflammatory cells [18,[157][158][159].
A more physiologic approach has entailed the infection of animals with viable H. pylori and a comparison of wild-type and vacA mutant strains. In experiments with gnotobiotic piglets, no differences in the severity of gastric inflammation were detected when comparing animals colonized with a wild-type strain or a vacA null mutant [140]. Similar results were reported in experiments with mice [142], but a subsequent study detected stronger Th1 and Th17 responses and more severe pathology in mice colonized with a vacA null mutant strain, compared to the wild-type strain [144]. To compare the activities of different forms of VacA, one study infected mice with strain SS1 variants encoding different forms of VacA [107]. At three weeks post-infection, mice infected with a strain encoding the s1/i1 form of VacA exhibited a significantly greater degree of spasmolytic polypeptide expressing metaplasia (SPEM) than mice infected with a strain encoding the s2/i2 form of VacA [107]. There was also a trend toward higher levels of gastric inflammation in mice infected with strains producing s1/i1 forms of VacA compared to s1/i2 or s2/i2 forms of VacA [107].
No differences in the severity of gastric inflammation have been detected when comparing gerbils colonized with a wild-type strain or a vacA mutant strain for time periods of three months to 62 weeks [141,143]. However, at 62 weeks post-infection, animals infected with the wild-type strain had a higher incidence of gastric ulceration compared to animals infected with the vacA mutant strain [141]. One H. pylori strain commonly used for studies of gastric cancer in the gerbil model (strain 7.13) does not produce a detectable VacA protein [145][146][147]. Therefore, VacA is not required for gastric carcinogenesis in the gerbil model.
Integrating Results of Human Epidemiologic Studies with Results of Experiments in Animal Models
Many human epidemiologic studies have detected an association between H. pylori strains containing certain types of vacA alleles (encoding forms of VacA that are active in cell culture models) and an increased risk of gastric cancer or premalignant gastric lesions. In contrast, VacA is not required for the development of gastric cancer in the gerbil model. There are multiple possible explanations for this apparent discrepancy.
One interpretation is that the human epidemiologic results simply reflect the association between certain vacA allelic variants and other strain-specific genetic elements that contribute to gastric cancer pathogenesis (e.g., the cag PAI or strain-specific genes encoding certain OMPs), and VacA has no direct role in the pathogenesis of gastric cancer. An alternate interpretation is that the rodent models used thus far do not accurately reproduce pathologic events leading to the development of gastric cancer in humans. In support of this latter interpretation, there are known differences in the susceptibility of human CD4+ T-cells and mouse CD4+ T-cells to VacA [38,160]. VacA binds to human CD4+ T-cells and inhibits the activation-induced proliferation of these cells; in contrast, VacA binds at significantly lower levels to murine CD4+ T-cells than human CD4+ T-cells, and does not inhibit the activation-induced proliferation of murine T-cells [38,160]. This difference in susceptibility has been attributed to differences in the β2 integrin receptors present on human and mouse T cells [38]. Limitations of rodent models have also been encountered when studying interactions of H. pylori outer membrane adhesins with host cell receptors. For example, the outer membrane protein HopQ binds to CEACAM1 on the surface of human cells, but not to a mouse CEACAM1 orthologue or to any CEACAM receptors produced in gastric tissue from Mongolian gerbils [161,162].
Mechanisms by which VacA may Influence Gastric Cancer Risk
There are multiple biologically plausible mechanisms by which specific forms of VacA could enhance gastric cancer risk (Figure 2). Since H. pylori binds to gastric epithelial cells in vivo, these cells probably encounter relatively high concentrations of VacA in vivo. Type s1-m1 forms of VacA promote the death of gastric epithelial cells in vitro [48, [69][70][71][72], and the toxin might have similar effects in vivo. VacA-induced death of gastric epithelial cells would be expected to result in increased cellular proliferation, which could be associated with increased cancer risk. VacA has been reported to disrupt the integrity of epithelial monolayers, either by causing cell death or by the loosening of cell-cell junctions [163,164]. Consequently, VacA might also enhance the entry of carcinogens into the gastric mucosa, or may enhance the invasiveness and spread of malignant cells.
Connexin 43 (Cx43) is required for VacA-induced necrosis of the AZ-521 cell line (recently reported to be a misidentified derivative of HuTu-80, a human duodenal carcinoma line) [165,166]. Cx43 is a tumor suppressor in multiple cell types, and gastric cancers frequently exhibit a loss of Cx43 expression [167]. Therefore, in individuals infected with H. pylori strains producing high levels of s1-i1-m1 VacA, there may be a selective pressure for the emergence of Cx43-deficient cells (resistant to VacA-induced cell death), which could contribute to gastric cancer pathogenesis. Most H. pylori localize within the mucus layer overlying foveolar surface mucous epithelial cells, but H. pylori can also enter the gastric glands [168,169]. Within gastric glands, H. pylori localizes in close proximity to gastric stem cells, and within the oxyntic glands of the gastric corpus, H. pylori localizes in close proximity to parietal cells. VacA can therefore intoxicate gastric stem cells and parietal cells, inhibiting parietal cell function [149,150]. The inhibition of parietal cell function by VacA would be expected to result in hypochlorhydria, which could increase gastric cancer risk by allowing the proliferation of nitrate-producing bacterial populations that do not normally grow in the acidic gastric environment.
VacA inhibits the activities of multiple types of immune cells in vitro, including T cells, B cells, dendritic cells, eosinophils, mast cells, macrophages, and neutrophils [3][4][5]10,11,[151][152][153], and VacA immunomodulatory activity has been detected in vivo [144,170,171]. VacA-induced alterations in immune function could potentially result in impaired tumor surveillance. VacA is also reported to have pro-inflammatory activity [18,153,158,159,172]. Inflammation is a well-known promoter of carcinogenesis [173], so VacA pro-inflammatory activity could contribute to gastric cancer pathogenesis.
Summary
In summary, numerous epidemiologic studies have shown that H. pylori strains containing specific vacA allelic types (encoding forms of VacA that are active in cell culture models) are associated with increased gastric cancer risk, and there are multiple biologically plausible mechanisms by which VacA may contribute to gastric carcinogenesis. Conversely, there is relatively little direct evidence in animal models demonstrating a role of VacA in the pathogenesis of gastric cancer. In future studies, it will be important to investigate the actions of VacA in vivo using animal models that are optimized to express cell types susceptible to VacA and that closely replicate the cascade of events leading to gastric adenocarcinoma in humans. | 2017-10-29T07:04:30.786Z | 2017-10-01T00:00:00.000 | {
"year": 2017,
"sha1": "9b230890bb1587c9797140c7c432871fd5341e61",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6651/9/10/316/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9b230890bb1587c9797140c7c432871fd5341e61",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
210283909 | pes2o/s2orc | v3-fos-license | Investigation on dry machining of stainless steel 316 using textured tungsten carbide tools
In this research, austenitic stainless steel SS316 material has been machined using textured carbide cutting tools under dry conditions. Micro-textures were made on the tool rake face using wire spark erosion machining technology. The effects of three important machining process parameters, i.e. cutting speed, depth of cut and feed rate, on the machinability (MRR, average roughness, and tool wear) of SS316 have been investigated. Twenty-seven experiments based on a Taguchi L27 orthogonal array have been carried out by varying the machining parameters at three levels. Feed rate has been identified as the most important parameter. Machining parameters have been optimized by the grey entropy method to enhance the machinability. The optimal combination of machining parameters, i.e. 170 m min−1 cutting speed, 0.5 mm/rev feed rate and 1.5 mm depth of cut, produced the best machinability with 3.436 μm average roughness, 105187 mm3 min−1 MRR, and 234.63 μm tool wear. Lastly, a tool wear and chip morphology study has been done, in which textured tools were found to outperform plain (non-textured) tools.
Introduction
There are different grades of stainless steel used to fulfil specific application requirements. Austenitic chromium-nickel stainless steel SS316 is a prime candidate material for heat exchangers, furnace parts, jet-engine parts, and chemical processing and pharmaceutical equipment etc [1,2]. Excellent toughness, superior strength, and high corrosion resistance are the special characteristics of SS316. It undergoes extensive machining operations while producing the aforementioned parts and equipment. But its machining by conventional processes is challenging and results in poor machinability in terms of frequent tool wear, deteriorated work surface quality, high consumption of lubricants, escalated machining cost, and high environmental footprint [3,4]. To address these challenges, many efforts as regards research and development and innovation have been made, such as using green lubricants and sustainable lubrication techniques (i.e. minimum quantity lubrication, dry cutting, cryogenic machining), machining with the assistance of vibration and heat sources, using treated and coated tools, and intelligent modelling and optimization of parameters etc [5,6]. It is noticed from the literature that textured tools have the potential to reduce tool wear and prolong tool life [7,8]. Textures on the rake and flank faces help to reduce the cutting forces and friction, and thereby reduce the heat generation, which further improves the tool's performance. Texturing on cutting tools also affects chip adhesion and improves the effectiveness of lubrication and the tool-chip contact length. A few available articles provide a preliminary understanding of the machining of steels with textured tools [8][9][10]. It was reported that textures created by laser machining on the rake face of a carbide tool outperformed a plain carbide tool for machining of AISI 316 stainless steel [9]. Similar results were found during the machining of AISI 52100 by a rake-face-textured Al 2 O 3 /TiCN ceramic cutting tool compared with a plain ceramic cutting tool, where the textures were developed by the wire-EDM process [3]. Moreover, other machining processes have also been used for creating textures on tools, such as electrochemical machining and abrasive jet machining etc [10]. There are also some investigations on utilizing optimization techniques for machinability enhancement of various engineering materials. In a study, the optimum values of the process parameters, namely cutting speed, feed rate and depth of cut,
were obtained using the grey relational technique combined with the Taguchi method for the machining of Inconel 825 [7]. This set of techniques was also used for the machining of AISI 1040 steel with a carbide-coated tool under dry conditions [8,9]. The grey relational technique combined with the response surface technique was implemented to obtain the best combination of depth of cut, spindle speed, width of cut and feed rate for better material removal rate and surface roughness during the milling process. In addition, the highest value of the grey relational grade was considered to identify the best combination of process parameters [10]. An ANOVA-based grey relational technique was used for the optimum setting of machining parameters for 15-5 PH stainless steel [11].
There is a scarcity of work on textured-tool-based machining and machinability investigation of stainless steel. Therefore, to fill this research gap, an attempt is made in the present work to machine SS316 using textured cutting tools under a dry environment and to investigate and optimize its machinability. The authors have selected a tungsten carbide cutting tool with the objective of facilitating small-scale machinists and manufacturers who can afford this tool owing to its cost economy. Textures have been made on the rake face of tungsten carbide cutting tool inserts to machine SS316. Various settings of machining parameters have been obtained by the Taguchi L 27 design of experiment technique. The effects of machining parameters on material removal rate and average surface roughness under dry conditions have been investigated. Further, the parameters have been optimized by the grey relational technique to obtain the best machinability. The subsequent sections of this paper discuss the machining process and the results of the investigation.
Experimental details
Austenitic stainless steel SS316 has been machined on a conventional lathe (Model No. COLCHESTER Mascot 1600) under dry conditions using textured tungsten carbide cutting tool (TNMG-160408-45) inserts having geometry 6°-6°-6°-6°-15°-75°-0.8 mm. Figure 1 depicts the experimental setup and the sequence of steps followed in the present work. The overall work plan consists of machining of SS316 at various combinations of process parameters during the experimentation stage, followed by measurement of machinability indicators i.e. MRR and R a, analysis of the effects of machining process parameters on MRR and R a, optimization of process parameters to obtain the best values of MRR and R a, and lastly characterization of chip morphology and tool wear mechanisms for a comparison between textured and plain tools. Important process parameters i.e. cutting speed, feed rate, and depth of cut have been varied at three levels (table 1). A total of twenty-seven experiments have been designed and conducted based on the Taguchi robust design of experiment technique with an L 27 orthogonal array. The levels of machining parameters have been determined based on preliminary experiments, machine constraints and a literature review. Each experiment was conducted for a time period of seven minutes. Texturing on the rake face of the cutting tools is done by the wire spark erosion machining (wire-EDM/WEDM) process. Average surface roughness 'R a' and material removal rate (MRR) have been considered as machinability indicators or responses. R a is measured using a handheld surface profiler (TMteck make TMR200), while MRR has been evaluated using the following equation:

Material removal rate = Cutting speed × Feed rate × Depth of cut (1)

Moreover, a scanning electron microscopy study has also been done to analyse the tool wear mechanisms and chip morphology. Table 2 presents all twenty-seven combinations of machining parameters and corresponding values of machinability indicators i.e. R a and MRR. Further, the effects of machining parameters on R a and MRR are discussed hereunder.
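To make equation (1) concrete, the sketch below (Python) evaluates the theoretical MRR over a 3-factor, 3-level grid. The feed and depth levels all appear in the results discussion; the middle cutting speed is a placeholder, since table 1 is not reproduced here, and the measured MRR values in table 2 differ from this idealized product.

```python
from itertools import product

speeds_m_min = [70.0, 120.0, 170.0]   # cutting speed (m/min); 120 is assumed
feeds_mm_rev = [0.15, 0.20, 0.50]     # feed rate (mm/rev), from the text
depths_mm = [0.5, 1.0, 1.5]           # depth of cut (mm), from the text

def mrr(v_m_min, f_mm_rev, d_mm):
    """Theoretical MRR per equation (1), in mm^3/min.

    The cutting speed is converted from m/min to mm/min so that all
    lengths are expressed in mm.
    """
    return (v_m_min * 1000.0) * f_mm_rev * d_mm

# A 3-factor, 3-level full factorial happens to contain 27 runs, matching
# the size of the Taguchi L27 array used in the study.
for v, f, d in product(speeds_m_min, feeds_mm_rev, depths_mm):
    print(f"v={v:5.0f} m/min  f={f:.2f} mm/rev  d={d:.1f} mm  "
          f"MRR={mrr(v, f, d):8.0f} mm^3/min")
```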
Analysis of surface roughness
The influence of the selected process parameters on average roughness is analysed and discussed in this section. Figures 2(a)-(c) present the variation of average roughness with the machining parameters. It is observed that at the maximum cutting speed of 170 m min −1 with 0.15 mm/rev feed rate and 1 mm depth of cut, a minimum roughness value of 0.541 μm is obtained. Whereas, at the same cutting speed with the feed rate and depth of cut increased to 0.2 mm/rev and 1.5 mm respectively, the roughness value increased to 4.476 μm.
The graphs confirm that surface roughness depends strongly on the tool feed rate. At a cutting speed of 70 m min −1, the average roughness varies from 0.638 μm at 0.15 mm/rev to 4.08 μm at 0.5 mm/rev. For machining at the other cutting speeds too, the roughness was found to have an increasing trend with increase in feed rate. (Nominal composition of SS316: Cr 16%-18%, Ni 10%-14%, Mo 2%-3%, P 0.045%, S 0.030%, C 0.08%, N 0.10%, Mn 2%, Si 0.75%, Fe balance.) Feed rate is the advancement of the cutting tool along the cutting path, and it also depends on the cutting speed [12]. At the minimum feed rate, the advancement of the cutting tool is slow and it removes the bulk proportionally. At a higher feed rate, the tool travels fast and produces a rougher surface with improper removal of the bulk. The stainless steel material plastically deforms and tends to adhere over the rake face of the hard cutting tool. Over a period of time, the built-up edge on the cutting tool alters the cutting geometry, raising shear, temperature and friction. Therefore, rapid growth of the built-up edge (maximum wear) causes a rougher surface [13].
Analysis of MRR
The variation of MRR with the machining parameters is shown in figures 3(a)-(c). It is observed that at the lower cutting speed (i.e. 70 m min −1), the maximum amount of material removed from the workpiece is 51 843 mm 3 min −1 at a process condition of 0.5 mm/rev feed rate and 1.5 mm depth of cut. For the same cutting speed, the minimum material removal (4790 mm 3 min −1) took place at 0.15 mm/rev feed rate and 0.5 mm depth of cut. Therefore, it can be said that the amount of material removal is significantly dependent on feed rate, followed by depth of cut. The variations in material removal at 0.15 mm/rev and 0.2 mm/rev are almost similar. Overall, the maximum amount of bulk metal removed is 105 187 mm 3 min −1, obtained at the highest cutting speed of 170 m min −1. It is noticed that MRR increases with increasing feed rate, because at a higher feed rate the tool advances faster along the workpiece and removes a greater amount of material from its surface [14]. Similarly, with an increase in depth of cut, the amount of bulk material removed is greater and hence the MRR is higher. The aforementioned discussion on the influence of machining parameters on R a and MRR indicates a trade-off and prompts the optimization of machining parameters to attain the best values of the machinability indicators at a single set of process parameters.
Optimization
To obtain the best set of machining parameters, a multi-response optimization is done using the grey relational technique, an important statistical optimization technique with a track record of successful optimization of manufacturing processes [15]. In the current work, the entropy-based grey relational analysis (GRA) technique has been used to find the optimal solution.
GRA is an analytical optimization technique which provides appropriate tools for examining and rank-ordering multiple objects by their resemblance to an objective [15]. It requires relatively little information to predict the behaviour of discrete-data problems and uncertain systems. If the ranges of the experimental data are large, the factors become incomparable; hence data pre-processing, a significant step in managing the factors of GRA, is required [16]. Lower-the-better for roughness and higher-the-better for MRR were used for data pre-processing in the present study. In addition, the entropy measurement technique was used as an objective weighting technique, with discrete-type entropy used in the grey entropy measurement to properly conduct the weighting analysis [17].
The objective functions are framed with surface roughness and material removal rate having equal weightage of 50% each. The data processed to find the optimum solution are given in table 3. To find the grey relational grade (GRG), a higher material removal rate and a lower surface roughness have been taken as the best condition. The average grey relational grade combining MRR and R a is found to be optimum for trial number 23, and the corresponding parameters are 170 m min −1 cutting speed, 0.5 mm/rev feed rate and 1.5 mm depth of cut. This is the optimum condition to cut austenitic stainless steel SS316 using a textured tungsten carbide cutting tool.
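The grade computation itself is compact. The sketch below (Python) implements the normalization, deviation sequence, grey relational coefficient (with the conventional distinguishing coefficient of 0.5) and the equally weighted grade described above. The four (R a, MRR) pairs are illustrative stand-ins, since table 2 is not reproduced here.

```python
import numpy as np

def grey_relational_grade(responses, larger_better, zeta=0.5, weights=None):
    """Grey relational analysis over a runs x responses matrix.

    responses: array of shape (n_runs, n_responses)
    larger_better: bool per response (True for MRR, False for Ra)
    zeta: distinguishing coefficient, conventionally 0.5
    weights: per-response weights (the study uses 0.5/0.5)
    """
    X = np.asarray(responses, dtype=float)
    n_runs, n_resp = X.shape
    # 1) Normalize each response to [0, 1].
    norm = np.empty_like(X)
    for j in range(n_resp):
        lo, hi = X[:, j].min(), X[:, j].max()
        if larger_better[j]:
            norm[:, j] = (X[:, j] - lo) / (hi - lo)
        else:
            norm[:, j] = (hi - X[:, j]) / (hi - lo)
    # 2) Deviation sequence from the ideal (all-ones) reference.
    dev = 1.0 - norm
    # 3) Grey relational coefficient.
    gmin, gmax = dev.min(), dev.max()
    grc = (gmin + zeta * gmax) / (dev + zeta * gmax)
    # 4) Weighted grade per run.
    w = np.full(n_resp, 1.0 / n_resp) if weights is None else np.asarray(weights)
    return grc @ w

# Hypothetical (Ra in um, MRR in mm^3/min) pairs for four of the 27 runs.
data = np.array([
    [0.541, 25500.0],
    [4.476, 51000.0],
    [4.080, 51843.0],
    [3.436, 105187.0],
])
grg = grey_relational_grade(data, larger_better=[False, True],
                            weights=[0.5, 0.5])
print(grg.round(3), "-> best run:", int(np.argmax(grg)) + 1)
```

Note how the highest-MRR run wins the grade despite its relatively high roughness, mirroring the study's selection of trial 23 as the optimum.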
To study the influence of process parameters on GRG, contour plots are made, as shown in figures 4(a)-(c). It is to be noted that the maximum value of GRG exhibits the best combination of process parameters [10]. It can be observed from figure 4(a) that the best solution can be achieved with the feed rate in the range of 0.3 to 0.45 mm/rev at any cutting speed. Furthermore, as shown in figure 4(b), there is not much influence of depth of cut on GRG, and its values from 0.75 to 1 mm influence GRG to some extent. In essence, feed rate is the predominant factor influencing both surface roughness and material removal rate. The values of R a and MRR at the optimal machining conditions are 3.436 μm and 105 187 mm 3 min −1 respectively. (Notation in table 3: Nor. = normalization of data, higher-the-better for MRR and lower-the-better for surface roughness; Dev. = deviation sequence; GRC = grey relational coefficient.) Further, an extra experiment has been conducted where SS316 is machined under dry conditions at the aforementioned optimum parameters using a non-textured or plain cutting tool, and the values of the responses found are R a 4.526 μm and MRR 105 187 mm 3 min −1. Since the values of MRR are obtained using equation (1), they are the same for the textured and plain tools. Therefore, to assess the performances of the cutting tools (textured and non-textured), the machining parameter combinations for maximum and minimum roughness at run numbers 22 and 24 respectively have been considered for further comparison.
Tool wear study
To confirm the results of optimization and to make a comparative evaluation between textured and plain tools, investigations on tool wear morphology along with chip morphology have also been done. Figure 5 shows the worn tools. The tool used at the optimal condition obtained from the hybrid optimization method shows an average wear of 234.63 μm with an ideal result on machinability (table 4). The wear mechanism at the optimal machining condition is friction and chipping of the hard metal. While machining at 0.2 mm/rev and 1.5 mm depth of cut, the cutting energy induced the metal to fuse and adhere over the rake face as a weldment; the thickness of this adhesive wear is around 210 μm beyond the flank face. Experiments with the minimum depth of cut and feed rate (1 mm and 0.15 mm/rev) at the same cutting speed of 170 m min −1 produced the minimum tool wear of 180 μm. These results are similar to the machinability investigation of SS304 reported in [12]. In this condition, the force required to cut the bulk material is reduced and minimum R a/MRR is achieved. To study the process response in detail, the flank wear, material removal and chips produced at the proposed conditions are correlated. For the average and minimum tool wear, the chips removed from the bulk are continuous while the MRR varies significantly. At a high machining rate, the weldment produced over the crater raised the metal removal rate, and the stress induced during cutting caused the metal chips to form as serrated (discontinuous) chips. Therefore, it is confirmed that the optimal cutting condition can produce the best result even when the speed is not varied. In-depth investigation reveals that the plain cutting tool suffered severe tool wear with ridges and grooves, as shown in figures 5(d)-(f). In comparison, the textured cutting tool showed less severe wear, with a built-up edge over the cutting nose, as shown in figures 5(a)-(c). With the plain cutting tool, material deformation and shearing of the removed bulk produce a continuous chip with sharp edges (figures 6(a)-(b)); the mechanical action between the sharp continuous chip and the tool interface causes ridges and grooves over the cutting edge. However, with the textured cutting tool, the wear mechanism is adhesion (in the form of a built-up edge) due to plastic deformation of material welded over the step pattern. The built-up edge induces a ductile-to-brittle transition in the deformed bulk, producing serrated metal chips (figures 6(c)-(d)).
Therefore, the textured tool has a longer life than the plain tool while maintaining good surface roughness, which can be further controlled with suitable process parameters and machining conditions.
Conclusions
Investigation of the machinability of SS316 under dry machining using textured carbide tools is reported in this paper. The following conclusions can be drawn from this research work-
• The average roughness measured is highly influenced by the feed rate, and the best value of average roughness, 0.541 μm, was achieved while machining stainless steel 316 at high-speed (170 m min −1) cutting with a feed rate of 0.15 mm/rev.
• The maximum material removal obtained is 105 187 mm 3 min −1 at 170 m min −1, 0.5 mm/rev and 1.5 mm depth of cut. Comparing average roughness and MRR, feed rate and depth of cut individually contribute the most.
• The set of optimum process parameters for the multi-objective function of ideal response for R_a and MRR is 170 m min⁻¹ cutting speed, 0.5 mm/rev feed rate and 1.5 mm depth of cut. At this condition, a tool flank wear of 230 μm was obtained.
• It is therefore suggested that machining of hard material at higher speed with an average feed rate and depth of cut can produce the ideal machining condition and secure the best machinability.
• While comparing the performances of textured and plain cutting tools, an abrasive wear mechanism was noticed on the plain cutting tool and adhesive wear on the textured nose. Due to adhesion, a built-up edge formed at the tool tip, producing discontinuous chips during machining. | 2019-11-14T17:08:56.735Z | 2019-11-25T00:00:00.000 | {
"year": 2019,
"sha1": "2475ef3c9ab17cd9a40b1b877b297975841263d6",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1088/2053-1591/ab5630",
"oa_status": "HYBRID",
"pdf_src": "IOP",
"pdf_hash": "42042811348b65359217d5cb585a1cc5c6f76f65",
"s2fieldsofstudy": [
"Materials Science",
"Business"
],
"extfieldsofstudy": [
"Materials Science",
"Physics"
]
} |
244714671 | pes2o/s2orc | v3-fos-license | Non-minimal Lorentz invariance violation in light of muon anomalous magnetic moment and long-baseline neutrino oscillation data
In light of the increasing hints of new physics at the muon $g-2$ and neutrino oscillation experiments, we consider the recently observed tension in the long-baseline neutrino oscillation experiments as a potential indication of Lorentz invariance violation. For this purpose, the latest data from T2K and NO$\nu$A is analysed in presence of non-minimal Lorentz invariance violation. Indeed, we find that isotropic violation in dimensions $D =$ 4, 5 and 6 can alleviate the tension in neutrino oscillation data by 0.4$-$2.4$\sigma$ CL significance, with the isotropic coefficient $\gamma^{(5)}_{\tau \tau} =$ 3.58$\times$10$^{-32}$GeV$^{-1}$ yielding the best fit. At the same time, the anomalous muon $g-2$ result can be reproduced with an additional non-isotropic violation of $d^{zt} =$ -1.7$\times$10$^{-25}$. The analysis highlights the possibility of simultaneous relaxation of experimental tensions with Lorentz invariance violation of mixed nature.
I. INTRODUCTION
The Standard Model (SM) of particle physics is the most successful theory describing the properties of elementary particles and their interactions. Its robustness has been tested in numerous experiments, culminating in the discovery of the Higgs boson at the LHC. The conservation of Lorentz invariance and CPT symmetry is an inseparable part of SM physics, as it ensures that physics stays the same regardless of the observer. Only recently has the integrity of the SM started to falter, as mounting evidence from electroweak precision observables, CKM matrix element measurements and neutrino experiments shows divergences between experiments and SM predictions. As physicists look for ways to accommodate SM physics in a more complete theoretical framework, searching for evidence of violations of fundamental symmetries could provide hints of the underlying theory, such as the formulation of quantum gravity [1][2][3].
CPT is a fundamental symmetry that is conserved in quantum field theories set in a flat spacetime [4,5]. It is closely related to Lorentz invariance [6]. Well-known examples of theories that give rise to Lorentz Invariance Violation (LIV) or CPT violation are found in string theory [7,8], while LIV could also arise in supersymmetry together with CPT violation [9] and even without it [10]. An example of a simple non-local field theory of CPT violation can be found in Ref. [11]. Generally speaking, theories that endow the vacuum with a non-trivial space-time dependence also lead to violation of either Lorentz invariance, CPT symmetry, or both. Lorentz invariance can furthermore be violated isotropically or in a specific direction. Effects of LIV and CPT violation are typically studied in the effective-field-theory framework, the most general one being the famous Standard Model Extension [12], in which the SM Hamiltonian is expanded to arbitrary dimensions. Evidence of LIV or CPT-odd operators has been searched for in many experiments, from atmospheric neutrino fluxes to gravitational waves [13][14][15][16][17], which have led to very stringent constraints, especially for high-order operators [18,19].
In this work, we study the prospects of uncovering LIV physics in neutrino oscillation experiments. Neutrino oscillations stand out as the first direct evidence of physics beyond the SM, making neutrino experiments an ideal platform to look for new physics. In recent years, significant advances have been made in the precision measurements of the standard oscillation parameters: θ13 has been measured at the few-percent level at 90% confidence level (CL) in reactor experiments, θ23 and |Δm²₃₁| in atmospheric and long-baseline neutrino experiments, and θ12 and Δm²₂₁ in a combination of solar and reactor neutrino experiments [20]. At the same time, observations of neutrino anomalies and tensions in oscillation data [21][22][23][24] have seeded an intense discussion over whether new physics is in play [25][26][27][28]. We investigate the tension in the long-baseline neutrino experiments T2K and NOνA [29,30], where recent data have shown contradicting results on the parameters θ23 and δCP. We study the parameter discrepancies in the T2K and NOνA data in light of non-minimal LIV, focusing on the less constrained dimensions 4, 5 and 6. It is shown in this work that isotropic LIV can notably alleviate the tension observed in three-neutrino mixing. We also show how the recent muon g−2 measurement [31] could simultaneously arise from LIV operators when non-isotropic coefficients are present.
This article is organized as follows: In section II, we provide a brief overview of the theoretical formalism of LIV effects in neutrino oscillations and the muon g−2. In section III, we describe the experimental data and numerical methods adopted in this work. The results of fitting LIV coefficients to the T2K and NOνA data are presented in section IV. Concluding remarks are given in section V.
II. LORENTZ INVARIANCE VIOLATION IN NEUTRINO OSCILLATIONS
In this section, we review the theoretical framework used to study LIV in neutrino oscillations. We first consider the isotropic LIV effects in section II A. The non-isotropic implementation of LIV is then presented in section II B. Finally, we discuss the implications for the LIV coefficients from the recent muon g−2 measurement in section II C.
A. Isotropic violation
In the following, we summarize the effective Hamiltonian describing neutrino oscillations under Lorentz invariance violation. The method described in this section was originally proposed in [18] and further developed in [32][33][34]. We focus on LIV terms that appear in the kinetic term. The generalized kinetic term of the neutrino Lagrangian is written in terms of coefficient tensors of rank D − 4, which parametrize the size of the LIV, and the Pauli matrices σ^μ.
In this work, we mainly focus on the isotropic part of the Lorentz violation. The isotropic sector is generally subject to less stringent bounds [19], which makes it more interesting to probe in experiments. Hence, we will set μ_i = 0 unless stated otherwise. Under the isotropic hypothesis, there is no violation related to the direction of the momentum, and the neutrino propagation Hamiltonian can therefore be written as a sum of two terms,

$$H = H_0 + H_{\rm LIV},$$

where H_0 stands for the standard Hamiltonian consistent with the SM and H_LIV contains the Lorentz-violating terms.
In the flavour basis, the standard Hamiltonian is given by

$$H_0 = \frac{1}{2E}\, U \,\mathrm{diag}\!\left(0,\ \Delta m^2_{21},\ \Delta m^2_{31}\right) U^\dagger + \mathrm{diag}\!\left(\sqrt{2}\,G_F N_e,\ 0,\ 0\right),$$

where U is the leptonic mixing matrix, G_F the Fermi constant and N_e the electron number density. We simplify the notation to γ^{0...0}_{αβ} ≡ γ^(D)_{αβ}; these are complex parameters with phases φ^(D). In the following, we mainly study the LIV parameters in the flavour basis, where γ^(D) = |γ^(D)| e^{−iφ^(D)}.
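As an illustration of how such a two-term Hamiltonian enters an oscillation calculation, the sketch below evolves a muon neutrino through H = H_0 + H_LIV and extracts P(ν_μ → ν_e). It is a minimal toy, not the GLoBES machinery used in the paper: the mixing angles are rounded benchmark values, matter effects are omitted, and the LIV term is assumed to scale as E^{D−3} γ^(D) in the flavour basis, a convention borrowed from the SME literature rather than recovered from the garbled equations above.

```python
import numpy as np
from scipy.linalg import expm

# Benchmark oscillation parameters (illustrative values only)
th12, th13, th23 = 0.59, 0.15, 0.84          # mixing angles [rad]
dm21, dm31 = 7.4e-5 * 1e-18, 2.5e-3 * 1e-18  # mass splittings [GeV^2]

def pmns(t12, t13, t23, dcp=0.0):
    """Standard-parametrization PMNS matrix."""
    s12, c12 = np.sin(t12), np.cos(t12)
    s13, c13 = np.sin(t13), np.cos(t13)
    s23, c23 = np.sin(t23), np.cos(t23)
    e = np.exp(-1j * dcp)
    return np.array([
        [c12 * c13,                 s12 * c13,                 s13 * e],
        [-s12 * c23 - c12 * s23 * s13 / e,  c12 * c23 - s12 * s23 * s13 / e,  s23 * c13],
        [s12 * s23 - c12 * c23 * s13 / e,  -c12 * s23 - s12 * c23 * s13 / e,  c23 * c13],
    ])

def prob_mu_to_e(E_GeV, L_km, gamma_D=None, D=5):
    """P(nu_mu -> nu_e) in vacuum, plus an assumed isotropic LIV term."""
    U = pmns(th12, th13, th23)
    H = (U @ np.diag([0.0, dm21, dm31]) @ U.conj().T / (2 * E_GeV)).astype(complex)
    if gamma_D is not None:                      # hypothetical flavour-basis LIV matrix [GeV^(4-D)]
        H = H + E_GeV**(D - 3) * gamma_D
    L = L_km * 5.068e18                          # km -> GeV^-1 (natural units)
    amp = expm(-1j * H * L)
    return abs(amp[0, 1])**2                     # e-row, mu-column

g5 = np.zeros((3, 3), complex); g5[2, 2] = 3.58e-32   # gamma^(5)_{tautau} [GeV^-1]
print(prob_mu_to_e(0.6, 295), prob_mu_to_e(0.6, 295, gamma_D=g5))
```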
B. Non-isotropic violation
In the case of non-isotropic Lorentz invariance violation, the strength of the LIV effect depends on the direction of the propagating neutrino. In this case, the Hamiltonian can be decomposed into two parts [35], H = H_0 + δH, where H_0 conserves Lorentz invariance and δH violates it. Assuming the neutrino propagates in the direction x, the conserving part is built from the neutrino mass matrix M_{αβ} (α, β = e, μ, τ). The perturbation introducing Lorentz invariance violation is in turn parametrized by coefficients d^{μj}_{αβ}, giving the strength of the Lorentz violation in the direction x_j (j = 1, 2, 3). We assume the neutrino propagation to be aligned with the direction x_3 and keep only the axial-current operator associated with γ_5 γ^μ. As we shall see in the following section, the parameter d^{μj}_{αβ} can have profound implications in the calculation of the muon anomalous magnetic moment.
In order to derive the evolution equation for neutrinos, the neutrino mass matrix M must be diagonalized. This can be accomplished in the relativistic limit |p| ≫ m_i, where p^μ ≈ (|p|, −p). In this case the evolution equation takes the Schrödinger-like form [36] i ∂_t ψ_i = (H_0 + δH) ψ_i, where ψ_i is the mass eigenstate of the Hamiltonian H = H_0 + δH. It is also convenient to express coordinates in the Sun-centered inertial frame [35], in which Θ denotes the colatitude [37] on the Earth. Neutrino experiments with large variation in colatitude can therefore provide sensitive probes of LIV of this kind.
In neutrino oscillations, we introduce non-isotropic LIV into the oscillation probabilities. To do this, we include the non-isotropic coefficient d^{zt} from equation (8) in addition to the isotropic LIV parameters γ^(D) in the Hamiltonian (3).

C. Implications of the (g − 2)_μ measurement

In this subsection, we calculate the contribution to the anomalous muon magnetic moment from Lorentz invariance violation. It was recently reported by the g − 2 collaboration that the muon magnetic moment differs from the Standard Model prediction by 4.2σ CL [31]. We show in the following that this value can be accommodated with the non-isotropic LIV coefficient d^{zt} in mass dimension D = 5.
The general Lagrangian responsible for LIV in the muonic sector [38] is parametrized by the coefficients a_α, b_α, c_{αβ}, d_{αβ} and H_{αβ}, with D_α = ∂_α + iqA_α the covariant derivative. Whereas a_α and b_α are CPT-odd, the other coefficients are CPT-even. Most of these parameters are strictly constrained by previous experiments [19].
The general contribution to the muon g − 2 frequency [39] is a harmonic expansion whose terms involve the energy factor E^{D−3} in dimension D = 3, 4, 5, ..., the Earth's orbital frequency ω_⊕ around the Sun, and factors Ȟ^D_{nlm} and ǧ^D_{nlm} that are combinations of the parameters b_μ, d_{αβ} and H_{αβ}. The function G_{jm} = G_{jm}(Θ), on the other hand, depends on the colatitude Θ and vanishes in the isotropic case j = 0. LIV must therefore be non-isotropic to explain the anomalous muon g − 2 value.
In the case of non-isotropic LIV, the contribution to muon g − 2 arises from mass dimension D = 5. When j = 1, m = 0, the relevant geometric factor is G_{10}(Θ) = 0.5 √(3/π) cos Θ. The total correction to the SM value of the muon magnetic moment a_μ then follows from equation (12), where B is the magnetic field. The g − 2 collaboration [31] reported a value of Δa_μ = 251 × 10⁻¹¹ in their recent measurement, acquired in a magnetic field B = 1.45 T at colatitude Θ = 48.2° [40]. This correction to the muon g − 2 can be obtained with the non-isotropic LIV coefficient d^{zt} = −1.7×10⁻²⁵ from equation (12). Such a coefficient is well within the present experimental bounds from the neutrino sector [19]. For more details about the g − 2 measurement, see also Refs. [41][42][43][44][45].
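Equation (12) itself is not recoverable from this excerpt, but the purely geometric and kinematic pieces of the statement above can be checked numerically. The sketch below evaluates G_{10}(Θ) at the quoted colatitude and the standard anomalous-precession frequency ω_a = a_μ (e/m_μ) B at B = 1.45 T; the mapping from d^{zt} to Δa_μ is left symbolic, since it depends on the unrecovered prefactors of equation (12).

```python
import math

# Geometric factor G_10 (the j=1, m=0 spherical-harmonic weight)
theta = math.radians(48.2)                 # colatitude of the g-2 experiment
G10 = 0.5 * math.sqrt(3 / math.pi) * math.cos(theta)
print(f"G10(48.2 deg) = {G10:.3f}")        # ~0.326

# Anomalous precession frequency omega_a = a_mu * (e/m_mu) * B
a_mu = 1.16592e-3                          # measured muon anomaly
e = 1.602176634e-19                        # elementary charge [C]
m_mu = 1.883531627e-28                     # muon mass [kg]
B = 1.45                                   # storage-ring field [T]
omega_a = a_mu * e / m_mu * B
print(f"omega_a / 2pi = {omega_a / (2 * math.pi) / 1e3:.1f} kHz")   # ~229 kHz

# Fractional shift implied by the reported anomaly
print(f"Delta a_mu / a_mu = {251e-11 / a_mu:.2e}")                  # ~2.2e-6
```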
III. DESCRIPTION OF THE NEUTRINO OSCILLATION DATA
In the present work, we focus on the analysis of neutrino oscillation data from the currently running Tokai-to-Kamioka (T2K) and NuMI Off-axis ν_e Appearance (NOνA) experiments. T2K and NOνA are long-baseline accelerator-based experiments in which intense beams of neutrinos and antineutrinos are created by colliding protons on a fixed target. We focus on the recent data releases from the T2K and NOνA collaborations reported in Refs. [29] and [30], respectively. A particular area of interest is the observed discrepancy in the θ23 measurement, which has shown a tension between the T2K and NOνA oscillation data according to an analysis in the standard neutrino mixing picture. In this section, we briefly review the experimental data and the analysis methods used in our work.
A. T2K experiment
The Tokai-to-Kamioka (T2K) experiment is one of the two currently running long-baseline neutrino oscillation facilities. T2K uses a proton accelerator of 750 kW average output to generate muon neutrino and antineutrino beams. Starting from the J-PARC campus in Tokai, Japan, the neutrino beam traverses 295 km to Kamioka, where it is met by the Super-Kamiokande neutrino detector. The neutrino beam is also monitored at the near and intermediate detector facilities ND280 and INGRID, respectively. Super-Kamiokande and ND280 are both located 2.5° off the beam axis, where the observed neutrinos have energies of mainly about 600 MeV. The beam polarity can be switched between muon neutrino and antineutrino modes, with the initial strategy of dividing the operational time into 2 years in neutrino mode and 6 years in antineutrino mode. A second run is planned for the T2K experiment (T2K-II), with the aim of continuing the successful run of the first stage.
The neutrino beam used in T2K consists predominantly of muon neutrinos (96%-98%), accompanied by smaller components of beam-related backgrounds. The corresponding antineutrino beam has a similar composition, with muon antineutrinos taking the majority. Most neutrinos and antineutrinos interact via charged-current quasi-elastic (CCQE) interactions, with a small but observable chance for resonant charged-current pion production (CC1π). The neutrino data collected in Super-Kamiokande is typically reported in five different samples: two appearance channels measuring ν_μ → ν_e and ν̄_μ → ν̄_e oscillations via CCQE interactions and two disappearance channels ν_μ → ν_μ and ν̄_μ → ν̄_μ. There is also a third appearance channel dedicated to ν_μ → ν_e oscillations observed via the ν_e CC1π⁺ interaction. All neutrino events are reconstructed from the Cherenkov light emitted when charged particles traverse the water volume of the neutrino detectors.
In this work we analyse the neutrino data collected in the first phase of T2K between the years 2009 and 2018. In Refs. [29,46,47], the T2K collaboration reported neutrino oscillation data from 3.13×10²¹ protons-on-target (POT). The collected data contain the information on the reconstructed neutrino and antineutrino events from the combined neutrino and antineutrino runs.

B. NOνA experiment

The NuMI Off-axis ν_e Appearance (NOνA) experiment is the second of the two long-baseline neutrino oscillation experiments currently collecting data. Based in the United States, NOνA generates beams of muon neutrinos and antineutrinos and sends them 810 km through the Earth. The neutrino source is based on the NuMI beamline at Fermilab, Illinois, which produces neutrinos with an average beam power of 700 kW. The detector facilities in NOνA are a near detector located 1 km from the beam facility and a far detector stationed at an underground laboratory in Ash River, Minnesota. Both detectors are placed 0.8° off-axis from the source. In contrast to T2K, the neutrinos and antineutrinos produced in NOνA spread over a wide range of energies around 2 GeV. Neutrino interactions observed in NOνA therefore consist of various types of charged-current (CC) interactions.
The neutrino and antineutrino data analysed in this work are based on the events collected in the NOνA far detector in 2014-2020 [30]. The considered data sets were acquired in the far detector, which is a segmented detector consisting of alternating planes of PVC scintillator. Neutrino detection in NOνA is based on the scintillation light emitted by the charged particles created in neutrino-nucleus interactions. The data consist of six different samples, which correspond to 12.5×10²⁰ POT in neutrino mode and 13.6×10²⁰ POT in antineutrino mode. There are four electron-like samples characterising ν_μ → ν_e and ν̄_μ → ν̄_e oscillations and two muon-like samples describing ν_μ → ν_μ and ν̄_μ → ν̄_μ disappearance. The electron-like samples are assigned to two categories based on the purity of each event: low-CNN_evt and high-CNN_evt, where CNN_evt stands for the convolutional neural network used in particle identification. The muon-like events are split into 19 unequally spaced energy bins in the range [0.75, 4.0] GeV, whereas the electron-like events are distributed in 6 equal-size bins over the [1.0, 4.0] GeV interval. There is also the so-called peripheral sample included in NOνA, which is used to increase the number of pure electron-like events. In this work, we consider the samples for the electron-like, muon-like and peripheral events while assuming a 14 kton fiducial mass in the far detector.
C. Numerical analysis
The analysis of the T2K and NOνA data is based on the χ² method. The numerical analysis conducted in this work is done with the General Long Baseline Experiment Simulator (GLoBES) [48,49], which has been modified to calculate neutrino evolution with Lorentz invariance violation. The essential parameters as well as the collected neutrino events are summarized for the T2K and NOνA experiments in table I.
The neutrino data is analysed with a Poissonian pull-type χ² function of the standard form

$$\chi^2 = \min_{\zeta}\; \sum_i 2\left[T_i(\zeta) - O_i + O_i \ln\frac{O_i}{T_i(\zeta)}\right] + \frac{\zeta_{sg}^2}{\sigma_{sg}^2} + \frac{\zeta_{bg}^2}{\sigma_{bg}^2} + \chi^2_{\rm prior},$$

where the index i = 1, 2, ... runs over the energy bins. Here O_i and T_i stand for the observed and theoretical (predicted) events in the far detectors of T2K and NOνA. The nuisance parameters ζ_sg and ζ_bg reflect systematic uncertainties in the signal and background events, respectively. The systematic uncertainties are addressed with the so-called pull method [50]: they influence the predicted events T_i in the neutrino detectors through a simple shift,

$$T_i = N^{sg}_{i,d}\,(1 + \zeta_{sg}) + N^{bg}_{i,d}\,(1 + \zeta_{bg}),$$

where N^{sg}_{i,d} and N^{bg}_{i,d} denote the predicted signal and background events, respectively. The prior function is defined as Gaussian distributions for each of the standard neutrino oscillation parameters.
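As a minimal sketch of the pull method just described, with two nuisance parameters and Gaussian penalties (the actual analysis uses GLoBES with per-channel pulls and oscillation-parameter priors, so treat this as illustrative only):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import xlogy

def pulled_chi2(observed, sig, bkg, sigma_sg=0.05, sigma_bg=0.10):
    """Poissonian chi^2 of the eq. (13) type, minimized over two pull parameters."""
    def chi2(zeta):
        z_sg, z_bg = zeta
        T = sig * (1 + z_sg) + bkg * (1 + z_bg)        # systematics shift the prediction
        stat = 2.0 * np.sum(T - observed + xlogy(observed, observed / T))
        return stat + (z_sg / sigma_sg)**2 + (z_bg / sigma_bg)**2
    return minimize(chi2, x0=[0.0, 0.0]).fun

rng = np.random.default_rng(1)
sig = np.linspace(40.0, 5.0, 10)                       # toy signal prediction, 10 bins
bkg = np.full(10, 3.0)                                 # toy flat background
obs = rng.poisson(sig * 1.03 + bkg)                    # pseudo-data with a 3% excess injected
print(f"chi2_min = {pulled_chi2(obs, sig, bkg):.2f}")
```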
Calculation of the theoretically predicted events T_i is performed entirely by GLoBES. The number of events can be described with the formula

$$T_i = N^{\rm nucl}_T \int\!\!\int dE\, dE'\, \phi(E)\, \sigma(E)\, R(E, E')\, P_{\nu\to\nu'}(E),$$

where the properties of the neutrino experiment are integrated over the true energy E and the reconstructed energy E'. The prefactor N^nucl_T is independent of energy and is defined by the number of nucleons in the neutrino detector, the operational time of the experiment and the detector efficiency, respectively. The integrand, on the other hand, is formed by the neutrino flux φ(E), the cross-section σ(E), the energy resolution function R(E, E') and the oscillation probability P_{ν→ν'}(E). In this work, we adopt the neutrino fluxes and cross-sections for T2K and NOνA from Refs. [51,52] and [53][54][55], respectively.

[Table I residue: data sources Ref. [29] (T2K) and Ref. [30] (NOνA).]
One very important element in the analysis of the neutrino oscillation data is the detector response associated with the T2K and NOνA experiments. In the event calculation, the detector response is mainly represented by the function R(E, E'), which relates the incident and reconstructed neutrino energies E and E' through a Gaussian of width σ_res(E). We analyse the T2K and NOνA far detector data using a modified energy resolution function, in which the Gaussian width is given by σ_res(E) = αE + β√E. We furthermore introduce an additional phase shift γ in the detector response function R(E, E'). The energy resolution function is determined for T2K and NOνA by fitting the parameters α, β and γ recursively for each channel until the correct spectral shape is achieved. Remaining inconsistencies between the official data and the prediction of GLoBES are mitigated with channel- and bin-based efficiencies.
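The following sketch shows how such a smearing enters the event-rate integral in practice: a toy flux-times-cross-section spectrum is folded with a Gaussian kernel of width σ_res(E) = αE + β√E and binned in reconstructed energy. The numbers (α, β, the spectrum, the oscillation dip) are invented for illustration and are not the fitted T2K/NOνA values.

```python
import numpy as np
from scipy.special import erf

alpha, beta = 0.05, 0.08                      # toy resolution parameters

def sigma_res(E):
    return alpha * E + beta * np.sqrt(E)

def smeared_events(E_true, rate_true, edges):
    """Fold a true-energy event rate through Gaussian smearing into reco-energy bins."""
    dE = E_true[1] - E_true[0]
    counts = np.zeros(len(edges) - 1)
    for E, r in zip(E_true, rate_true):
        s = sigma_res(E)
        cdf = 0.5 * (1 + erf((edges - E) / (np.sqrt(2) * s)))  # response R(E, E')
        counts += r * dE * np.diff(cdf)
    return counts

E = np.linspace(0.2, 4.0, 400)                            # true energy grid [GeV]
rate = np.exp(-((E - 2.0) / 0.8)**2)                      # toy flux x cross-section
rate *= 1 - 0.9 * np.exp(-((E - 1.8) / 0.3)**2)           # toy oscillation dip
edges = np.linspace(0.75, 4.0, 20)                        # 19 reco bins, as in the mu-like sample
print(smeared_events(E, rate, edges).round(2))
```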
In order to compute the oscillation probabilities P_{ν_l → ν_l'} (l, l' = e, μ, τ) with general Lorentz invariance violation, a custom-made probability code is adopted to include the calculation of isotropic and non-isotropic LIV effects in GLoBES. It should be noted that Lorentz invariance violation of higher dimensions could in principle influence the neutrino fluxes φ(E). In this work, however, we assume the effect on the fluxes to be small and to fall within the present uncertainties of the neutrino fluxes.
The systematic uncertainties used in the χ² function (13) are one of the key characteristics of the analysis of the neutrino oscillation data. In the analysis of the T2K data, we impose a 5% systematic uncertainty on the signal events undergoing CCQE interactions. The corresponding background events are addressed with a 10% systematic uncertainty. Identical systematic uncertainties are used for the events associated with CC1π. This choice of priors for the nuisance parameters is found to reproduce the fit results reported by the T2K Collaboration in Ref. [29] with sufficient accuracy. In a similar manner, we implement systematic uncertainties on each channel considered in the NOνA experiment. In the analysis of the NOνA far detector data, the systematic uncertainties are driven by the detector calibration, which amounts to about a 5% systematic uncertainty [30]. We treat the electron-like samples with low and high CNN_evt with the same pull parameters. The systematic uncertainties used here are found to be adequate to reproduce the official results reported by the NOνA Collaboration in Ref. [30].
IV. NUMERICAL RESULTS
We present the results of our numerical analysis in this section. In this work, we investigated effects of general Lorentz invariance violation (LIV) in dimensions D = 4, 5 and 6. The effects were studied in the context of the θ 23 discrepancy recently observed in T2K and NOνA. In the following, we examine whether LIV could alleviate the observed tension between the recent data in T2K and NOνA and improve the fit to the standard neutrino oscillation parameters. At the same time, we study the effect of LIV in the anomalous muon magnetic moment. As we shall see in this section, isotropic LIV with dimension D = 5 provides the best fit result to the T2K and NOνA data while its non-isotropic version is simultaneously able to resolve (g − 2) µ .
We begin our investigation with the isotropic Lorentz invariance violation. The analysis of the T2K and NOνA far detector data is carried out in dimensions D = 4, 5 and 6. The first goal of this study is to identify the LIV parameters that lead to a significant improvement in the goodness-of-fit to the T2K and NOνA data. The second goal is to determine the resulting effect on the fit values of sin²θ23 and δCP. The analysis of the T2K and NOνA data is carried out as follows. Using the methods described in section III, the far detector data of the two experiments is fitted with a χ² function. We fix the solar parameters θ12 and Δm²₂₁ to their respective best-fit values 33.4° and 7.4×10⁻⁵ eV², following the most recent global fit [20]. We also impose a prior from the reactor experiments: sin²2θ13 = 0.0857±0.0046. The χ² distribution is computed for the parameters γ^(D) = |γ^(D)| exp(−iφ^(D)), keeping one LIV parameter free at a time. To quantify the difference with respect to the standard fit, we also obtain the χ² distribution corresponding to the case where all LIV parameters are kept at zero.
The fit results for sin²θ23 and δCP from the joint analysis of the T2K and NOνA data are presented in table II. The LIV parameters γ^(D) were analysed one by one in dimensions D = 4, 5 and 6. In each row, the denoted parameter was allowed to vary freely in order to obtain the fit result. The resulting improvement to the fit is listed in the Δχ² column, where Δχ² = χ²_SM − χ²_LIV is calculated from the χ² values attributed to the standard and LIV-influenced fits, respectively. Finally, the significance column indicates the confidence level (CL) at which the fit obtained in the presence of the LIV parameter is favoured over the standard fit. The best fit result is obtained with the parameter γ^(5)_ττ, which yields an enhancement of about 2.4σ CL in statistical significance. All results are consistent with the experimental bounds reported in the literature [19], including the stringent bounds from IceCube [56]. The statistical significance is computed using Wilks' theorem [57]. For the caveats concerning the statistical analysis of T2K and NOνA data, see Ref. [58].
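For a single extra parameter, Wilks' theorem maps the quoted Δχ² improvements onto Gaussian significances; the one-liner below reproduces the ~2.4σ figure (assuming one effective degree of freedom, which is our reading of the text rather than a stated choice of the authors):

```python
from scipy.stats import chi2, norm

def significance(delta_chi2, dof=1):
    """Two-sided Gaussian significance of a chi^2 improvement (Wilks' theorem)."""
    p = chi2.sf(delta_chi2, dof)
    return norm.isf(p / 2)

print(f"{significance(5.76):.2f} sigma")   # ~2.40 for Delta chi^2 = 5.76, 1 dof
```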
The effects of Lorentz invariance violation on the θ23 and δCP measurements in the T2K and NOνA experiments are shown in Fig. 1. The standard fits to the T2K and NOνA data are illustrated with solid red and blue lines, respectively, whereas the fits obtained with Lorentz invariance violation are indicated with dashed lines. The discrepancy in the θ23 measurements is evident in the standard fits, which are clearly separated in T2K and NOνA. This tension is partially removed with the introduction of the LIV coefficient γ^(5)_ττ. Imposing LIV in the fit leaves the T2K fit mostly unaffected but shifts the NOνA fit significantly towards the θ23 value preferred by the T2K data, resulting in an alleviation of the tension between the T2K and NOνA measurements. The tension between the T2K and NOνA data can also be seen in the fit results on δCP, which show a difference of nearly Δχ² ≈ 5 at the value currently preferred by the T2K data. We finally make a remark on the effect of non-isotropic Lorentz invariance violation in neutrino experiments. As pointed out in section II B, the recently measured anomalous muon magnetic moment [31] could be explained with the non-isotropic LIV coefficient d^{zt} = −1.7×10⁻²⁵. We investigated the potential effect of this solution in the long-baseline experiments T2K and NOνA. Our results show the non-isotropic coefficient to be too small to induce notable effects in T2K and NOνA [59]: the value of d^{zt} required to satisfy (g − 2)_μ leads to a correction of only Δχ²_min ∼ 10⁻³ in the fit result. It is therefore possible to recover the measured (g − 2)_μ value with non-isotropic LIV and alleviate the tension on θ23 in T2K and NOνA at the same time.
V. CONCLUSIONS
Lorentz invariance violation could have profound effects on the interpretation of physical observations in laboratories and astrophysical environments. In the present work, we have investigated non-minimal LIV as a potential solution to the recently observed tension in the measurement of the atmospheric mixing angle θ23 and the Dirac CP phase δCP in the long-baseline neutrino experiments T2K and NOνA. To this end, we interpreted the recently published experimental data in terms of isotropic and non-isotropic LIV effects.
In contrast to the previous studies conducted on the topic, we studied the relatively unexplored Lorentz invariance violation in dimensions D = 4, 5 and 6. Investigating the isotropic effect on the fits to the T2K and NOνA data, we found that the diagonal parameters γ^(D)_ℓℓ (ℓ = e, μ and τ) could resolve the tension at about 0.4-2.4σ confidence level when one parameter is considered at a time. The extracted fit results are consistent with the existing bounds on Lorentz invariance violation in the neutrino sector [19]. We found the best fit with γ^(5)_ττ = 3.58×10⁻³² GeV⁻¹. Lorentz invariance violation is also able to explain the recent results on the anomalous muon magnetic moment reported by the Muon g − 2 Collaboration [31]. The measured value of (g − 2)_μ could be generated by non-isotropic Lorentz invariance violation with a directional coefficient d^{zt} ≈ −1.7×10⁻²⁵. Isotropic Lorentz invariance violation, on the other hand, has no effect on the muon magnetic moment. We estimated the impact of this specific non-isotropic Lorentz invariance violation on the neutrino oscillations in T2K and NOνA, and find its effect to be indistinguishable in T2K and NOνA due to the relatively small difference in colatitude between their experimental setups. Though neutrino oscillations and muon g − 2 do not favour the same type of LIV, it is noteworthy that both solutions can exist simultaneously.
In summary, we have shown that non-minimal Lorentz invariance violation can notably alleviate the θ23 discrepancy in the T2K and NOνA data and simultaneously give rise to the anomalous muon magnetic moment. Respecting the present experimental bounds, we note that the significance for isotropic LIV can be as large as 2.41σ CL. Our results place a mild preference on dimension-5 LIV. We look forward to future long-baseline neutrino oscillation data to check whether the observed discrepancy is due to the violation of Lorentz invariance. In the present work, we have conducted the analysis of the T2K and NOνA data assuming normal ordering for the neutrino masses. For completeness, we now consider the results in the case of inverted ordering and discuss the implications for the sensitivity to the neutrino mass ordering. The standard three-neutrino oscillation scenario is represented by the red regions, which correspond to the case where no Lorentz invariance violation takes place, with inverted mass ordering assumed. Isotropic LIV is taken into account in the blue regions, which were obtained by letting γ^(5)_ττ vary freely as the only LIV parameter, whilst the other LIV parameters are fixed at zero. We have similarly performed the analysis for LIV parameters in dimensions D = 4 and 6, finding analogous results.
The effect of LIV on the sensitivity to the neutrino mass ordering is studied by comparing the fit results in both neutrino mass orderings. The results are presented in Figure 4, where the fit results are shown separately for T2K and NOνA in the left and right panels, respectively. While the fit results associated with normal ordering (NO) are shown in blue, the results corresponding to inverted ordering (IO) are indicated in red. The sensitivity to the neutrino mass ordering can be seen from the relative difference of Δχ² between the NO and IO curves. In the standard oscillation scenario (solid curves), the difference between NO and IO is more significant in NOνA than in T2K; the higher sensitivity in NOνA arises mainly from the longer baseline of the experiment. When the LIV effect is taken into account (dashed curves), the LIV parameter γ^(5)_ττ is allowed to vary freely. In this case, the difference between NO and IO is nearly halved for most values of δCP, indicating a significant loss in mass hierarchy discrimination. | 2021-11-30T02:16:13.533Z | 2021-11-29T00:00:00.000 | {
"year": 2021,
"sha1": "969916a6a083d58235649d12680d96979b64f1c1",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "969916a6a083d58235649d12680d96979b64f1c1",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
73476804 | pes2o/s2orc | v3-fos-license | Current Therapeutic Results and Treatment Options for Older Patients with Relapsed Acute Myeloid Leukemia
Considerable progress has been made in the treatment of acute myeloid leukemia (AML). However, current therapeutic results remain unsatisfactory in untreated high-risk patients and are poorer still in those with primary refractory or relapsed disease. In older patients, reluctance by clinicians to treat unfit patients, higher AML cell resistance related to more frequent adverse karyotype and/or antecedent myelodysplastic syndrome, and the preferential involvement of chemorefractory early hemopoietic precursors in the pathogenesis of the disease further account for the poor prognosis, with median survival lower than six months. A general agreement exists concerning the administration of aggressive salvage therapy in young adults followed by allogeneic stem cell transplantation; on the contrary, different therapeutic approaches varying in intensity, from conventional salvage chemotherapy based on intermediate- to high-dose cytarabine to best supportive care, are currently considered in the relapsed, older AML patient population. Both patients' characteristics and physicians' attitudes factor into clinical decision making. In addition, several new drugs with clinical activity described as "promising" in uncontrolled single-arm studies failed to improve long-term outcomes when tested in larger randomized clinical trials. Recently, new agents have been approved and are expected to consistently improve the clinical outcome for selected genomic subgroups, and research is in progress in other molecular settings. While relapsed AML remains a tremendous challenge to both patients and clinicians, knowledge of the molecular pathogenesis of the disease is progressing rapidly, potentially leading to personalized therapy for most patients.
Introduction
Acute myeloid leukemia (AML) occurs more commonly in elderly people. Median age at diagnosis is over 65 years, and the incidence progressively increases with age, such that more than 40% of patients are currently diagnosed over the age of 70 [1][2][3]. Apart from accrual into experimental trials, therapeutic options in these patients include intensive chemotherapy (ICT) followed by allogeneic stem cell transplantation (SCT), hypomethylating agents (HMAs) such as azacytidine (AZA) and decitabine (DAC), low-dose cytarabine (LDARAC), and best supportive care (BSC), including hydroxyurea for the control of leukocytosis and transfusion support [4][5][6]. New treatment options, including venetoclax in combination with HMAs or LDARAC [7] and glasdegib in combination with LDARAC [8], have recently been approved by the Food and Drug Administration (FDA) for the treatment of newly diagnosed patients aged over 75 years and/or ineligible for intensive induction chemotherapy.
Intensive Salvage Chemotherapy and Hypomethylating Agents
Overall, ICT should be reserved for the small minority of patients not allografted in CR1 for various reasons, in whom allogeneic transplant is feasible once CR2 has been achieved. There is increasing evidence supporting the utility of SCT in fit, older patients after CR achievement following intensive induction or HMA; accordingly, eligible patients should receive the procedure in CR1 with curative intent [26][27][28]. As a consequence, SCT in CR2 is unavoidably applicable in only a negligible minority of cases. This further limits the role of salvage ICT, which should be reserved for patients with ELN-favorable criteria and CR1 lasting for more than one year, or for the very small number of patients for whom allogeneic SCT had been planned as part of the treatment program at diagnosis and who have not experienced prohibitive toxicity during first-line therapy.
The role of HMAs is well-established in the frontline treatment of older patients with AML, including bridging to transplant [29], while no randomized trials have been performed in refractory or relapsed disease. In the absence of prospective studies, different retrospective observations have demonstrated the potential utility of either AZA or DAC. Most relevant data derive from the analysis of a large international multicenter retrospective database, focusing on the effectiveness of HMA as well as on predictors of response and overall survival (OS). A total of 655 patients, including 290 refractory and 365 relapsed patients, were given AZA (57%) or DAC (43%). Median age at diagnosis was 65 years. The CR rate (CR + CRi) was 16%, while hematologic improvement was observed in 8.5%. Median OS was 6.7 months and strictly related to best response achieved (25.3 months for patients achieving CR and 14.6 months for CRi). The presence of more than 5% of blasts in peripheral blood and >20% blasts in the bone marrow were significantly associated with shorter OS in the multivariate analysis, while a 10-day schedule of DAC induced a higher response rate [30].
Similarly, we retrospectively reviewed the clinical records of 79 patients treated with HMAs as salvage therapy at nine institutions in Italy, with a median age of 64 and with secondary AML in 29%. According to ELN criteria, 10 patients were favorable-risk, 35 were intermediate-, and 30 were adverse-risk. All patients had been given ICT at the onset of disease; 61% of patients received HMA as second-line therapy, 26% as third-line, and 13% beyond the third line. Note that 18% of patients had received SCT before HMA. The overall response rate (CR + CRi) was 18%. Median OS in patients with relapsed disease was 14.9 months vs. 5.1 for refractory ones (Figure 3). Best results were observed in the 46% of patients who showed either CR, CRi, hematologic improvement, or stable disease after salvage HMA [31]. These data seem to compare favorably with intensive salvage chemotherapy, and suggest that HMAs represent an acceptable therapeutic option for the selected population of relapsed elderly patients, especially those previously treated with ICT and not suitable for allogeneic bone marrow transplantation. Finally, AZA has been proven as potentially useful for relapse after SCT in either AML or MDS, with a response rate and survival which do not substantially differ from those reported after ICT, and would represent the preferred option in this adverse subset of elderly patients with relapsed disease after SCT [32]. An emerging clinical challenge concerns the treatment of older AML patients who relapse after CR or progress after any response following initial therapy with HMAs. For this category, we should consider that ICT in most cases was already excluded at the time of diagnosis and therefore it should be even more so at the time of relapse. Furthermore, results of ICT in patients treated with AZA for MDS who progressed to AML while on therapy are disappointing, with a low CR rate and high treatment-related mortality [33][34][35]. In the absence of a clinical trial based on the use of experimental drugs, BSC and/or hydroxyurea for the control of leukocytosis still represent the best option for this subset of patients, with the aim of improving quality of life in an outpatient setting. Recently, evidence has been provided for the use of venetoclax in combination with HMAs in patients with
refractory/relapsed AML treated outside of a clinical trial. In a small series of 33 consecutive adults with a median age of 62 years, of whom 20 out of 33 (61%) had been pretreated with HMAs, CR + CRi accounted for 51%, with a median survival of six months [36].
FLT3 Inhibitors
In the past decade, there has been considerable progress in understanding the molecular pathogenesis of AML, which has led to the identification of potential therapeutic targets, so that selective approaches aimed at rational and personalized treatment strategies are now available [37][38][39][40]. In the past two years, several new agents for AML have become available for newly diagnosed or relapsed/refractory patients, and others are the object of clinical investigation. The most recent FDA approval for refractory/relapsed AML concerns gilteritinib (G), a potent FLT3 inhibitor, for the treatment of adult patients with relapsed or refractory AML with an FLT3 mutation as detected by an FDA-approved test. The incidence of FLT3/ITD mutations varies according to age and clinical risk group, being less common in pediatric AML and in AML arising from an antecedent myelodysplastic syndrome. However, the frequency of the mutation in the elderly accounts for more than 20%, so a non-negligible percentage of patients would benefit from FLT3 inhibitors [41][42][43]. Approval of G was based on an interim analysis of the ADMIRAL trial (NCT02421939), which included 138 adult patients with relapsed or refractory AML carrying an FLT3-ITD, D835, or I836 mutation. G was given orally at a dose of 120 mg daily until unacceptable toxicity or a lack of clinical benefit was observed. After a median follow-up of 4.6 months (range: 2.8 to 15.8), 29 patients (21%) achieved CR or CRi and 33 (31.1%) achieved red blood cell and platelet transfusion independence. Toxicity was acceptable; the most common adverse effects (>20% of patients) consisted of myalgia/arthralgia, transaminase increase, fatigue/malaise, fever, noninfectious diarrhea, dyspnea, edema, rash, pneumonia, nausea, stomatitis, cough, headache, hypotension, dizziness, and vomiting [44]. Notably, approval was based on a phase 2 non-randomized study, because available treatment options are limited and largely unsatisfactory for patients with relapsed/refractory FLT3-ITD AML. These data, along with the oral formulation, suggest the possibility of managing selected patients on an outpatient basis and make treatment with G particularly attractive in the older AML population.
More recently, the FDA granted a priority review designation to quizartinib (Q), a new FLT3 inhibitor for the treatment of adult patients with relapsed/refractory FLT3-ITD-positive AML [45]. The efficacy and safety of single-agent Q were evaluated in the phase 3 QuANTUM-R randomized trial, which compared Q with investigator's choice (IC) of therapy, including conventional salvage ICT or LDARAC. Among 367 patients randomized 2:1 (245 to Q and 122 to the control arm), the median OS was 6.2 months, with an estimated 12-month OS probability of 27% vs. 20% in the Q and IC arms, respectively; the median event-free survival (EFS) was 6.0 vs. 3.7 (95% CI, 0.4-5.9) weeks, respectively. The superiority of Q was confirmed by analyses across subgroups, including FLT3 allelic ratio, prior HSCT, AML risk score, and response to prior therapy. The CR + CRi rate was 48% in the Q arm and 27% in the IC arm (nominal p = 0.0001), and the transplant rate was 32% and 12% in the Q and IC arms, respectively. Toxicity was comparable between the two arms, and only two patients discontinued Q due to QTcF prolongation [46]. These data strongly suggest that Q may represent an important therapeutic option for older patients with refractory/relapsed AML in the near future.
IDH1/2 Inhibitors
Recurring mutations in the isocitrate dehydrogenase (IDH) genes are detected in approximately 20% of adult patients with AML and 5% of adults with MDS [47,48]. The prognostic significance of mutant IDH is controversial, but appears to be influenced by co-mutational status and the specific location of the mutation [49,50]. For relapsing AML patients harboring a mutation in IDH1 or IDH2 (IDH1/2), potential treatment options have undergone a paradigm shift away from intensive cytotoxic chemotherapy towards targeted therapy with selective inhibitors, such as enasidenib (ENA) for IDH2 or ivosidenib (IVO) for IDH1, both recently approved by the FDA [51][52][53]. In addition, the possibility of combining aggressive or attenuated chemotherapy with either ENA or IVO is currently under investigation in ongoing clinical trials. ENA was approved by the FDA for relapsed or refractory AML with an IDH2 mutation, concurrently with a companion diagnostic, the RealTime IDH2 Assay, used to detect the IDH2 mutation. Approval was based on Study AG221-C-001, an open-label, single-arm, multicenter clinical trial that accrued 199 adults with relapsed or refractory AML. Patients received ENA orally at 100 mg/day. Twenty-three percent of patients achieved CR or CRi lasting a median of 8.2 months, with 19% of patients having a CR lasting a median of 8.2 months and 4% a CRi lasting a median of 9.6 months. Notably, among 157 patients who were transfusion-dependent at the beginning of the trial, 34% no longer required transfusions during at least one 56-day period on treatment. The most common adverse reactions, occurring in more than 20% of patients, were gastrointestinal and included nausea, vomiting, diarrhea, elevated bilirubin, and decreased appetite [54].
A recent phase 1 dose-escalation clinical trial with IVO prompted FDA approval for the treatment of patients with IDH1-mutated AML in the relapsed and refractory setting, owing to favorable results [55]. In the refractory/relapsed population (179 patients), the rate of CR was 21.8% and that of CRi 11.7%. With a median follow-up of 14.8 months, the median OS in the primary efficacy population was 8.8 months; the 18-month survival rate was 50.1% among patients who had CR or CRi. Estimates of median OS were 9.3 months among patients obtaining CR and 3.9 months among patients who did not have a response. Transfusion independence was attained in 29 of 84 patients (35%). Among 34 patients who had a complete remission or complete remission with partial hematologic recovery, 7 (21%) had no residual detectable IDH1 mutations on digital polymerase-chain-reaction assay. No pre-existing co-occurring single-gene mutation predicted clinical response or resistance to treatment. Treatment-related adverse events of grade 3 or higher that occurred in at least three patients included QT-interval prolongation in 7.8% of the patients, the IDH differentiation syndrome in 3.9%, anemia (2.2%), thrombocytopenia or a decrease in the platelet count (3.4%), and leukocytosis (1.7%). These results suggest that in patients with advanced IDH1-mutated relapsed or refractory AML, IVO at a dose of 500 mg daily was associated with sustained clinical benefit, including transfusion independence, durable remissions, and molecular remissions in some patients with CR. Notably, these results compare favorably with those described for salvage intensive chemotherapy, which yields significantly lower response rates and survival [56]. The incidence of differentiation syndrome (DS) with IVO and ENA in the treatment of patients with relapsed or refractory AML has recently been evaluated through a systematic analysis by the FDA [57]. Criteria previously established for acute promyelocytic leukemia (APL) [58] were utilized, so that patients with two or three criteria were classified as having moderate DS and patients with at least four criteria as having severe DS. DS was excluded in cases with an alternative explanation (e.g., septic shock). Overall, 72/179 (40%) cases of potential DS for IVO and 86/214 (40%) for ENA were identified by FDA reviewers [56]. This contrasts with the DS incidence of 11% (19/179) for IVO [52] and 12% (26/214) for ENA reported by investigator and review committee determination, respectively [59]. Of note, for both IVO and ENA, the CR + CRi rate in patients with DS was numerically lower than that in patients without DS (IVO: 18%; ENA: 18%), while the age, demographics, and cytogenetic risk of patients with FDA-identified DS were similar to those of patients without DS. As in APL, leukocytosis was not always present. Clearly, earlier and more careful recognition of the signs and symptoms of DS may lead to earlier diagnosis and treatment, which may decrease severe complications and mortality.

NPM1 mutation confers a relatively more favorable prognosis in patients with FLT3-unmutated AML. Its frequency is higher in younger patients, but even after the age of 75, NPM1-mutated AML may account for more than 30% of cases [60,61]. The long-term efficacy of ICT in these patients is debated, and may be related to the mutational status of the leukemia, owing to the frequent co-occurrence of epigenetic mutations, most frequently in the DNMT3A gene.
Regimens other than ICT with limited toxicity, like actinomycin D or all-trans retinoic acid, have proven effective in occasional patients with relapsed or refractory disease, or in those considered unfit for ICT [62,63]. While prospective trials are underway to confirm these results, these agents may be considered as an alternative to BSC in patients with advanced NPM1-mutated AML. The molecular targets and therapeutic results of recently approved new agents for refractory/relapsed AML are summarized in Table 1.
Conclusions
Following relapse, the prognosis of patients with AML remains extremely poor and curative options are limited, especially in the older patient population. A minority of patients over 65 years are actually eligible for SCT in CR1, and significantly fewer at relapse; new strategies are therefore needed in order to improve therapeutic results. More than 15 years ago, we demonstrated that intensive salvage therapy was not indicated in the majority of relapsed older patients with AML, namely in those with a CR1 duration of less than one year and/or with unfavorable cytogenetics [64]. As shown in Figure 4, in addition to ICT for the very fit patient who is a candidate for SCT and non-intensive treatment with hypomethylating agents, for patients with FLT3 and IDH1/2 mutations we now have the possibility of offering new agents which have been found to be more effective and probably less toxic than conventional salvage CT. In addition, the oral formulation represents a substantial advantage for the older AML population because of the possibility of managing a substantial number of patients in an outpatient setting. Finally, unlike ICT, clinical benefit can be achieved with these agents even in the absence of CR, with reduction of or independence from transfusion support and improved quality of life. It should be considered that the above genetic patterns account for no more than 40% of the whole older AML patient population and that results need to be confirmed in larger patient series and in real-life studies. However, the landscape of AML treatment is undergoing dramatic evolution due to progress in understanding the molecular pathogenesis of the disease and the introduction of additional new agents either at diagnosis or at relapse. In this regard, high-risk patients, especially older patients with refractory or relapsed disease, represent an ideal field of clinical investigation. | 2019-03-11T17:16:44.867Z | 2019-02-01T00:00:00.000 | {
"year": 2019,
"sha1": "5cbf7f2878ff79cecfaa7afe417b9dc4977c0bdc",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6694/11/2/224/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5cbf7f2878ff79cecfaa7afe417b9dc4977c0bdc",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
134778431 | pes2o/s2orc | v3-fos-license | Quaternary fluvial environments in NE Morocco inferred from geochronological and sedimentological investigations
Due to the expected Early to Middle Pleistocene age of the fluvial terraces of the lower Moulouya, it was challenging to establish luminescence chronologies. Quartz and K feldspar luminescence signals of the studied deposits had reached saturation, suggesting fluvial deposition at least as early as the Middle Pleistocene (Bartz et al., 2018; Fig. 2). However, electron spin resonance (ESR) dating offered a useful alternative way to gain further chronological information. Based on the multiple-centre approach in fluvial environments (Duval et al., 2015), aluminium (Al) and titanium (Ti) centres were measured in quartz; this was cross-checked with palaeomagnetic analyses (Bartz et al., 2018). Thus, a robust geochronological framework was established for the fluvial terraces, with numerical ages dating back to the Early Pleistocene and ranging between ∼ 1.5 and ∼ 1.1 Ma (Bartz et al., 2018) (Fig. 2). Recently, Bartz et al. (2019) additionally applied thermally transferred (TT) OSL on the same strata. The single-grain TT OSL results matched well with the newly established ESR chronology and proved the lower Quaternary (Calabrian) age of the fluvial terraces (Bartz et al., 2018, 2019).
Bearing in mind that chronostratigraphies of the ephemeral stream deposits and of the pre-Holocene Moulouya fluvial terraces did not previously exist, the application of different trapped charge dating techniques in combination with palaeomagnetic research served as a valuable tool to obtain chronological information about the deposition in the different fluvial systems.
In addition to using numerical and relative dating techniques, sedimentological, geochemical, mineralogical and micromorphological analyses have been carried out to distinguish periods of enhanced flooding-aggradation from periods of relative stability favourable for pedogenesis. The Wadi Selloum gives information about morphodynamic phases in the time of the settling of AMH (Fig. 2): periods of enhanced aggradation occurred around ∼ 100, ∼ 75 and ∼ 55 ka, after the Last Glacial Maximum, and during the Holocene, whilst sedimentation ended after ∼ 1.3 ka. Pedogenesis may be used as an environmental indicator for more humid climate conditions during MIS 3 (palaeo-Calcisol), the early Holocene (Calcisol) and the late Holocene (Fluvisol) (Fig. 2).
Although palaeoenvironmental implications should be taken with caution due to the discontinuity of the ephemeral stream system, it appears that more humid and warmer climate conditions may have favoured human settling in this area. This study thus provides the first insights into the palaeoenvironmental changes around the prehistoric site of Ifri n'Ammar during the last glacial-interglacial cycle. In contrast, the absence of Middle and Late Pleistocene deposits in the sedimentary record (Fig. 2) of the lower Moulouya seems to rule out climate as the main driver for long-term fluvial evolution in that region, at least during the lower Quaternary. However, it provides valuable information on the regional tectonic history in NE Morocco (Bartz et al., 2018).
Figure 1. Relief map (based on ASTER Global DEM) of the lower Moulouya catchment including the two study areas (greyish rectangles) of the Wadi Selloum and the ∼ 20 km long investigated river reach of the Moulouya River. The red star denotes the prehistoric site of Ifri n'Ammar. (a-c) Images of the study areas. (a) Ephemeral stream Wadi Selloum in the direct vicinity of Ifri n'Ammar with up to 5 m high Holocene overbank fines (view towards the SE). (b) Footwall reach of the thrust zone showing stacked fluvial terraces of the Moulouya with Early Pleistocene gravel deposits and Holocene overbank fines (view towards N). (c) Hanging wall reach characterised by a well-preserved staircase of up to three Pleistocene Moulouya terraces above Holocene overbank fines and the modern floodplain. A clear unconformity (dashed line) between Neogene marls and Pleistocene river gravel is illustrated. Person for scale (ellipse) (view towards the NE).
Figure 2. Chronological correlation between the on-site archive (occupation phases of the rock shelter Ifri n'Ammar) and the off-site archives (ephemeral stream deposits of Wadi Selloum and fluvial terrace deposits of the lower Moulouya). Chronological data of the two fluvial systems are based on optically stimulated luminescence (OSL), thermoluminescence (TL) (see Bartz et al., 2015), thermally transferred OSL (TT OSL) (see Bartz et al., 2019), post-infrared infrared stimulated luminescence and electron spin resonance (ESR) dating (see Bartz et al., 2017, 2018). From these results, phases of morphodynamic activity and stability can be deduced. The chronological framework of the prehistoric site of Ifri n'Ammar is based on radiocarbon (Moser, 2003) and luminescence dating (Richter et al., 2010; Klasen et al., 2018). | 2018-12-15T07:30:47.314Z | 2019-01-25T00:00:00.000 | {
"year": 2019,
"sha1": "8f1d67ab497b4f657a118d72f8d278cdedb4d506",
"oa_license": "CCBY",
"oa_url": "https://egqsj.copernicus.org/articles/68/1/2019/egqsj-68-1-2019.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "8f1d67ab497b4f657a118d72f8d278cdedb4d506",
"s2fieldsofstudy": [
"Geography",
"Environmental Science"
],
"extfieldsofstudy": [
"Geology"
]
} |
232228891 | pes2o/s2orc | v3-fos-license | Nasopharyngeal Carcinoma Ex Pleomorphic Adenoma: Case Report and Comprehensive Literature Review
Carcinoma ex pleomorphic adenoma (CXPA) is an epithelial malignancy that transforms from benign pleomorphic adenomas (PA) at a rate of 1.5% after 5 years and 10% after 15 years. The average age of reported nasopharyngeal CXPA is 56.7 years. However, the present case describes a 19-year-old making this case exceptionally rare. Standard treatment is wide local excision with adjuvant treatment. We report the demographics, presentation, treatment, and outcomes of 8 cases of nasopharyngeal CXPA. While surgical excision is the mainstay of treatment, negative margins can be difficult to obtain at the skull base, and we report a recurrence rate of 50% in nasopharyngeal primaries. Due to the aggressive nature of the disease and high rate of recurrence, the majority of patients in our review received adjuvant radiation with some receiving adjuvant chemotherapy in addition.
Introduction
Carcinoma ex pleomorphic adenoma (CXPA) is a carcinoma arising from a primary or recurrent benign pleomorphic adenoma (PA) and accounts for approximately 12% of all malignant salivary carcinomas. However, its occurrence in the nasopharynx is exceedingly rare [1][2][3]. CXPA generally occurs in the 5th to 8th decade of life and is more common in females [4]. The later presentation of CXPA has been attributed to the transformation of a longstanding untreated PA, with a rate of transformation ranging from 3%-13.3% [5]. Standard treatment for CXPA is wide local excision with consideration for adjuvant therapy (either radiation and/or chemotherapy). However, the benefit of adjuvant therapy has not been clearly elucidated in the literature. The reported survival ranges from 30% to over 70% depending on stage [6].
Although several studies report the rarity of sinonasal and nasopharyngeal PA and CXPA, to our knowledge, no studies specifically review nasopharyngeal CXPA. The present study aims to report a rare case of nasopharyngeal CXPA in a young adult with a review of the literature on previously reported cases of nasopharyngeal CXPA.
Case Report
A 19-year-old Caucasian female was referred for evaluation of a nasopharyngeal mass. She was initially seen by her primary care physician (PCP) for complaints of bilateral nasal congestion, facial pain, right-sided otalgia, rhinorrhea, and epistaxis for 2.5 months. She was treated by her PCP with antibiotics followed by steroids for several weeks with no improvement. She had persistent symptoms and developed throat pain, dysphagia, snoring, and a "throat closing" sensation, ultimately leading to otolaryngology referral. Nasal endoscopy by an outside otolaryngologist revealed a large fungating mass emanating from the right nasopharynx extending into the oropharynx. CT scan with IV contrast showed a soft tissue mass 2.5 × 5.1 × 5.9 cm with extension into the parapharyngeal, prevertebral, carotid, retropharyngeal, and masticator spaces. The patient was subsequently referred to our institution for further workup. A PET scan was performed which showed a bulky FDG avid mass centered in the right nasopharynx with no distant metastasis. The patient underwent biopsy of the nasopharyngeal mass and was diagnosed with a myoepithelial CXPA (Figure 1).
One month after initial presentation to our clinic, she presented to the emergency room with significant oropharyngeal obstruction and severe shortness of breath requiring urgent tracheostomy. Preoperative MRI revealed an isointense T1 and hyperintense T2 avidly enhancing mass 7 × 7 × 6.5 cm in the right nasopharynx with extension across the midline, inferiorly into the oropharynx, laterally into the parapharyngeal space, and superiorly encroaching on the skull base but without evidence of skull base invasion or intracranial extension (Figure 2). One week later, she was taken to the operating room where she underwent excision of the nasopharyngeal mass with right lateral pharyngotomy, right selective neck dissection (levels II, III, and V), right marginal mandibulectomy, and a transpalatal approach for nasopharyngeal resection with partial resection of the hard palate and placement of a right tympanostomy tube. Her postoperative course was uneventful, and she was decannulated and discharged one week after surgery. Final pathology confirmed myoepithelial CXPA with tumor focally present at the tumor margin, no evidence of lymphovascular or perineural invasion, and no neck metastasis.
Postoperatively, she received 7 weeks of proton beam radiation with weekly cisplatin treatments. After completion of adjuvant therapy, she was without evidence of recurrence until 15 months postoperatively, when she experienced local recurrence within the retropharyngeal space, which was found on surveillance imaging and confirmed with biopsy. She is currently alive with disease at 21 months after initial diagnosis.
Discussion and Review of Literature
CXPA occurs in major and minor salivary glands. However, its occurrence in the nasopharynx is exceedingly rare. Tumors in the nasopharynx and sinonasal region arise from minor salivary glands in this region and tend to be more aggressive with a higher rate of recurrence. However, this is based on very limited available data [2]. Due to the rarity of the disease, the exact pathogenesis of transformation is unknown. Further reporting of these tumors will help guide clinicians on treatment options, expected course of disease, and patient counseling.
Only seven previous cases of nasopharyngeal CXPA are reported in the literature, making ours the eighth reported case (Table 1) [1,7]. The average age at presentation was 50 years with a range from 19-65, and the most common presenting symptom was nasal obstruction. All patients were treated with primary surgical resection, and 87.5% were treated with adjuvant treatment. Of those receiving adjuvant therapy (n = 7), two were treated with adjuvant chemoradiation and four were treated with adjuvant radiation alone. The recurrence rate was 50% with an average follow-up time of 2.74 years. At last known follow-up, 2/4 patients with recurrence had died from disease and two were alive with disease. To our knowledge, our case is the youngest nasopharyngeal CXPA reported in the literature. The average age of patients diagnosed with CXPA, including all head and neck subsites, is 62.1, which is more than a decade older than our average of 50 years in the nasopharyngeal subsite [6]. CXPA arises from a benign PA, and the overall rate of malignant transformation is 3%-13.3%. However, the incidence increases with time and is 1.5% after 5 years and 10% after 15 years [5,8]. Our patient likely had a subclinical nasopharyngeal PA as a child or adolescent which transformed into a CXPA and began to rapidly enlarge. To date, there are no reported cases of pediatric patients with confirmed PA who were followed until malignant transformation occurred. Therefore, the rate of malignant transformation of pediatric PA is unknown [5].
Primary surgical excision is considered the mainstay of treatment [6]. Adjuvant therapy may be used in the form of radiation and/or chemotherapy. However, its effect on overall survival has not yet been determined [6]. In our review of the literature, only one patient did not receive any adjuvant treatment. Of the patients receiving adjuvant therapy, 29% received chemoradiation, while the remaining 71% received adjuvant radiation alone.
We found that positive margins were associated with recurrence in two patients. Yet, in the other two cases of recurrence, margin status was not reported, and we are unable to draw conclusions on the effect of positive margins on nasopharyngeal CXPA recurrence. Margin status in the sinonasal and nasopharyngeal region is difficult to assess due to limited access and frequent piecemeal resection. Furthermore, when tumors abut the skull base or orbit, negative margins may be difficult or impossible to obtain [1]. Other studies reviewing sinonasal and nasopharyngeal CXPA have also been unable to draw reliable conclusions on this correlation [1,9]. With regard to mortality, Toluie et al. found that disease recurrence in the nasal cavity and nasopharynx was a significant predictor of patients dying from disease. They found that all six patients in their study with recurrence died from their disease. Additionally, all patients in their study with tumor size >4 cm (2/9) died from disease [9]. A recent review by Gupta et al. queried the Surveillance, Epidemiology and End Results (SEER) database to determine predictors of survival for CXPA in all head and neck subsites. Although only 5.2% of the tumors reported were outside the major salivary glands, they also found that tumor size >4 cm was a significant predictor of mortality. When considering all head and neck subsites, predictors of mortality were high grade, late stage, distant metastasis, tumor size, extraparenchymal extension, multiple lymph node involvement, and parotid tumors treated with a partial parotidectomy [6]. CXPA diagnosis is made by biopsy with histopathologic diagnosis, but classification can be confusing because tumors are named for their malignant component.
The World Health Organization (WHO) recently revised the CXPA tumor classification, stating that tumor biology must be determined by the extent of invasion and the specific carcinoma subtype [10]. The most common type of CXPA is adenocarcinoma not otherwise specified, followed by salivary duct carcinoma and myoepithelial [11][12][13]. However, other subtypes do exist and are listed in Table 2. Histological degree of invasion beyond the pleomorphic adenoma further classifies the tumor and is also listed in Table 2 [14][15][16]. Overall, approximately 90% of CXPA cases are invasive, and the myoepithelial subtype portends the worst prognosis with a high rate of invasive disease [11][12][13].
This review is limited due to the rarity of nasopharyngeal CXPA. In the future, increased reporting on CXPA by disease subsite may help clinicians determine whether certain subsites are more aggressive or present at more advanced stages and, in turn, help guide treatment to improve survival. The current review supports evidence that CXPA in the nasopharynx may have a high rate of recurrence due to the difficulty in obtaining negative surgical margins. Although CXPA typically presents in the 5th-6th decade of life, we report a large, aggressive case occurring in a 19-year-old female. Therefore, CXPA should be considered in the differential diagnosis in patients with a nasopharyngeal mass regardless of age.
Conclusion
CXPA is an aggressive tumor arising from a benign PA. The mainstay of treatment is surgery with adjuvant chemoradiation. However, there is still a high rate of recurrence and mortality. The disease generally presents later in life and is not typically on the differential diagnosis for young patients with nasopharyngeal masses. However, this report outlines the importance of considering this diagnosis and exploring symptoms of unremitting nasal congestion early, even in young, otherwise healthy individuals.
Data Availability
Previously published case reports were used to support this study which are cited at relevant places within the text as references [1,7,9].
Conflicts of Interest
The authors declare no conflicts of interest. | 2021-03-16T05:32:34.522Z | 2021-02-26T00:00:00.000 | {
"year": 2021,
"sha1": "a7a7acdabc5f08106c63953dd3e06a1e66d5bd1f",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1155/2021/8892280",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a7a7acdabc5f08106c63953dd3e06a1e66d5bd1f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
245497029 | pes2o/s2orc | v3-fos-license | Case Series Role of yoga prana vidya healing techniques in successful and speedy recovery of orthopaedic cases of bone injuries and fractures: a multiple case study
ABSTRACT Bones form a vital part of the skeletal system providing mechanical support, strength, structure and protection to the human body. Inability of the bone to resist any kind of stress caused accidentally can result in a bone injury or a fracture. This article provides a summary of eleven cases of bone injury and fracture treated successfully by yoga prana vidya (YPV) techniques as a complementary medicine for faster recovery. The study was carried out by two healers who independently healed eleven cases of bone injury and fracture using the bone regeneration techniques of YPV. Further, the data was collected and the results were analysed. By application of YPV healing techniques complementarily, it is observed that full recovery took place within 10 days to 45 days for the 3 hospitalised cases, and within 3 to 8 days for the two patients who had bandage/dressing done at a medical facility. In the case of the remaining 6 patients who sought YPV healing help in preference to seeking medical help, the recovery took place within 5 to 20 days, helping the patients to lead a normal life thereafter. It is observed that YPV techniques can be used for faster recovery of patients with injured and fractured bones. This paper shows the successful results when the techniques were applied on eleven participants. It is recommended to conduct further studies on a larger scale for the healing of bone related cases such as injury and fractures.
Human bone structure
Bones hold the body structure together. They come in various shapes and sizes, each performing unique functions depending on the location of the bone. There are around 270 bones at birth, which fuse to form the 206 bones of a healthy human body. The bones are made up of the protein collagen, and 99% of the body's calcium is found in the bones and the teeth. 1,2 The different kinds of bones in a human body include the long bones in the limbs which help in body movement, the short bones of the wrist and ankle, the sesamoid bones which prevent wear and tear of the tendons like near the knee caps and lastly the other bones like the spine, and bones in the pelvic area which are responsible for protecting the organs of the body.
A fracture is a crack or break in the bone. It can occur as a result of some accident, fall or sport injury. In a few cases, a bone injury could be a result of the inability of the bone to resist the shock due to low bone density. These may cause dislocation of the bone or a tendon and ligament tear, reflecting in the mobility of a person or the structure of a human body.
Bone fracture may result in swelling, unbearable pain or inability to use the limb. The treatment of a bone fracture or injury includes plastering of the two pieces of the bone or inserting a metal rod or a plate to bring the pieces together and thereby give structure to the body. 2 A bone fracture can be diagnosed by various tests including X-ray, computed tomographic (CT) scan or a magnetic resonance imaging (MRI) test. One may prevent bone injury by avoiding falls, accidents or sudden shock or stress, by staying fit through physical and breathing exercises in order to avoid deterioration of the bone, or by eating the right food to give proper nutrition to the bones.
Yoga prana vidya
Yoga prana vidya (YPV) is an integrated and holistic healing technique which has found application in treating a variety of simple and difficult cases, both physical and psychological in nature. 3 It is integrated and holistic as it involves not only healing of the energy body but also promotes practicing breathing exercises, physical exercises and meditations, and promotes intake of a right diet. 4 The purpose of yoga is to achieve oneness or union. This union is achieved by the incarnated soul, along with its three vehicles, namely the energy body, emotional body and mental body, with the higher soul. 5 Healing is a process by which the healer cleans the diseased or dirty energy from the energy body or an affected part of a person and fills it with fresh energy, thereby accelerating the body's ability to recover. The healing takes place either face to face with the patient or at a distance, with the intention that energy follows thoughts. YPV is a safe healing modality which does not use any drug or physical contact between a healer and a patient. Literature shows several cases of successful YPV healing of a variety of medical conditions such as management of post-herpetic neuralgia (PHN), exostosis of the ear, hypothyroidism, conservative management of CVJ anomaly, a rare case of urinary fistula, and status epilepticus. 4,6-11
Chakrams or energy centres
As shown in Figure 1, there are 11 major chakrams and many mini and minor chakrams in our energy body. These chakrams have special functions depending on which organ or part of the body they control and energize. They are responsible for absorbing the fresh energy from the environment and they also expel the diseased and the dirty energy from the organ which they control. 3
Chakrams controlling the skeletal system of the body
The basic chakram is the most important chakram, controlling the skeletal and the muscular system of our body. It is responsible for maintaining the vitality or energy of our body and also controls the production of blood in the body through the bone marrow. The basic chakram, along with the minor chakrams of the legs and the hands, is responsible for the proper functioning of the skeletal system of our body, giving proper structure and strength to the person.
Figure 1: Chakrams or energy centres.
The other chakrams which control the bones and muscles of the human body include the solar plexus chakram, which is the centre of lower emotions (hence, application of YPV level 3 techniques becomes important); the spleen chakram, which absorbs energy from the environment and distributes it to the entire energy body; and the navel chakram. The navel chakram controls all the lower chakras which are responsible for the proper functioning of the physical body. Lastly, the ajna chakram also substantially controls the skeletal and muscular system of the human body.
In this case study, for the eleven cases of bone injury and fracture, all the above chakrams were treated along with the level 3 protocol. The bone regeneration technique of YPV healing was additionally applied to the affected bone and muscle for faster recovery.
The anatomy of the energy body consists of many chakrams which have certain functions and help in cleaning and energizing the whole body or part of it. The chakrams are like energy pumps as they absorb the fresh energy and also expel the toxic or diseased energy from the organ or part of the body that it controls.
Method
This is a case series study with data of 11 patients with bone injury and fracture, successfully healed by YPV healers, with the patients fully recovered. All available data was collected, analysed and presented as follows. Table 1 shows demographic details of a total of 11 patient cases of bone injury and fracture, of which 9 cases were of the limbs, one case of the spine and one case of the collar bone. This sample consists of 6 adult males aged 18 to 55 years, 4 adult females aged 23 to 55 years and one female child 6 years old.
YPV interventions
Healing was carried out for these eleven cases by two healers independently. The healing was carried out once a day for about 20 to 30 min for each patient.
The data was collected including the photographs (for some of them where possible), and the results were then analysed and presented as follows.
From the collected data, the patients are categorised into three categories (categories 1, 2 and 3) and presented in Tables 2-4.
Category 1
This category consists of three patients who were hospitalised and had undergone surgeries. YPV healing was given to these patients as complementary medicine. Their data analysis is presented in Table 2.
Category 2
This category consists of 2 patients who visited the hospital for bandage and dressing. They approached for YPV healing and their data is presented in Table 3.
Category 3
This category consists of six patients who did not visit a clinic or hospital as they did not find the need of consulting a doctor, but approached for YPV healing. Their data is given in Table 4.
DISCUSSION
From this study it is observed that YPV healing played a major role in eliminating the pain and enabled faster recovery of patients without touch or use of drugs. It is necessary to treat broken bones to join them and maintain the body structure, aid movement and give strength. All complicated and serious bone fractures would require surgery and medical assistance. An untreated bone fracture can cause infection in the bone or the bone marrow. This can affect the production of blood cells, thereby resulting in various other ailments. An untreated bone injury or fracture can also lead to long-term nerve damage, which can lead to decreased sensitivity or difficulty in movement. If a broken bone is not fixed correctly it can result in visible deformity; hence the bones sometimes may be replaced by a rod or a plate to set the structure of the body right. Unattended bone injury can lead to stress on the muscles or ligaments, thereby damaging them too. 2 There are various other ways of treating an orthopaedic case, such as Ayurvedic, natural herbal treatment, and homeopathy. However, YPV healing seemed to be a better option for the patients due to its underlying advantages of being a no-drug treatment and also its ability to work from a distance. This makes it easier for the patients to get healing help by contacting healers through a phone call.
CONCLUSION
From this case study, it follows that YPV healing protocols worked well to eliminate the pain and helped in faster recovery of the patients with bone fracture and injury, producing favourable results in all 11 reported cases. The healers independently handled their respective cases using appropriate and standard YPV protocols, and this confirms the repeatability of the designated YPV protocols. However, further treatment may be carried out by a medical practitioner in case a rod is required to be inserted or an operation or dressing is needed. It is thus recommended that YPV protocols can be effectively used as complementary medicine for orthopaedic patients to eliminate pain and to help the patients lead a normal life soon without drugs, as a preliminary treatment paving the way for a medical practitioner to also perform a surgical operation or plastering as needed. | 2021-12-27T16:03:45.975Z | 2021-12-24T00:00:00.000 | {
"year": 2021,
"sha1": "eb5b560c0ea804ae99f5c65a8400a7563178978a",
"oa_license": null,
"oa_url": "https://ijoro.org/index.php/ijoro/article/download/2255/1285",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "366c6e54f1e66504057459908d922a9241a96d97",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
55068376 | pes2o/s2orc | v3-fos-license | Experimental Study on Stress-Dependent Nonlinear Flow Behavior and Normalized Transmissivity of Real Rock Fracture Networks
The mechanism and quantitative descriptions of nonlinear fluid flow through rock fractures are difficult issues of high concern in underground engineering fields. In order to study the effects of fracture geometry and loading conditions on nonlinear flow properties and normalized transmissivity through fracture networks, stress-dependent fluid flow tests were conducted on real rock fracture networks with different numbers of intersections (1, 4, 7, and 12) and subjected to various applied boundary loads (7, 14, 21, 28, and 35 kN). For all cases, the inlet hydraulic pressures ranged from 0 to 0.6 MPa. The test results show that Forchheimer's law provides an excellent description of the nonlinear fluid flow in fracture networks. The linear coefficient a and nonlinear coefficient b in Forchheimer's law J = aQ + bQ² generally decrease with the number of intersections but increase with the boundary load. The relationship between a and b can be well fitted with a power function. A nonlinear effect factor E = bQ²/(aQ + bQ²) was used to quantitatively characterize the nonlinear behaviors of fluid flow through fracture networks. By defining a critical value of E = 10%, the critical hydraulic gradient was calculated. The critical hydraulic gradient decreases with the number of intersections due to richer flowing paths but increases with the boundary load due to fracture closure. The transmissivity of fracture networks decreases with the hydraulic gradient, and the variation process can be estimated using an exponential function. A mathematical expression T/T₀ = 1 − exp(−αJ^(−0.45)) for the decreased normalized transmissivity T/T₀ against the hydraulic gradient J was established. When the hydraulic gradient is small, T/T₀ holds a constant value of 1.0. With increasing hydraulic gradient, the reduction rate of T/T₀ first increases and then decreases. The equivalent permeability of fracture networks decreases with the applied boundary load, and permeability changes at low load levels are more sensitive.
Introduction
Rock fracture networks constitute the main pathways of fluid flow and solute migration in deep underground projects, and during the past several decades, substantial efforts have been devoted to the estimation of fluid flow behavior and transmissivity of fractures in many geoengineering and geoscience applications such as underground tunneling [1-3], CO₂ sequestration [4,5], geothermal energy extraction [6-8], and hazardous waste isolation [9-11]. The fluid flow in rock fractures is commonly assumed to follow the cubic law, in which the flow rate is linearly proportional to the pressure gradient [12-14]. However, when the flow rate/hydraulic head difference is large, deviation from the linear Darcy law may occur. In such a case, the conductivity of the fractures calculated using the cubic law will be overestimated [15-17]. In addition, natural rock fractures are often characterized by rough walls, intersections, and asperity contacts, which make the fluid flow process even more complex and difficult to accurately describe [18-22]. Therefore, a thorough understanding of nonlinear flow properties within fracture networks is of great significance to ensure the performance and safety of engineering activities.
Figure 1: Relationships between hydraulic gradient J (or Reynolds number Re) and the (normalized) transmissivity in previous studies. (a) Relationships between T/T₀ and Re (after [23]). (b) Relationships between T and J (after [24]). (c) Relationships between T/T₀ and J (after [14]). (d) Relationships between T and Re (after [25]).
The transmissivity T, which is linked with the volumetric flow rate, pressure gradient, and fracture width, has been applied to characterize the onset of nonlinear flow regimes in rock fractures in some previous studies. Zimmerman et al. [23] conducted Navier-Stokes simulations and fluid flow tests in a natural 3D sandstone fracture and confirmed that weak inertia regimes exist for Reynolds numbers (Re) in the range of 1-10, where the fluid flow enters the nonlinear flow regime. The normalized transmissivity (T/T₀) remains constant when Re < 1 but shows a decreasing trend when Re = 1-10 (see Figure 1(a)). Zhang and Nemcik [24] experimentally investigated the fluid flow regimes through deformable rock fractures subjected to changing confining stress from 1.0 to 3.5 MPa. They found that for a nonmated sandstone fracture with a joint roughness coefficient of 9.2, the transmissivity is not a constant value but decreases with the hydraulic gradient (see Figure 1(b)). Liu et al. [14] calculated the T/T₀-J curves of single fractures with various ratios of mechanical aperture to fracture length, in which the mechanical aperture is defined as the arithmetic average of the point-to-point distance between the two walls of a fracture. The results show that as J increases, T/T₀ decreases. When J = 10⁻⁷-10⁻⁵, T/T₀ remains constant, approaching 1.0; the fluid flow is linear. For 10⁻⁴ < J < 10⁰, T/T₀ decreases dramatically; fluid flow enters a transitional regime. When J > 10², T/T₀ is sufficiently small and approaches 0; fluid flow enters a complete nonlinear regime (see Figure 1(c)). Yin et al. [25] estimated the relationships between T and Re for rough-walled fractures during shearing and subjected to various normal loads from 7 to 35 kN. As an example, for one of their samples with a shear displacement of 9 mm, T exhibits a decreasing trend with Re and the rate of its decrease diminishes. T as a function of Re can be well described using a six-order polynomial function (see Figure 1(d)). The above analysis generally focused on variations in the (normalized) transmissivity of single fractures. However, when it comes to complex fracture networks, the stress-dependent nonlinear flow mechanisms and transmissivity are still not fully understood.
In engineering practices, both natural processes and human perturbations entail significant changes in the effective stress field of underground rock masses, which would then impact the fracture apertures in fracture networks. The void space between opposing surfaces can vary due to normal stress-induced closures or openings [18,24,26] or shear stress-induced dilations [27][28][29]. Hence, the fluid flow behavior and the transmissivity of rock fracture networks are stress-dependent, which has so far rarely been investigated.
The purpose of this paper is to investigate the stress-dependent nonlinear flow properties and normalized transmissivity of real rock fracture networks. First, plate granite specimens containing fracture networks with different numbers of intersections (1, 4, 7, and 12) were machined using a high-pressure water jet cutting system. Next, a series of high-precision stress-dependent water flow tests with respect to different inlet hydraulic pressures ranging from 0 to 0.6 MPa and increasing applied boundary loads from 7 to 35 kN were conducted. The nonlinear flow behaviors, and the variations of the flow nonlinearity, normalized transmissivity, and equivalent permeability of the fracture networks as a function of fracture network geometry and boundary load conditions, were all examined.
Governing Equations of Fluid Flow in Fractures
Fluid flow through a single fracture is generally governed by the following Navier-Stokes (NS) equations, written in a tensor form as [15,30,31]: ρ[∂u/∂t + (u ⋅ ∇)u] = −∇P + ∇ ⋅ T + f (1), where u is the flow velocity vector, P is the hydraulic pressure, T is the shear stress tensor, ρ is the fluid density, t is the time, and f is the body force acting on the fluid; in a gravity environment f denotes the gravitational acceleration term.
For the case of incompressible and steady-state Newtonian flow, the terms involving time drop out and the NS equations can be expressed as [1,17]: ρ(u ⋅ ∇)u = −∇P + μ∇²u (2), where μ is the dynamic viscosity. Equation (2) is composed of a set of coupled nonlinear partial differential equations of varying orders. In order to solve these equations, an additional equation, called the mass conservation equation, which can be achieved through mass continuity, is employed, and the equation takes the form [1,15]: ∇ ⋅ u = 0 (3).
For certain cases with very low Reynolds number or flow rate, by assuming that the inertial forces of fluid flow through the fractures are negligible compared to the viscous forces.The convective acceleration term (u ⋅ ∇)u in (2) vanishes, and the Navier-Stokes equations can be reduced to solvable forms as [12] where is the total volumetric flow rate, ∇ is the pressure gradient, is the fracture width, and h is the hydraulic aperture.The flow rate is proportional to the cube of h .This equation is commonly referred to as the well-known cubic law.
The linear relation between Q and ∇P in (4) can only be anticipated for parallel-plate models under a limited range of flow rates. Besides, natural rock fractures are characterized by fracture intersections and complex fracture surface geometries, and the inertia terms are often not zero. With increasing hydraulic head differences, the flow rate is nonlinearly related to the pressure drop and (4) is no longer applicable. Some empirical expressions were proposed to describe the nonlinear flow in fractures, and Forchheimer's law is the most extensively used approach, where the pressure gradient is a quadratic function of the flow rate, written as [32][33][34]: −∇P = AQ + BQ² (5), where A and B are model coefficients, representing pressure drop components caused by linear and nonlinear effects, respectively. Since the hydraulic gradient J is proportional to ∇P as J = ∇P/(ρg), (5) can then be written as follows: J = aQ + bQ² (6), where a = −A/(ρg) and b = −B/(ρg).
The hydraulic gradient J is calculated by dividing the hydraulic head difference by the flow length: J = ΔP/(ρgL) (7), where L is the flow length, which denotes the horizontal distance between the left and right specimen boundaries in this study, and g is the gravitational acceleration.
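To make the data reduction behind (6) and (7) concrete, the sketch below converts an inlet pressure to a hydraulic gradient and recovers the coefficients a and b by linear least squares; the (Q, J) pairs are synthetic placeholders, and taking the flow length as the 0.495 m specimen side with g = 9.8 m/s² is an assumption that reproduces the reported maximum J of 123.69 at 0.6 MPa.

```python
import numpy as np

RHO, G, L = 1000.0, 9.8, 0.495  # fluid density (kg/m^3), gravity (m/s^2), flow length (m)

def hydraulic_gradient(delta_p):
    """Eq. (7): J = dP / (rho * g * L), dimensionless."""
    return delta_p / (RHO * G * L)

def fit_forchheimer(q, j):
    """Fit J = a*Q + b*Q**2 (Eq. 6) by least squares, forced through the origin."""
    design = np.column_stack([q, q**2])  # [Q, Q^2] design matrix
    (a, b), *_ = np.linalg.lstsq(design, j, rcond=None)
    return a, b

if __name__ == "__main__":
    print(hydraulic_gradient(0.6e6))  # ~= 123.69, matching the reported maximum J
    # Synthetic flow data for demonstration only.
    q = np.linspace(1.0e-6, 1.0e-5, 10)  # flow rates (m^3/s)
    j = 5.0e6 * q + 4.0e11 * q**2        # "true" a = 5e6, b = 4e11
    print(fit_forchheimer(q, j))         # recovers (5e6, 4e11)
```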
To quantitatively estimate the nonlinearity of fluids flowing through the fracture networks, a nonlinear effect factor E was introduced to determine the fluid flow regime [1,35,36]. It can be presented by E = bQ²/(aQ + bQ²) (8), where aQ and bQ² are the energy losses due to viscous and inertial dissipation mechanisms in the fractures. E denotes the ratio of the nonlinear pressure drop to the overall pressure drop.
For engineering purposes, a critical value of E = 10% has been defined as the critical condition for the onset of flow nonlinearity in fracture networks, where the nonlinear effect becomes appreciable and cannot be neglected [23,24,29].
The Forchheimer number F₀ is another widely accepted parameter to characterize the onset of the flow transition to nonlinearity, which is defined as the ratio of the nonlinear to the linear pressure drop [1,13,17]: F₀ = bQ/a (9). Combining (8) and (9) yields the following equation: E = F₀/(1 + F₀) (10). The transmissivity (T) is an important hydraulic property to characterize the macroscopically observed flow resistance in fractures, which can be defined as [37]: T = −μQ/∇P (11). When the flow rate is sufficiently low and the inertial forces are negligible, the intrinsic transmissivity (T₀) is commonly regarded as a constant value. As the hydraulic head difference increases, the transmissivity can be used to evaluate the nonlinear flow through the fractures, and the normalized transmissivity (T/T₀) is then determined [23]: T/T₀ = 1/(1 + F₀) = 1 − E (12). Therefore, when the nonlinear term (bQ²) contributes 10% of the total pressure drop, or else E = 10%, the critical value of T/T₀ is equal to 0.9. The corresponding J is defined as the critical hydraulic gradient Jc.
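A minimal sketch of the diagnostics in (8)-(12), assuming a pair of already-fitted, hypothetical Forchheimer coefficients: it evaluates E, F₀, and T/T₀ at a given flow rate, and solves E = 10% in closed form, which from (6) and (8) gives Q_c = a/(9b) and hence Jc = 10a²/(81b).

```python
def nonlinear_effect(a, b, q):
    """Eq. (8): fraction of the pressure drop caused by inertial (nonlinear) effects."""
    return b * q**2 / (a * q + b * q**2)

def forchheimer_number(a, b, q):
    """Eq. (9): F0 = b*Q/a, the ratio of nonlinear to linear pressure drop."""
    return b * q / a

def normalized_transmissivity(a, b, q):
    """Eq. (12): T/T0 = 1/(1 + F0) = 1 - E."""
    return 1.0 / (1.0 + forchheimer_number(a, b, q))

def critical_hydraulic_gradient(a, b, e_crit=0.1):
    """Solve E(Q) = e_crit for Q, then evaluate J = a*Q + b*Q**2 at that flow rate.

    For e_crit = 0.1 this reduces to the closed form Jc = 10*a**2 / (81*b).
    """
    q_crit = e_crit / (1.0 - e_crit) * a / b
    return a * q_crit + b * q_crit**2

if __name__ == "__main__":
    a, b = 5.0e6, 4.0e11  # hypothetical Forchheimer coefficients
    q = 2.0e-6            # flow rate (m^3/s)
    print(nonlinear_effect(a, b, q), normalized_transmissivity(a, b, q))
    print(critical_hydraulic_gradient(a, b), 10 * a**2 / (81 * b))  # identical values
```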
Experimental Methodology
3.1. Rock Fracture Specimen Preparation
Granite specimens (495 × 495 × 17 mm in size) containing artificial fracture networks were established by using a high-pressure water jet cutting system [25]. The granites were taken from Linyi city in Shandong Province of China. The granite is a porphyritic monzonite with an average unit weight of about 2.69 g/cm³. The uniaxial compressive strength of the rock is about 97.54 MPa, and its permeability is on the order of magnitude of 10⁻¹⁹ m².
The parallelism between the upper and lower surfaces of the plate specimens was controlled within an error of 0.02 mm. The water pressure of the water jet cutting system was held constant for all rock fractures, and the fractures penetrated the plate specimens thoroughly. Detailed descriptions of the rock fracture specimens are illustrated in Figure 2. Each specimen consists of two sets of parallel fractures with a constant included angle of 60°. By varying the spacing of the fractures, fracture networks with different numbers of intersections (N = 1, 4, 7, and 12) were, respectively, created.
Using a high-resolution three-dimensional laser scanning profilometer system, the fracture surface topography was measured before the hydromechanical tests. We considered the average of the joint roughness coefficient (JRC) values for a series of 2D profiles along a fracture surface in the length direction, which is a method suggested by the International Society for Rock Mechanics and Rock Engineering (ISRM) to calculate the JRC of a 3D single fracture [38]. The spacing of the 2D profiles is 3 mm. In total, the JRC values of 5 2D profiles are calculated and their average value is regarded as the JRC of a 3D single fracture. The JRC values of all 2D profiles were estimated based on the values of Z₂ using (13), proposed by Tse and Cruden [39]. Here, JRC is a dimensionless index ranging from 1 to 20 [40], which has been widely accepted in the field of rock mechanics and rock engineering [22,41].
The results indicate that the JRC values of the fractures fluctuate within a very small range around 3.47. The fractures closely approximate parallel-plate models with a small JRC value.
JRC = 32.2 + 32.47 log₁₀ Z₂, with Z₂ = [(1/(M − 1)) Σ ((yᵢ₊₁ − yᵢ)/(xᵢ₊₁ − xᵢ))²]^(1/2) (13), where x and y represent the coordinates of a fracture surface profile and M is the number of sampling points along the length of a fracture. Z₂ is the root mean square slope of the 2D profile.
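The sketch below implements (13) for a digitized 2D profile; the discrete form of Z₂ follows the definitions above, and the profile itself is synthetic and for demonstration only.

```python
import numpy as np

def z2(x, y):
    """Root mean square slope Z2 of a 2D profile with coordinates (x, y)."""
    slopes = np.diff(y) / np.diff(x)
    return np.sqrt(np.mean(slopes**2))

def jrc_tse_cruden(x, y):
    """Eq. (13): JRC = 32.2 + 32.47 * log10(Z2) (Tse and Cruden [39])."""
    return 32.2 + 32.47 * np.log10(z2(x, y))

if __name__ == "__main__":
    # Synthetic 100 mm profile sampled every 0.5 mm; heights are illustrative.
    x = np.arange(0.0, 100.0, 0.5)
    y = np.cumsum(np.random.default_rng(0).normal(0.0, 0.1, x.size))
    print(f"Z2 = {z2(x, y):.3f}, JRC = {jrc_tse_cruden(x, y):.1f}")
```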
3.2. Experimental Setup and Procedure
Before the hydromechanical tests, sealing of the fractured specimens was first conducted. Based on the model specimen size, a 3 mm thick rubber jacket was produced using ethylene propylene diene monomer (EPDM) waterproof rubber [25]. Then, a rock fracture specimen was placed in the jacket, with the extra rubber seamlessly glued to the rock surface. For a better sealing of the fractures, a layer of glass glue was evenly coated on the specimen surface to leave the fractures open. Then, a piece of transparent crystal plate with a suitable size was attached to the glass glue. When the glass glue was dried, a hand-operated electric drill with a diameter of 10 mm was used to drill circular holes at the water inlet and outlet positions of the fractures through the rubber jacket. After a high-strength plexiglass cover plate and the horizontal load devices with uniform flow chambers were installed, the specimen was transferred to the test area. The stress-dependent flow tests of the fracture networks were conducted by using a self-developed flow test apparatus [25], as shown in Figure 3. It mainly consists of three units: (i) a platform for fluid flow in fracture networks; (ii) a water supply and measurement unit; (iii) a pneumatic-hydraulic cylinder unit. During the flow tests, a constant water pressure was maintained at the flow inlet manifold using an air compressor connected to a water tank, with a pressure range of 0-2 MPa. Both water inlet directions (x- and y-directions) evenly featured 12 flow distribution chambers to achieve a variable but uniform flow field. The hydraulic sources of the water inlet or outlet manifold can be individually switched on or off using the shut-off valves (Figure 4(a)). Each of the 12 water chambers provided equal water pressure at the rock specimen boundary. The effluents flowing out of the fractures were measured in real time using four glass tube rotameters with a range of 0.0004-11.0 L/min. Both the horizontal (x- and y-directions) and vertical (z-direction) loads on the rock specimens were created using a pneumatic-hydraulic cylinder connected to an air compressor that has a maximum air pressure of 3 MPa. Before the test, a vertical load of 20 kN was applied on the specimen surface to balance the vertical water pressure in the fractures.
During the flow test, water was fed through the inflow manifold connected to a water tank that can supply a constant water head at all times by using an air compressor. Both horizontal boundary loads, in the x-direction (Fx) and y-direction (Fy), were applied and increased from 7 to 35 kN in 7 kN intervals, with a fixed lateral pressure coefficient of Fy to Fx of 1.0. Applications of the boundary load conditions are displayed in Figure 4(b). In this study, we discuss only the x-directional nonlinear flow behaviors through the fracture networks over a range of inlet water pressures from 0 to 0.6 MPa with increasing boundary loads. To achieve this investigation, the side valves (x-direction) were opened while the top and bottom valves (y-direction) were closed, which forced water to flow horizontally from the right to the left specimen boundary (x-direction) through the fracture networks. The other boundaries were considered impermeable.
For a certain boundary load and inlet hydraulic pressure, the total volume flow rate of the fracture networks at the water outlet boundary can be obtained when the glass rotor is relatively stable with no fluctuations. The effluent was then collected in another storage tank and recirculated. The entire hydraulic experiments were performed under isothermal conditions at a room temperature of approximately 20°C. In addition, the fluid was assumed to be viscous, with μ of 1.0 × 10⁻³ Pa·s, and incompressible, with ρ of 1000 kg/m³.
Test Results and Discussion
Due to the low permeability of the intact granite matrix, fluid was assumed to flow through the fractures only during the hydromechanical tests. For the rock fracture specimens with various N subjected to different F (here, F refers to the boundary load due to Fx = Fy), a series of hydraulic tests were conducted with different inlet fluid pressures ranging from 0 to 0.6 MPa. In this study, the hydraulic head at the water outlet specimen boundary was assumed to be equal to zero. Thus, according to (7), in the pressure range of 0-0.6 MPa, J ranged from 0 to 123.69. The experimental data obtained in the hydraulic tests on the rock fracture specimens, in the form of hydraulic gradient (J) versus discharge (Q) curves, are displayed in Figure 5. The quadratic polynomial regression of Forchheimer's law (6) is used to best fit the raw experimental data, with the R² values all larger than 0.99 (Table 1). From Figure 5, with an increase in J, Q shows an increasing trend. For a certain N, as F increases, the slope of the J-Q curves becomes steeper, indicating a higher flow resistance, mainly as a result of fracture closure. Hence, more flow energy is required to achieve the same flow rate for fracture networks subjected to a larger boundary load. However, at a given F, as N increases, the Q flowing out of the fracture networks produced by an identical hydraulic head shows an increasing trend (Figure 6). Taking F = 14 kN as an example, the Q produced by J = 123.69 increases from 5.04 × 10⁻⁶ m³/s (N = 1) to 10.86 × 10⁻⁶ m³/s (N = 12), an increase of approximately 115%. With an increase in N, the overall discharge capacity of the fracture networks is enhanced.
In view of the fact that the variations of the linear and nonlinear coefficients a and b against N and F are very similar, b as a function of a was analyzed [29], as plotted in Figure 7(e). The experimental data can be fitted very well using the following empirical function: b = 7.899 × 10⁶ a^0.766 (14).
Although (14) fits the relationship between the coefficients a and b very well, with a resulting R² of 0.9927, the applicability of this equation needs to be further verified.
To quantify the degree of the nonlinear effect of fluid flow through the fracture networks, variations of E for all test cases were calculated and plotted as a function of J based on (8), as shown in Figure 8. As J increases, E displays an increasing trend, while the rate of its increase steadily diminishes. The variation process can be well described using a power function of the form E = cJ^d (15), where the fitting coefficients (c and d) are related to both the boundary load conditions and the number of intersections. For a certain N, as F increases, the coefficient c shows a decreasing trend, but the coefficient d increases. However, at a given F, with an increase in N, c shows an increasing trend while d decreases.
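Because the original labels of the two fitting coefficients did not survive extraction, c and d in (15) are generic placeholders; the sketch below shows one way such a power law can be estimated, by linear regression in log-log space on synthetic data.

```python
import numpy as np

def fit_power_law(j, e):
    """Fit E = c * J**d via linear least squares on log10(E) = log10(c) + d*log10(J)."""
    d, log_c = np.polyfit(np.log10(j), np.log10(e), 1)  # slope, intercept
    return 10.0**log_c, d

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    j = np.logspace(-1, 2, 20)  # hydraulic gradients
    e = 0.08 * j**0.4 * (1.0 + 0.02 * rng.normal(size=j.size))  # synthetic data
    c, d = fit_power_law(j, e)
    print(f"c = {c:.3f}, d = {d:.3f}")  # close to the "true" (0.08, 0.4)
```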
By defining a critical value of E = 0.1, Jc of the fracture networks with various numbers of intersections was calculated, as listed in Table 1. Figure 9 displays the changes in Jc due to the changes in N and F. As F increases, Jc shows an increasing trend. In the range of F from 7 to 35 kN, Jc increases by 6.35 times (N = 1), 7.73 times (N = 4), 3.97 times (N = 7), and 6.09 times (N = 12), respectively. The main reason for these variations may be as follows. For Fx = Fy, it is expected that the effect of normal loads plays a dominant role on the overall fluid flow field [42]. An increase in the normal load will undoubtedly lead to corresponding fracture closure. For each single fracture, a small change in fracture aperture can result in a large variation in the flow rate, due to the proportional relationship between Q and e_h³ in (4), which would then impact Jc for the onset of flow transition. However, for a certain F, as N increases, Jc shows a decreasing trend. Taking F = 35 kN as an example, in the range of N from 1 to 12, Jc decreases from 118.57 to 13.01, or by 89.03%.
For fluid flow through fractured and porous media, the transmissivity has also been applied to estimate the nonlinear flow regime. Using (11), T of the fracture networks was calculated. Figure 10 shows the relationships between T and J. Notably, T is not a constant value but exhibits a decreasing trend with J, which further validates the existence of nonlinear flow behaviors in the fracture networks. With an increase in F, T decreases due to fracture closure. However, for a certain F, T shows an increasing trend with N. Using the least squares method, T as a function of J can be well fitted with an exponential function of the form T = p + q exp(−rJ) (16), where the regression coefficients p, q, and r were all presented in Figure 10. The units of the coefficients p and q are m⁴, and r is a dimensionless constant.
However, in engineering practices, there are hundreds to thousands of fractures, and the Re of each fracture generally cannot be ascertained.Instead, in a model typically has known values and can be easily obtained.Therefore, in this study, the relationships between and / 0 were used to evaluate the flow regimes in the fracture networks.
By fitting the experimental data sets, the relationships between J and T/T₀ for the rock fracture networks in this study were analyzed, as shown in Figure 11. Here, T₀ denotes the transmissivity corresponding to J = 0 in Figure 10, where the hydraulic head difference is sufficiently low and the inertial forces are negligible. The results show that the variations in T/T₀ against J can be expressed as follows [14]: T/T₀ = 1 − exp(−αJ^(−0.45)) (18), where α is a dimensionless coefficient. Note that the exponent of J is −0.45, which does not change with the number of intersections or boundary load conditions. From Figure 11, as J increases, T/T₀ shows a decreasing trend. For a certain N, as F increases, the transmissivity relationships generally shift upward. However, for a certain F, the J-T/T₀ curves shift downward as N increases. In addition, the variations in T/T₀ with J can be divided into three stages. When J is small (i.e., less than 0.2), with the increase of J, T/T₀ approximately holds a constant value of 1.0; thus, the fluid flow is linear. Then, with continuously increasing J, T/T₀ decreases, and the reduction rate of T/T₀ first increases and then decreases. Based on (12) and (18), when T/T₀ = 0.9, Jc is calculated, which is in the ranges 8.98-103.16 (N = 1), 3.46-61.40 (N = 4), 2.69-28.37 (N = 7), and 1.10-15.50 (N = 12), respectively. Generally, this range of Jc is larger than that calculated based on (6) and (8) shown in Figure 9.
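To show how (18) can be fitted in practice, the sketch below recovers the dimensionless coefficient α with SciPy's curve_fit, keeping the exponent fixed at −0.45 as in the paper; the synthetic data and the starting guess are assumptions. Inverting (18) at T/T₀ = 0.9 then gives the critical gradient as Jc = (α/ln 10)^(1/0.45).

```python
import numpy as np
from scipy.optimize import curve_fit

def normalized_t(j, alpha):
    """Eq. (18): T/T0 = 1 - exp(-alpha * J**(-0.45))."""
    return 1.0 - np.exp(-alpha * j**(-0.45))

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    j = np.logspace(-1, 3, 40)
    t_ratio = normalized_t(j, 3.0) + 0.005 * rng.normal(size=j.size)  # synthetic
    (alpha_hat,), _ = curve_fit(normalized_t, j, t_ratio, p0=[1.0])
    print(f"alpha = {alpha_hat:.3f}")  # close to the "true" 3.0
    # Critical gradient where T/T0 = 0.9, from inverting Eq. (18):
    j_c = (alpha_hat / np.log(10.0))**(1.0 / 0.45)
    print(f"J_c = {j_c:.2f}")
```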
From the quadratic relationships between J and Q of the fracture networks displayed in Figure 5, at a given J, the increase in Q varies with both N and F, which leads to changes in the equivalent permeability of the fracture networks. To assess the discharge capacity of the fracture networks, the equivalent permeability K of the fracture networks in the flow direction was calculated at a specified inlet hydraulic pressure of 0.05 MPa, with a corresponding J of 10.31. The fluid flow took place in the fractures only and was calculated by the cubic law.
Calculations were made with the following Darcy-type equation, assuming no gravity term: K = μQL/(A ΔP), where A is the cross-sectional area, L is the specimen length in the flow direction, and ΔP is the pressure difference across the specimen.
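The following sketch evaluates this Darcy-type estimate numerically; the geometry and flow values are assumptions for illustration and are not the specimen dimensions used in the tests.

def equivalent_permeability(Q, mu, L, A, dP):
    # K = mu * Q * L / (A * dP), neglecting the gravity term
    return mu * Q * L / (A * dP)

# Hypothetical inputs: Q in m^3/s, mu in Pa*s, L in m, A in m^2, dP in Pa
K = equivalent_permeability(Q=2.0e-6, mu=1.0e-3, L=0.2, A=6.0e-3, dP=0.05e6)
print(K)   # ~1.3e-12 m^2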
The changes in the equivalent permeability K of the fracture networks in terms of N and F are displayed in Figure 13. For a certain N, as F increases, K of the model region decreases. In the range of F from 7 to 35 kN, K decreases by 90.31% (N = 1), 90.87% (N = 4), 82.28% (N = 7), and 84.82% (N = 12), respectively. In addition, the K–F variation curves can be divided into two stages. For F = 7–21 kN, the permeability changes are sensitive. However, when F > 21 kN, K changes gradually and approaches constant values. The main reason for these variations may be as follows. As F increases, the effective normal stress in the fractures increases, resulting in corresponding fracture closure. The overall volume flow rate decreases. As the fracture aperture continues to decrease toward the residual fracture aperture, the equivalent permeability of the fracture networks generally retains a constant value. However, at a given F, the equivalent permeability increases with N, mainly due to richer connectivity in the fracture networks.
Conclusions
This paper experimentally examines the impacts of the number of intersections (N = 1, 4, 7, and 12) on the stress-dependent nonlinearity of fluid flow through real rock fracture networks. For each intersection number, a series of hydromechanical tests with different inlet hydraulic pressures (0–0.6 MPa) and increasing boundary loads from 7 to 35 kN were conducted. The degree of nonlinearity, the critical hydraulic gradient, the normalized transmissivity, and the equivalent permeability of the fracture networks were all evaluated. The main conclusions are as follows: (1) Forchheimer's law offers a good description of the relationship between the volume flow rate and the hydraulic gradient of water flow through the rock fracture networks. Both the linear coefficient a and the nonlinear coefficient b in Forchheimer's law decrease with the number of intersections but increase with the boundary load. The change of the nonlinear coefficient b as a function of the linear coefficient a can be well fitted using a power function b = 7.899 × 10^6 a^0.766 based on the experimental data.
(2) The critical hydraulic gradient J_c of the fracture networks is calculated by taking a critical nonlinear effect factor value of 10%. With an increase in the number of intersections, the critical hydraulic gradient decreases, mainly due to richer flow pathways in the fracture networks. However, as the boundary load increases, the critical hydraulic gradient increases due to fracture closure caused by increasing effective stress in the fractures. For all cases in this study, the critical hydraulic gradient ranges between 0.62 and 118.57.
(3) With an increase in the hydraulic gradient, the transmissivity of the fracture networks displays an exponentially decreasing trend. In addition, the transmissivity increases with the number of intersections but decreases with the applied boundary load. The variation in the normalized transmissivity as a function of the hydraulic gradient was estimated with the mathematical expression T/T_0 = 1 − exp(−λ J^−0.45). The coefficient λ increases with the boundary load but decreases with the number of intersections. As the boundary load increases, the equivalent permeability of the fracture networks shows a decreasing trend.
Figure 2 :
Figure 2: Plate granite specimens of fracture networks with different numbers of intersections (N = 1, 4, 7, and 12). The black lines denote the geometric locations of the fractures.
Figure 3 :
Figure 3: Schematic view of the stress-dependent fluid flow test apparatus.
Figure 4 :
Figure 4: (a) A top view of the hydraulic setup of the test apparatus. (b) Loading diagram of the rock fracture specimen.
Figure 5 :
Figure 5: Regression analysis of hydraulic gradient J as a function of measured flow rate Q using Forchheimer's law for fracture networks subjected to increased boundary loads.
Figure 6 :
Figure 6: E as a function of F for different N at a fixed J of 123.69.
Figure 7 :
Figure 7: Variations in linear and nonlinear coefficients a and b, and the nonlinear relation between a and b.
Figure 8 :
Figure 8: Evolution of the nonlinear effect factor E with the hydraulic gradient J.
Figure 9 :
Figure 9: Relationships between J_c and F.
Figure 10 :
Figure 10: Regression analysis of transmissivity (T) as a function of hydraulic gradient J.
Table 1 :
Values of a, b, J_c, and R^2 of fracture networks with different numbers of intersections subjected to various boundary loads F. | 2018-12-06T12:13:55.578Z | 2018-04-24T00:00:00.000 | {
"year": 2018,
"sha1": "e930e96f4444013a3433692dfbb05d2798be2734",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/geofluids/2018/8217921.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "e930e96f4444013a3433692dfbb05d2798be2734",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Geology"
]
} |
249674354 | pes2o/s2orc | v3-fos-license | Learnable Frequency Filters for Speech Feature Extraction in Speaker Verification
Mel-scale spectrum features are used in various recognition and classification tasks on speech signals. There is no reason to expect that these features are optimal for all different tasks, including speaker verification (SV). This paper describes a learnable front-end feature extraction model. The model comprises a group of filters to transform the Fourier spectrum. Model parameters that define these filters are trained end-to-end and optimized specifically for the task of speaker verification. Compared to the standard Mel-scale filter-bank, the filters' bandwidths and center frequencies are adjustable. Experimental results show that applying the learnable acoustic front-end improves speaker verification performance over conventional Mel-scale spectrum features. Analysis on the learned filter parameters suggests that narrow-band information benefits the SV system performance. The proposed model achieves a good balance between performance and computation cost. In resource-constrained computation settings, the model significantly outperforms CNN-based learnable front-ends. The generalization ability of the proposed model is also demonstrated on different embedding extraction models and datasets.
Introduction
Speaker verification (SV) refers to the task of verifying the identity of a speaker from given speech utterance(s). SV systems are developed for various notable applications [1][2][3], such as speaker diarization, biometric authentication, and security. Deep neural network (DNN) based models [4][5][6][7][8] are predominantly adopted in current SV systems and lead to appreciable performance gains over conventional models, e.g., GMM-UBM, I-vectors [9][10][11]. Typically these DNN models take a certain form of acoustic features as input and produce neural embeddings that represent speaker-specific information in speech, which are then used for speaker discrimination. The most commonly used input acoustic features are Mel-scale spectrum features like log Mel-scale filter-bank coefficients (MFBANK) and Mel-Frequency Cepstral Coefficients (MFCC). They are computed from Short-Time Fourier Transform (STFT) coefficients and transformed using a set of pre-defined band-pass filters designed with consideration of human auditory perception [12]. These acoustic features are widely used and have achieved great success across different tasks of speech and language processing. However, there is no reason to expect that this universal acoustic front-end is optimal and performs equally well for a specific task like speaker verification. In [13], it is argued that narrow-band spectral information may contain distinct characteristics of speakers, and Mel-scale spectrum features might have ignored lots of narrow-band information.
Could we improve the performance of an SV system by learning the audio front-end as part of model training? In other application areas of deep learning, e.g., computer vision (CV), it has been shown that feature representations learned from raw input, i.e., image pixels, perform better than hand-crafted features in various modeling and classification tasks [14][15][16]. There were also a number of studies by the speech research community on applying CNN to learn features from raw waveform in conjunction with the downstream task [17][18][19]. Experimental results show their superior performance to hand-crafted features like MFBANK and MFCC. However, the performance gain is at the cost of significantly increased computation that is due to the small-stride CNN.
In this paper, a computationally efficient learnable acoustic front-end is proposed for SV systems. The front-end consists of a group of learnable filters that extract features with low computation cost by directly transforming the STFT spectrum. These Learnable Frequency-Filters (LFF) are similar to Mel-scale filters but allow flexible adjustment of the filters' frequency responses. The filters' bandwidths and center frequencies are updated in conjunction with the embedding extraction model in an end-to-end manner. Experimental results show that the proposed method achieves better performance than MFBANK and two learnable CNN-based feature extraction models. By analyzing the learned filters, it is noted that the flexibly adjusted bandwidth accounts for most of the improvement, while the learned center frequencies are very similar to those used in the Mel-scale filterbank.
The remainder of the paper is organized as follows. The relation to previous works is described in Section 2. Section 3 discusses the architecture of the proposed model. Experimental setup and results are given in Sections 4 and 5. Finally, Section 6 contains discussions and conclusions.
Previous works
The relation between MFBANK and the convolution layer is discussed first, in order to relate conventional signal analysis to feature learning in CNN. Then we give a brief review on different learnable feature approaches in previous research.
Mel-scale filter-bank features
The computation of MFBANK consists of two major parts: (1) STFT and (2) Mel-scale filter-banks. The STFT spectrum of input sample X is composed of a sequence of Fourier transform (FT) coefficients, which are calculated on the short-time frame sequence [x1, ..., xn]. These frames are cropped from X with window length w and hop length s. A window function f_window, e.g., a Hanning window, is applied to each frame. The FT coefficient on frequency bin k can be represented by the dot product between the time-domain signal samples and a complex-valued filter f_k = e^(−2πikt/N), where N is the number of frequency bins in the FT and t is the time index in a frame. Thus, the STFT coefficients on the frame x_j can be expressed as

S(x_j, k) = ⟨f_window ⊙ x_j, f_k⟩, (1)

where ⊙ and ⟨·, ·⟩ denote the operations of element-wise product and inner product, respectively. This can be written in the convolution form

S(X, k) = (f_window ⊙ f_k) * X, (2)

where * is the convolution operation with stride s. In this way STFT is made equivalent to a convolution layer in a DNN with kernel size w, stride s, and output channel N/2, with fixed complex-valued weights. A common setting of STFT at a sampling rate of 16 kHz uses 25 ms window length and 10 ms hop length, which correspond to w = 400 and s = 160. The Mel-scale filters are then applied on the STFT spectrum to obtain the MFBANK features.

Figure 1: The convolution kernel of Sinc and Gabor filters.
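The equivalence between Eq. (2) and a strided convolution can be checked with a few lines of NumPy; this is a hedged sketch with a random stand-in signal, not code from the paper.

import numpy as np

w, s, N = 400, 160, 512                      # window, hop, FFT size (Sec. 2.1)
t = np.arange(w)
window = np.hanning(w)
x = np.random.randn(16000)                   # 1 s of 16 kHz audio (random stand-in)

def stft_bin(x, k):
    # Fixed complex "convolution kernel" for bin k: windowed exponential.
    kernel = window * np.exp(-2j * np.pi * k * t / N)
    frames = np.lib.stride_tricks.sliding_window_view(x, w)[::s]
    return frames @ kernel                   # one STFT coefficient per frame

coeffs_k10 = stft_bin(x, k=10)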
Vanilla CNN filters
Given that STFT and the filterbank operation can be represented as convolution, a number of studies proposed to use learnable convolution kernels to generate filter-banks [18,20,21]. They showed that the filter-bank structure learned from CNN can approximate Mel-scale filter-bank with properly initialized weights, suggesting that learnable filter-banks have the ability to outperform MFBANK, or at least get close to it.
Parameterized CNN filters
In [22], it was shown that, without constraints on the filter weights, the frequency responses of the learned filters may exhibit spiky shapes and spread over a wide range, even into the negative frequency region, which does not appeal to our intuition. In view of these undesirable outcomes, Gabor filters were proposed to parameterize the convolution weights in [22]. In another concurrent work [13], the Sinc-function filter was adopted for the parameterization. The frequency responses of both types of filters approximate band-pass filters with rectangular and bell shapes, respectively, as shown in Fig 1. In addition, the number of learned parameters is reduced to two for each filter, i.e., the center frequency and bandwidth. However, Sinc and Gabor filters suffer from a severe problem in convolution. As shown in Eq. 3, the linear scaling in time gives an inverse scaling in frequency in the FT, i.e., f(at) transforms to (1/|a|)F(ω/a). That is, in order to achieve a wide-band frequency output, the filter in the time domain f(at) must be narrow. Thus the filter gains away from the filter center are close to zero (see Fig 1). As a consequence, a large part of the speech samples in the analysis frame is dismissed when a large convolution stride in time is used. To alleviate this problem, the stride is set to a relatively small value (1 in their work), which leads to an increase of the computation cost.
Figure 2: The learnable filters are applied on the STFT spectrum and transformed into dB scale.
Filters on frequency
Data-driven harmonic filters were proposed in [23], where learnable filters, instead of the commonly used Mel-scale filters, were applied to the STFT output. H triangular filters and F harmonics of each filter are learned in order to produce a 3-dimensional feature with shape F × H × T, representing Harmonic × Frequency × Time as in the harmonic constant-Q transform (HCQT). State-of-the-art results across various tasks, like automatic music tagging and keyword spotting, were achieved with this trainable front-end. The learnable filters proposed in this paper can be viewed as a simplified version of these harmonic filters, obtained by omitting the harmonic term. We choose to omit the harmonic term because it is originally designed to emphasize content that has a harmonic structure, which is of more importance for tasks like music information retrieval than speaker verification. We also carried out a detailed analysis of what factors affect the degree of improvement for speaker verification, by visualizing the learned filter parameters and comparing them with those of Mel-scale filters. We empirically show that for speaker verification, the main benefit of learnable filters comes from the adjustable bandwidths, and that the learned frequency centers are similar to those of the Mel scale.
Learnable filters
A group of learnable filter functions is applied to the STFT spectrum to generate the filter-bank output feature. Two types of filter functions are attempted in this work: a triangle-shape filter (T-type), defined as in Eq. 4, and a bell-shape filter (B-type), defined in Eq. 5, where N is the number of frequency bins and M is the number of filters in the learnable filter-bank. Each filter's center frequency and bandwidth are determined by two learnable parameters α and β. Stacking of the w_i creates a transformation matrix W of size N × M. Given a spectrum with size T × N (time-frequency representation), the filter-bank output is a T × M matrix obtained by multiplying the STFT spectrum with W, as illustrated in Fig 2. The output values are transformed into decibel (dB) scale.
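A hedged PyTorch sketch of the T-type variant follows; the exact triangle parameterization and the initialization are assumptions (Eqs. (4) and (5) are not reproduced in this excerpt), so this illustrates the mechanism rather than the authors' exact implementation.

import torch

class LearnableFrequencyFilters(torch.nn.Module):
    def __init__(self, n_bins=257, n_filters=64):
        super().__init__()
        # alpha: filter centers, beta: bandwidths, both on a normalized bin grid.
        # A real implementation would constrain beta to stay positive.
        self.alpha = torch.nn.Parameter(torch.linspace(0.0, 1.0, n_filters))
        self.beta = torch.nn.Parameter(torch.full((n_filters,), 0.03))
        self.register_buffer("f", torch.linspace(0.0, 1.0, n_bins))

    def forward(self, spec):                         # spec: (T, n_bins) STFT magnitudes
        tri = 1.0 - (self.f[:, None] - self.alpha[None, :]).abs() / self.beta
        W = torch.clamp(tri, min=0.0)                # N x M triangle filter matrix
        return 10.0 * torch.log10(spec @ W + 1e-8)   # (T, M) energies in dB

Only 2M scalars are trained here, in contrast to the w × M weights of an unconstrained convolutional front-end.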
Notably, if the values of α and β are specified according to the Mel-scale filters and fixed, the output features would be the log Mel-scale filter-bank features (MFBANK). In contrast, the learnable module allows flexible adjustment of the filters' locations and bandwidths to capture speaker-discriminative information.
Issues on the computation cost
For an input waveform with l samples, a convolution operation with kernel size w and stride s requires a computation cost of O(w·l/s). The small stride used in previous work [13,22] places a heavy computation burden, but a large stride degrades their performance in the experiments. The small stride is not required in the proposed method, because the FT coefficients change little within a short time interval. The proposed method is applied on the STFT spectrum, which does not require a small stride, alleviating the convolution computation cost.
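The following back-of-the-envelope sketch makes the O(w·l/s) argument concrete for 1 s of 16 kHz audio and 64 output channels; the stride values mirror those discussed later in the experiments.

w, l, channels = 400, 16000, 64
for s in (160, 80, 1):
    macs = channels * w * (l // s)   # multiply-accumulates, O(w * l / s) per channel
    print(s, macs)                   # stride 1 costs 160x more than stride 160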
Experimental setup 4.1. Datasets
Experiments are carried out on two speech datasets of different languages. The first dataset is the Voxceleb1 and Voxceleb2 [6,24,25], in which most of the utterances are in English. The SV model is trained on the development set of Voxceleb2 (Vox.2), which contains over 1 million utterances from 5,994 speakers. Three official test sets in Voxceleb1 are used for evaluation: the cleaned original test set (Vox-O), the extended test set (Vox-E), and the hard test set (Vox-H).
The second dataset is the CN-Celeb [26], which consists of over 100k recordings in Chinese from 1,000 speakers. We use the default train/eval split provided in the dataset. The cross-language generalization of extracted features is evaluated on CN-Celeb. All audio data are sampled at 16 kHz and no data augmentation is applied in the experiments.
Backbone network
One of the main modules in an SV system is the backbone network, which takes acoustic features as input and generates the speaker embeddings. The backbone used in this work is a modified version of Time delay neural network (TDNN) [5], which is made up of several 1D convolution layers with dilation and a statistics pooling layer. Compared with the network structure in [5], there are three main modifications: (1) an instance normalization layer (IN) [27] is added at the top, which normalizes the input features on time dimension; (2) the original pooling layer is replaced by an attentive statistics pooling layer [28]; (3) the output dimension of layer segment7 is set as 256, and this layer's output is utilized as the speaker embedding.
Training and Evaluation
An Additive Margin Softmax Loss [29] with scale = 30 and margin = 0.2 is employed for speaker classification during training. The feature extraction module and the backbone network are trained jointly by an Adam optimizer [30] with a batch size of 128 to minimize the classification loss. Each sample within a batch is a 2-second speech segment randomly cropped from an utterance. The model is trained on Vox.2 for 30 epochs. The learning rate is initialized as 0.001 and decayed by a factor of 0.1 at epochs 15 and 25, respectively.
In the evaluation, each utterance is divided into multiple 4-second segments, with a 3-second overlap between neighboring segments. The average cosine similarity between the segments from the test utterance and the enrollment utterance is used as the score for verification.

Figure 3: The y-axis of (a) represents the filters' bandwidths; that of (b), the center frequencies of the filters.
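A minimal sketch of the segment-averaged cosine scoring described in the evaluation protocol above, assuming segment embeddings are already extracted as rows of two matrices:

import numpy as np

def verification_score(test_emb, enroll_emb):
    # Average pairwise cosine similarity over all test/enrollment segment pairs.
    a = test_emb / np.linalg.norm(test_emb, axis=1, keepdims=True)
    b = enroll_emb / np.linalg.norm(enroll_emb, axis=1, keepdims=True)
    return float((a @ b.T).mean())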
LFF vs. MFBANK
The number of frequency bins (N in Eq. 1) in STFT is set to 512, and the number of band-pass filters in both LFF and MFBANK is 64. The Equal Error Rate (EER%) results on the Voxceleb1 test sets are summarized in Table 1. We can see that LFF outperforms MFBANK on all test sets, and that T-type and B-type learnable filters do not show significant differences in model performance.
To understand what the filters have learned in training, we analyzed their parameters. The bandwidths and center frequencies are plotted in Fig. 3. The x-axis of (a) and (b) represents the index of the filter, ranging from 0 to 63 (i.e., there are 64 filters in total). The y-axis represents the learned parameter value. It can be observed that the bandwidths and center frequencies of the T-type filters are very close to those of the B-type. Compared with the Mel-scale filters in Fig. 3a, the learned filters have smaller bandwidths over the whole frequency region, suggesting that narrow-band filters are more appropriate for extracting speaker-related characteristics. On the other hand, the center frequencies of the learned filters are surprisingly similar to the Mel-scale filters, as shown in Fig. 3b.
Performance v.s. computation cost
In this section, the proposed method LFF is compared with two other learnable front-end features, Gabor-conv and Sinc-conv, which take raw waveform as input. Gabor-conv is modified from LEAF, in which the learnable pooling is disposed of and the learnable normalization is replaced by the logarithmic function. For a fair comparison, the kernel size of the first convolution layer for the latter two methods is set as 400 with a stride of 160, corresponding to the classic STFT configuration of window length 25ms and window shift 10ms. The output feature dimension of all feature extraction models is fixed at 64.
However, under this configuration, Gabor-conv and Sinc-conv are unlikely to achieve good performance because they require a small stride to cover wide-band frequency information. We therefore decrease the stride of the convolution layers in Gabor-conv and Sinc-conv by half. Meanwhile, the window shift of STFT for LFF is also reduced by half to allow fair comparison. A max-pooling layer is applied behind the convolution to tailor the length of the output.
The impact of stride/window-shift on the performance of the above three learnable feature extraction models is depicted in Figure 4 and Table 2. Only Vox-E and Vox-H are shown here because they are of a much larger size than Vox-O and are therefore more representative. It is noted that both Gabor and Sinc depend on a small stride to obtain good performance, which implies a higher computation cost. On the contrary, LFF is not sensitive to the stride because it is applied directly on the STFT spectrum. Compared with Gabor and Sinc, the proposed method gives superior performance under stride 160, meaning it would be preferable when computing resources are limited.
Model generalization
We evaluate the generalization ability of the proposed LFF by testing it on another network architecture: ECAPA-TDNN [8], and another dataset: the CN-Celeb dataset.
ECAPA-TDNN incorporates the Res2Net structure into the TDNN and gives state-of-the-art performance. The 512-channel ECAPA-TDNN is used in the experiments and the training process is similar to that of TDNN as described in Section 4.3. For the CN-Celeb dataset, whose size is much smaller than Voxceleb, the speaker embedding dimension is decreased to 128, and a dropout layer with p = 0.3 is added before the embedding layer to alleviate overfitting. The number of training epochs is also reduced to 15. Table 3 compares the performance of LFF and MFBANK on the previously described two backbone architectures and datasets. It shows that features from LFF-T give consistently better results than MFBANK, suggesting that the proposed learnable frequency filters generalize well across different network architectures and languages.
CNN filter-banks
The proposed approach aims to extract useful features from the STFT spectrum. To evaluate whether the information extracted by CNN from raw waveform can complement the features of learned filters and improve the performance, a one-layer CNN is applied. The input waveform is normalized by mean and standard deviation first (achieved by an IN layer). The normalized waveform is processed by the convolution layer to generate T × λM output features, where T denotes the output time length. λ is a hyperparameter smaller than 1 and controls the relative contributions from the CNN and LFF. For the output feature with size T × M , (1 − λ)M channels of it are generated from LFF. The convolution kernel size and stride for both CNN and LFF are set as 400 and 160 respectively.
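A hedged sketch of this hybrid front-end follows; the value λ = 0.25 and the log-magnitude nonlinearity on the CNN branch are assumptions for illustration, not settings reported in Table 4.

import torch

lam, M = 0.25, 64                                 # hypothetical lambda; M output channels
conv = torch.nn.Conv1d(1, int(lam * M), kernel_size=400, stride=160)
x = torch.randn(1, 1, 32000)                      # 2 s of 16 kHz audio (IN-normalized)
cnn_feat = torch.log10(conv(x).abs() + 1e-8)      # (1, 16, T) from the CNN branch
# lff_feat with shape (1, M - int(lam * M), T) would come from the LFF module;
# the two branches are then concatenated along the channel dimension:
# feat = torch.cat([cnn_feat, lff_feat], dim=1)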
The results are shown in Table 4. It can be observed that involving CNN in the feature extraction does not improve the performance. This indicates that vanilla convolution cannot provide additional information for SV within a low computation cost (large stride and small output dimension). The design of CNNs for feature extraction from raw waveform requires further careful investigation.
Conclusions
A learnable feature extraction front-end for SV, named LFF, has been developed and evaluated. The model consists of a group of filters with learnable bandwidths and center frequencies, and the filters are applied on the STFT spectrum to extract filter-bank features. Two different filter shapes were investigated in the experiments and they give similar performance in SV. Compared with conventional Mel-scale filters, the learned filters exhibit narrower bandwidths. The proposed method can be implemented with low computation cost and performs better than the two other learnable front-ends, Gabor and Sinc, under a fair comparison in the experiments.
"year": 2022,
"sha1": "43f144bbdd5335d411d18a27db4c49999c735757",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "43f144bbdd5335d411d18a27db4c49999c735757",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Engineering"
]
} |
152283855 | pes2o/s2orc | v3-fos-license | Using Social Network Analysis to Investigate the Collaboration Between Architects and Agile Teams: A Case Study of a Large-Scale Agile Development Program in a German Consumer Electronics Company
Over the past two decades, agile methods have transformed and brought unique changes to software development practice by strongly emphasizing team collaboration, customer involvement, and change tolerance. The success of agile methods for small, co-located teams has inspired organizations to increasingly use them on a larger scale to build complex software systems. The scaling of agile methods poses new challenges such as inter-team coordination, dependencies to other existing environments or distribution of work without a defined architecture. The latter is also the reason why large-scale agile development has been subject to criticism since it neglects detailed assistance on software architecting. Although there is a growing body of literature on large-scale agile development, literature documenting the collaboration between architects and agile teams in such development efforts is still scarce. As little research has been conducted on this issue, this paper aims to fill this gap by providing a case study of a German consumer electronics retailer’s large-scale agile development program. Based on social network analysis, this study describes the collaboration between architects and agile teams in terms of architecture sharing.
Introduction
Emerging in the 1990s, agile methods have transformed and brought unprecedented changes to software development practice by strongly emphasizing change tolerance, continuous delivery, and customer involvement [1,2]. With these agile methods, self-organizing teams work closely with business customers in a singleproject context, maximizing customer value and quality of delivered software product through rapid iterations and frequent feedback loops [1]. The success of agile methods for small, co-located teams has inspired enterprises to increasingly apply agile practices to large-scale endeavors [2,3]. Since the initial application of agile methods was originally intended for small, co-located teams, many organizations are uncertain how to introduce them at scale and therefore face new challenges such as inter-team coordination, dependencies to other existing environments or distribution of work without a defined architecture [1,4,5]. The latter is also the reason why large-scale agile development has been subject to criticism since it neglects detailed assistance on software architecting [2,6]. Agile methods assume that architecture should evolve incrementally rather than being imposed by some direct structuring force (emergent architecture) [7]. However, the practice of this design is effective at team level but insufficient at large-scale. It causes excessive redesign efforts, architectural divergence, and functional redundancy increasing a system's complexity [7,8]. Therefore, an intentional architecture is required, which embraces architectural guidelines that specify inter-team design and implementation synchronization [7,9]. The effective evolution of a system's architecture requires the right balance of emergent and intentional architecture and a close collaboration between architects and agile teams [7,9,10].
Literature describing the collaboration between architects and agile teams in large-scale agile development is still scarce. This paper aims to fill this gap by providing a case study of a German consumer electronics retailer's large-scale agile development program. Based on this objective, our research question is:
How does the collaboration take place between architects and agile teams in a large-scale agile development program?
The remainder of this paper is structured as follows. In Sect. 2, we provide an overview of foundations and related works. In Sect. 3, we present the research approach of this paper. Section 4 describes the case study on the collaboration between architects and agile teams in the large-scale agile development program. We discuss our lessons learned in Sect. 5 before concluding the paper with a summary of our results and remarks on future research in Sect. 6.
Background and Related Work
In the following, the Scaled Agile Framework and Spotify Model are introduced, as the observed program has adopted these two scaling frameworks. Thereafter, the concept of communication networks is presented, which is essential for interpreting the results of the social network analysis in Sect. 4.
Scaled Agile Framework
The Scaled Agile Framework (SAFe), a widely used scaling framework [11], was first published by Dean Leffingwell in 2011. SAFe builds on existing lean and agile principles that are combined into a method for large-scale agile projects.
It provides a soft introduction to the agile world as it specifies many structured patterns. This introduction is needed for organizations moving from traditional to agile development environment [7]. The latest SAFe 4.6 version supports four out-of-the-box configurations: Essential SAFe, Large Solution SAFe, Portfolio SAFe, and Full SAFe. As the observed program uses Essential SAFe, we will subsequently focus on this. Essential SAFe is the simplest entry point for implementing SAFe and consists of team and program levels [7]. At team level, the techniques outlined are those used in Scrum. Each team consists of five to nine members, one scrum master (SM), and one product owner (PO). All teams are part of an agile release train (ART), a team of agile teams that delivers a continuous flow of incremental releases. Each team is responsible for defining, building, and testing stories from its team backlog in a series of two-week iterations using common iteration cadences [7]. At program level, the product management (PM) serves as the content authority for the ART and is accountable for identifying program backlog priorities. The PM works with POs to optimize feature delivery and direct their work at team level. A release train engineer (RTE) facilitates program execution, escalates impediments, manages risk, and helps to drive continuous improvement [7]. The system architect has the technical responsibility for the overall architectural design of the system and aligns the ART with the common technical and architectural vision [7].
Spotify Model
In 2012, Kniberg and Ivarsson [12] published Spotify's approach to scale agile methods over 30 teams across three cities. The Spotify Model emphasizes the importance of "aligned autonomy", i.e. the autonomy of agile teams with simultaneous collaboration and coordination to achieve the same goals. The basic unit of development is called a Squad, which is similar to an agile team in SAFe. Squads are self-organizing and autonomous teams that have all the skills to design, develop, test, and release for production. A Tribe is designed as a collection of squads working in related areas (correspondents to an ART in SAFe). Squads within a tribe are co-located. People with similar skills in the same competency area within the same tribe form a Chapter. A Guild is a community of people that share same interests and often includes all chapters working in this area (complies with a community of practice in SAFe) [12].
Communication Networks
According to Guo and Sanchez [13], communication is understood as the creation or exchange of thoughts, ideas, and emotions between senders and receivers. Communication can be decomposed into two types: inter-team and intra-team communication. The former stands for communication between several teams, the latter for communication within a team [14]. The flows of communication connecting senders and receivers are called communication networks [15]. Figure 1 depicts five common communication networks. The wheel network is the most centralized network pattern. In this network, each member communicates with only one other person [15]. The superintendent C receives all the information from his subordinates A, B, D, and E and sends back information, usually in the form of decisions. The chain network is the second highest in centralization. Only two people communicate with each other, and they have only one other person to communicate with. The Y network is similar to the chain network except that two members are out of the chain. In the Y network, members A and B can send information to C but they cannot receive information from anyone else. Members C and D can exchange information. Member E can exchange information with member D. The circle network stands for horizontal and decentralized communication, which offers equal communication possibilities for every member. Each member can communicate with one other member to his right and left. Members have identical restrictions, but the circle is a less restricted condition than the wheel, chain, or Y network. The all-channel network is an extension of the circle network and connects everyone in the circle network, as it permits each member to communicate freely with all other persons [15].
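To make the centralization contrast concrete, the sketch below builds two of these patterns with networkx and compares degree centralities; the five-member size matches Fig. 1 but is otherwise arbitrary.

import networkx as nx

wheel = nx.star_graph(4)                  # node 0 plays the role of superintendent C
circle = nx.cycle_graph(5)                # every member has exactly two contacts
print(nx.degree_centrality(wheel))        # hub dominates: {0: 1.0, leaves: 0.25}
print(nx.degree_centrality(circle))       # fully equal: 0.5 for every node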
Agile Architecture
Angelov et al. [16] describe the role of architects and challenges they face in Scrum such as insufficient collaboration, lack of understanding of the value of architecture, and poor communication between team architects [16]. Bachmann et al. [17] and Nord et al. [18] present four tactics to achieve agility at scale by aligning the system architecture, organization structures and product infrastructures. These include vertical and horizontal system decomposition, matrix and augmented team structures, architecture and infrastructure runway, and deployability tactics and can be used in different phases in a system's life cycles. Uludag et al. [10] describes how the adoption of domain-driven design supported a large-scale agile development program with three agile teams at a large insurance company. Uludag et al. [10] report that agile teams and project managers involved in the program conceived that without any form of architectural guidance, large-scale agile development programs can hardly be successful. Dingsøyr et al. [19] investigated a large-scale development program with an extensive use of Scrum and a focus on customer involvement, inter-team coordination, and software architecture. Two key findings related to software architecture are the tension between up-front and emergent architecture and the demanding role of architects in large-scale agile development.
Case Study Design
A case study is a suitable research methodology for this paper, since it helps to study contemporary phenomena in a real life context [20]. We followed the guidelines described by Runeson and Höst [20].
Case Study Design:
The main objective of this paper is to investigate the collaboration between architects and agile teams in large-scale agile development in terms of architecture sharing. Based on this objective, we defined one research question (see Sect. 1). The study is an exploratory single case study, since this paper looks into an unexplored phenomenon and aims to seek new insights and generate ideas for future research [20]. The case was purposefully selected because the studied company has successfully adopted SAFe for building complex software for the last one and a half years. The unit of analysis is the consumer electronics retailer's large-scale agile development program.
Data Collection:
We used a mixed methods approach with three levels of data collection techniques [21]. As direct methods, we observed two Program Increment (PI) Planning events [7] with low degree of interaction by the researcher and low awareness of being observed [20]. These observations provided a deep understanding of the overall structure. With the help of seven semi-structured interviews, roles and practices related to architecture were identified and documented. Quantitative data was collected by the online-survey tool Questback for building the social networks and revealing the collaboration between architects and agile teams (see Sect. 4). Therein, we asked respondents how often they exchange architectural advice and decisions with their colleagues, how often they see their colleagues, and if they have suggestions on how to improve the exchange among team members (using a Likert scale). A total of 32 out of 62 available people from eight teams took part in the survey. Three persons were removed from the analysis because no clear assignment to these persons could be made. The response rate for the remaining 29 program members from eight teams is 47% with 758 connections for architecture sharing.
Data Analysis: Interviews and observation protocols were coded using a deductive approach as proposed by Cruzes and Dybå [22]. Qualitative data collected in interviews form the theoretical foundation for interpreting social relations between architects and agile teams. After initial coding, codes were refined and consolidated by merging related ones and removing duplicates. Quantitative data was analyzed through the use of social network analysis, which comprises a set of methodological techniques that aim to describe and explore patterns in relationships that individuals and groups form with each other [23].
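As a hedged illustration of the centrality measure reported below, the following sketch computes normalized degree centrality from a random placeholder contact matrix for the 29 respondents; the real matrix was built from the survey answers, not generated randomly.

import numpy as np

rng = np.random.default_rng(0)
contacts = (rng.random((29, 29)) > 0.7).astype(int)   # placeholder who-shares-with-whom
np.fill_diagonal(contacts, 0)
ties = np.maximum(contacts, contacts.T)               # symmetrize reported exchanges
norm_degree = ties.sum(axis=1) / (len(ties) - 1)      # 1.0 = shares with everyone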
Case Description
In 2016, the case organization decided to relaunch a failed CRM project using agile methods. Due to the complexity of the project, the management decided to relaunch it with the help of a scaling framework. During early stages of research, the reasons for using Essential SAFe (from now on SAFe) became more apparent and convincing to the management. One reason for choosing SAFe was that it has proven itself in large organizations and offers comprehensive documentation. The adoption was initiated with a pilot project, which was geographically distributed. At the beginning, the pilot project faced a lot of problems. Thus, all involved employees were trained in agile methods and SAFe by external agile coaches. After a few PIs, the responsible management team perceived that SAFe did not provide sufficient guidance on the coordination of their agile teams. Thus, the organization decided to combine SAFe with the Spotify Model. Within the transformation process, program members were divided into tribes, chapters, squads, and guilds. Figure 2 shows the current organizational structure of the observed program, with all 62 members forming a tribe. This tribe consists of a "scaled" team (Team A), which does not play a hierarchically superior, but a more coordinating role without personnel management, and four squads (Team B, Team C, Team D, and Team E). Team F, Team G, and Team H, which are not shown in Fig. 2, constitute representatives of three suppliers that provide external support for their third-party systems. The tribe is divided horizontally into nine chapters for: (1) the chief product owner (CPO) and POs, (2) RTE and SMs, (3) IT project managers (IT-PMs), (4) quality analysts and test managers (QAs & TMs), (5) data analysts (DAs), (6) solution architects (SAs) 1 , (7) business process architects (BPAs), (8) product reliability engineers (PRE), and (9) developers (Devs). Each SA is assigned to a squad and takes care of the overall system architecture with its subsystems and interfaces. The team concentrates on the cross-system data flows and processes related to the integration of the architecture. These data flows and processes are used to define minimum interface requirements that all teams must meet. In contrast to SAs, who represent technical architects, BPAs are functional architects that are also dedicated to squads. The responsibilities of BPAs are not really known yet, as their role has been added to the program just recently. However, both architect roles should play a dual role within their squads by making architectural decisions and guiding them to fulfill the required architectural standards. Due to the ongoing transformation, guilds have not yet been established but will be organized soon. In the following two sections, the inter- and intra-team exchange of architecture-related information in the observed program will be presented.

Inter-Team Architecture Sharing

Figure 3 provides an overview of how architecture-related information is shared across all teams. An interesting finding here is that the scaled team is located in the center of the graph. This indicates continuous communication and coordination between the scaled team and the four squads on architectural topics. Figure 3 also shows a close collaboration between Team B and Team E and between Team B and Team D, which is due to architectural dependencies between the systems on which they work. Figure 3 also provides an overview of roles that are intensively involved (large nodes) in architecture sharing.

Fig. 3. Social network of eight teams including salient roles that are intensively involved in inter- and intra-team architecture sharing

First, it shows that the CPO of Team A (CPO A) is the most outstanding node in the inter- and intra-team exchange of architecture-related information. Second, SAs also form relatively large nodes compared to other roles. This observation confirms the importance of SAs for the exchange of inter- and intra-team architectural information. Figure 3 also shows that the TM A plays an important role in architecture sharing. Table 1 presents the top 10 stakeholders involved in inter-team sharing based on the normalized degree centrality 2 measure.

Table 1. Top 10 stakeholders involved in inter-team architecture sharing based on normalized degree centrality

Table 1 shows that the CPO A has a normalized degree centrality value of 1.0, which indicates that he/she is sharing information with all stakeholders involved in the observed program. The SA E and SA D have normalized degree centrality values of 0.92 and 0.90, indicating high involvement in inter-team sharing. The PI planning event of SAFe is a face-to-face event [7] that aims to align all agile teams within the ART to share the common mission and vision by creating iteration plans and team objectives for the upcoming PI. It is conducted every two and a half months and offers a platform for the exchange of general and architectural information across teams, since all members of the ART are present in one location. Figure 4(a) shows that SAs and BPAs have a very strong sharing with other teams during the PI planning. Figure 4(d) reveals a chain communication between the SA B, SA C, SA D, and SA E on a daily basis. In particular, the chain is composed as follows: SA E exchanges information with SA B, who exchanges information with SA D, who shares information with SA C. This communication pattern characterizes a centralized communication between SAs. The chain communication pattern can also be observed with SA B, SA D, and SA E. Figure 4(e) shows that SA B, SA D, and SA E constantly 3 exchange information and that SA C is no longer involved in an exchange with other SAs. Figure 4 shows that SAs form a decentralized all-channel communication pattern. The overall comparison also shows that the three external SAs of Team B participate less in the inter-team exchange than the rest of the internal SAs involved in the program. Other roles such as SM, TM, PO, and CPO are also heavily involved in the exchange of information within the PI planning. The shorter the observed time intervals become, the more dominant the SAs become with regard to the inter-team sharing.
Intra-Team Architecture Sharing
The exchange of architectural information in Team B shows a central wheel communication pattern between SAs, since external SAs are guided by the internal SA, who represents the intra-team lead architect (see Fig. 5(a)). Figure 5(a) also shows that SAs form the core of the team. Moreover, Fig. 5(a) shows that BPA B only exchanges information with one other role. A decentralized all-channel communication pattern can be observed in Team C (see Fig. 5(b)). This means that other non-architectural roles exchange information without necessarily involving SA C. Nevertheless, SA C plays the most central role, since the SA frequently communicates with all team members. Compared to BPA B, BPA C plays a more central role, as he/she shows a close collaboration and communication with his/her squad (see Fig. 5(a) and (b)).

Fig. 4. Social networks focusing on SAs and BPAs with regard to the frequency of inter- and intra-team architecture sharing

The comparison of the two figures also shows that SA C and BPA C exchange information more frequently than SA B and BPA B. Figure 5(b) shows a decentralized all-channel communication pattern between architects and other team members of Team D. Similar to BPA C, BPA D often exchanges architectural information with his/her squad.

Table 2. Normalized degree centralities of architects in intra-team architecture sharing

Table 2 shows the normalized degree centrality values of SAs and BPAs involved in intra-team architecture sharing. 75% of the SAs possess a normalized degree centrality value of 1.0, indicating that they share information with all squad members. Comparing SAs with BPAs, Table 2 shows that SAs have a stronger exchange of information with their squad members than BPAs (except Team E). Figure 6 shows how Team B's intra-team sharing changes at four distinct time intervals. For instance, Fig. 6(a) shows that BPA B only exchanges information with one Dev B once per iteration. Figure 6(b) shows that the exchange of information between SAs and non-architectural roles mostly takes place two to three times per iteration, while the sharing between SAs takes place constantly (see Fig. 6(d)). Similar to Fig. 6, Fig. 7 shows Team C's intra-team architecture sharing. The exchange in the team usually takes place two to three times per iteration (see Fig. 7(c)). Sharing between architects and non-architectural roles takes place on a daily basis (see Fig. 7(d)). In contrast to Team B, Fig. 7(e) shows that SA C and BPA C constantly communicate together. Figure 8(a) shows that the exchange between architects and non-architectural roles as well as among architects mainly takes place on a daily basis. SA D and BPA D constantly exchange architectural information (see Fig. 8(b)). Figure 8(b) also shows that other members such as DA D, QA & TM D, PRE D, and PO D constantly exchange architectural information. The intra-team exchange of Team E takes place mainly on a daily basis (see Fig. 9(a)). SA E and BPA E communicate on a daily basis (see Fig. 9(b)). Architecture sharing between architects and non-architectural roles takes place on a daily basis. Figure 9(b) shows that two groups are formed during the constant exchange of information. The first group includes SA E, SM E, Dev E, and PO E, while the second group consists of Dev E, BPA E, and PRE E. Table 3 provides a summary of the social network analysis with identified communication patterns and frequencies.
Key Findings
Both architectural roles, i.e. SAs and BPAs, and other roles, e.g. TMs, SMs, and POs, are involved in inter- and intra-team architecture sharing. In particular, the CPO plays one of the most salient roles. An all-channel communication network can be observed in each squad. SAs enable a decentralized exchange so that other team members can exchange architecture-relevant information without necessarily involving SAs. This observation coincides with the values and principles of agile software development. Both SAs and BPAs prefer face-to-face communication with their team members and do not exchange information by including bridging roles. Each squad is accompanied by at least one SA and BPA. Both architects play a dual role in their squads. On the one hand, they make architectural decisions and iteratively create architecture models. On the other hand, they provide guidance and support their squad in meeting architectural standards. With this setup, the observed program aims to increase development speed by balancing emergent and intentional architecture. In all social networks, SAs form central nodes in inter- and intra-team sharing.

Table 3. Summary of the social network analysis
Threats to Validity
We discuss potential threats to validity along with an assessment scheme as recommended by Runeson and Höst [20].
Construct Validity:
This aspect reflects to what extent operational measures that are studied really represent what the researcher has in mind [20]. Two countermeasures were taken for construct validity. First, interview protocols were coded by the author of this paper and reviewed by a second researcher. Second, a key informant of the organization has reviewed the analyses of this paper.
Internal Validity: This aspect is irrelevant here, as this study was neither explanatory nor causal [20].

External Validity: This paper focuses on analytical generalization [20] by providing a detailed description of the case. It provides empirical insights that allow for a profound understanding of the collaboration between architects and agile teams. The shown findings should be viewed as valuable insights for other organizations that have adopted Essential SAFe.
Reliability: This validity is concerned with to what extent the data and the analysis are dependent on the specific researcher [20]. To mitigate this threat, two countermeasures were taken. First, the case study has been designed so that the large number of interviewees and multiple interviewers allowed data and observer triangulation. Second, a case study database was created, which includes case study documents such as audio recordings protocols, and field notes of observations.
Conclusion and Future Work
In this paper, we described the collaboration between architects and agile teams in a large-scale agile development program of a German consumer electronics retailer. Due to the complexity and extent of the CRM product, each squad is guided and supported by at least one SA and BPA. Each SA is responsible for the architecture of a subsystem and ensures that the respective squad complies with defined architectural requirements. The observed program also introduced the new role of the BPA that is responsible for developing the functional architecture of the subsystem. To understand the role of SAs and BPAs and their collaboration with squads, we investigated social networks of one scaled team and four squads. We learned that intra-team architecture sharing is usually facilitated by SAs. Comparing the social networks with common communication networks, we discovered that SAs and BPAs prefer direct communication. For the most part, architects share information on a daily basis with their teams. The intra-team sharing between architects and their teams is characterized by an all-channel communication network.
As future work, we will continue to study the large-scale agile development program of the German consumer electronics retailer. First, we will research how the current state of architecture sharing is perceived by the stakeholders and how it could be improved by the use of various coordination mechanisms such as ad hoc meetings, co-location or communities of practices. Second, as the squads in the large-scale agile development program become more mature and evolve towards feature teams, we will investigate the architectural decision-making process of squads. We hope to gain a better understanding of the collaboration between architects and squads regarding the distribution of their responsibilities for architectural issues. | 2019-05-14T14:03:41.882Z | 2019-05-21T00:00:00.000 | {
"year": 2019,
"sha1": "66f3512cb3421b000c484b48e4f3035dc8bd5977",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/978-3-030-19034-7_9.pdf",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "1f97ab822839979237dc0f9fe7ddfe16cfac0f54",
"s2fieldsofstudy": [
"Computer Science",
"Engineering",
"Business"
],
"extfieldsofstudy": [
"Engineering",
"Computer Science"
]
} |
139436321 | pes2o/s2orc | v3-fos-license | Measurements of detonation propagation in the plastic explosive in charges of small diameters using synchrotron radiation
During the detonation of charges of plastic explosives based on PETN and RDX, the spatial structure of the arising flow and the degree of compression of the matter behind the detonation front were obtained using the synchrotron radiation facilities. The experiments were carried out with charges of small diameters, which made it possible to control the symmetry of the undisturbed charge of the explosive and the flow behind the detonation front. In addition, the advanced detector was used to increase the frame rate by 4 times in comparison with the authors’ earlier works.
Introduction
Registration of fast processes accompanying explosive phenomena using synchrotron radiation (SR) as the source of X-ray pulses is an efficient method that has been applied to the study of detonation and shock-wave phenomena for more than ten years. The authors of this article use and continuously modernize the experimental station at the Siberian Synchrotron and Terahertz Radiation Centre at the Budker Institute of Nuclear Physics [1,2]. This station enables one to obtain a multi-frame slit X-ray film with a frame exposure time of about 1 nanosecond and a spatial resolution of 0.1 mm. A similar station for recording the SR during high-speed processes was created by colleagues at the Argonne National Laboratory in 2012 [3,4].
The way of obtaining information about the structure of the flow behind the front of the detonation wave is based on the registration of the SR beam passing through the detonating charge. The identity of the SR X-ray pulses in spectrum and intensity distribution during the experiment makes it possible to calibrate the detector and obtain the distribution of the compression of matter along the beam for the process being studied. Another important feature is the fact that the registration of the distribution of the compression of the substance in a detonating charge of an explosive does not introduce perturbations into the arising flow. This circumstance is especially important for studying the detonation process in charges of small diameters, where the use of contact sensors can lead to catastrophic changes in the observed phenomenon. To register the radiation passing through the expansion area of the detonation products, a new detector was launched (2017) and used in this work. Compared with the previous detector [5], the time resolution was improved by 4 times. Now, the interval between frames is 124 ns. The number of frames during the experiment was also increased up to 100. Thus, it is possible to trace the detonation process starting from the unperturbed state of the material to the late stages of expansion of the detonation products.
Experimental setup and reconstruction of the distribution of compression of the substance
In this work, we measured the detonation parameters in cylindrical charges of plastic explosives based on PETN and RDX with diameters of 5, 10, and 15 mm. The measurement of the flow that arises behind the front of the detonation wave was carried out in the cross-section of the charge, as shown in figure 1. The intensity distribution of the SR beam passed through the detonating charge of high explosive is recorded by the one-coordinate multichannel detector. The amount of substance along the SR beam was calculated from the measured intensity distribution using a previous calibration of the detector. For this purpose, the attenuation of the intensity for a known amount of mass of the investigated explosive along the SR beam is determined in the static regime. The integral current in the accelerator ring completely determines the spectrum and the intensity distribution of the synchrotron radiation beam, so the calibration can be carried out regardless of the time of the experiments. However, the detonation products and fragments of the experimental assembly that reach the windows of the explosive chamber during an explosion can affect the recorded data, so calibrating just before each experiment considerably improves the results. The charge was initiated by a generator of a plane shock wave. The measurements were carried out at a distance from the initiation face sufficient to establish the stationary detonation regime. The assumption of a stationary detonation front must be made in order to compare the experimental data for charges of different diameters and explosive compositions.
Charges of small diameter can be placed entirely within the registration region of the SR beam, which makes it possible to control the symmetry of both the undisturbed charge and the flow arising behind the detonation front. The relative amount of mass for a 10 mm diameter charge of the PETN-based explosive is shown in figure 2. The cylindrical symmetry of the mass distribution is of fundamental importance, since it makes it possible to pose the problem of tomography of the compression of the substance using only one shot. For the tomography problem, the noisy experimental data must be smoothed. However, there are natural discontinuities at the detonation front and at the charge edge that are typical of the explosion process. To solve the tomography problem, we use the method [6] developed earlier by the authors of this work. This method is based on the use of a priori information to smooth the experimental data. The essence of the method is set out below.
When measuring the amount of mass along the SR beam during detonation of a cylindrical explosive charge, we obtain a distribution of the typical form shown in figure 3. First, we record the unperturbed state of the explosive charge; then the detonation front passes through the registration cross section, and later the lateral expansion of the detonation products proceeds. After a while, the detonation products leave the registration area of the SR beam. This mass distribution is equivalent to the distribution of the compression of the substance obtained by setting the corresponding values at the nodes of the grid shown in figure 4. For this purpose, a spline function is calculated through the grid nodes along each line shown in figure 4, starting from the detonation front downwards. The spline function gives values not only at the grid nodes but also between them. In the experiment, the mass distribution is measured in the horizontal cross sections of the grid (one of them is indicated by a dashed line). This mass distribution can be obtained using another spline function, calculated along the dashed line inside the region filled with the explosion products. The geometry of the grid is determined by the curvature of the front and the boundary of the region filled with the explosion products; thus, none of the splines crosses the discontinuities. Minimizing the deviations between the experimental and calculated data by varying the grid parameters, we obtain the distribution of the compression of the substance corresponding to the X-ray shadow observed in the experiment. For the compositions under study, the curvature of the detonation front is very large. For charges of small diameter, the detonation front passes the observation cross section in only tens of nanoseconds, while the time interval between frames is 124 nanoseconds. This makes it impossible to determine the curvature of the front reliably in these experiments, so the curvature was determined in separate experiments. A similar situation arises in determining the state of the substance in the von Neumann spike, since the typical width of the chemical reaction zone for the considered explosives is also tens of nanoseconds [7]. Therefore, at best, the first frame behind the detonation front shows the mass distribution shortly after the Chapman-Jouguet state.
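Under the axisymmetry assumption, the fit for a single time frame amounts to choosing nodal density values whose chord (Abel-type) projections reproduce the measured mass profile. The Python sketch below illustrates this on a synthetic profile; it uses a plain radial spline instead of the front-adapted grid of [6], and all numbers are assumed.

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.integrate import trapezoid
from scipy.optimize import least_squares

R = 0.5                                          # charge radius, cm (assumed)
r_nodes = np.linspace(0.0, R, 6)                 # radial nodes of the unknown density
x_meas = np.linspace(-0.98 * R, 0.98 * R, 41)    # chord positions of the SR beam

def project(rho_nodes):
    """Areal mass along parallel chords through an axisymmetric density
    given by spline values at the radial nodes (Abel-type projection)."""
    rho = interp1d(r_nodes, rho_nodes, kind="cubic")
    out = np.empty_like(x_meas)
    for i, x in enumerate(x_meas):
        r = np.linspace(abs(x), R, 400)[1:]      # skip the r = |x| singularity
        out[i] = 2.0 * trapezoid(rho(r) * r / np.sqrt(r ** 2 - x ** 2), r)
    return out

# Synthetic "measured" profile: a smooth density bump plus noise (illustrative only).
rng = np.random.default_rng(0)
rho_truth = 1.6 + 0.8 * np.exp(-((r_nodes / 0.2) ** 2))
m_meas = project(rho_truth) + rng.normal(0.0, 0.01, x_meas.size)

fit = least_squares(lambda p: project(p) - m_meas, x0=np.full(r_nodes.size, 1.6))
print(fit.x)   # recovered density values at the radial nodes
```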
When the detonation front has passed through the explosive charge, the detonation products quickly expand beyond the boundary of the registration area because of intensive lateral expansion. The reconstruction method considered here can still provide data in this case, owing to the smoothness requirement on the solution, but the error increases substantially. Previously, it was usually possible to record no more than one frame before the detonation products expanded outside the observation area. The fourfold decrease in the interval between frames, to 124 nanoseconds, significantly increases the accuracy of determining the structure of the compression of the substance behind the detonation front.
Results
The solution of the tomography problem yields, for each experiment, the radial function describing the dynamics of the compression of the substance in the detonating charge. This function can be shown as a three-dimensional surface (figure 5) or a map (figure 6, which shows the distribution of the substance compression for a PETN-based charge 10 mm in diameter), allowing us to compare the flow structures behind the detonation front for the explosive compositions investigated. The non-constant magnitude of the compression of the substance in the undisturbed region of the charge is related both to the quality of manufacture of the charges and to the experimental error in measuring the amount of substance along the SR beam. This error does not exceed 5% for static objects, a value that agrees well with the compression of the substance obtained ahead of the detonation front. The total error for the dynamic flow region is expected to be of a similar order. Figure 7 shows the diagrams of the compression of the substance along the charge axis for various explosive compositions and diameters. The quantitative differences in the flows behind the detonation front are obvious. For clarity, the profiles were shifted in time to align the detonation fronts. It should be noted that the moment when the detonation front passes through the observed cross section of the charge is determined with an accuracy no better than 124 ns, because of the frame rate and the almost flat shape of the detonation front. The values obtained for the maximum compression of the substance therefore rather refer to later states behind the detonation front.
Conclusions
The paper presents quantitative data on the dynamics of the compression of the substance in the region of expansion of the detonation products. This makes it possible to compare the detonation processes in charges consisting of various explosives. A number of approximations were necessary to obtain the data. First of all, we had to consider the detonation stationary, which is valid for charges of sufficient length. Further, we had to assume that the arising flow is axisymmetric; the validity of this approximation was checked during the experiments for charges of small diameter. On the other hand, obtaining the compression distribution of the substance entails smoothing the experimental data, which can lead to a partial loss of information. Therefore, for calibrating numerical models, comparing the calculated mass distribution along the SR beam with the experimental one will give better results than using the compression distribution of the substance for this purpose.
In [8], the authors used the VISAR technique and the laser-heterodyne (PDV) technique to study plasticized PETN of similar composition. They recorded time profiles having a near-front peak and a plateau behind the detonation front followed by a dip. According to those authors, the near-front peak corresponds to the Chapman-Jouguet state. In this paper, we do not observe any specific features on the compression diagrams that could indicate the presence of this plateau. | 2019-04-30T13:03:48.185Z | 2017-09-27T00:00:00.000 | {
"year": 2017,
"sha1": "5b958e1c7758c8802f8be952c4084550487b2af6",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/899/4/042004",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "60c231d60a38a3b9caeee7bfa8af54d26d06e6a8",
"s2fieldsofstudy": [
"Physics",
"Chemistry",
"Engineering"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
} |
15732493 | pes2o/s2orc | v3-fos-license | Pulsed Near-IR Photoresponse in a Bi-metal Contacted Graphene Photodetector
We use an ultra-fast near-infrared pulse coincidence technique to study the time, temperature, and power dependence of the photoresponse of a bi-metal contacted graphene photodetector. We observe two components of the photovoltage signal. One component is gate-voltage dependent, linear in power at room temperature and sub-linear at low temperature, consistent with the hot-electron photothermoelectric effect due to absorption in the graphene. The power dependence is consistent with supercollision-dominated cooling in graphene. The other component is gate-voltage independent and linear in temperature and power, which we interpret as thermoelectricity of the metal electrodes arising from differential light absorption.
One critical process in the operation of a photothermoelectric device is the cooling of hot electrons, which limits both the detecting sensitivity and speed. Previous studies showed that in a graphene p-n junction photodetector, the hot electrons are mainly cooled by disorder-assisted phonon scattering processes termed supercollision [23][24][25] , whereas other studies concluded the direct emission of surface phonons of the polar substrate by graphene electrons plays an essential role 26,27 . While the nature of cooling in graphene p-n junction devices remains uncertain, there are no reports to date on the cooling processes in a technologically-relevant graphene-metal junction.
Here we study the cooling of hot electrons in a graphene based photodetector contacted with dissimilar metal electrodes (Cr and Au) that is uniformly illuminated by an ultrafast, pulsed near-IR excitation. We use the pulse coincidence measurement technique 28 to study the time, power, and substrate temperature dependence of the photovoltage signal generated due to the hot-electron photothermoelectric effect with dissimilar metal contacts. Surprisingly, low-temperature pulse-coincidence measurements show either a peak (corresponding to an enhancement) or a dip (corresponding to an attenuation) in photoresponse when the pulses are coincident within the response time of the detector. The power dependent photoresponse measurement at low temperatures reveals that the photovoltage consists of a linear and a sub-linear component which may have different signs depending on the gate voltage, explaining the observation of both peaks and dips in the pulse-coincidence measurement. Further measurements at different temperatures show that the linear component is independent of the gate voltage, and is consistent with a thermoelectric effect in the contact metal, while the sub-linear component due to the absorption in graphene shows a power dependence consistent with the model based on supercollision cooling 23,24 .
The bi-metal contacted graphene photodetector is realized by exfoliating monolayer graphene on SiO 2 /Si substrate, followed by sequential Cr and Au metal electrode deposition using a standard e-beam lithography technique (see Methods). The inset of Fig. 1a shows the optical micrograph of the device; the graphene flake is contacted with Cr (Au) electrode on the left (right) side. Figure 1a shows the two-probe conductance G as a function of the applied back gate voltage V g measured at T ~ 50 K. In the absence of an applied gate voltage, the device is p-doped and the charge neutral point is at V g = 55 V. The two-probe field effect mobility of the device is ~500 cm 2 •V −1 •s −1 , suggesting that the device is quite disordered. Next we characterized the photoresponse of the device at the same temperature using two near-IR (λ = 1.56 μ m) pulsed laser beams (see Methods) with variable power and delay. The photovoltage at various gate voltages as a function of the delay time τ d between the pump and probe pulses from − 0.14 ns to 0.14 ns is plotted in Figs 1b,c. The feature at τ d = 0 originates from the nonlinear nature of the photoresponse: If τ d is much larger than the device's intrinsic response time τ r , the device would have relaxed from the first excitation prior to the arrival of the second pulse, in which case the two pulses would generate two independent photoresponse signals to form the total photovoltage. When τ d is comparable to or much smaller than τ r , their photoresponses cannot necessarily be linearly superposed, which enhances (weakens) the signal of the device when the response is super-linear (sub-linear) in power.
Figures 1b, c show the photoresponse as a function of delay time, for various gate voltages spanning the p-doped region (Fig. 1b) and n-doped region (Fig. 1c). Surprisingly, the two-pulse coincidence signal either shows a peak or a dip at zero delay time, depending on the applied gate voltages: for example in Fig. 1b, for V g ≤ 25 V, the signal is enhanced when τ d = 0, whereas for V g ≥ 58 V, the signal decreases when two pulses temporally overlap each other. One possible explanation to this is that the signal is monotonically super-linear for V g ≤ 25 V and sub-linear for V g ≥ 58 V, resulting in an enhancement or an attenuation of the response at zero delay time, respectively. Another scenario that can account for the observed phenomenon is that the photoresponse consists of two components, one linear and the other nonlinear. The nonlinear signal contributes to the feature at τ d = 0, while the linear part serves as an offset to the floor response, which if it has an opposite sign to the non-linear component, could change the polarity of the floor response, making the nonlinear enhancement/attenuation appear like an attenuation/enhancement.
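To see the second scenario numerically, take a toy single-pulse response V(P) = aP + bP^α with α < 1 (coefficients purely illustrative, not fitted to this device). Coincident pulses produce V(2P), while well-separated pulses produce 2V(P); the linear term cancels in the difference, so the sign of the sub-linear coefficient alone decides whether a peak or a dip appears at zero delay. A minimal Python check:

```python
def v_single(p, a, b, alpha=0.8):
    """Toy single-pulse photovoltage: a linear term plus a sub-linear term."""
    return a * p + b * p ** alpha

def coincidence_contrast(p, a, b, alpha=0.8):
    """V(2P) - 2 V(P): the extra signal when the two pulses overlap in time."""
    return v_single(2 * p, a, b, alpha) - 2 * v_single(p, a, b, alpha)

print(coincidence_contrast(1.0, a=1.0, b=+0.5))  # negative: dip at zero delay
print(coincidence_contrast(1.0, a=1.0, b=-0.5))  # positive: peak at zero delay
```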
To distinguish between these possibilities, the power dependence of the dc photoresponse was characterized at different temperatures using a single pulsed laser. Figure 2a shows the data taken at high temperature (T = 267 ± 2 K, where the error corresponds to fluctuations in temperature during the measurement of different data sets) and Fig. 2b plots the scaled photoresponse normalized by the incident power (which can be regarded as the responsivity in arbitrary units) as a function of the gate voltage. The fact that, at powers where the signal was well above the noise floor for all gate voltages, all curves in Fig. 2b coincide with one another suggests that the signal is proportional to the absorbed power (linear response) in this temperature range. Figure 2c shows the power dependent photoresponse measured at low temperature (T = 120 ± 2 K). Compared to Fig. 2a, both the magnitude and the gate-voltage dependence of the signal have changed. More interestingly, it is found that the intersection point with the x-axis changes from V g ~ 60 V to V g ~ 70 V as the incident light power gradually increases. This is shown in Fig. 2d, the zoomed-in plot of Fig. 2c. This indicates that at certain gate voltages the signal must be non-monotonic in power, in fact crossing zero at finite power. This evidence strongly suggests that the signal is composed of at least two components with different power dependences. The measured photovoltage, which is the summation of these two components, thus crosses the x-axis at different gate voltages when the incident power changes, since both components have their own functional form of gate voltage dependence.
To determine the origin of these two components of the signal, the temperature-dependent characterization of the photoresponse to one near-IR pulsed laser excitation is carried out and the results are shown in Fig. 3a. The temperature varied from T = 19.4 K to T = 201 K. The overall shape of the gate-voltage-dependent photovoltage changes only slightly with the temperature, while the major effect of temperature appears to be a uniform downward shift of the photovoltage along the y-axis with temperature. The simplest explanation for the observation is that the photovoltage is comprised of two components that separately depend on the gate-voltage and temperature, i.e., V photo (V g , T, P) = V photo,1 (V g , P) + V photo,2 (T, P). To better understand the temperature dependence of V photo , the data shown in Fig. 3a is replotted as a function of the lattice temperature in its inset. It is easily seen that V photo shows a linear dependence on the lattice temperature above 80 K. At low temperatures, the strong fluctuation 29 , which can also be observed in Fig. 3a, makes it difficult to discern the exact functional form of the signal vs. lattice temperature. Thus the data below 80 K is not shown.
Because the temperature-dependent component of the photovoltage barely changes when the carrier density of graphene is tuned over a wide range, we consider that this part of the signal is generated by light absorption occurring outside of the graphene flake. The reflectances, R, of chromium and gold at the wavelength λ = 1.55 μ m are 0.66 and 0.98, respectively 30 . Considering that the transmission of the beam is very small for the thickness used in this device (~40 nm), the absorption in the chromium pad is estimated to be as high as ~34%, which is much larger than graphene's absorption (a few percent due to the interband transition) and the absorption in the gold pad. Therefore, it is possible that a thermoelectric response due to the absorption in chromium contributes to the total photovoltage signal of the device.
A control device consisting of a chromium-gold thermocouple, as shown in the inset of Fig. 3b, was constructed to test this hypothesis (see Methods). The photothermoelectric response of the metal electrodes is characterized by focusing a CW near-IR (1.55 μ m) laser beam on the device. The focused spot size is a few microns, so that local illumination is possible. The device is mounted in a cryostat and the photoresponse is measured at different temperatures as shown in Fig. 3b. The blue curve, which corresponds to the noise level, suggests that there is no photoresponse when the beam is focused on gold due to nearly 100% reflection of the surface. In contrast, a photoresponse, which shows a linear dependence on temperature, is observed when the chromium surface is illuminated (red curve). This signal is further enhanced when the focused beam spot is adjusted closer to the Cr-Au junction. It is difficult to directly scale the photoresponse shown here to the temperature dependent component of the signal observed in the graphene photodetector shown in Fig. 3a, since both the laser source and the sample's geometry have changed significantly. However, one can still make a qualitative estimation: The absorbed power of the chromium pad in Fig. 3b is comparable with the contact absorption in the experiment shown in Fig. 3a. However, the thermoelectric voltage is strongly reduced in Fig. 3b, because the wide Cr-Au junction (~700 μ m in width) electrically shorts the light illuminated area (the spot size is ~3.5 μ m), which behaves like a small battery, making the measured voltage ~200 (700 μ m/3.5 μ m) times smaller. This is not an issue for the data taken in Fig. 3a, because the spot size of the beam is large and covers the whole area of the bowtie electrodes. It is thus reasonable that the photovoltage shown in Fig. 3a is two orders of magnitude larger than that shown in Fig. 3b. A quantitative comparison requires considering more factors, such as the various heat pathways 31,32 for both geometries and the difference between pulsed and CW excitations 24 . Nonetheless, the fact that chromium can absorb near-IR light and generate a thermoelectric response that is linear with the temperature suggests that the temperature-dependent component of the signal observed in the graphene detector is likely generated due to the chromium contact's absorption.
Lastly, we consider the power dependence of V photo . According to the analysis in previous paragraphs, the photovoltage results from two components. The first, V photo,1 results from the thermoelectric effect in the electrodes, and should be linear in temperature and power. The second, V photo,2 , is assumed to have a power-law power dependence, and depend on gate voltage: We subtract the signal at V g = 35 V, where we observed a flat response in the pulse coincidence measurement, from each curve shown in Fig. 2c to obtain only the nonlinear component of the response V photo,2 , while the subtracted gate-independent value is the linear component V photo,1 . Figure 4a shows the power dependence of the subtracted component V photo,1 , which is indeed linear in power, consistent with the thermoelectric effect in the electrodes. Figure 4b shows the result of a power-law fit to the power dependence of the remaining V photo,2 at each gate voltage: the power law exponent α < 1 indicating a sublinear power dependence. The exponent α varies within a range from 0.65 to 0.95, consistent with the supercollision model in graphene, which predicts 24 α varying from 0.5 to 1, depending on the energy per laser pulse. Note that this analysis is not performed between V g = 35 V ~ 55 V due to the small signal (see below) which produced large errors in the fitting. Figure 4c shows the gate-voltage dependence of V photo,2 at various powers. We see that V photo,2 changes sign twice with gate voltage, at approximately V g = 35 and 55 V. We recall that V photo,1 is independent of gate voltage, hence the relative sign of V photo,1 and V photo,2 is the same for 35 V < V g < 55 V, and opposite for V g < 35 V and V g > 55 V. Figure 4d replots the pulse-coincidence data from Figs 1b,c for comparison with Fig. 4c. For 35 V < V g < 55 V the pulse-coincidence signal displays a dip feature at zero delay time, indicating a sub-linear power dependence. This is consistent with the signal being the sum of V photo,1 (linear) and V photo,2 (sub-linear) of the same sign in agreement with Fig. 4c. For V g < 35 V and V g > 55 V, the pulse-coincidence signal displays a peak feature at zero delay time; this is in agreement with the signal corresponding to the sum of a linear V photo,1 and sub-linear V photo,2 of opposite sign, resulting in a super-linear power dependence at high power. Again, this region corresponds well with the observation of a negative V photo,2 in Fig. 4c.
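The decomposition and fitting procedure can be sketched as follows (Python with synthetic traces standing in for the measurements; this is an illustration of the method, not the authors' analysis code): subtract the gate-independent reference taken at V g = 35 V from the total signal, then fit the remainder with a power law to extract the exponent α.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
P = np.linspace(0.5, 10.0, 12)                  # incident power, a.u.
v_ref = 0.8 * P                                 # V_photo,1: linear reference trace
v_tot = v_ref - 0.5 * P ** 0.7 + rng.normal(0, 0.02, P.size)  # synthetic total signal

v2 = v_tot - v_ref                              # isolate V_photo,2

def power_law(p, c, alpha):
    return c * p ** alpha

(c, alpha), _ = curve_fit(power_law, P, v2, p0=(-0.5, 0.8))
print(alpha)   # close to the sub-linear exponent 0.7 used to build the data
```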
Discussion
The graphene-metal junction is a complicated optoelectronic system, with each part of the device interacting with the incident power and contributing to the electric output of the circuit. In this work, we analyzed the photoresponse of a dissimilar metal contacted graphene photodetector as a function of gate voltage, temperature, and power, using near-IR pulsed radiation. We were able to successfully decouple the two components of the signal, one generated by graphene's absorption and the other due to the absorption in the contact, by taking advantage of their different power, temperature and gate dependences. Specifically, we find that absorption by the electrodes results in a photovoltage that is linear in temperature and power, and independent of gate voltage. Absorption by the graphene, in contrast, results in a photovoltage with complex gate-voltage dependence, and a sub-linear power dependence consistent with supercollision cooling of hot carriers in the graphene.
Our simple decoupling method has captured the main operation principle of the device. However, some detailed questions still remain for discussion. For example, the heated chromium pad can generate a temperature gradient in graphene from the Cr side to the Au side, which contributes to a response that is linear in power (since Δ T is linear in power due to chromium's absorption) but gate-dependent (due to the gate-dependent thermoelectric effect in graphene). This signal is generated in graphene, but due to the absorption in chromium, which is not considered in our simple model. This probably accounts for the observation that α depends on the gate voltage (Fig. 4b), which is not expected in the simple model. Furthermore, some previous work suggests that the photovoltaic effect also contributes to the photovoltage signal 9,33 , when the excitation photon energy is high enough to generate electron-hole pairs in graphene. In contrast to the gate-voltage-independent thermoelectric response of the contacts, the expected graphene photovoltaic signal should be gate dependent. A previous study 22 in a biased graphene photodetector shows that the photovoltaic signal plays an essential role near the charge neutral point, while it drops off quickly as the carrier density of the graphene increases. This opens the possibility that V photo,2 is due, in part, to a photovoltaic effect in graphene. However, in this work, the gate dependence of the decoupled nonlinear component of the signal is consistent at all measured temperatures, with the simplest explanation that the signal is purely thermoelectric in origin, rather than consisting of two parts (thermoelectric and photovoltaic). Since there is a lack of reports on the power dependence of the photovoltaic response at different temperatures 22,34,35 , further studies will be needed to quantitatively determine the magnitude of the signal according to the different processes.
Methods
Single-layer graphene was exfoliated from bulk graphite onto a substrate of 300 nm SiO 2 over ion-implanted intrinsic Si. Chromium/gold electrodes (thickness 4 nm/45 nm), the chromium bowtie contact (thickness 35 nm), and the gold bowtie contact (thickness 40 nm) are thermally evaporated for the device shown in Fig. 1 in three lithographic steps. The liftoff mask is patterned via e-beam lithography using a bilayer resist [methyl methacrylate (8.5%)/methacrylic acid copolymer (MMA), Micro Chem Corp.; and poly(methyl methacrylate) (PMMA), Micro Chem Corp.]. The chromium/gold thermocouple shown in Fig. 3b is fabricated using the same lithography technique as described above.
The device is mounted in a continuous flow cryostat system (Janis Research) to characterize the temperature dependence of the photoresponse from room temperature down to ~10 K. The response of the graphene photodetector shown in Fig. 1a is characterized by Menlo Systems C-Fiber Fiber Laser, which outputs 1.56 μ m pulsed excitations with a pulse width of ~60 fs at a repetition rate 100 MHz. The average power of the beam can be tuned up to ~50 mW. The photoresponse is characterized by illuminating the detector with a chopped laser beam and detecting the open-circuit photovoltage signal using a voltage preamplifier and lock-in amplifier. The beam is focused on the detector using a glass lens with the beam size a few hundred microns in diameter. The photoresponse of the thermocouple shown in Fig. 3b is characterized in a similar way. The only difference is that the excitation source is 81663A Distributed Feedback Laser (Keysight Technologies), which is a CW near-IR laser with wavelength 1.55 μ m and a maximum output power ~20 mW, and the beam is focused to a few microns in diameter.
The pulse coincidence measurement is performed using two Menlo Systems C-Fiber lasers. The repetition rate of the pump pulse is f 0 = 100 MHz, which is slightly different from the probe pulse with a repetition rate f = f 0 + δ f, resulting in an asynchronous illumination on the device. The range of the delay time varies from − 0.14 ns to 0.14 ns. | 2016-03-22T00:56:01.885Z | 2015-10-06T00:00:00.000 | {
"year": 2015,
"sha1": "815899e0350148e0f7704a271b60b4452b9e188d",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/srep14803.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "815899e0350148e0f7704a271b60b4452b9e188d",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
55745774 | pes2o/s2orc | v3-fos-license | A MULTIDISCIPLINARY ANALYTICAL FRAMEWORK FOR STUDYING ACTIVE MOBILITY PATTERNS
Intermediate cities are urged to change and adapt their mobility systems from a high energy-demanding motorized model to a sustainable low-motorized model. In order to accomplish such a model, city administrations need to better understand active mobility patterns and their links to socio-demographic and cultural aspects of the population. During the last decade, researchers have demonstrated the potential of geo-location technologies and mobile devices to gather massive amounts of data for mobility studies. However, the analysis and interpretation of this data has been carried out by specialized research groups with relatively narrow approaches from different disciplines. Consequently, broader questions remain less explored, mainly those relating to spatial behaviour of individuals and populations with their geographic environment and the motivations and perceptions shaping such behaviour. Understanding sustainable mobility and exploring new research paths require an interdisciplinary approach given the complex nature of mobility systems and their social, economic and environmental impacts. Here, we introduce the elements for a multidisciplinary analytical framework for studying active mobility patterns comprised of three components: a) Methodological, b) Behavioural, and c) Perceptual. We demonstrate the applicability of the framework by analysing mobility patterns of cyclists and pedestrians in an intermediate city integrating a range of techniques, including: GPS tracking, spatial analysis, auto-ethnography, and perceptual mapping. The results demonstrated the existence of non-evident spatial behaviours and how perceptual features affect mobility. This knowledge is useful for developing policies and practices for sustainable mobility planning.
INTRODUCTION
Intermediate cities are facing complex challenges at the beginning of the 21st century, including changing and adapting their mobility systems from a high energy-demanding motorized model to a sustainable low-motorized model. Promoting pedestrian and bicycle mobility is a cost-effective way to dramatically reduce the environmental and socio-economic impacts derived from the car-based transportation model and to improve the population's wellbeing.
In fact, active mobility (also known as non-motorized mobility) plays a key role both in developing efficient and equitable transportation systems and in moving towards more sustainable cities. Non-motorized modes are resource-efficient since they require less infrastructure (i.e. roads and parking space) and pose minimal costs for users, administrations and the environment. Additionally, they can be easily integrated into public transit systems, providing versatile mobility access for everyone, including youth, senior citizens, people with disabilities or special needs, and the economically disadvantaged who would otherwise struggle to travel independently. Beyond efficiency and equality, these modes offer a fun and healthy way to move within the urban environment and help to create more liveable communities and encourage efficient development (Victoria Transport Policy Institute, 2016).
In order to accomplish a low-motorized transportation model, city administrations need to better understand how, where, why and who moves around the city. Collecting, analysing and interpreting mobility patterns and their links to demographic, socio-economic, cultural and psychological aspects of individuals and groups is therefore a fundamental step in planning and implementing sustainable mobility policies. During the last decade, researchers have demonstrated the potential of geo-location technologies and mobile devices to gather massive amounts of data for mobility studies. The analysis and interpretation of this data has been carried out by specialized research groups from different disciplines such as spatial analysis, data-mining, spatial statistics, transportation engineering, urbanism, and social psychology, among others, and has focused primarily on movement data collection techniques (Feng and Timmermans, 2014; G. Griffin and Jiao, n.d.; Quiroga, Romero, García, and Parra, 2011; Sayed, Zaki, and Autey, 2013; Zaki and Sayed, 2013) and data structures for efficient storage and movement data retrieval (Forlizzi, Güting, Nardelli, and Schneider, 2000; Güting and Schneider, 2005; Tryfona and Jensen, 1999). Other studies have developed data-mining algorithms and visual analytic techniques to detect and extract movement patterns from massive geo-location datasets (Andrienko and Andrienko, 2008; Gudmundsson, Laube, and Wolle, 2007; Laube, 2009; Orellana and Wachowicz, 2011; Thomas and Cook, 2006; Thomason, Griffiths, and Sanchez, 2015), and methods for enriching movement data with semantics (Alvares et al., 2007; Baglioni, Macedo, Renso, Trasarti, and Wachowicz, 2009; Bogorny, Heuser, and Alvares, 2010).
Although fruitful, these efforts have focused on narrow aspects of mobility. Consequently, broader questions remain unanswered, mainly those relating to the spatial behaviour of individuals and populations within their geographic environment and to the motivations and perceptions shaping such behaviour. In order to advance knowledge of sustainable mobility and to explore new research paths, interdisciplinary approaches are needed. In fact, researchers are showing increased interest in such interdisciplinary approaches, where the use, adaptation and combination of methods and techniques from different disciplines enable them to broaden the scope of their research and explore the interactions of different elements of urban mobility.
Conducting interdisciplinary research is challenging due to the difficulty of integrating the different approaches, methods and interpretations of the disciplinary frameworks that researchers use in each field. Consequently, new frameworks are needed to allow specialists to collaborate and communicate while exploring and understanding the complexity of urban mobility.
In an effort to contribute to this need, here we present three elements of an analytical framework to study patterns of non-motorized urban mobility: a) Methodological, b) Behavioural, and c) Perceptual. The first component (Methodological) adapts, develops and combines methods and techniques, looking for a cross-disciplinary synergy that allows a better understanding of mobility patterns. The second component (Behavioural) aims to explore and understand the relationship between the spatial behaviour of people and the environment in which they move. The third component (Perceptual) studies the effect that mobility has upon the perception of the environment. The proposed framework is the result of an on-going discussion among researchers from geography, architecture, urbanism, environmental studies, psychology, and computer sciences.
In this study we provide context for the proposed framework through literature, theory and practice. The remainder of this document is structured as follows: Section 2 details the main assumptions and elements on which the proposed framework is based. Section 3 discusses the three key components of the framework based on relevant literature. Section 4 presents an application of the framework to study the spatial behaviour of pedestrians and bicycle users in the city of Cuenca, Ecuador. Finally, Section 5 outlines the main conclusions and the next steps of this research.
Main assumptions
In the process of developing the proposed framework, some assumptions were made in order to frame our proposal. These assumptions are usually accepted among researchers, although not always in an explicit way. Here we list these assumptions in order to provide a clear context for the analytical framework.
Movement is a key aspect of spatial behaviour, and is the result of people interacting with each other and their spatial environment (Orellana and Renso, 2010). Therefore, space is more than a static background in which people move: it is an active element of movement behaviour (Hillier, 2007). In other words, space has agency in the sense that it influences and is influenced by movement.
Movement is a dynamic complex system and therefore exhibits key features such as feedback loops, self-adaptation, emergent patterns, non-linearity, sudden transitions and tipping points (Mayer-Kress, Liu, and Newell, 2006).
Movement is ruled by limitations, "and not by independent decisions by spatially or temporally autonomous individuals". These limitations include capability, coupling, and authority restrictions (Hägerstrand, 1970).
Movement patterns are the evidence of spatial interactions, and therefore we can understand important aspects of people's mobility by studying the structures of the spatio-temporal footprints of individuals as they move. In this sense, a movement pattern is a high-level description of how the movement of an individual or group relates to the underlying space (Laube, 2009).
Movement behaviour is determined by a hierarchic structure of decision-making at three levels: (i) a strategic level, in which individuals decide their destination, activities and aims; (ii) a tactical level, in which they decide the route to follow, the places to avoid, and reactions to unexpected events; and (iii) an operational level, in which they decide the next step, intuitively choosing a direction and speed depending on the immediate environment (Hoogendoorn and Bovy, 2004).
Active mobility is fundamentally different from motorized mobility and is usually underestimated. Although obvious at first glance, this differentiation must be made explicit when studying the movement and spatial behaviour of people. Interactions, restrictions, motivations, perceptions and strategies are different when individuals move within a motorized vehicle than when they walk or bike. Current practices tend to undercount shorter trips, non-work trips, off-peak trips, non-motorized links of motorized trips, travel by children, and recreational travel. As a result, there are usually far more non-motorized trips than conventional travel models recognize (Victoria Transport Policy Institute, 2016).
Main elements
Based on these assumptions, we propose an analytical framework for the study of non-motorized urban mobility patterns. The proposed framework involves three components: (i) methodological, (ii) behavioural, and (iii) perceptual. Each component attempts to address a common challenge in active mobility studies.
The methodological component attempts to explore, evaluate and combine methods and techniques from different disciplines. The premise of this component is that the synergy of methods from different fields boosts the analytical possibilities of the framework. Several methodological approaches are explored for the main phases of the research: data collection, analysis, and interpretation.
The behavioural component aims to understand the effect of the environment on the spatial behaviour of people as they move. Revisiting the assumption of space as an active agent of the movement phenomenon, this component explores the variables of urban space that trigger and modify different behaviours at the three decision-making levels mentioned above.
The perceptual component focuses on understanding how movement affects the perception of urban environments, and therefore influences people's feelings and opinions. There are obvious links between the perceptual and behavioural components, in that each influences the other; this highlights the feedback loops mentioned in the assumptions.
Each component, however, focuses on a different set of research questions.
Methodological component
Among the main challenges for a better understanding of active mobility is selecting and combining methods and techniques from different disciplines. This component of the framework attempts to answer the question: "How can we collect, store, analyse and interpret data for exploring and understanding active mobility?" Although this question usually appears later in scientific research, we start with this component to highlight the diversity of fields involved in studying mobility.
Traditionally, movement data collection was carried out using two main methods: individual-oriented and place-oriented. Individual-oriented methods mainly consisted of transportation and mobility surveys that include home and work/study locations and main transportation modes. Place-oriented methods mainly consisted of counting people and vehicles passing through a checkpoint using manual or automatic techniques. These two approaches together were most commonly used for mobility studies, with the disadvantages of high costs and a lack of detailed information (Golob and Meurs, 1986).
Nowadays, scientists, practitioners and planners recognise the huge potential of massive geo-location data produced by the confluence of ICTs and geo-positioning technologies. Dedicated location devices such as GPS receivers are widely used for mobility and transportation studies thanks to their high spatial and temporal resolution and low cost. Researchers have studied mobility patterns simply by giving GPS devices to groups of people for a period of time and analysing the locations, times and routes recorded by the devices.
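As a concrete illustration of this kind of analysis, the sketch below derives per-fix speeds and candidate stop points from a raw GPS log using Python. The file name and column names (timestamp, lat, lon) are hypothetical; a real pipeline would also filter GPS noise and map-match the track.

```python
import numpy as np
import pandas as pd

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between consecutive GPS fixes."""
    r = 6371000.0
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dp, dl = p2 - p1, np.radians(lon2 - lon1)
    a = np.sin(dp / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dl / 2) ** 2
    return 2 * r * np.arcsin(np.sqrt(a))

# Hypothetical GPS logger export with timestamp, lat, lon columns.
track = pd.read_csv("track.csv", parse_dates=["timestamp"]).sort_values("timestamp")
dist = haversine_m(track.lat.shift(), track.lon.shift(), track.lat, track.lon)
dt = track.timestamp.diff().dt.total_seconds()
track["speed_ms"] = dist / dt                  # instantaneous speed, m/s
stops = track[track.speed_ms < 0.5]            # candidate stop/activity locations
```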
Although simple and powerful, this approach has limitations regarding the low number of people that can be reached in each study and the battery/memory limitations of the devices. Therefore, researchers have investigated other ways to collect movement data from geo-enabled devices such as smartphones, tablets or wearable computers, including developing and implementing dedicated monitoring apps. It is now common for companies and corporations to offer Location Based Services (LBSs) in exchange for users' location data, sometimes in a rather obscure way or without full acknowledgment from the user. The usefulness of this approach has been proved through services such as Google Maps' traffic layer (Google, 2016), Strava METRO, and Waze's navigation services (Waze Mobile, 2016). Also, researchers have shown that it is possible to extract movement information from social network data (Dunkel, 2015; G. P. Griffin and Jiao, 2015; Torres and Costa, 2014). In a more participatory way, crowdsourcing and volunteered geo-information approaches can be of great interest for researchers: people who are interested in improving mobility in cities can participate by donating their geo-location data for research and planning, using web-based or mobile apps to collect and store data. Ludic approaches can engage users and add value to the use of such applications (Capos SpA, 2015).
The vast amount of movement data being collected by the aforementioned approaches poses new challenges for researchers in storing, organising and analysing these massive datasets. During the last decade, researchers from computer science have developed data models and structures for efficient storage and retrieval (Forlizzi et al., 2000; Güting and Schneider, 2005; Tryfona and Jensen, 1999). Extracting information from these massive datasets is also important. Scientists are developing data-mining algorithms to detect and extract movement patterns (Gudmundsson et al., 2007; Laube, 2009), and applying visual analytics approaches to explore and understand those movement patterns (Andrienko and Andrienko, 2008; Thomas and Cook, 2006). Other researchers are interested in extracting significant locations from movement data (Thomason et al., 2015), exploring the spatial properties of movement data such as spatial autocorrelation (Orellana and Wachowicz, 2011), or enriching movement data with semantics (Alvares et al., 2007; Baglioni et al., 2009; Bogorny et al., 2010).
Although technology-based techniques have gained huge attention during the last few years, some field-based methods remain irreplaceable, since they provide direct contact with reality and offer a deeper understanding of the phenomena. Therefore, methods such as ethnographies (Lugo, 2013; Meneses-Reyes, 2013), in-depth interviews and direct observation (Jirón, 2011), and other approaches from sociology (Jungnickel and Aldred, 2014) not only remain popular among social scientists but are gaining renewed impetus even among researchers from engineering, architecture or computer sciences, who are rediscovering the opportunities these approaches offer.
Arguably, the most interesting research opportunities are located at the intersection of the different approaches. For example, digital video from action cameras can be used to study in detail the sensorial experience of riding a bicycle around the city (Spinney, 2011). In another example, mobile ethnography in combination with geo-location devices has been used in Utrecht to study the embodied experience of cycling (Duppen and Spierings, 2013). Likewise, the UWAC project in the UK adopted a hybrid methodology by administering surveys and interviews to households and individuals, as well as carrying out ethnographic observation during a year, in combination with spatial analysis techniques (T. Jones et al., 2012).
Behavioural component
A large amount of research has been conducted on how, when and why people decide to move around the city. In the context of the proposed framework, we organise these findings using the hierarchical decision-making structure mentioned in the assumptions. Below, we briefly mention some examples of how the urban environment influences human movement behaviour at the three tiers: strategic, tactical and operational.
The strategic level refers to decisions made in relation to destination, activities, aims, and the mode of the trip, and consequently the use of motorized or non-motorized mobility. The influence of the urban environment at the strategic level includes both physical and non-physical factors.
Based on a utility-maximisation assumption, distance is the most frequently mentioned factor in mobility planning, and it is often assumed that non-motorized modes are preferred for short trips. This idea, however, is not supported by evidence: in a preliminary study, the authors found no correlation between commuting distance and mode selection, which is consistent with previous research on several cities in England (T. Jones et al., 2012). Researchers have explored other factors affecting the strategic level, including culture (Mehta, 2008), accessibility, safety and comfort (Alfonzo, Boarnet, Day, Mcmillan, and Anderson, 2006; Talavera-Garcia and Soria-Lara, 2015; Weinstein Agrawal, Schlossberg, and Irvin, 2008), topography (Heinen, Wee, and Maat, 2010; Iseki and Tingstrom, 2014; Rodríguez and Joo, 2004), and availability of and closeness to transportation infrastructure (Cervero, Sarmiento, Jacoby, Gomez, and Neiman, 2009; Handy and Xing, 2011a; Khan, Kockelman, and Xiong, 2014).
The tactical level refers to decisions regarding routing, areas of preference/avoidance and activity scheduling. In other words, once individuals have decided where and how to go, they must decide the route to follow, taking into account the places they need to visit, the timing and the preferred characteristics of the areas to cross.
At this level, network connectivity is the first variable taken into account (Handy and Xing, 2011b; Heinen et al., 2010; Khan et al., 2014). Routing is also influenced by physical and visual continuity (Manum and Nordstrom, 2013; Rybarczyk and Wu, 2010), including obstacles, which are decisive mainly for the elderly (Alfonzo et al., 2006; Bernhoft and Carstensen, 2008). Moreover, it has been found that angular minimization is an important factor in route choice and that measurement of least-angle routes in urban environments can be a useful way of predicting cyclist volumes (Raford, Chiaradia, and Gil, 2007).
Safety, in terms of crime and traffic accidents, is also one of the aspects influencing the tactical level (Alfonzo, 2005), shaping routes to avoid insecure areas. However, the relative importance of safety versus distance/time optimization is not yet well understood, since empirical evidence shows that in several cases protective infrastructure is underused. On the other hand, safety is highly dependent on perception, which is analysed in the third component.
Finally, other components of the environment that affect movement behaviour at the tactical level include population density, connectivity, and land-use mixture (Cervero et al., 2009; Handy and Xing, 2011a; Khan et al., 2014). Streets with higher density, greater connectivity, and more land-use mix report higher rates of walking and cycling than low-density, poorly connected, single-land-use streets (Saelens, Sallis, and Frank, 2003).
The operational level refers to the "next step": people will tend to follow the planned route but will change direction and speed in reaction to the changing immediate environment, such as unpredicted obstacles and, most notably, interactions with other individuals. This level is highly unpredictable given the dynamic nature of the factors affecting it. Nevertheless, important research has been conducted on the operational level of movement, mainly focused on interactions among individuals, collectives and the environment, demonstrating that it is possible to detect, extract and replicate different kinds of movement patterns such as flocking, following, and avoidance, among others (Gudmundsson et al., 2007).
It is worth mentioning that decisions at the operational level might change decisions at the tactical level. For example, if a given street is found to be too crowded or insecure, the individual might decide to turn back or select a different route. Moreover, the accumulated experience of decisions at the operational level will have an effect on the tactical and strategic levels: for example, a cyclist who repeatedly encounters problems or obstacles on his/her preferred route will probably change to another route or even change transportation mode.
Also, the three-tier system presented here is not the only way to organize the decision-making process for human mobility. As an example, Wiener, Büchner, and Hölscher (2009) proposed a different approach based on the cognitive processes involved and the level of spatial knowledge that is available to the individual.
In the context of the analytical framework, the behavioural component allows researchers to organise research questions according to the level at which people operate. It also explores how decisions made at one level influence the other levels.
Perceptual component
People who cycle or walk perceive the city in a different way than drivers or public transport commuters, since these modes offer a unique set of experiences of the urban landscape involving different sensorial channels, which are partially or totally blocked inside a motor vehicle. Although some researchers have investigated these differences, they have usually not been incorporated with other aspects of active mobility into policies or planning. Perceptions are decisive when deciding the transportation mode, but they are not usually addressed by conventional mode-choice studies (Heinen et al., 2010). In the proposed framework, perceptions are explicitly included, since they shape motivations and spatial behaviour at different levels. In this context, the perceptual component includes not only sensorial stimuli but also explores the interpretation of such stimuli in the light of individual and collective experiences. We organize the perceptual component into three categories: perceptions about the environment, perceptions about other people, and perceptions about the self.
Perceptions about the environment are largely affected by the mode of transportation. Cyclists and pedestrians interact directly, i.e. without barriers, with their environment, and are therefore more sensitive to the sights, sounds and smells of their surroundings. This can provide a rich experience but can also be overwhelming in complex urban everyday life (Jungnickel and Aldred, 2014). The changes in perception of the urban environment when cycling have been studied using ethnographic methods, which allow a deeper understanding of such changes (P. Jones, 2005).
Several factors related to perceptions when walking and cycling have been explored. For example, researchers have identified which elements make a street perceived as "walkable", including density and mixture of uses, infrastructure, presence of other walkers, speed, and illumination, among others (Wood, Frank, and Giles-Corti, 2010). Other studies have explored perceptions of shared spaces in contrast to segregated pedestrian and bike lanes, indicating that pedestrians feel most comfortable in shared space under conditions which ensure their presence is clear to other road users; these conditions include low vehicular traffic, high pedestrian traffic, good lighting and pedestrian-only facilities. Conversely, the presence of many pedestrians and, in particular, children and the elderly makes drivers feel uneasy and therefore enhances their alertness (Kaparias, Bell, Miri, Chan, and Mount, 2012). Age and gender also affect such perceptions; for example, older people tend to appreciate pedestrian and cycling facilities more than the young (Bernhoft and Carstensen, 2008).
Recent work has provided interesting insights into the influence of perception on route choice. For example, Quercia, Schifanella, and Aiello (2014) explored how three subjective attributes of the urban environment (happiness, quietness, and beauty) can be crowd-sourced to inform an algorithm that recommends routes that are not only short but also emotionally pleasant.
Perceptions about others are also affected by movement. Wood et al. (2010) studied how walking behaviour and neighbourhood characteristics can influence the sense of community, that is, "a feeling that members have of belonging and being important to each other and a shared faith that members' needs will be met by the commitment to be together" (McMillan and Chavis, 1986). The results suggest that only certain kinds of walking (i.e. leisure and slow walking) may increase the sense of community.
Perception of others also shapes movement behaviour. For example, pedestrians are more likely to engage in risky behaviours, such as crossing a busy intersection, if they see other pedestrians engaging in the same risky behaviour (Barrero et al., 2013).
Perceptions about the self are also affected by the practice of cycling and walking. On the one hand, car ownership in certain cultures has been related to a self-perception of economic and social success. This tendency may be changing in the last decade, as health and environmental concerns become broadly shared by new generations; walking and cycling therefore contribute to shaping an image of "responsible citizenship". For example, participants in one study portrayed cycling as a practice of independence and interdependence: "its mix of benefits for the individual and the collective make it an appropriate response to contemporary social problems" (Aldred, 2010). At the same time, cyclists and walkers, mainly those who have recently changed from motorized modes of transportation, may be more self-aware regarding their physical skills and capabilities (P. Jones, 2005).
Cycling has also been related to identity building. In fact, transport-related identities exist in interplay with other social identities (Skinner and Rosen, 2007) but have their own implications: appeals to "cyclists" seek to shape such identities (Aldred, 2013). These identities may vary depending on the socio-cultural context and even on the frequency of cycling relative to other mobility modes. In cities where bicycles are an integral part of mobility, a "cyclist" identity is less relevant, whereas in motor-dominated cities this identity is more salient and often stigmatised (Green, Steinbach, Datta, and Edwards, 2010).
APPLICATION
To demonstrate the applicability of the proposed framework, we implemented it in an on-going research project studying active mobility patterns in the city of Cuenca, Ecuador. The aim of the project is to gain an understanding of the spatial behaviour of pedestrians and cyclists in order to provide insights for the design and implementation of sustainable mobility public policies.
The framework helped to organise the research questions and methodology in a clear and direct way, and provided common ground for geographers, computer scientists, architects, and psychologists. Table 1 summarizes the main research questions and corresponding methodologies organized by components of the framework. Although at an early stage, the project is producing interesting results, which are being interpreted based on the different components of the framework. For example, we have created a collaborative mapping platform (http://piespedales.crowdmap.org) that allows any user to report mobility problems in the city using an interactive map, enabling the mapping of perceptions about safety, conflicts and infrastructure (Figure 1). The results are compiled, analysed using spatial analysis techniques, and compared with movement data collected with GPS data loggers carried by cyclists. The project also created a citizen-science initiative named "Scientists on Pedals", which invited people to participate in different stages of the project; the initiative has been well received and approximately 300 volunteers are currently participating. Another study in the project involves "controlled experiments" in which volunteers are assigned to visit different checkpoints in the city, allowing analysis of the way-finding and routing behaviour of participants in relation to the urban environment. Finally, observational studies on the influence of physical and spatial variables on pedestrian flows are being conducted, involving direct observation and automatic counting.
It is worth noting that although the proposed methods are broadly used in mobility research, it is the synergy among them that allows a deeper exploration of the research questions.
CONCLUSIONS
In this paper, we introduced an analytical framework for the study of active mobility patterns in urban areas. The framework is based on a set of well-established assumptions regarding human movement and a set of elements that represent common questions in mobility research. The framework helps to organise multidisciplinary research efforts on active mobility by providing a structure for research questions and methodologies. Its applicability is demonstrated in the context of an on-going research project, which aims to study the movement patterns of walking and cycling in Cuenca, Ecuador. | 2018-12-11T09:00:54.303Z | 2016-06-08T00:00:00.000 | {
"year": 2016,
"sha1": "448fccbc280b8f5f82497a7a72627a9b05c87330",
"oa_license": "CCBY",
"oa_url": "https://www.int-arch-photogramm-remote-sens-spatial-inf-sci.net/XLI-B2/527/2016/isprs-archives-XLI-B2-527-2016.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "448fccbc280b8f5f82497a7a72627a9b05c87330",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": []
} |
55221424 | pes2o/s2orc | v3-fos-license | FDTool: a Python application to mine for functional dependencies and candidate keys in tabular data
Functional dependencies (FDs) and candidate keys are essential for table decomposition, database normalization, and data cleansing. In this paper, we present FDTool, a command line Python application to discover minimal FDs in tabular datasets and infer equivalent attribute sets and candidate keys from them. The runtime and memory costs associated with seven published FD discovery algorithms are given with an overview of their theoretical foundations. Previous research establishes that FD_Mine is the most efficient FD discovery algorithm when applied to datasets with many rows (> 100,000 rows) and few columns (< 14 columns). This puts it in a special position to rule mine clinical and demographic datasets, which often consist of long and narrow sets of participant records. The structure of FD_Mine is described and supplemented with a formal proof of the equivalence pruning method used. FDTool is a re-implementation of FD_Mine with additional features added to improve performance and automate typical processes in database architecture. The experimental results of applying FDTool to 13 datasets of different dimensions are summarized in terms of the number of FDs checked, the number of FDs found, and the time it takes for the code to terminate. We find that the number of attributes in a dataset has a much greater effect on the runtime and memory costs of FDTool than does row count. The last section explains in detail how the FDTool application can be accessed, executed, and further developed.
Introduction
Functional dependencies (FDs) are key to understanding how attributes in a database schema relate to one another. An FD defines a rule constraint between two sets of attributes in a relation r(U) 1, where U = {v1, v2, …, vm} is a finite set of attributes (Yao et al., 2002). A combination of attributes over a dataset is called a candidate (Yao et al., 2002). An FD X → Y asserts that the values of candidate X uniquely determine those of candidate Y (Yao et al., 2002). For example, the social security number (SSN) attribute in a dataset of public records functionally determines the first name attribute. Because the FD holds, we write {SSN} → {first_name} (Yao & Hamilton, 2008).
In this case, X is the left-hand side of an FD, and Y is the right-hand side (Yao et al., 2002). If Y is not functionally dependent on any proper subset of X, then X → Y is minimal (Yao et al., 2002). Minimal FDs are our only concern in rule mining FDs, since all other FDs are logically implied. For instance, if we know {SSN} → {first_name}, then we can infer that {SSN, last_name} → {first_name}.
Power set lattice
The search space for FDs can be represented as a power set lattice of nonempty attribute combinations. Figure 1 gives the nonempty attribute combinations of a relation r(U) such that U = {A,B,C,D}. There are 2^n − 1 = 2^4 − 1 = 15 attribute subsets in the power set lattice (Yao & Hamilton, 2008). Each combination X of the attributes in U can be the left-hand side of an FD X → Y such that X → Y is satisfied by relation r(U) (Yao & Hamilton, 2008). Since the attribute set U itself trivially determines each one of its proper subsets, it can be ignored as a candidate. There remain 2^n − 2 = 2^4 − 2 = 14 nonempty subsets of U that are to be considered candidates.
There are n · 2^(n−1) − n = 4 · 2^(4−1) − 4 = 28 edges (or arrows) in the semi-lattice of the complete search space for FDs in relation r(U) (Yao & Hamilton, 2008). The size of the search space for FDs is exponentially related to the number of attributes in U. Hence, the search space for FDs increases quite significantly when there is a greater number of attributes in U (Yao & Hamilton, 2008).
1 A relation r on U, denoted r(U), is a finite set of mappings {t1, …, tn} from U to dom(U). For each mapping t ∈ r(U), t[vi] denotes the value obtained by restricting the mapping t to vi. Each mapping t is called a tuple, and t(vi) is called the vi-value of t (Maier, 1983).
Amendments from Version 1
In response to the reviewers' comments, in this revision we corrected grammatical and typographical errors. In the abstract, we clarify that the experimental comparison of several functional dependency algorithms is referenced from previous research. In the sentence following Definition 6, we substituted the phrase "determines equivalent attributes sets with" with "determines equivalent attribute sets using", since the FDTool code uses the functional dependencies discovered at each level to generate equivalent attribute sets. We uploaded publicly available data with the same shape and structure as the 13 CARES datasets. We base all simulation studies on the publicly available data, which can be found in FDTool/data/input/CARES/ as part of the FDTool repository and archived in our Zenodo project.
See referee reports
For instance, when there are 12 attributes in a relation, the search space for FDs climbs to 24,564 edges. This gives reason to be cautious of runtime and memory costs when deploying a rule mining algorithm to discover FDs.
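As a quick illustration, the two counts above can be computed directly from the corrected formulas; this is a sketch of ours, not part of FDTool:

def lattice_size(n: int) -> tuple:
    # Number of candidates (nonempty proper subsets of U) and edges
    # (possible minimal-FD checks) in the power set lattice for n attributes.
    return 2**n - 2, n * 2**(n - 1) - n

print(lattice_size(4))   # (14, 28)
print(lattice_size(12))  # (4094, 24564)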
Partition
The algorithms used to discover FDs differ in their approach to navigating the complete search space of a relation. Their candidate pruning methods vary and sometimes the methods used to validate FDs do as well. These differences affect runtime and memory behavior when used to process tables of different dimensions.
A common data structure used to validate FDs is the partition. A partition places tuples that have the same values on an attribute into the same group (Yao et al., 2002).
Definition 2. Let X ⊆ U and let t1, …, tn be all the tuples in a relation r(U). The partition over X, denoted ∏X, is a set of groups such that ti and tj, 1 ≤ i, j ≤ n, i ≠ j, are in the same group if and only if ti[X] = tj[X] (Yao et al., 2002).
It follows from Definition 2 that the cardinality of the partition, card(∏A(r)), is the number of groups in partition ∏A (Yao & Hamilton, 2008). The cardinality of the partition offers a quick approach to validating FDs in a dataset.
Theorem 1. An FD X → Y is satisfied by a relation r(U) if and only if card(∏X) = card(∏XY) (Huhtala et al., 1999).
Theorem 1 provides an efficient method to check whether an FD X → Y holds in a relation. Huhtala et al. (1999) proved it to support a fast validation method for relations consisting of a large number of tuples.
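As an illustration only (our sketch, not FDTool's implementation), Theorem 1's check translates into a few lines of Python with pandas; fd_holds and the toy records frame are hypothetical names:

import pandas as pd

def fd_holds(df, lhs, rhs):
    # Theorem 1: X -> Y holds iff card(partition over X) == card(partition over X ∪ Y).
    # Each set of rows agreeing on all grouping columns is one group of the
    # partition, so ngroups equals the partition's cardinality.
    return df.groupby(lhs).ngroups == df.groupby(lhs + rhs).ngroups

records = pd.DataFrame({"SSN": [1, 2, 3, 3],
                        "first_name": ["Ada", "Max", "Eve", "Eve"]})
print(fd_holds(records, ["SSN"], ["first_name"]))  # True: {SSN} -> {first_name}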
Closure
Efforts in relational database theory have led to more runtime- and memory-efficient methods to check the complete search space of a relation for FDs. In place of needing each arrow in a semi-lattice checked, we can infer the FDs that logically follow from those already discovered. Such FDs are discovered as a consequence of Armstrong's Axioms (Maier, 1983) and the inference axioms derivable from them (Ramakrishnan & Gehrke, 2000), which are:
- Reflexivity: Y ⊆ X implies X → Y;
- Union: X → Y and X → Z imply X → YZ;
- Decomposition: X → YZ implies that X → Y and X → Z.
These axioms signal the distinction between FDs that can be inferred from already discovered FDs and those that cannot (Maier, 1983). Exploiting what can be derived from Armstrong's Axioms allows us to avoid having to check many of the candidates in a search space.
Definition 3. Let F be a set of functional dependencies over a dataset D and X be a candidate over D. The closure of candidate X with respect to F, denoted X+, is defined as {Y | X → Y can be deduced from F by Armstrong's Axioms} (Yao & Hamilton, 2008).
The nontrivial closure of candidate X with respect to F, written X*, is defined as X* = X+ \ X (Yao & Hamilton, 2008). Definition 3 gives room to elegantly define keys. Informally, a key implies that a relation does not have two distinct tuples with the same values on the key's attributes. Keys uniquely identify all tuple records in a dataset.
Definition 4. Let R be a relational schema and X be a candidate of R over a dataset D. If X ∪ X* = R, then X is a key (Yao et al., 2002).
A candidate key X of a relation is a minimal key for that relation. This means that there is no proper subset of X for which Definition 4 holds.
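The definitions above translate directly into a brute-force sketch; the (lhs, rhs) frozenset encoding of FDs is our own illustrative convention, not FDTool's internal representation, and it ignores the pruning that makes real FD discovery tractable:

from itertools import combinations

def closure(attrs, fds):
    # Fixpoint computation of attrs+ under FDs given as (lhs, rhs) frozenset pairs.
    out = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= out and not rhs <= out:
                out |= rhs
                changed = True
    return frozenset(out)

def candidate_keys(U, fds):
    # Enumerate minimal X with X+ = U (Definition 4), smallest sets first.
    keys = []
    for k in range(1, len(U) + 1):
        for combo in combinations(sorted(U), k):
            x = frozenset(combo)
            if any(key < x for key in keys):
                continue  # proper supersets of a key are never minimal
            if closure(x, fds) == U:
                keys.append(x)
    return keys

U = frozenset("ABCD")
fds = [(frozenset("A"), frozenset("B")), (frozenset("BC"), frozenset("D"))]
print(candidate_keys(U, fds))  # [frozenset({'A', 'C'})]: AC is the only candidate key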
Difference- and agree-set algorithms model the search space of a relation as the cross product of all tuple records (Papenbrock et al., 2015). They search for sets of attributes agreeing on the values of certain tuple pairs. Attribute sets only functionally determine other attribute sets whose tuple pairs agree, i.e., agree-sets (Asghar & Ghenai, 2015; Papenbrock et al., 2015). Then, agree-sets are used to derive all minimal FDs.
Dependency induction algorithms assume a base set of FDs in which each attribute functionally determines each other attribute (Papenbrock et al., 2015). While iterating through row data, observations are made that require certain FDs to be removed from the base set and others added to it. These observations are made by comparing tuple pairs based on the equality of their projections. After each record in a dataset is compared, the FDs left in the base set are considered valid, minimal and complete (Papenbrock et al., 2015).
Lattice traversal algorithms model the search space of a relation as a power set lattice. Most such algorithms (i.e., TANE, FUN, FD_Mine) use a level-wise approach to traversing the search space of a relation from the bottom up (Papenbrock et al., 2015). They start by checking 4 for FDs whose left-hand sides are singleton sets and iteratively transition to candidates of greater cardinality.
Performance
Papenbrock et al. (2015) released an experimental comparison of the aforementioned FD discovery algorithms. The seven algorithms were re-implemented in Java based on their original publications and applied to 17 datasets of various dimensions. They found that none of the algorithms are suited to yield the complete result set of FDs from a dataset consisting of 100 columns and 1 million rows (Papenbrock et al., 2015). Hence, it is a matter of discretion to choose the algorithm best fitting the dimensions of a dataset.
The experimental results show that lattice traversal algorithms are the least memory efficient, since each k-level 5 can be a factor greater than the size of the previous level (Papenbrock et al., 2015). Difference- and agree-set algorithms and dependency induction algorithms perform favorably in memory experiments as a result of their operating directly on data and efficiently storing result sets. Lattice traversal algorithms scale poorly on tables with many columns (≥ 14 columns) due to memory limits (Papenbrock et al., 2015).
Lattice traversal algorithms are the most effective on datasets with many rows, because their validation method 6 operates on attribute sets as opposed to data (Papenbrock et al., 2015). This puts such algorithms in a special position to rule mine clinical and demographic record datasets, which often consist of long and narrow sets of participant records. Difference- and agree-set algorithms and dependency induction algorithms commonly reach time limits when applied to datasets of these dimensions (> 100,000 rows) (Papenbrock et al., 2015).
Lattice traversal algorithms
Lattice traversal algorithms iterate through k-levels represented in a power set lattice. If the lattice is traversed from the bottom-up, we say the algorithm is level-wise.
Definition 5. Let X1, X2, …, Xk, Xk+1 be (k + 1) attributes over a database D. If X1X2…Xk → Xk+1 is an FD with k attributes on its left-hand side, then it is called a k-level FD (Yao et al., 2002).
4 We say that an FD is checked when Theorem 1 is used to see if it holds or not (Yao et al., 2002).
5 Definition 5.
6 Theorem 1.
The search space for FDs is reduced at the end of each iteration using pruning rules. Pruning rules check the validity of candidates not yet checked against FDs already discovered and those inferred from Armstrong's Axioms (Yao & Hamilton, 2008). After a search space is pruned, an Apriori_Gen principle generates k-level candidates from the (k − 1)-level candidates that were not pruned (Yao & Hamilton, 2008).
Apriori_Gen:
- oneUp: generates all possible candidates in Ck from those in Ck−1.
- oneDown: generates all possible candidates in Ck−1 from those in Ck.
Level-wise lattice traversal algorithms stop iterating after all candidates in a search space are pruned. In this case, Apriori_Gen generates the null set ∅ raising a flag for the algorithm to terminate. This has the effect of shortening runtime to the degree that FDs are discovered and others are inferred.
TANE
The level-wise lattice traversal algorithms TANE, FUN, and FD_Mine differ in terms of pruning rules. FUN and FD_Mine expand on the pruning rules of TANE. Released by Huhtala et al. (1999), TANE prunes a search space on the basis that only minimal and non-trivial FDs need be checked. TANE restricts the right-hand side candidates C+(X) for each attribute combination X to the set which contains all the attributes that X may still functionally determine (Papenbrock et al., 2015). The set C+(X) is used in the following pruning rules (Papenbrock et al., 2015).
• Minimality pruning: If an FD X \ A → A holds, A and all B ∈ C+(X) \ X can be removed from C+(X).
• Right-hand side pruning: If C+(X) = ∅, the attribute combination X can be pruned from the lattice, as there are no more right-hand side candidates for a minimal FD.
• Key pruning: If the attribute combination X is a key, it can be pruned from the lattice.
Key pruning implies that all supersets of a key, i.e., super keys, can be removed, since they are by definition nonminimal (Huhtala et al., 1999).
FD_Mine
Like TANE and FUN, FD_Mine is structured around the level-wise lattice traversal approach and the aforementioned pruning rules. Unlike the other two algorithms, FD_Mine, authored by Yao et al. (2002), uses the concept of equivalence as a means to prune the search space more exhaustively (Papenbrock et al., 2015). Informally, attribute sets are equivalent if and only if they are functionally dependent on each other (Papenbrock et al., 2015).
The proofs demonstrating that no useful information is lost in pruning candidates from equivalent attribute sets are reproduced in this section and were originally developed by Yao & Hamilton (2008). The equivalence pruning method can be derived directly from Armstrong's Axioms.
Definition 6. Let X and Y be candidates over a dataset D. If X → Y and Y → X hold, then we say that X and Y are an equivalence and denote it as X ↔ Y.
After a k-level is fully validated, i.e., each k-level candidate is checked, FD_Mine determines equivalent attribute sets using the FDs already discovered.
Theorem 2. Let X and Y be candidates over a dataset D. If Y ⊆ X+ and X ⊆ Y+, then X ↔ Y (Yao & Hamilton, 2008).
Proof. Since X → X+ and Y ⊆ X+, Decomposition implies that X → Y. By a similar argument, Y → X holds. Because X → Y and Y → X, we have by definition that X ↔ Y holds.
Lemma 3 and Lemma 4 are derived from Armstrong's Axioms under the assumption of the equivalence X ↔ Y (Yao & Hamilton, 2008).
Theorem 2 checks attribute sets X and Y for the equivalence X ↔ Y. FD_Mine assumes that the attribute set Y is generated before X. By Lemma 3 and Lemma 4, we know that for equivalence X ↔ Y, no further attribute sets Z such that Y ⊆ Z need be checked (Yao & Hamilton, 2008). Hence, Y is deleted as a result of the following pruning rule.
• Equivalence pruning: If X ↔ Y is satisfied by relation r(U), then candidate Y can be deleted (Yao & Hamilton, 2008).
Exploiting the equivalence pruning method leaves FD_Mine in a more aggressive position to prune candidates than TANE. This offers an advantage in terms of runtime and memory behavior (Yao et al., 2002).
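To make the equivalence test concrete, here is a self-contained sketch of Theorem 2 (it repeats the closure() helper from the earlier sketch so it runs on its own; equivalent is our hypothetical name, not FD_Mine's API):

def closure(attrs, fds):
    # attrs+ under FDs given as (lhs, rhs) frozenset pairs (fixpoint iteration).
    out = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= out and not rhs <= out:
                out |= rhs
                changed = True
    return frozenset(out)

def equivalent(x, y, fds):
    # Theorem 2: X <-> Y holds iff Y ⊆ X+ and X ⊆ Y+.
    return y <= closure(x, fds) and x <= closure(y, fds)

fds = [(frozenset("A"), frozenset("B")), (frozenset("B"), frozenset("A"))]
print(equivalent(frozenset("A"), frozenset("B"), fds))  # True: prune the later candidate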
Non-minimal FDs
The pseudo-code proposed in the second version of FD_Mine (Yao & Hamilton, 2008) will under certain circumstances output non-minimal FDs (Papenbrock et al., 2015). FD_Mine references an Apriori_Gen method (Agrawal et al., 1996) stating that for each pair of candidates p, q ∈ Ck−1 the set p ∪ q is to be placed in Ck if card(p ∪ q) = k. Example 1 shows that the Apriori_Gen method referenced and utilized by FD_Mine can violate minimality pruning by checking supersets that need not be checked. Figure 2 gives the power set lattice of the relation described in Example 1 pruned by FD_Mine. In Example 1, where AB functionally determines C, D, and E, it must be that AB* = {C, D, E}; nevertheless, the algorithm validates the FDs ABCD → E, ABCE → D, and ABDE → C. Since E, for example, is functionally dependent on the proper subset AB ⊆ ABCD, ABCD → E is non-minimal.
The Apriori_Gen principle presented in TANE (Huhtala et al., 1999) more effectively generates candidate level Ck+1 from Ck. It requires that Ck+1 only contains the attribute sets of size k + 1 which have all their subsets of size k in Ck (Huhtala et al., 1999). In reference to Example 1, this method does not insert the candidate ABCD in C4, without loss of generality, because ABC ⊆ ABCD but ABC ∉ C3. Thus, the non-minimal FD ABCD → E is not checked. Properly assigned closure values can also allow the algorithm to avoid checking many non-minimal FDs. This is because the ObtainFDs module, i.e., the validation method, only checks 12 the right-hand side attributes vi for which vi ∈ U \ X+ (Yao & Hamilton, 2008). Hence, provided that Pruning rule 3 asserts the equality ABCD* = {E}, ABCD → E need not be checked.
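A minimal sketch of this stricter Apriori_Gen (frozenset candidates; one_up is our hypothetical name, not TANE's or FDTool's API):

from itertools import combinations

def one_up(level):
    # Generate size-(k+1) candidates from the size-k sets in `level`, keeping only
    # those whose every k-subset survived pruning (assumes `level` is non-empty).
    k = len(next(iter(level)))
    unions = {a | b for a in level for b in level if len(a | b) == k + 1}
    return {c for c in unions
            if all(frozenset(s) in level for s in combinations(c, k))}

C2 = {frozenset("AB"), frozenset("AC"), frozenset("BC"), frozenset("BD")}
print(one_up(C2))  # {frozenset({'A', 'B', 'C'})}; ABD is rejected since AD ∉ C2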
Operation
FDTool (Buranosky, 2018) is a command line Python application executed with the following statement: $ fdtool /path/to/file 13. For Windows users, this is to be run from the working directory of fdtool.exe, which will likely be C:\Python27\Scripts for those installing with pip install fdtool. For other systems, installation automatically inserts the file path to the fdtool command in the PATH variable. /path/to/file is the absolute or relative path to a .txt, .csv, or .pkl file containing a tabular dataset. If the data file has the extension .txt or .csv, FDTool detects the following separators: comma (','), bar ('|'), semicolon (';'), colon (':'), and tilde ('∼'). The data is read in as a Pandas data frame 14.
10 Equivalent candidates are stored in E.
11 All candidates at level k are stored in Ck.
12 Assume the left-hand side attribute set X.
13 Edit FDTool/fdtool/config.py prior to building setup with python setup.py install to change the preset time limit or max k-level.
14 The data is read in with the Pandas function read_csv(), which is subject to the usual spacing errors associated with reading in delimiter-separated values.
FDTool provides the user with the minimal FDs, equivalent attribute sets and candidate keys mined from a dataset. This is given with the time (s) it takes for the code to terminate (after reading in data), the row count and attribute count of the data, the number of FDs and equivalent attribute sets found, and the number of FDs checked. This is printed on the terminal after the code is executed, as shown in Figure 3. The information is saved to a .FD_Info.txt file. Figure 3 shows the printed output of FDTool.exe applied to the contents of Table 1. The output file Table1.FD_Info.txt is saved to the user's current working directory.
Implementation
FDTool is a Python based re-implementation of the FD_Mine algorithm with additional features added to automate typical processes in database architecture. FD_Mine was published in two papers with more detail given to the scientific concepts used in algorithms of its kind (Yao et al., 2002;Yao & Hamilton, 2008). The two versions of FD_Mine were released with different structures but make use of the same theoretical foundation (Papenbrock et al., 2015), which is fully supported in mathematical proofs of the pruning rules used (Yao & Hamilton, 2008). FDTool was coded 15 with special attention given to the pseudo-code presented in the second version of FD_Mine (Yao & Hamilton, 2008).
The Python script dbschema.py in FDTool/fdtool/modules/dbschema is taken from dbschemacmd (https://www.elstel.org/database/dbschemacmd.html.en): a tool for database schema normalization working on functional dependencies (Elmasri & Navathe, 2011). It is used to take sets of FDs and infer candidate keys from them. The operation first assigns the left-hand side attribute combinations of a set of FDs to dictionary keys and their closures to the corresponding values. It then reduces the set of FDs to a minimum coverage 16 . Candidate keys are assembled using the minimum coverage and closure structure by adding attributes to key candidates until each minimal attribute set X for which X + = U is found. Details on the dbschema operations are described in FDTool/fdtool/modules/dbschema/Docs.
Use cases
FDTool was initially created to help decompose datasets of medical records as part of Clinical Archived Records research for Environmental Studies (CARES). CARES currently contains 13 datasets obtained from the medical software firms Epic and Legacy. The attribute count in this database ranges from 4 to 18; the row count ranges from 42,369 to 8,201,636.
Experimental results
To limit the strain on computational resources, FDTool has a built-in time limit of 4 hours. FDTool reaches this preset limit (triggering program termination) when applied to the PatientDemographics dataset (42,369 rows × 18 columns) and the EpicVitals_TobaccoAlcOnly dataset (896,962 rows × 18 columns). The results for the remaining 11 CARES datasets are given in Table 2.
Experimental summary
The results from Table 2 show that runtime is primarily determined by the number of attributes in a dataset. For instance, the LegacyPayors dataset (1,465,233 rows × 4 columns) has slightly more rows (13% increase) but far fewer attributes (60% decrease) as compared to the AllLabs dataset (1,294,106 rows × 10 columns). The runtime of LegacyPayors (9.4 s) is much less than that of AllLabs (999.8 s), because AllLabs has many more arrows in its power set lattice (n · 2^(n−1) − n = 10 · 2^(10−1) − 10 = 5,110) than does LegacyPayors (28). Hence, FDTool has more FDs to check when applied to AllLabs. It is clear that the attribute count of a dataset has a much greater effect on the runtime of FDTool than does row count.
Many of the arrows in the power set lattice of a dataset are pruned by FDTool. AllLabs has 5,110 arrows in its power set lattice. However, FDTool only checks 818 FDs, as many are inferred from the 43 FDs found. This follows from the Prune() function, which deletes many of the candidates to be checked, partially as a result of mining 4 equivalent attribute sets. FDTool terminates after 5 k-levels when applied to AllLabs.
Future development
We want to improve FDTool's performance so that it is better equipped to handle datasets of different dimensions. Using the dependency induction algorithm FDEP, the reach of FDTool could be extended to datasets with fewer rows and more than 100 columns (Papenbrock et al., 2015). This might also require upgrading the source code with multicore processing methods, such as a Java API, to reduce runtime and avoid reaching memory limits. A formal proof of the dbschema operations is also desired.
15 FDTool was tested regularly throughout the implementation process so as to accommodate changes made to improve runtime and memory behavior.
16 A set of FDs F is a coverage of another set of FDs G if every FD in G can be inferred from F, i.e., G+ ⊆ F+ (Soule, 2014). F is a minimum coverage of G if F is the smallest set of FDs that covers G (Soule, 2014).
Another goal is to increase the functionality provided by FDTool. This would mean implementing the pen-and-paper methods typically used to normalize relational schemas and decompose tables. Our intent is to incorporate these changes in newer versions of FDTool, released at regular intervals, so as to develop it into Python software that could automate much of what is done in the database design process.
While the authors fully support the open dissemination of data for verification and replication purposes, CARES data cannot be released as it contains Protected Health Information. For the purpose of testing the runtime and memory behavior of FDTool, we have produced simulated copies of all 13 datasets in the CARES collection. These datasets are publicly available in FDTool/data/input/CARES as part of the FDTool repository and archived in the above Zenodo project.
Author contributions
MB and ES designed and implemented the software. MB wrote the manuscript. CWC supervised MB and reviewed the manuscript. EP maintained the research data. DDS coordinated the funding for the project. All authors agreed to the final content of the manuscript.
Grant information
This work was funded by the US Environmental Protection Agency. The work presented here does not necessarily reflect the views or policy of the EPA. Any mention of trade names does not constitute endorsement by the EPA.
The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Reviewer Expertise: Statistical and computational methods.
The article has clear contributions: it enhances the FD_Mine algorithm by improving performance and automating typical processes; the authors re-implement the FD_Mine algorithm, which is otherwise not publicly available as a software tool; and the authors apply FDTool to 12 datasets of different dimensions. Findings: the experiments show that the number of attributes has a greater effect than the number of records on the runtime and memory costs of FDTool.
Additional contributions
The article clearly describes the features of FDTool, such as its usage and execution. It also outlines future research opportunities with respect to the further development of FDTool.
Major Comment:
In the abstract, it says, "We conclude that FD_Mine is the most efficient FD discovery algorithm when applied to datasets with many rows (> 100,000 rows) and few columns (< 14 columns)." The word "conclude" does not seem appropriate here. If this result indeed follows from your research, please explain how the results shown in Table 2 support this claim with respect to all datasets shown in the table [This explanation could be added in the experimental results or experimental summary section]. However, if the conclusion is in fact being taken from Papenbrock, then wording might be adjusted to "Previous research established that FD_Mine …." You may want to state your conclusions about your software tool.
| 2018-12-15T14:21:14.394Z | 2018-10-19T00:00:00.000 | {
"year": 2019,
"sha1": "ef2736d0290a1bdc1d3fa63c07cce622c1008de0",
"oa_license": "CCBY",
"oa_url": "https://f1000research.com/articles/7-1667/v2/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ef2736d0290a1bdc1d3fa63c07cce622c1008de0",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
226848945 | pes2o/s2orc | v3-fos-license | Necrostatin-1 and necroptosis inhibition: Pathophysiology and therapeutic implications
Graphical abstract
Introduction
Necroptosis is a newly defined form of cell death. According to traditional belief, necrosis is a passive and unregulated cell death process; however, the discovery of necroptosis has overturned this assumption. Similar to necrosis, it features necrotic morphology, including cell swelling and rupture; similar to apoptosis, it is controlled by a defined signaling pathway, which is why it is also called regulated necrosis [1] (other forms of cell death are summarized in Box 1). Since the discovery of necroptosis, researchers have been trying to investigate the role and mechanism of necroptosis in various diseases, including cardiovascular diseases [2], neurodegenerative diseases [3] and liver diseases [4]. Although there is plenty of evidence and published work on necroptosis, little attention has been paid to Necrostatin-1 (Nec-1), the inhibitor of necroptosis, and the potential therapeutic capability of Nec-1 has not been completely reviewed. Currently the application of Nec-1 is limited by metabolic instability and off-target effects, but this limitation does not negate the well-defined protective effects of Nec-1 and its analogues/derivatives in disease models. In fact, Nec-1 provides a new prospect for the prevention and treatment of multiple diseases. Here, we summarize the potential functions that Nec-1 has exhibited in various disease models, including typical inflammatory disorders and coronavirus disease 2019 (COVID-19), providing a comprehensive viewpoint on Nec-1 from different aspects.
Signaling pathway of necroptosis
The critical mechanism of necroptosis is related to the activation (including ubiquitination and phosphorylation) of receptor interacting protein 1 (RIP1), receptor interacting protein 3 (RIP3) and mixed lineage kinase domain-like protein (MLKL) [11,12]. Among studies on the initiation of necroptosis, the mechanism of tumor necrosis factor (TNF)-induced necroptosis has been studied most thoroughly (Fig. 1). The combination of TNF-α with TNF receptor 1 (TNFR1) on the cell membrane stimulates different signaling pathways, including nuclear factor kappa B (NF-κB), RIP1-independent apoptosis (RIA), RIP1-dependent apoptosis (RDA) and necroptosis. Upon activation, TNF trimers bind to TNFR1 trimers, followed by recruitment of RIP1, TNFR-associated death domain (TRADD), TNFR-associated factor 2 (TRAF2) and cellular inhibitor of apoptosis proteins 1 and 2 (cIAP1/2) [13][14][15]. These components assemble complex I at the plasma membrane, activating NF-κB signaling. Subsequently, TRADD and RIP1 dissociate from TNFR1 and lead to the formation of complex II [14], including complexes IIa, IIb and IIc. After dissociation from complex I, TRADD recruits Fas associated via death domain (FADD) and caspase-8, forming complex IIa [14,16]. Complex IIa formation is followed by induction of RIA, which is independent of RIP1 and RIP1 kinase activity [16]. During the transition from complex I to complex II, RIP1 undergoes C-terminal death domain-mediated dimerization [17]. RIP1 dimerization is essential for RIP1 activation and leads to the formation of complex IIb (composed of RIP1, FADD and caspase-8) and IIc (composed of RIP1 and RIP3) [18,19]. Complex IIb is formed without TRADD and thus requires RIP1 kinase activity to activate caspase-8 and RDA, which is RIP1-dependent [16,20]. The formation of complexes IIa and IIb is followed by the activation of caspase-dependent cell death via apoptosis, while complex IIc induces necroptosis. When activated RIP1 combines with RIP3 and thus leads to the formation of complex IIc (also known as the necrosome), cell death shifts to a caspase-independent mode, followed by the phosphorylation and oligomerization of MLKL, which marks the beginning of necroptosis [12]. Polymerized MLKL translocates to the cell membrane and causes membrane disruption, executing necroptotic cell death [21,22].
Nec-1 and its specificity
The pathological and physiological relevance of necroptosis used to be underestimated or ignored because of the absence of a defined and convenient biochemical marker or indicator. The identification of necrostatins has changed the situation and enabled researchers to investigate more about the molecular mechanism of necroptosis. Necrostatins are a group of compounds named for their capability to prevent necroptosis, among which Nec-1 has been used to study the contribution of necroptosis and to target RIP1 kinase activity in a wide range of pathological cell death events. Nec-1 (Fig. 2A), a small-molecule alkaloid, was first identified as an inhibitor of necrotic cell death in 2005 [23] and was found to be a specific inhibitor of RIP1 in further studies [24]. By interacting with the T-loop, an essential structure for death domain receptor engagement, Nec-1 inhibits RIP1 kinase activity [24]. It binds allosterically to the hydrophobic pocket of the kinase domain near the ATP-binding active center, with RIP1 adopting a DLG-out inactive conformation [24]. After binding to the activation loop, Nec-1 can potently inhibit RIP1 autophosphorylation [25]. RIP1 autophosphorylation is a crucial process during TNF-induced necroptosis signaling, and several RIP1 autophosphorylation sites have been reported, including Ser14/15, Ser20, Ser161 and Ser166 [24]. RIP1 phosphorylation leads to the recruitment of RIP3 to RIP1 and subsequent formation of the RIP1-RIP3 complex [26], which then phosphorylates MLKL. Therefore, Nec-1 efficiently blocks RIP1-RIP3-MLKL signal transduction by inhibiting RIP1 phosphorylation.
As for pharmacokinetics, Nec-1 quickly enters the blood circulation. Geng et al. found that the plasma concentration of Nec-1 reaches its peak at 1 h after oral administration in rats [27], and Nec-1 completely dissolves in 95 % ethanol with an absolute bioavailability of more than 50 %. These characteristics give Nec-1 promising potential for clinical application. One major drawback of Nec-1 is its short half-life, which is about 1−2 h in rats (1.8 ± 0.9 h for the intravenous route and 1.2 ± 0.3 h for the oral route, as tested by Geng et al.) [27]. Another limitation is its off-target effect. In fact, the original name of Nec-1 was methyl-thiohydantoin-tryptophan (MTH-Trp), defined as an inhibitor of indoleamine 2,3-dioxygenase (IDO) [28]. Thus Nec-1 application inhibits not only necroptosis but also IDO, and this off-target effect has been criticized by many researchers [25,29]. It is true that off-target action makes it more difficult for researchers to define RIP1 kinase function by using Nec-1, and limits its clinical application for targeting RIP1 specifically. However, as a classic and potent RIP1 inhibitor, the broad research investigating the role of Nec-1 in various disease models still cannot be ignored, nor can its potential clinical significance. In fact, IDO plays a significant role in inflammation [30]; therefore, the off-target effect enables Nec-1 to protect against inflammatory diseases by inhibiting both necroptosis-induced and IDO-mediated inflammation. Although currently there is no direct evidence proving this (perhaps because of the difficulty in distinguishing between necroptosis-induced and IDO-mediated inflammation), a RIP1-independent inhibitory effect of Nec-1 has been demonstrated [31,32]. Therefore, the off-target effect may endow Nec-1 with a stronger anti-inflammatory ability.
Molecular mechanisms of Nec-1 in disease models
Necroptosis has been verified to participate in various diseases [33], and the protection of Nec-1 has also been authenticated in many disease models, including cardiovascular, neurological and renal diseases (Table 1). The basic molecular mechanism of this protective effect is the blocking of necroptotic cell death by targeting RIP1. Therefore, Nec-1 administration usually results in alleviated cell death and improved cell viability. In fact, Nec-1 not only inhibits RIP1, but also reduces RIP3 expression and phosphorylation, which has been demonstrated in many disease models, including myocardial infarction (MI), mental diseases and intestinal inflammation [34][35][36]. However, no direct inhibitory effect of Nec-1 on RIP3 has been reported [25] and Nec-1 does not block RIP3 autophosphorylation [25], therefore this influence on RIP3 is indirect. Nec-1 reduces the interaction between RIP1 and RIP3, inhibits the formation and/or reduces the stability of the RIP1-RIP3 complex, and thus affects downstream RIP3 activity [37].
Fig. 1. The combination of TNF-α and TNFR1 on the cell membrane stimulates different signaling pathways, including NF-κB, RIA, RDA and necroptosis. TNFR1 exists in the cell membrane and its subunits can spontaneously trimerize and bind to ligands, allowing their cytosolic tails to recruit multiple proteins and generate a complex (complex I). Ubiquitylation of RIP1 could stabilize complex I and contribute to NF-κB release. Additionally, deubiquitylated RIP1 cooperates with its cognate kinase RIP3 for the recruitment of another complex (complex II). The formation of complexes IIa and IIb is followed by the activation of caspase-dependent cell death via apoptosis, while complex IIc induces necroptosis. When activated RIP1 combines with RIP3 and thus leads to the formation of complex IIc, cell death shifts to a caspase-independent mode, followed by the phosphorylation and oligomerization of MLKL, which marks the beginning of necroptosis.
Fig. 2. (A) Nec-1 is a small-molecule alkaloid identified as an inhibitor of RIP1 and IDO. (B) Nec-1 halts the necroptosis signaling pathway by inhibiting RIP1 signaling cascades. Furthermore, Nec-1 also affects necroptosis by targeting ROS. Nec-1 inhibits apoptosis by targeting RIP1 when the cell undergoes the RDA signaling pathway, yet it loses this anti-apoptosis effect when the pathway is RIA.
Necroptosis and apoptosis share part of the same upstream pathway, and RIP1 kinase activity also drives apoptosis, which is defined as RDA. Therefore, Nec-1 also targets RIP1-mediated apoptosis. Increasing studies show that Nec-1 not only suppresses necroptosis, but also inhibits apoptosis [82,95]. Besides, Han et al. found that Nec-1 can enhance leukemia cell apoptosis induced by shikonin [96], showing an anti-cancer effect; Jie et al. found that Nec-1 could specifically induce neutrophil apoptosis rather than blocking it [97], which implies an anti-inflammatory effect. Despite the studies mentioned above, some researchers claim that Nec-1 does not affect apoptosis [38,48], which is probably RIA. When apoptosis is mainly RIP1-dependent, Nec-1 inhibits RIP1 kinase activity and the formation of complex IIb, and thus apoptosis is blocked; however, when apoptosis is RIP1-independent, Nec-1 has no inhibitory effect. As for the opposite results of Han and Jie, they can be explained by the shift between necroptosis and apoptosis: apoptosis can be induced when necroptosis is inhibited by Nec-1, and co-treatment with the pan-caspase inhibitor z-VAD-fmk leads to a shift from apoptosis to necroptosis [98]. These different research results indicate that the underlying mechanism connecting RIP1, necroptosis and apoptosis is a complicated network, which is why Nec-1 exhibits different effects in different situations/disease models.
Reactive oxygen species (ROS) are a group of highly reactive chemical species containing oxygen, usually natural byproducts of metabolism [99]. However, ROS levels can increase dramatically under harmful conditions, causing damage to cell structures or even cell death [100]. Studies have demonstrated that the application of Nec-1 attenuates the elevated intracellular ROS production in disease models of acute liver failure, spinal cord injury, I/R injury, etc. [43,79,101]. This downregulation of ROS generation can even result in ROS levels similar to those of normal cells [79]. Although the specific mechanism underlying the inhibitory effect of Nec-1 on ROS has not been demonstrated very clearly, it has been verified that the ROS production is probably RIP1-dependent [102]. In other studies demonstrating the relation between ROS and necroptosis, ROS increases the expression of RIP1/RIP3 and improves the stabilization of the RIP1-RIP3 complex [103,104]. Such an interactive relation between necroptosis and ROS is still not very clear or definite, and remains to be supported and further investigated by more studies. The underlying mechanism of Nec-1 in diseases is summarized in Fig. 2B.
Inflammation and inflammatory diseases
Inflammation is a defensive response in reaction to tissue or cell damage caused by endogenous or exogenous injuries, which involves multiple cells including endothelial cells, mononuclear macrophages, fibroblasts and platelets. The relation between inflammation and necroptosis has gained attention since the observation of necroptosis. The necrotic characteristics of necroptosis are marked by cell rupture and release of intracellular immunogenic contents, which initiate inflammatory responses and indicate the pro-inflammatory effect of necroptosis. And, conversely, inflammation induces necroptosis through pro-inflammatory mediators or through direct contact with immune cells [105], promoting cell death. This vicious circle emphasizes the close link between necroptosis and inflammation.
Table 1. Role of Nec-1 in disease models.
Cardiac ischemia/reperfusion (I/R) injury: Nec-1 reduces cardiomyocyte necrosis, improves cardiac output and prolongs cardiac allograft survival time.
Myocardial infarction (MI): Nec-1 inhibits myocardial tissue necroptosis and improves cardiac function in acute MI models of rat and pig [34,44].
Myocarditis: Nec-1 protects heart tissues from myocardial injury by downregulating RIP1/RIP3 expression in a CVB3-induced myocarditis mouse model [45].
Neurological Diseases
Postoperative cognitive dysfunction: Nec-1 reduces neuroinflammation and attenuates postoperative cognitive dysfunction in D-galactose-induced aged mice [56].
Ischemic stroke: Pretreatment with Nec-1 promotes survival of oligodendrocyte precursor cells, alleviates white matter injury, and improves cognitive function after transient cerebral ischemia [57].
Alzheimer's disease (AD): Nec-1 directly targets Abeta and tau proteins, alleviates brain cell death and ameliorates cognitive impairment in AD models.
The key components of necroptosis, RIP1, RIP3 and MLKL, are verified to participate in inflammation. Compared with MLKL, RIP1 and RIP3 have a more significant role in inflammation [107], and here the focus will be on RIP1. Before the identification of RIP1 as a key target of necroptosis [24], RIP1 was first identified as a mediator of NF-κB signaling induced by TNF-α [108]. Therefore, RIP1 promotes inflammatory responses in two ways: necroptotic cell death and inflammation independent of cell death. Upon pro-necroptotic stimuli (e.g., TNF, Fas), RIP1 activates RIP3, which then phosphorylates MLKL, leading to cell membrane pores [109] and the release of damage-associated molecular patterns (DAMPs) [110]. DAMPs are a group of endogenous molecules that can initiate non-infectious inflammatory responses, for example, the cytokine family interleukin (IL)-1. DAMPs released from necroptotic cells sensitize neighboring cells to necroptosis and can be modified by ROS or endoplasmic reticulum stress (ERS). The role of necroptosis-associated DAMPs in inflammation (during bacterial and viral infection) is reviewed by Kaczmarek [110]. Apart from inducing pro-inflammatory necroptotic cell death, recent evidence suggests that RIP1 and RIP3 directly induce inflammation through the production of pro-inflammatory cytokines, which is independent of cell death. Zhu et al. found that the cytokine expression level of necroptosis directly induced by MLKL oligomerization was much lower than that of necroptosis induced by TNF-α, which involves RIP1 and RIP3 activation [111]. A similar result was reported by Saleh et al.: the production of IFN-β induced by lipopolysaccharide (LPS) requires RIP1 and RIP3 but not MLKL [112]. Besides, RIP1 activation was proved to be crucial for the secretion of IL-1α, which is independent of RIP3-mediated necroptosis [113], and for the production of TNF/TNF mRNA, which is independent of NF-κB [114].
As a RIP1-targeting inhibitor of necroptosis, Nec-1 has been proved to exhibit significant protective effects in various inflammatory diseases, including acute and chronic inflammatory responses. In the LPS-induced disease models of fulminant hepatic failure (FHF) and acute lung injury (ALI) [77,115], injection of LPS remarkably upregulated the expression and phosphorylation of RIP1 and RIP3, which was attenuated by Nec-1. Apart from prolonging the survival time of mice, Nec-1 also showed great anti-inflammatory ability, inhibiting the activation of NF-κB in both models. In LPS-induced FHF, application of Nec-1 attenuated cell death, IL-33 release and RAGE (the receptor for advanced glycation end products) interaction, suggesting the protective mechanism was related to necroptosis and DAMP-mediated pattern recognition receptor (PRR) pathways. However, in LPS-induced ALI, pretreatment with Nec-1 resulted in lower levels of inflammatory cytokines including TNF-α, IL-6 and IL-8, but had no effect on cell viability. This suggests that Nec-1 also exhibits protection against inflammation independent of necroptosis. In a systemic inflammatory response syndrome (SIRS) mouse model induced by TNF, Nec-1 prevented mice from hypothermia and death, but the authors inferred that NF-κB was not affected [89], which conflicts with the NF-κB inhibition observed in the FHF and ALI mouse models. In fact, Degterev et al. also reported no effect of Nec-1 on NF-κB in their early study [24]. This divergence could be explained by recent research showing that RIP1 does not necessarily participate in the activation of NF-κB [116]. Despite the crucial role of RIP1 in NF-κB signaling reported by many studies [117,118], Wong et al. found that canonical NF-κB was activated in Rip1−/− MEFs upon TNF application, confirming that RIP1 is not obligatory in the NF-κB pathway [116]. Bertrand et al. proposed an explanation that the role of RIP1 in NF-κB activation is probably cell-type-specific [119]. Therefore, the RIP1 inhibitor Nec-1 has different effects on NF-κB in different disease models. Besides, Nec-1 can also attenuate chronic inflammatory responses such as autoimmune disease. Autoimmune disease is caused by an abnormal immune response that mistakenly attacks the human body. Studies have proved that the pathogenesis of autoimmune disease (e.g., autoimmune arthritis and vasculitis) involves necroptosis, with upregulated key factors of necroptosis [120,121]. Nec-1 and Nec-1 analogues suppress autoimmune disease not only by inhibiting necroptosis, but also by suppressing apoptosis [122,123]. The underlying molecular mechanism is not clear; perhaps it is due to Nec-1 inhibition of necroptosis-induced DAMPs and pro-inflammatory cytokines. Besides, Nec-1 also targets RDA in a chronic inflammatory model of osteoarthritis, protecting against inflammation by suppressing apoptosis via the RIP1/HMGB1/TLR4 pathway [82].
Table 1 (continued).
Renal Diseases
Acute kidney injury (AKI): Nec-1 reduces necroptotic cell death, ameliorates kidney dysfunction and protects renal tubular epithelial cells [67,68,69].
Chronic kidney disease: Nec-1 improves renal pathologic changes and renal function in a remnant-kidney rat model.
Trauma-induced hemorrhagic shock: Nec-1 improves liver function and remarkably reduces mortality in a rat model of hemorrhagic shock.
Chronic hepatitis C virus infection: Nec-1 improves the viability of cells infected by hepatitis C virus [81].
Skeletal System Diseases
Osteoarthritis: Nec-1 ameliorates destruction of osteoarthritis cartilage [82].
Cartilage thinning: Co-treatment of Nec-1 and z-VAD-fmk alleviates force-mediated cartilage thinning and chondrocyte cell death [83].
Osteonecrosis: Nec-1 markedly decreases the osteonecrosis rate in a rat model [84].
Osteoporosis: Nec-1 improves bone formation and alleviates trabecular bone loss in a glucocorticoid-induced osteoporosis rat model; Nec-1 inhibits osteocyte necroptotic cell death and attenuates trabecular bone deterioration in a rat model of postmenopausal osteoporosis [85,86].
Systemic inflammatory response syndrome (SIRS): Pretreatment with Nec-1 protects against SIRS [25,89].
Sepsis: Nec-1 improves liver function and ameliorates pathological damage in septic rats [90].
Abdominal aortic aneurysm (AAA): Nec-1 lessens aortic expansion and improves aortic structure pathologically in an AAA mouse model [91].
Acute respiratory distress syndrome: Nec-1 ameliorates inflammatory response and improves pulmonary function in a rat model induced by oleic acid [92].
Human immunodeficiency virus type 1 (HIV-1) infection: Nec-1 alleviates the HIV-1-induced cytopathic effect and, interestingly, inhibits the formation of HIV-induced syncytia [93].
Systemic autoimmunity: Nec-1 inhibits pro-inflammatory cytokine secretion [94].
Ischemia-reperfusion injury and related diseases
Ischemia-reperfusion (I/R) injury is a type of tissue damage caused by the reperfusion of blood after a period of anoxia or hypoxia [124], and it is involved in various clinical conditions, including MI, cerebral infarction and gastrointestinal dysfunction. The pathophysiology of I/R injury consists of two stages. First, the process of ischemia causes damage to cells and a shortage of oxygen, inducing oxidative stress and ensuing inflammatory responses. Then, when the stage of reperfusion begins, activated endothelial cells overproduce ROS; ROS cause oxidative stress and subsequent inflammatory injuries, finally resulting in cell death, including necroptosis [125]. Nec-1 has been reported to protect against several types of I/R injury, including cardiac, cerebral and renal I/R injuries [44,57,73,74,126]. Here, we will discuss the different mechanisms that Nec-1 protection involves.
A large number of studies have investigated the specific role of necroptosis in I/R injuries and the underlying mechanism, and it has become obvious that RIP1-mediated necroptosis is involved in I/R injuries, marked by increased levels of TNF-α, TNFR1 and RIP1/RIP3 phosphorylation [87,127]. Necroptosis occurs during both ischemia and reperfusion, causing cell death, tissue damage and finally organ dysfunction. Nec-1 can protect against I/R injury by inhibiting the necroptotic pathway and reducing necroptotic cell death. The expression and phosphorylation of RIP1, RIP3 and MLKL are suppressed after application of Nec-1, usually accompanied by attenuated cell death/tissue injury and improved prognosis. For example, Nec-1 reduces infarct size and prevents adverse remodeling in cardiac I/R models [44,127], improves memory and cognitive function in cerebral I/R models [57,126], and reduces serum creatinine/urea concentrations in renal I/R models [73,74]. This protective effect of Nec-1 is also closely related to its anti-inflammatory ability. In liver I/R models of normal liver or fatty liver, Nec-1 notably suppressed the liver inflammatory response, reducing the expression of inflammation-related genes (e.g., NF-κB, JNK and ERK) [128]. The necroptosis downstream protein high mobility group box 1 (HMGB1), a mediator of necroptosis-induced inflammation, is reported to participate in I/R injury [87,129]. As mentioned in Section 3.1, HMGB1 can be downregulated by Nec-1 [82]; therefore it is also a key (although indirect) target for Nec-1 to alleviate inflammation in I/R injury [43]. Other models of liver and cardiac I/R injury also showed decreased inflammation or pro-inflammatory cytokines [127,130].
Another critical pathophysiological process during I/R injury is mitochondrial dysfunction or mitochondria-related responses. Ischemia causes hypoxia and great oxidative stress, and reperfusion triggers a second wave of mitochondria-mediated responses. After treatment or pre-treatment with Nec-1, I/R-induced ROS production decreases notably [127,128]. RIP1-mediated necroptosis is closely related to ROS production: necroptotic cell death causes oxidative stress/ROS overproduction [102,131], and ROS promote necroptosis by increasing RIP1/RIP3 expression and RIP1-RIP3 complex stabilization [103,104]. Therefore, it is possible that Nec-1 prevents I/R injury by inhibiting ROS in two ways: decreased ROS levels alleviate mitochondrial dysfunction, or an unstable necrosome blocks necroptotic cell death. Besides, Nec-1 also targets Cyp-D/MPTP during I/R injury. The mitochondrial permeability transition pore (MPTP) has already been demonstrated to be a crucial factor in the pathophysiology of I/R injury [132,133] and it also interacts with the necroptosis pathway [134]. Lim et al. demonstrated the significant regulatory function of cyclophilin-D (Cyp-D) in the activation of MPTP by using a Cyp-D-deficient mouse model, and found that Nec-1 showed no significant protective effect in Cyp-D-/- mice, indicating that the mechanism of Nec-1 protection is MPTP-dependent [135].
Apart from I/R injury, the protective effect of Nec-1 has been observed in other disease models such as brain hemorrhage and traumatic injury, which are related to I/R injury or share I/R disease mechanisms. In animal models of intracerebral/subarachnoid hemorrhage (ICH/SAH), application of Nec-1 attenuates cell death, reduces hematoma volume and improves neurological outcomes [37,55,136,137]. Besides, both pre-treatment and post-treatment with Nec-1 exert protection against brain hemorrhage [54,55], indicating a broad therapeutic window. The underlying molecular mechanism is still not definite, but it involves oxidative stress and inflammation: for example, Nec-1 alleviates glutathione (GSH) depletion in hemin-induced cell death and inhibits NLRP3 inflammasome activation; the attenuated oxidation and inflammation ameliorate brain swelling and blood-brain barrier disruption, finally inhibiting the formation of brain edema [54,55,138]. In studies of traumatic brain injury (TBI) and spinal cord injury (SCI), necroptosis plays a significant role in neural cell death and participates in both primary and secondary injury [47,48]. The Nec-1 protection is associated with Akt/mTOR activation, mitochondrial dysfunction and ERS [46,49,139]. Several studies reported that Nec-1 also inhibited apoptosis/autophagy [95,101], yet contradictory results were reported by Liu et al., who found that Nec-1 did not alter apoptosis [48], suggesting that there is crosstalk among necroptosis, apoptosis and autophagy in the pathogenesis of traumatic injury.
Metabolism-related cardiovascular diseases
In cardiovascular diseases, the death of cardiomyocytes is proved to be irreversible, which implies a poor prognosis. Previously, necrosis and apoptosis were regarded as the main forms of cardiac cell death. With more and more studies investigating programmed cell death, necroptosis has been proved to be another important pathway of cell death in cardiovascular diseases [140]. For example, expression levels of RIP1, RIP3 and MLKL are upregulated in clinical samples of atherosclerosis [141], RIP3 mediates adverse remodeling after MI [142], and the RIP1-RIP3 pathway is activated in acute myocarditis [45]. Since acute cardiovascular events like acute MI are inextricably linked with I/R, which has been discussed above (Section 3.3), here we would like to focus on metabolism-related cardiovascular diseases, including diabetic cardiomyopathy and atherosclerosis.
Hyperglycemia is a risk factor for cardiovascular diseases, and people with diabetes face a higher risk of developing cardiovascular disease, including atrial fibrillation and coronary heart disease [143]. High glucose (HG) is closely associated with vascular endothelial dysfunction, causing damage to endothelial cells. Since necroptosis is so active in pathological processes of cardiovascular diseases like cardiac I/R, as demonstrated above in Section 3.3, it is natural to ask whether it participates in HG-induced endothelial dysfunction. Liang et al. first reported the contribution of necroptosis to HG-induced injury in cardiac cells and the role of Nec-1 in reducing cytotoxicity, oxidative stress and dissipation of the mitochondrial membrane potential (MMP) [39]. Similar results were obtained by later studies in both cardiac cells and endothelial cells [38,40,41]. However, current studies only provide an elusive description of this protective effect of Nec-1, and the underlying molecular mechanism remains to be investigated. Here, we would like to discuss the potential mechanisms related to advanced glycation end products (AGEs) and Ca2+/calmodulin-dependent protein kinase II (CaMKII). One of the major modifications under hyperglycemic conditions is the glycation of proteins or lipids, leading to the formation of AGEs. In the pathogenesis of hyperglycemia-induced cardiovascular dysfunction, the AGE pathway has been proved to be a crucial mediator of HG-induced detrimental effects [143]. When AGEs bind to the receptor for advanced glycation end products (RAGE), NADPH oxidase is activated, leading to excessive generation of ROS [144]. Cellular oxidative stress induces, or at least promotes, necroptosis signaling under HG conditions (as observed by Liang et al. [40]). According to the study of Zhang et al., RIP3 interacts with and allosterically activates glycogen phosphorylase (PYGL), a key metabolic enzyme of glycogenolysis [145]. PYGL catalyzes the breakdown of glycogen into glucose-1-phosphate, and glucose-1-phosphate is subsequently converted into glucose-6-phosphate [1,146]. Then glycolysis begins and each molecule of glucose is split into two molecules of pyruvate [1,146]. Pyruvate produces lactate, which induces ROS [1]. Besides, necroptosis-induced glycogenolysis also produces methylglyoxal, which covalently binds to proteins and forms AGEs [147]. From the above, we can see that there is an active interaction among the AGE pathway, the necroptosis pathway and ROS production. Therefore, we speculate that the application of Nec-1 inhibits necroptotic signaling, ameliorates ROS, and finally reduces AGEs in HG-induced cell injury and diabetic cardiomyopathy. Another target of Nec-1 might be CaMKII. CaMKII is a key substrate of RIP3 [148] and is abundant in cardiac tissues, involved in myocardial injuries by regulating the L-type calcium channel and the function of the sarcoplasmic reticulum [149,150]. Although currently there is no proof that Nec-1 directly targets CaMKII, Szobi et al. found that inhibition of CaMKII normalizes the upregulated RIP1 levels, and Reventun et al. reported reduced p-CaMKII expression after Nec-1 application [151], indicating a potential interaction (direct or indirect) between RIP1 and CaMKII [152].
Atherosclerosis is closely related to lipid metabolism, and people with hyperlipidemia are more likely to develop atherosclerosis [153]. Among the factors of dyslipidemia, the most noticeable one is excess low-density lipoprotein (LDL), especially oxidized LDL (ox-LDL) [154]. The accumulation of ox-LDL is a crucial stimulus of cell death, inducing not only apoptosis but also necrosis, which finally leads to the formation of intravascular necrotic cores [155]. Moreover, ox-LDL promotes the formation of foam cells from macrophages [156]. In 1997, Crisby et al. highlighted in their study that necrosis (which they referred to as oncosis) was a more common form of cell death than apoptosis in atherosclerotic plaques [157]. A further study by Grootaert et al. found that deletion of caspase-3 promoted, rather than inhibited, atherosclerotic plaque progression in Apoe-/- mice [158]. This suggests that anti-apoptosis is not a favorable strategy but that anti-necrosis might be. However, necrosis is probably not an ideal target because it is largely uncontrollable, and a single necrosis inhibitor cannot efficiently block all necrosis stimuli. Therefore, necroptosis could be a better target. In 2013, Lin et al. reported the proatherogenic role of RIP3-mediated necrosis [159]. They found that knocking out Rip3 in mice on Ldlr-/- and Apoe-/- backgrounds significantly reduced the size of atherosclerotic lesions. Later, in 2016, Karunakaran et al. demonstrated that the macrophage necroptosis pathway could be applied as both a diagnostic and a therapeutic tool to treat atherosclerosis [2]. Using a novel radiotracer developed from Nec-1, they found that radiolabeled Nec-1 localized specifically to atherosclerotic plaques in Apoe-/- mice, and Nec-1 uptake was correlated with lesion area [2]. These exciting findings provide new insight into a promising Nec-1 therapy for atherosclerosis, because this therapy appears to be nontoxic and shows efficacy in established atherosclerotic lesions. Another in vitro study further found that Nec-1 treatment could ameliorate eNOS/NO reduction, reduce vascular adhesion molecules (VCAM-1 and E-selectin) and inhibit the NF-κB pathway in ox-LDL-induced endothelial injury [160]. The role of necroptosis/Nec-1 in atherosclerosis is probably also associated with lipid peroxidation (necroptosis-mediated lipid peroxidation has been well discussed in the review by Vandenabeele et al. [1]): lipid peroxidation increases ox-LDL accumulation, and ox-LDL induces more necroptotic cell death. Moreover, the strong pro-inflammatory effect of necroptosis (compared with other forms of cell death, necroptosis is more pro-inflammatory) further promotes atherosclerotic development. Therefore, targeting necroptosis in atherosclerosis with Nec-1 is a promising strategy, addressing both oxidative stress and inflammatory responses.
Neurodegenerative diseases
Neurodegeneration refers to a process of losing neurological function as a result of aging or pathological changes, due to neural cell degeneration and death. Some neurodegenerative diseases are common among the elderly population, like Alzheimer's disease (AD) and Parkinson's disease (PD), and some are heritable, like Huntington's disease (HD). Whether age-related or genetics-related, neural cell death plays an important role in the pathogenesis of these neurodegenerative diseases. For example, neuronal cell death in the hippocampus and cortex is recognized as part of the etiology of AD. The role of apoptosis in neurodegenerative diseases like AD has been investigated by many studies during the last two decades [161,162], and recent studies demonstrate that necroptosis also participates. Several neurodegenerative disorders have been shown to correlate with the activation of necroptosis, including AD [163,164], PD [165], amyotrophic lateral sclerosis (ALS) [61], HD [60], glaucoma [166] and retinitis pigmentosa [167]. Since the role of necroptosis in AD, PD, ALS and HD has been summarized in several reviews [3,33], we would like to discuss retinal degeneration in the following paragraphs, focusing on the protection conferred by Nec-1.
Retinal degeneration is a pathological process of progressive retinal cell death, which may finally result in vision loss. There are several causes of retinal degeneration, including aging, heredity, diabetic retinopathy and retinal detachment [168]. Murakami et al. reported that necroptosis was the major mechanism of cell loss in a dsRNA-induced mouse model of retinal degeneration, contrary to previous studies attributing retinal pigment epithelial (RPE) cell death in response to pro-oxidants mainly to apoptosis [62]. Their finding was confirmed by Hanus et al. in both in vitro [169] and in vivo [63] studies, highlighting the mechanism of necroptosis in RPE degeneration.
In the experiments of both Murakami et al. and Hanus et al., subretinal/retro-orbital injection of Nec-1 protected RPE from degeneration [62,63], providing therapeutic prospects for RPE degeneration-related diseases like age-related macular degeneration (AMD). The mechanism of this interplay among retinal degeneration, necroptosis and Nec-1 has not yet been well investigated; current studies provide evidence only for the involvement of oxidative stress and neuroinflammation [170,171]. Here we speculate that lysosomal dysfunction may also be an important factor. In the development of retinal degenerative diseases like AMD, lysosomal dysfunction is closely related to RPE degeneration, especially the deregulation of cathepsins, a group of lysosomal hydrolases/proteases [172]. Under normal conditions, cathepsins regulate intracellular homeostasis by degrading proteins; however, cathepsin imbalance is correlated with AMD. Cathepsin D-deficient mice display hallmarks of AMD, which might be a result of impaired degradation of rod outer segments [173,174]. However, a contrasting result was reported by Ogawa et al., who found that mRNA and protein levels of cathepsin S were upregulated in the RPE/choroid of aged mice [175]. These findings underline the significance of a balanced lysosomal function. Necroptosis execution is associated with lysosomal dysfunction (as reviewed by Vandenabeele et al. [1]). Necroptosis-induced calpains and lipid peroxidation cause lysosomal membrane destabilization or, to be more specific, lysosomal membrane permeabilization (LMP) [1]. LMP leads to the spillage of cathepsins, and this might be the mechanism of necroptosis-induced injury in the RPE. Therefore, Nec-1 treatment could protect against retinal degeneration not only through anti-oxidation/anti-inflammation but also by restoring lysosomal function. The protection of Nec-1 in other retinal degenerative disease models, such as glaucoma, retinal detachment and retinal I/R injury, is summarized in Table 1.
Apart from retinal degeneration, the protective effect of Nec-1 has been observed in other neurological disease models as well (such as PD [176] and axonal degeneration [177]). It has also been reported that Nec-1 could ameliorate neural cell injury induced by aluminum, a common metal that can be transported to the brain and contribute to AD etiology [178]. Although the application of Nec-1 may face challenges, such as a relatively short half-life for effective action in the nervous system, its significance and potential therapeutic effects should not be ignored.
Role of Nec-1 analogues in disease models
Since the discovery of the role of necroptosis in driving inflammation and disease pathology, it has gradually been verified as a promising therapeutic target. The ability to inhibit or block necroptosis is the main characteristic of necrostatin family members, and a short list of Nec-1 and its analogues is given in Table 2. Nec-1 is regarded as a potent necroptosis inhibitor, but it needs further optimization because of its off-target effects and poor metabolic stability. Although not every necrostatin family member has been thoroughly studied, the key target has proved to be RIP1, whose kinase activity is inhibited when necrostatins are applied. As more studies investigate the pharmacological features of Nec-1, several papers have also focused on Nec-1 analogues, providing a basic overview of these chemicals, including their chemical structures and pharmacological/toxicological features [25,29]. However, the potential role of Nec-1 analogues in disease models has not yet been systematically reviewed or summarized. Nec-1 analogues share some properties with Nec-1, which means they probably have similar functions (such as anti-inflammatory or anti-oxidative effects) when applied in diseases; on the other hand, Nec-1 analogues have their own unique attributes in terms of molecular activity and stability compared with Nec-1, which could result in different effects and perhaps make them better alternatives to Nec-1 in some diseases or circumstances. Thus, we will give a quick glimpse of Nec-1 analogues, focusing on two of them: Nec-1i and Nec-1s.
Nec-1i
Necrostatin-1 inactive (Nec-1i) is the demethylated variant of Nec-1, with only a minor inhibitory effect on the phosphorylation of RIP1, exhibiting more than 100-fold lower inhibitory activity than Nec-1 [25]. Owing to these characteristics, Nec-1i has been used as an inactive control for Nec-1 in many disease models [152,179,180]. However, recent studies have found that Nec-1i is actually not as "inactive" as its name suggests. Takahashi et al. showed that both Nec-1 and Nec-1i inhibit IDO enzyme activity, which is involved in immune regulation. In a mouse necroptosis assay, the inhibitory effect of Nec-1i on RIP1 was only 10-fold lower than that of Nec-1, although Nec-1i has no effect on necroptosis in human cells. Moreover, in vivo studies suggest that Nec-1i can protect against SIRS induced by TNF, with an effect almost equipotent to the inhibitory effect of Nec-1 in SIRS [25]. These findings are only preliminary; whether Nec-1i has any clinical implications still requires further investigation.
Nec-1s
Necrostatin-1 stable (Nec-1s), or 7-Cl-O-Nec-1, is a stable Nec-1 analogue that inhibits RIP1 in a dose-dependent manner and is more potent than Nec-1 [24,181]. Although Nec-1 has already been used in various studies and has shown protective effects in numerous necroptosis-related diseases, it has the following shortcomings: (1) Nec-1 has an off-target effect on IDO; (2) Nec-1 has a short half-life of about 1 h. Compared with Nec-1 and Nec-1i, Nec-1s does not inhibit IDO and has proved to be a more specific RIP1 inhibitor. Besides, Nec-1s is able to protect against SIRS induced by TNF without a paradoxical sensitizing effect [25]. In view of these facts, Nec-1s, with its improved stability, can be used as a superior alternative to Nec-1. In the folic acid-induced AKI mouse model, both pretreatment and posttreatment with Nec-1s protected against the second wave of cell death in AKI [68]. Wang et al. [91] conducted a study on abdominal aortic aneurysm (AAA) and found that Nec-1 was not able to significantly lessen aortic expansion in an elastase-induced AAA mouse model, whereas the mean aortic diameter of the Nec-1s group was evidently smaller than that of the DMSO group. In the study of the progression of existing aneurysms, Nec-1s was shown to resolve the inflammatory response, marked by mitigated macrophage infiltration and reduced MMP9 [91]. More surprisingly, the histological results of Nec-1s-treated mice showed an almost normal-looking arterial structure, without progressive elastin degradation [91]. Reports on Nec-1s in disease models are summarized in Table 3. These studies demonstrate the potential of Nec-1s to be a stronger therapy with improved performance.
Nec-1 and COVID-19
As of this writing, coronavirus disease 2019 (COVID-19) has swept the globe, impacting everyone and every nation. Upon coronavirus infection, the immune system is quickly activated, and continuous infection causes a continuous inflammatory response. The inflammatory response begins with an initial recognition of the coronavirus and then mediates immune cell recruitment and tissue repair. However, the coronavirus may induce excessive and prolonged cytokine responses, namely a cytokine storm, in some severe cases [184]. The cytokine storm causes acute respiratory distress syndrome or multiple-organ dysfunction, which leads to physiological deterioration and death [185]. Therefore, the cytokine storm is probably the reason why some patients with mild symptoms suddenly become critically ill, especially young men with strong immune systems. The subsequent cytokine storm leads to SIRS, causing damage to the kidney [186], liver [187] and myocardium [188] and finally resulting in multiple organ dysfunction syndrome. According to the latest research and clinical data, the cytokine storm is closely related to patient prognosis. Serum cytokine and chemokine levels are significantly higher in ICU patients than in patients with mild to moderate clinical symptoms [189]. Thus, the cytokine storm is considered the mechanism explaining the morbidity and mortality of current COVID-19 cases. Therefore, seeking new strategies to control the cytokine storm in severe COVID-19 patients has become a matter of great urgency.
Although many researchers believe that the timely control of the cytokine storm in its early stage could be the key to improving prognosis, there is as yet no effective drug to lower the mortality of COVID-19 patients. As demonstrated above, necroptosis is a key factor in inflammatory diseases, and Nec-1 is an anti-inflammatory compound that targets RIP1. Necroptotic cell death causes pro-inflammatory responses, releasing cytokines including IL-6, TNF-α, and IL-1β [190], which have proved to be key cytokines in the disease development of COVID-19 [191,192]. Therefore, targeting necroptosis with Nec-1 could be a potential strategy against COVID-19. In the research of Zelic et al. [193], RIP1 kinase-inactive knock-in mice were resistant to the SIRS model and the cytokine storm, indicating the essential role of necroptosis and RIP1 in SIRS; the researchers therefore predicted that RIP1 inhibitors may be beneficial to patients with sepsis or systemic inflammation [193,194]. In fact, Nec-1 is indeed associated with systemic inflammation. In a rat model of acute pancreatitis, rats treated with Nec-1 displayed significantly milder symptoms and pathophysiological characteristics, and Nec-1 alleviated the extent of the systemic inflammatory response caused by acute pancreatitis; the underlying mechanism of this protective effect is probably related to the RIP1/NF-κB p65/AQP8 axis [195]. Although there is no research directly showing a relation between necroptosis and COVID-19, or an effect of Nec-1 in COVID-19, the strong anti-inflammatory ability of Nec-1 leads us to predict that Nec-1 and its analogues might be able to alleviate the cytokine storm, systemic inflammation, and thus COVID-19. Since Nec-1 can inhibit both RIP1 and IDO, its potential anti-inflammatory effect in COVID-19 may act through multiple pathways. By targeting RIP1, it can alleviate both necroptotic and apoptotic cell death, reduce the release of DAMPs and pro-inflammatory cytokines, inhibit the inflammatory NF-κB pathway and ameliorate oxidative stress; by targeting IDO, it may also reduce inflammatory responses. Besides, coronaviruses are a group of RNA viruses, and it has been shown that RNA viruses can promote inflammasome activation via the RIP1/RIP3/DRP1 pathway, which can be inhibited by Nec-1 and Nec-1s [196]. Thus, it is possible that Nec-1 or its analogues can alleviate the inflammation and cytokine release induced by COVID-19. Furthermore, Nec-1 has protective effects on the kidney, liver and cardiovascular system (Table 1), which means that Nec-1 or its derivatives might also help to prevent and protect against multi-organ complications in COVID-19 patients. Some studies suggest that necroptosis is an alternative form of cell death to apoptosis during viral infection [197]. Apoptosis is important in host defense against viral infection: damaged cells undergo apoptotic cell death upon infection, which inhibits virus replication [198]. It has been observed that some viruses can prevent apoptosis by inhibiting caspase-8, so that necroptosis is initiated as an alternative form of cell death [199] (some viruses can even inhibit necroptosis [200,201]). This raises the question: will Nec-1 inhibition of necroptosis promote virus infection? Necroptosis does play a role in cell demise after viral infection, but whether it favors host antiviral responses or aggravates tissue inflammation and damage has not been validated, or is at least controversial. It is possible that necroptosis inhibits viral replication by inducing cell death during the early stage of virus infection, but it may also promote virus spread through cell rupture in the later stages. What is more, researchers found that the necroptotic cell death activated by influenza A virus infection does not depend on RIP1 [202] (yet necroptosis is usually RIP1-dependent in inflammatory responses). Therefore, using Nec-1 to control the cytokine storm, which usually occurs at an advanced stage, does not necessarily influence host antiviral immunity. Besides, it has been observed in patients that T lymphocytes become functionally exhausted with disease progression [203]. Thus, Nec-1 may also ameliorate T cell exhaustion in COVID-19 patients by regulating host defense.

Table 3. Role of Nec-1s in diseases.
Acute kidney injury (AKI): Nec-1s ameliorates cell death and kidney dysfunction in AKI [68].
Rheumatoid arthritis: Nec-1s alleviates rheumatoid arthritis in the collagen-induced arthritis mouse model, with a lower arthritis score and arthritis incidence [122].
Conclusion
Although necroptosis was defined as a new form of cell death just fifteen years ago, much progress has been achieved. From its fundamental mechanism to its role in various diseases, the picture of necroptosis is becoming clear and detailed. The clinical significance of targeting necroptosis is supported by a growing body of research; thus, necroptosis inhibitors have acquired great significance for both scientific and clinical research. Since the discovery of necroptosis, the specific necroptosis inhibitor Nec-1 has been applied in many disease models, and it is in fact the most widely used of all necroptosis inhibitors. Furthermore, many studies have demonstrated the therapeutic effects of Nec-1 in various disease models; we therefore propose that Nec-1 and its derivatives could have great clinical potential. Some other RIP1 inhibitors are currently undergoing clinical trials [204,205]. Although Nec-1 has drawbacks such as metabolic instability and off-target effects, its protection in disease models and its potential for optimization cannot be ignored. In fact, some classic drugs such as aspirin are also characterized by a short half-life [206], and the off-target effect on IDO may endow Nec-1 with a more potent anti-inflammatory ability. In this review, we summarized the effects Nec-1 exhibits in disease models and the research progress of recent studies. Apart from the RIP1-dependent mechanism, we discussed other possible RIP1-independent factors (such as RIA and IDO) in Nec-1 protection. After probing into inflammation and I/R injury, we reviewed the role of Nec-1 in disease models of diabetic cardiomyopathy, atherosclerosis and retinal degeneration. Finally, we predicted that Nec-1 might help to ameliorate the cytokine storm in COVID-19. Although many researchers have turned the spotlight on necroptosis inhibition, some questions remain open. For example, the effect of Nec-1 on apoptosis is perplexing: some studies observed an inhibitory effect on apoptosis, while others reported no influence. Here we attribute this divergence to the different underlying pathways of RDA and RIA. Nec-1 inhibits apoptosis by targeting RIP1 when the cell undergoes the RDA signaling pathway, yet it loses this anti-apoptotic effect when the pathway is RIA. Considering the effects on necroptosis, apoptosis and inflammatory responses, another open question arises: does Nec-1 affect one or multiple RIP1-dependent pathways? These findings and remaining questions suggest that the molecular network of Nec-1 administration in diseases is quite complicated and remains to be explored in greater depth. There is still a long road of scientific research before the clinical application of Nec-1 or its derivatives, which promise to be an attractive strategy.
Funding
This work was funded by the National Natural Science Foundation of China (81803269) and the Science and Technology Commission of Shanghai Municipality (18YF1412100, 2019Y0150).
Declaration of Competing Interest
The authors declare no conflict of interest.
Appendix A. Supplementary data
Supplementary material related to this article can be found, in the online version, at doi:https://doi.org/10.1016/j.phrs.2020.105297.
"year": 2020,
"sha1": "fb354f93ad1381e717243f3cfbb0831432e37f37",
"oa_license": null,
"oa_url": "https://doi.org/10.1016/j.phrs.2020.105297",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "1548db38c4e1ef5ef2b58c1c3d91b8668b8a8e9c",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Diabetic kidney disease (DKD) is one of the most relevant complications of type 2 diabetes and dramatically increases the cardiovascular risk in these patients. Currently, DKD is severely underdiagnosed, or its diagnosis is usually made at advanced stages of the disease. During the last decade, new drugs have demonstrated a beneficial effect in terms of cardiovascular and renal protection in type 2 diabetes, supporting the crucial role of early DKD diagnosis in permitting the use of newly available therapeutic strategies. Moreover, the cardiovascular and renal outcome trials developed to study these new drugs are based on diverse simple and composite cardiovascular and renal endpoints, which makes their interpretation and mutual comparison difficult. In this article, DKD diagnosis is reviewed, focusing on albuminuria and the recommendations for glomerular filtration rate measurement. Furthermore, the cardiovascular and renal endpoints used in classical and recent cardiovascular outcome trials are assessed in a pragmatic way.
Introduction
In patients with type 2 diabetes, the prevalence of chronic kidney disease (CKD) is around 30-40%, mainly secondary to diabetic kidney disease (DKD) [1]. As in the general population, renal impairment increases the risk of cardiovascular (CV) disease in patients with type 2 diabetes [2]. This increased CV risk needs to be reduced in type 2 diabetic patients with CKD, because their baseline cardiovascular risk is per se higher than that of the general population [3]. Even patients with stage 1 DKD have a cardiovascular risk comparable to that of patients with stage 2, 3, or 4 CKD caused by diseases other than diabetes [4]. For this reason, it is essential to slow down DKD progression to reduce mortality and morbidity in type 2 diabetes patients.
For decades, the only treatment that has demonstrated even a partial ability to delay DKD progression has been renin-angiotensin system blockade. Nevertheless, in the last few years, new therapeutic options, such as sodium-glucose co-transporter 2 inhibitors (SGLT2i) and glucagon-like peptide-1 receptor agonists (GLP1a), have demonstrated a cardio-renal protective effect in type 2 diabetic patients [5][6][7]. The recommendation for the use of these new drugs has been implemented in several diabetes guidelines written by multidisciplinary teams composed of different specialists such as endocrinologists, nephrologists, internal medicine doctors, and cardiologists. These recommendations are based on cardiovascular outcome trials (CVOTs) that used different simple and composite endpoints, some cardiovascular and others renal, which are difficult to interpret and to translate into the real-world clinical practice of general practitioners. In addition, the definition and diagnosis of DKD have not been clear for years, nor has the necessity of an annual renal evaluation of patients with diabetes. This could be the reason why patients are often referred to a nephrologist at advanced stages of DKD.
The aim of this article is to analyze the available tools for DKD diagnosis, conduct a thorough review of the main cardiovascular and renal endpoints in CVOTs conducted in diabetic populations, and explore their utility in daily clinical practice. This review has mainly been based on a PubMed search and should be considered narrative in nature.
Diagnosis of DKD
The diagnosis of diabetic kidney disease (DKD) is based on clinical findings. It is defined by a decrease in glomerular filtration rate (GFR), the presence of albuminuria, or the existence of both dysfunctions in a patient with diabetes. A persistent reduction of estimated GFR below 60 mL/min/1.73 m2 and/or the existence of albuminuria (albumin-to-creatinine urine ratio ≥30 mg/g) in two measurements taken at least 3 months apart is sufficient to make a diagnosis of DKD in a patient with diabetes [8]. However, the term has a very low specificity, and a wide variety of histologic lesions, which are not always the consequence of diabetes, are usually included in this definition [9]. For instance, DKD could be misclassified in patients with obesity and hypertension, entities that commonly coexist. Therefore, the lack of precision of the term DKD raises another question: how can nephrologists be certain that diabetic lesions are the cause of CKD in their patients?
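To make the screening rule above concrete, the following is a minimal sketch of how the two-measurement criterion could be encoded; the function name, the data layout, and the 90-day approximation of the 3-month window are illustrative assumptions, not part of any guideline text.

```python
from datetime import date

def meets_dkd_criteria(visits: list[tuple[date, float, float]]) -> bool:
    """Each visit is (date, eGFR in mL/min/1.73 m^2, UACR in mg/g).

    Returns True when an abnormal result (eGFR < 60 and/or UACR >= 30)
    is found on two visits at least ~3 months (90 days) apart."""
    flagged = [d for d, egfr, uacr in sorted(visits)
               if egfr < 60.0 or uacr >= 30.0]
    return any((later - earlier).days >= 90
               for i, earlier in enumerate(flagged)
               for later in flagged[i + 1:])
```

A stricter reading would require the same abnormality (low eGFR or high UACR) to persist across both measurements; this sketch deliberately flags either.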
A kidney biopsy is the only way to confirm the diagnosis of a kidney disease in a patient with diabetes and CKD, as well as the only way to establish a diagnosis of non-diabetic kidney disease (NDKD). It should be performed especially when NDKD is highly suspected. However, it is not always possible to perform a kidney biopsy in patients with diabetes, and in these cases the patient's clinical history and clinical findings must be relied upon to guide the diagnosis of DKD. The sequence of events is a fundamental clue in this task (Table 1). The presence of CKD before the recognition of diabetes is normally associated with NDKD [9]. The time since diabetes diagnosis is also relevant, especially in type 1 diabetic patients, in whom it would be rare to identify albuminuria or diabetic kidney lesions with less than 5 years of disease duration [10][11][12]. On the contrary, in patients with a longer duration of the disease (known for more than 10 years), the most frequent lesions are those of diabetic nephropathy (DN) [13,14]. For both type 1 and type 2 DM patients, clinical findings such as the presence of progressive albuminuria before the decline of GFR or the diagnosis of diabetic retinopathy also increase the probability of DN [15]. Conversely, the presence of hematuria has been related to NDKD [13,14]. Diabetic nephropathy, another essential term referring to diabetes complications, has frequently been used as a synonym of DKD [9,16]. Herein, the authors would like to point out that these concepts are not equivalent, and the use of the term DN should be restricted to patients with kidney biopsy-proven diabetic lesions. Therefore, DN is a more specific definition and confirms that diabetes is the actual cause of the pathological changes observed in the kidney. Diabetic kidney lesions create a pattern not usually seen in other renal diseases and sufficiently distinct to allow a diagnosis of DN [16]. The glomerulus is the most commonly affected structure in DN. Nevertheless, predominant tubulointerstitial damage with mild glomerular lesions is sometimes seen in diabetic patients and is related to the renal prognosis. In type 2 diabetes, where pathological changes result from multiple comorbidities, lesions are more heterogeneous, and tubulointerstitial damage may predominate over glomerular injury [16]. In 2010, Tervaert et al. published a consensus classification of DN to establish the severity of these pathological changes [17]. Consequently, a kidney biopsy, in addition to increasing diagnostic accuracy, provides information about the severity and reversibility of kidney lesions. Importantly, the presence of renal impairment in diabetic patients, whether related to biopsy-proven DN or to clinically suspected DKD, increases their cardiovascular risk. For this reason, new strategies to promote the early diagnosis and treatment of renal injury in patients with diabetes are an urgent priority in current nephrology clinical research.
Progression of DKD
In type 1 diabetic patients, it is possible to establish a precise date for the onset of insulin dependence, as well as for the diagnosis of the disease. Therefore, the clinical course of DKD was traditionally described in type 1 diabetic patients [18,19]. In these patients, a first silent phase of glomerular hyperfiltration is followed by mild albuminuria (urinary albumin of 30 to 300 mg/day). After an average of 10-15 years from diagnosis, albuminuria progresses to overt proteinuria, and the reduction of GFR begins (Figure 1) [19,20]. However, as knowledge of DKD has improved, it has been observed that not all diabetic patients present the classic phenotype. Many patients with diabetes, especially the heterogeneous group of type 2 diabetic patients, have a decline in GFR without albuminuria [15].
Figure 1. (A) Classical phenotype of DKD with an initial hyperfiltration phase and later development of progressive albuminuria. As the disease advanced to overt nephropathy, GFR decline was observed. (B) Non-proteinuric DKD in a patient who had hypertension before a diabetes diagnosis. Note that mild albuminuria appeared only when the patient had advanced CKD. (C) Glomerular hyperfiltration in an obese patient who later developed diabetes. As hyperfiltration progressed, the patient developed massive albuminuria followed by a rapid decline in GFR. UACR: urinary albumin-to-creatinine ratio. GFR: glomerular filtration rate.

Different factors may influence the speed at which GFR declines in DKD. Comorbidities that develop both before and after diabetes onset, such as obesity, hypertension, or dyslipidemia, could contribute to an accelerated reduction of GFR; indeed, hypertension, obesity, and dyslipidemia are the main risk factors associated with GFR decline in DKD [15,21]. Furthermore, acute kidney injury episodes or the development of DKD on top of a previously known CKD accelerate the evolution to end-stage kidney disease (ESKD) [9]. Besides, it is worth mentioning that a GFR decline in diabetes can occur even in the absence of albuminuria [15]. As previously mentioned, a considerable number of patients with diabetes have predominant interstitial lesions with little or no glomerular damage. Thus, renal function can be impaired without glomerular injury and without the consequent development of albuminuria [10].
Albuminuria in DKD
Albuminuria has classically been considered the first clinical indicator of DKD, a biomarker of DKD progression, and a cause of GFR impairment [22] (Table 2). As stated above, the presence of albuminuria grade A2 in a diabetic patient, confirmed by two measurements, is enough for a DKD diagnosis; however, albuminuria is not merely a static clinical marker. Untreated albuminuria gradually worsens, turning into clinically severe albuminuria grade A3 (albumin-to-creatinine urine ratio >300 mg/g) at 10-15 years after diabetes diagnosis. The prevalence of grade A3 albuminuria in type 2 diabetes ranges from 5% to 48% depending on the study, and in type 1 diabetes from 8% to 22%. The prevalence of grade A2 albuminuria in type 1 and type 2 diabetic patients is 13% and 20%, respectively [23]. In some cases, albuminuria may regress, either spontaneously or in relation to treatment, resulting in a lower renal risk in these patients compared with patients who present progression of albuminuria. On the other hand, the presence of impaired GFR in the absence of albuminuria in diabetic patients, mainly in elderly populations, confers a lower risk of progression to ESKD [24]. Albuminuria is considered an independent risk factor for cardiovascular disease, and a higher rate of urinary albumin excretion is associated with an increased incidence of cardiovascular morbidity and mortality, as shown in Figure 2 [26,27]. Currently, annual screening for abnormal levels of albuminuria together with renal function measurement is recommended in patients with diabetes by the National Kidney Foundation Kidney Disease Outcomes Quality Initiative (KDOQI) working group practice guideline. It is also recommended to initiate renoprotective treatment in the early stages of DKD. Moreover, for the evaluation of GFR in diabetic patients, the recommendation is to use a creatinine-based formula such as the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equation [28]. The classification of DKD based on albuminuria and GFR gives prognostic information and helps clinicians make adequate therapeutic decisions. In clinical practice, adherence to guidelines regarding albuminuria screening and treatment recommendations is not very high, as demonstrated by the GIANTT study [29]. One of the reasons for this lack of adherence to screening could be that methods for albuminuria assessment are not standardized. The measurement method chosen for a given patient should be the one repeated over time, so as to detect the progression of DKD as early as possible [30].
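The A1/A2/A3 grading referenced above maps directly onto UACR cut-offs; the short helper below encodes that mapping (the function name and labels are illustrative).

```python
def albuminuria_grade(uacr_mg_g: float) -> str:
    """Map a urinary albumin-to-creatinine ratio (mg/g) to its KDIGO grade."""
    if uacr_mg_g < 30:
        return "A1"   # normal to mildly increased
    if uacr_mg_g <= 300:
        return "A2"   # moderately increased (formerly "microalbuminuria")
    return "A3"       # severely increased (overt proteinuria)
```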
The presence of albuminuria, however, is a rather late indicator of DKD. By the time albuminuria is detected, kidney injury is already established. In the near future, new circulating and urinary biomarkers, identified by genomics, transcriptomics, metabolomics, and proteomics, will be needed to allow an earlier diagnosis of renal risk in diabetes and to improve the renal and global prognoses of these patients.
Cardiovascular Endpoints in Diabetic Kidney Disease
As mentioned above, patients with type 2 diabetes have a higher prevalence of cardiovascular morbidity and mortality compared with the general population. The presence of kidney involvement in patients with cardiovascular disease, especially in patients with diabetes, confers an unfavorable prognosis and an increased cardiovascular risk. Kidney dysfunction in patients with diabetes is a marker of vascular lesions, and its detection allows the early identification of individuals at high risk for cardiovascular events. This early detection is necessary to improve their prognosis [25].
During the early stages of diabetes, there is an increase in plasma renin activity that plays a major role in the development of cardiovascular disease. Classically, angiotensin-converting enzyme inhibitors (ACEi) and angiotensin II receptor blockers (ARB) have demonstrated effectiveness in reducing renal progression and mortality in patients with diabetes and renal disease [31]. For decades, metformin and sulfonylureas have been the first-line drugs for managing type 2 diabetes, and their use in patients with an eGFR between 45 and 60 mL/min has shown a reduction in all-cause mortality [32]. In a large cohort study (n = 124,720), Christiansen et al. [33] demonstrated that patients who started treatment with metformin had a lower risk of a severe decrease in GFR. However, the use of metformin has been associated with lactic acidosis in the context of acute kidney injury [34], although a good renal prognosis has also been shown [35]. Furthermore, in patients under treatment with sulfonylureas or metformin, the addition of pioglitazone or acarbose did not demonstrate changes in the evolution of renal function or ACR [36]. It is of note that the use of sulfonylureas in patients with reduced GFR increases the risk of hypoglycemia [37].
In recent years, several studies have placed new antidiabetic drug families, such as SGLT2i and GLP1a, above classical treatments, making them the new first-line therapies for the prevention of cardiovascular events in this population [38]. These new classes of drugs have been tested in multiple CVOTs that have shown positive results in terms of cardiovascular risk. However, there is wide variability in the specific cardiovascular outcomes assessed in each trial. This disparity in endpoints makes comparison between trials difficult. The scientific community needs to establish the relevant cardiovascular variables for the follow-up of patients with type 2 diabetes in CVOTs and to redefine which variables have repercussions in a real-world scenario [38,39].
Classical studies of ACEi and ARB analyzed primary cardiovascular outcomes such as mortality and hospitalization for congestive heart failure, or a combined outcome of both, as in the SOLVD trial [40]. A few years later, in the HOPE trial [31], the primary outcome was already a composite cardiovascular endpoint of myocardial infarction, stroke, or death from cardiovascular causes, and each of these outcomes was also analyzed separately.
MACE (Major Adverse Cardiovascular Events) has been identified as the primary outcome in the vast majority of CVOTs involving patients with diabetes in the last decade [41]. It is a combined clinical endpoint used for cardiovascular outcome evaluation in CVOTs, and it is comparable to the composite endpoint of all-cause mortality. The so-called classical 3-point MACE (3pMACE) is defined as a composite of death from a cardiovascular cause, nonfatal myocardial infarction, and nonfatal stroke. More recent studies have assessed a 4-point MACE (4pMACE) that adds hospitalization for unstable angina, and/or a 5-point MACE that further adds hospitalization for heart failure.
Just as MACE and its variations provide a good strategy for the clinical comparison of CVOTs, the inclusion of renal outcomes in CVOTs of antidiabetic drugs is fundamental for the rational evaluation of patients with type 2 diabetes.
Composite Renal Outcomes
In most clinical trials evaluating complications of type 2 diabetes, composite endpoints have been used, as previously mentioned. The most frequently cited advantages of their use are a reduced sample size requirement and the ability to assess the net effect of an intervention while avoiding bias in the presence of competing risks. The vast majority of clinical trials in type 2 diabetes have used cardiovascular criteria as the primary composite outcome and renal endpoints as secondary pre-specified objectives [5,58]. Secondary endpoints are additional endpoints for which the trial may not be powered. The US Food and Drug Administration (FDA) has indicated that secondary endpoint measures, by themselves, are not sufficient to fully characterize a treatment benefit; however, these measures may provide additional characterization of the treatment effect. Moreover, in several studies, the renal effects were evaluated in post-hoc analyses and were not predefined in the protocol [56,59].
Recently, three seminal studies, CREDENCE [54], DAPA-CKD [60], and FIDELIO-DKD [57], have been published in the field of DKD, with renal outcomes as primary endpoints. These trials differ, but all analyzed the onset and worsening of nephropathy and death due to renal causes. Heterogeneity in the endpoints has become a major problem when comparing trials. For example, to evaluate the impairment of renal function, different parameters have been proposed, including a decrease in GFR greater than 30%, 40%, or 50%, or the doubling of serum creatinine [54,[58][59][60]. The same problem has arisen with other essential variables, such as albuminuria and the definitions of ESKD and death from renal causes. Diverse effects, such as hemodynamic ones, can temporarily alter creatinine and albuminuria values, so it remains mandatory to repeat and verify these parameters in the clinical evaluation of a diabetic patient.
For all the above-mentioned reasons, it is mandatory to define uniform criteria applicable to the design of the clinical trials to be conducted in the near future. In this sense, a unifying definition of renal outcomes has been proposed, combining three to five major adverse renal events (MARE), and it has recently been used in clinical studies in patients with diabetes. MARE has been defined as:
(1) incident kidney disease, determined as new-onset kidney injury measurable by a sustained eGFR < 60 mL/min/1.73 m2 (on three consecutive visits) and/or new-onset albuminuria (UACR > 30 mg/g on at least two of three measurements on three consecutive days);
(2) worsening of kidney disease, determined as a sustained >40% reduction in GFR, or a slope of GFR based on at least seven creatinine measurements resulting in a significant GFR decline over a time period to be defined (most likely two years), and/or a significant increase in the slope of UACR compared with baseline, measured in at least two of three urine samples on three consecutive days and confirmed by a second three-day urine measurement at least one month after the first result;
(3) ESKD, determined by the initiation of renal replacement therapy (RRT) continued for at least three months (or refusal of the patient or inability to start RRT for other reasons);
(4) death due to renal causes, determined as death directly attributable to kidney disease (hyperkalemia and death from arrhythmia, calciphylaxis aggravating CV disease and subsequent death, or decompensated heart failure not explained by acute myocardial ischemia and death from uremia);
(5) death of non-renal cause, defined as death of any origin excluding kidney disease.
Additionally, patient-reported outcomes should be reported in parallel to MARE as a standard set of endpoints in studies on kidney disease in patients with diabetes.
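As a rough illustration of how the composite could be screened for in trial data, the sketch below checks simplified versions of components (1)-(4); the confirmatory repeats, visit windows, and slope tests of the full definition are deliberately omitted, and all names are illustrative.

```python
def mare_components(egfr_series, uacr_series, rrt_days=0, renal_death=False):
    """Screen chronologically ordered eGFR/UACR measurements for
    simplified MARE components; returns the list of triggered events."""
    events = []
    # (1) incident kidney disease: eGFR < 60 on three consecutive visits,
    #     or UACR > 30 mg/g on at least two of the last three measurements
    if any(all(v < 60 for v in egfr_series[i:i + 3])
           for i in range(len(egfr_series) - 2)):
        events.append("incident kidney disease (eGFR)")
    if sum(u > 30 for u in uacr_series[-3:]) >= 2:
        events.append("incident kidney disease (UACR)")
    # (2) worsening: >40% reduction in eGFR relative to baseline
    if egfr_series and min(egfr_series) < 0.6 * egfr_series[0]:
        events.append("worsening kidney disease")
    # (3) ESKD: renal replacement therapy continued for >= 3 months
    if rrt_days >= 90:
        events.append("ESKD")
    # (4) death directly attributable to kidney disease
    if renal_death:
        events.append("renal death")
    return events
```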
GFR Decline in DKD: Ways of Measurement and Threshold
The development of CKD in diabetic patients is one of the most important complications in this population. Indeed, DKD is the leading known cause of CKD in developed countries [61]. Renal damage in diabetic patients involves not only the glomerular compartment but also the tubulo-interstitial and vascular compartments. In daily clinical practice, a decrease in renal function and the presence of albuminuria are the markers of renal damage used for screening [8,62]. The classic method of measuring kidney function is the determination of the plasma creatinine level. However, it is not the best method, since it depends on multiple variables, such as muscle mass, age, sex, and race. For this reason, even in the presence of a normal creatinine level, a reduction in nephron function may already exist, indicated by a decrease in GFR [39]. There are two ways to obtain the GFR: direct measurement and indirect calculation. Regarding direct measurement, it is possible to use a technique based on radioisotopes or radiopharmaceuticals, which is not useful in routine clinical practice. Another method for the direct determination of GFR is calculating creatinine clearance in 24-h urine. However, since urinary creatinine excretion can be altered by many factors, an overestimation of the GFR is possible, and this method does not offer many advantages over the indirect calculation. Therefore, the indirect calculation has been widely recommended as a routine screening method [28]. Regarding the indirect calculation of GFR, different formulas derived from serum creatinine levels have been applied, such as the Cockcroft-Gault, MDRD-4, and CKD-EPI equations. Currently, CKD-EPI is the method widely recommended in clinical practice guidelines [25,63]. As is known, CKD in diabetic patients corresponds to a decrease in GFR < 60 mL/min and/or the presence of microalbuminuria for more than three months [25]. However, in the evolution of DKD there is an initial phase that is hard to diagnose, in which GFR increases without albuminuria, secondary to glomerular hyperfiltration [64]. Without treatment, the natural history of DKD leads to the loss of renal function, with a decrease of between 2 and 20 mL/min of eGFR per year. However, with adequate glycemic control, blood pressure treatment, reduction of cholesterol levels, hygienic-dietary measures, and lifestyle changes, the loss of renal function may be substantially delayed, to a decrease in GFR of between 2 and 5 mL/min per year [30]. In the evaluation of GFR as a renal endpoint in clinical trials, the doubling of the serum creatinine level has classically been used. This corresponds to a reduction in GFR of approximately 57% [65], indicating that it is a late marker; although it is strongly related to CKD progression, large cohorts and long follow-up periods are needed to reach this endpoint. For this reason, lower percentages of GFR reduction, such as 40% or 30%, have recently been used in clinical trials analyzing renal outcomes, as previously described [65]. An interesting paper by Perkovic et al. was designed to assess the consistency of the effects of empagliflozin versus placebo on an alternative composite kidney endpoint, consisting of different thresholds of decline in eGFR, initiation of RRT, or renal death, in the EMPA-REG OUTCOME trial, in order to assist in the design of future kidney trials. This study demonstrated that empagliflozin consistently reduced the risk of a broad range of kidney composite outcomes using different eGFR thresholds (≥30%, ≥40%, ≥50%, and ≥67%) to define a significant loss of kidney function. In addition, this study suggests that the use of a composite endpoint consisting of a 40% decline in eGFR, ESKD, and renal death may be the most reliable and efficient choice to demonstrate clear kidney benefits with the smallest required sample size and the greatest study power, when sustained outcomes are used [66].
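To make the formula discussion concrete, below is a minimal sketch of the 2009 CKD-EPI creatinine equation; it also illustrates why a doubling of serum creatinine (above the sex-specific threshold) corresponds to roughly a 57% eGFR reduction, since the equation scales as creatinine to the power -1.209 and 2^-1.209 is about 0.43. Function and variable names are illustrative, and real implementations should follow the published equation and current guideline updates.

```python
def egfr_ckd_epi_2009(scr_mg_dl: float, age: float, female: bool,
                      black: bool = False) -> float:
    """2009 CKD-EPI creatinine equation; returns eGFR in mL/min/1.73 m^2."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    egfr = (141.0 * min(ratio, 1.0) ** alpha
            * max(ratio, 1.0) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

# Doubling creatinine rescales the dominant term by 2 ** -1.209 ~ 0.433,
# i.e. an eGFR reduction of ~57% -- the classical trial endpoint.
print(egfr_ckd_epi_2009(1.0, 60, female=False))  # ~81 mL/min/1.73 m^2
print(egfr_ckd_epi_2009(2.0, 60, female=False))  # ~35, i.e. ~43% of the above
```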
Pros and Cons of Renal Endpoints Standardization
DKD is closely associated with a significant increase in CV risk. Early detection of kidney disease is of vital importance for stratifying patients at risk for CV morbidity and mortality and for improving their prognosis through the initiation of the several treatments reflected in current clinical practice guidelines [25]. Despite the fact that DKD is frequent and associated with increased mortality and patient burden and cost, the definition of renal outcomes across studies is variable and heterogeneous, as previously mentioned in this review. Nephrology-oriented research is insufficient to answer the large number of paramount clinical questions [67]. Nephrology is a specialty in which few randomized clinical trials have been carried out [68,69], and this is particularly true for studies evaluating glucose-lowering drugs and the risk of DKD progression.
In 2008, the FDA published its guidelines to support the pharmaceutical industry in CVOTs for the development of new type 2 diabetes drugs, although renal outcomes were often evaluated only as secondary endpoints in these CVOTs. Several studies focused on the renal effects of specific antidiabetic drugs, for example on GFR or albuminuria trends, using heterogeneously defined endpoints, which makes the interpretation of the results and their applicability in clinical practice difficult [70].
During the last 15 years, several classes of antidiabetic drugs have been introduced for the treatment of patients with diabetes, including SGLT2i, dipeptidyl peptidase-4 inhibitors (DPP-4i), and GLP1a, and interestingly, secondary outcomes reported by CVOTs have indicated that these drugs may directly improve renal function beyond changes in glycemic control. Recently, the SGLT2i family has emerged as a major advance for the treatment of DKD. The results of the CREDENCE trial have undoubtedly demonstrated that canagliflozin prevents kidney failure and cardiovascular events [71]. In addition to these three groups of drugs, the FIDELIO-DKD study showed that the use of finerenone (a nonsteroidal, selective mineralocorticoid receptor antagonist) resulted in a lower risk of CKD progression and cardiovascular events compared with placebo. In this case, the composite primary renal endpoint was kidney failure, a sustained ≥40% decrease in GFR from baseline, or death from renal causes. Finerenone was superior to placebo on this composite primary renal outcome and also on the secondary outcome of ACR reduction [57]. These are the most important studies that assessed a composite renal endpoint as their primary endpoint and showed positive effects on these hard renal outcomes. In this regard, MARE, comparable to the term MACE, has been described to evaluate in a homogeneous way several events related to the development of new-onset DKD, ESKD, mortality, and quality of life [70]. With this new proposal, the homogenization of primary renal outcomes will probably help in the near future to answer several questions on the management of DKD and the risk of progression. However, this approach may raise some doubts in CVOTs with composite endpoints. Regardless, a uniformly agreed definition of MARE would make meta-analyses easier and would facilitate the comparison of different studies, allowing tailored treatments for patients with diabetes at risk for ESKD.
Conclusions
For physicians involved in the management of patients with diabetes, it is crucial to understand the importance of the diagnosis of DKD. This diagnosis is easy and cheap to achieve by measuring GFR and UACR, and it is the first step in preventing DKD progression. Albuminuria is a good biomarker but reflects the presence of established kidney damage. Thus, hyperfiltration before the appearance of albuminuria should alert the clinician to start therapeutic adjustments for renal and cardiovascular protection. This new therapeutic approach to type 2 diabetes is based on CVOTs in diabetic populations with several cardiovascular and renal endpoints. The differences between endpoints in CVOTs make the comparison of outcomes difficult. However, the establishment of MACE was a first step towards clarification; similarly, MARE describes renal endpoints for the first time and allows the scientific community to design new clinical trials focused on renal involvement in type 2 diabetes.
"year": 2021,
"sha1": "e2a71b1c285813997efc4500aad4a181d59f86a9",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2077-0383/10/11/2505/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e2a71b1c285813997efc4500aad4a181d59f86a9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
The highly contagious COVID-19 virus has generated lockdowns of an unprecedented scale around the world. Consequently, economies have been severely impaired: businesses have closed, firms have stopped production, and global trade is constrained. The pandemic is challenging previously existing economic systems, and in response, those systems have begun to change. This article selects the two most prominent economies, those of China and of the U.S., to illustrate the changing trends. Specifically, the author presents the existing economies of both countries: the socialist economy with Chinese characteristics and the neoliberal economy of the U.S. Following the introductory sections, the article further explores the changing trends, namely the shift between "big government" and "small government" in both countries. As the author observes, China is undergoing a gradual change from a big government to a small one, while the U.S. is moving in the opposite direction. Finally, the author uncovers the potential benefits for economic recovery after COVID-19, including the use of different methods by governments of different economies to stimulate financial recovery.
INTRODUCTION
COVID-19 was categorized as a "pandemic" in 2020 by the World Health Organization. This has created a challenge for both the medical system and the economic system. Great challenges almost always give birth to great changes, as illustrated by the Great Depression, the end of the Golden Age, and the 2008 Financial Crisis. More specifically, such great challenges reveal the need for modification of economic systems. The main goal of this article is therefore to show how the economies are changing. To be more precise, it shows how economic ideology is shifting after the damage caused by the COVID-19 pandemic. The author chose the two most prominent conflicting economies of the present day, China's and the U.S.'s. Since these two countries differ from each other in political ideology, they consequently differ in their economic systems and in the post-COVID changing trends of those systems. To be more specific, the tension between "big government" and "small government" is undergoing a dynamic change in both countries, and it is the primary focus of this article. By observing the changes in government participation in the economy, government expenditure, and economic policies, the author determines the different changing trends between "big government" and "small government". After illustrating these changing trends, the author elucidates how the changes may benefit post-COVID recovery.
THE EXISTING ECONOMIES
China's economic system, the Socialist Economy with China's Characteristics, originated in 1978. Precisely, China's economy is a market economy dominated by public ownership [1]. The development of this type of economy can be categorized into four stages. The first stage lasted from 1978 to 1984. During this period, the planned economy, which is typical of socialist countries, was still the dominating form, but it was accompanied by a market economy. The second stage, 1984-1988, was marked by the government announcing "a planned commodity economy". Later, from 1989 to 1992, the Chinese government advanced the socialist market economy. In the last stage, from 1992 to today, the government has endeavored to modify the economy in order to reach "perfection".
Before the COVID-19 crisis, the United States had a "Neoliberalism Economy". This form of economy derives from the 1970s. At that time, the main aspects of economic deterioration were a diminished growth rate, mounting unemployment, and inflation [2]. In response to this deterioration, the neoliberalist economy aimed at establishing new rules for the functioning of capitalism that would affect the center, the periphery, and the relationship between the two. Specifically, the change meant diminishing intervention by the state, a new economic goal of price stability, and draining the resources of the periphery to the center [2].
Definition of "Big Government" and "Small Government"
In order to elucidate the changing trend between "big government" and "small government" in the economy, the author will first define the "big" and the "small". Big government typically refers to a large scale of government involvement in the economy. Specifically, a big government will intervene in the economy, affecting competitiveness, private sectors, and market vitality through high expenditure, high levels of taxation, and strict policies. Conversely, a small government is less involved in the economy and therefore features lower government spending, lower taxes, and gentler policies.
Government Engagement in China's Economy
The original concept of the socialist market economy in China emphasizes the dominant force of public ownership, underlining the large proportion of government intervention in the economy. Following the COVID-19 pandemic, a reduction of the government's role in the economy is visible. Two phenomena illustrate this changing trend: competitive intensity and the private sector's dominance. To begin with, competitive intensity has increased since the start of the COVID-19 crisis. Because there were many government-owned companies, competition in the economy used to be less intense. However, the outbreak of COVID-19 allowed companies with greater flexibility to stand out regardless of their ownership [3]. Also, the technology sector continued to expand: heightened popularity during the lockdown period stimulated the development of technology companies [3]. Since social distancing requirements increased the need for online entertainment, such companies received a major increase in demand and developed quickly during the lockdown period. Another factor contributing to the increasing competitiveness in China is that COVID-19 potentially eliminated weak companies [3].
Summing up the three reasons mentioned above, COVID-19 catalyzed competitiveness in China because it helped to weed out weak companies and strengthen the already strong ones. Contrary to past situations, where the government supported the companies, the current market in China has received less government support, and the competitive companies that remain are the result of the market's self-regulation. In addition to the intensified competitiveness, the private sector also began to flourish following the COVID-19 pandemic. A comparison between the responses to SARS and to COVID-19 elucidates this distinction. In 2003, the government represented the majority force in epidemic prevention and recovery efforts, while in 2020, this position was taken by large private companies [3]. In 2003, state-owned enterprises accounted for 45% of profits. Now private enterprises account for almost 67% of China's economic growth, creating 90% of the new jobs [3]. This dominance of the private sector is likely to persist. On March 30th, 2020, the State Council published a series of policies whose purpose was to strengthen the role of the market, which would eventually boost the economy.
The Chinese government is also reducing the tax rate. Since COVID-19 has imposed a heavy burden on businesses, the National Development and Reform Commission in China has announced that it will lower the tax rate, especially for small and medium-sized companies [4]. Also, since 2019, the personal income tax has been reduced in order to guarantee a higher disposable income for citizens [5]. All these tax reductions indicate that the Chinese government is becoming "smaller" in the economy, offering people more freedom in managing their income.
Concisely, China's economy is moving towards a "small government", in which there is less government intervention and more free practice in the market. This trend is shown by rising competitiveness in the market due to fewer government constraints; by more private companies, as opposed to government-owned ones, stepping up in the market; and by reduced tax burdens.
Government Engagement in the U.S. Economy
Contrary to what is happening in China, the U.S. economy is gradually abandoning neoliberalist concepts and moving towards a "big government".
As mentioned before, the neoliberalist economy emphasizes little government participation in the economy. Without any direct announcement, the Biden administration is acting in opposition to the idea of low government intervention [6]. This is displayed by Biden's stimulus policies following the COVID-19 pandemic, which have the purpose of boosting the economy.
From 2019 to 2020, under the COVID-19 pandemic, U.S. government expenditure increased by 9.7%. For comparison, in coping with the 2008 financial crisis, such expenditure increased by 5.5%. When the military was enlarged under Reagan's presidency, it increased by 2.3%. Moreover, when Roosevelt issued the New Deal to overcome the Great Depression, government spending increased by 1.6% [6]. A mere increment in government spending cannot by itself indicate whether the U.S. is switching to another form of economic system, because such an increase in government expenditure can be a transient response to the pandemic. However, the trend is likely to continue [6]. Following Trump's 900-billion-dollar stimulus, Biden issued a 1.9-trillion-dollar stimulus plan, which has been approved by Congress. It is expected that such expenditure could reach 6 trillion dollars next year [6]. Additionally, the current proportion of government payments in GDP has reached 42.6%, which is even higher than during the WWII period. Therefore, the phenomena illustrated above indicate that, far from obeying the concept of the neoliberalist economy, the United States is currently moving away from neoliberalism, with large-scale government participation and intervention in the economy persisting.
Another phenomenon that illustrates the U.S.'s "big government" movement is the increase in taxation. For tax policy, Biden proposes to elevate the top rate to 39.6% plus a 3.8% surtax, which will ultimately reach a total of 43.4% [7]. Increasing the tax rate for the wealthiest families indicates that the U.S. government is currently taking a more active role in redistributing the nation's income. This will further contribute to more government expenditure that will stimulate the U.S. economy after the COVID-19 pandemic. Therefore, the changing trends in tax policy also show a "big government" approach in the United States.
To conclude, moving in the opposite direction from China, the U.S. government is currently becoming more active in the economy, as indicated by an unprecedented increase in government expenditure to stimulate the economy. Since this phenomenon seems to be a long-term trend, the U.S. can be regarded as changing towards a "big government".
POTENTIAL BENEFITS FOR POST-COVID RECOVERY
The changes discussed above are ongoing trends in China's and the U.S.'s economies, and their future effects are unknown. Next, the author will discuss how such changes may benefit an economy after the COVID-19 pandemic. The author concedes that these changes will bring both benefits and harms. This article focuses on how the changes are expected to improve the economy rather than performing a cost-benefit analysis. Therefore, the following paragraphs discuss only the potential benefits of the changing trends in China's and the U.S.'s economies.
Potential Benefits of Changing Trends in China
A "small government", towards which China is moving, will instill more incentives for people to participate the economy.When state-owned enterprises were dominating the market, the development of private sectors was largely constrained, resulting in the deprivation of economic vitality.Specifically, a small government, without any intense restrictions on businesses and competitions, will encourage more citizens to engage in economic activities.Since COVID-19 imposed limits in productions during the lockdown, such encouragement could help restore the productions level.Moreover, the reduction in taxes further encourages people to engage in businesses to earn more and elevate their living standard, while Advances in Economics, Business and Management Research, volume 212 contributing to economic prosperity.By involving people in the production of goods and services, a once stagnant economy will be boosted.The expansionary effect of a small government will help heating up the economy and will reduce time to return to a pre-COVID level.
Critics might argue that a "small government" means less government expenditure for stimulus or for redistribution of a country's wealth. However, a former study conducted by Tanzi and Schuknecht has shown that such a reduction need not have devastating effects on an economy: countries can achieve satisfying economic performance without the government absorbing 40 to 50% of GDP, and equal performance can be attained when public spending reaches 20% of GDP [8]. Therefore, becoming a "small government" can inject more vitality into China's economy by offering more incentives for businesses and citizens to participate, while not necessarily harming economic performance by reducing government intervention.
Potential Benefits of Changing Trends in U.S.
Stimulating the economy can still be an effect of the "big government" that is gradually emerging in the U.S. This may blur the difference between "big" and "small" governments, because regardless of the extent of the government's intervention, the expected effect is the same. However, it is reasonable to consider the initial size of the government in the economy before discussing the possible differences between "big" and "small". The former U.S. government's role can be regarded as insufficient for economic prosperity; in contrast, the former Chinese government might have been too big, which deprived the economy of vitality. Although both changes, towards big and towards small government, are seen as economic stimuli, the way they stimulate the economy differs. For the U.S., the lack of government intervention makes it difficult for the economy to recover from a recession, as market forces are insufficient for self-regulation. COVID-19 has caused citizens to become unemployed and businesses to operate at a loss; moreover, market power may be insufficient for restoration. By injecting more government expenditure into the economy, the U.S. government can potentially create more jobs, allowing more people to fuel financial prosperity by participating in the production of goods and services.
One further critique may arise: even if more government spending is seen as a stimulus, an elevated tax rate may implicitly discourage economic activity, since people's control over their income is reduced. However, taxation serves as a more important factor in the "second stage of development" [9]. In the second stage, the infrastructure has largely been completed, and the urgent issue is to maintain the development achieved in the first stage [9]. For a big government with higher taxation, such maintenance is feasible, since some current spending was borrowed from the past, engendering wealth to secure the development of the first stage [9]. Although they impair the purchasing power of citizens, higher taxes serve to maintain the products of first-stage development, such as infrastructure and firms, with the purpose of allowing these products to continue to contribute to economic prosperity.
CONCLUSION
This article has presented the economic systems of China and the U.S. over several decades. In China, the socialist economy with China's characteristics emphasized the dominance of public ownership and government intervention in the economy, while in the U.S., the neoliberalist economy accentuates little intervention and conformity to the "invisible hand" of the market. However, fueled by the COVID-19 pandemic, the two economic systems need modification in order to overcome the difficulties imposed by COVID-19. China, formerly an economy with "big government" participation, is now gradually moving towards a smaller government.
In contrast, under the Biden administration, the U.S. government is slowly becoming bigger in the economy. The change between big and small is indicated by the government's engagement in the market and the tax level in each economy: with more engagement and higher taxes, the government can be characterized as "big", while lower levels indicate a "small" government. Both of these changes serve as stimuli for economies severely impaired by the COVID-19 pandemic; however, the means of achieving such stimulation differ. For China, where there was once excessive government intervention, citizens need more freedom to participate in the economy. Switching to a smaller government serves as an incentive for financial activity by encouraging competition between firms and lowering taxes for small and medium-sized enterprises, which ultimately facilitates economic prosperity. For the U.S., the neoliberalist economy resulted in insufficient financial regulation by the government, which meant insufficient power for the market to revive; a bigger government can instill more economic activity that will contribute to a post-COVID recovery. Although the original forms and the changing trends of China's and the U.S.'s economies are contrary in their initial ideology, the common goal, from the author's perspective, is to search for a more suitable economic system after the COVID-19 pandemic, and perhaps for a different approach to ideological harmony in the economies. The conflict between big and small government has existed for a long time and has formed different ideological camps among economists. Both China and the U.S. are pioneers in discovering the balance between "big" and "small" government in the economy, which may contribute to resolving the disputes in this ideological debate. Such an ultimate balance in economic system and ideology may deserve further exploration in the future.
| 2022-04-11T15:04:20.710Z | 2022-01-01T00:00:00.000 | {
"year": 2022,
"sha1": "38255aa2284aa15e60bc2ea6900c27693ef42a95",
"oa_license": "CCBYNC",
"oa_url": "https://www.atlantis-press.com/article/125971360.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "c2f4fcd6545d37c6909ed96be375163e3e2ea74f",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": []
} |
250611669 | pes2o/s2orc | v3-fos-license | The Experiences of Operating Room Nurses During COVID-19 Pandemic: A Qualitative Study
Purpose The aim of this study was to determine the changes in the physical, mental, and social conditions of operating room nurses and their personal experiences during the COVID-19 pandemic. Design The study applied a qualitative research design that included the content analysis method. Methods Face-to-face interviews were conducted online with 26 operating room nurses. Analysis of the data was completed in six steps using the content analysis method. Findings Four main themes emerged from the interviews: physical effect of the COVID-19 pandemic on operating room nurses, psychological effect of the COVID-19 pandemic on operating room nurses, operating room nurses' perceptions of the training given to them during the COVID-19 pandemic, and effects of the COVID-19 pandemic on health care worker and patient safety and nursing care. Conclusions This study contributes new findings on the experiences of operating room nurses during the COVID-19 pandemic to the relevant literature. The results of the study indicated that the nurses were negatively affected both physically and psychologically during this period, and that this directly affected patient care.
The coronavirus first appeared in the city of Wuhan in the province of Hubei, China in late 2019 and rapidly spread throughout the world. 1 The disease was at first referred to as severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) before later being declared COVID-19 by the World Health Organization (WHO). 2 The first case was reported in Turkey on March 11, 2020, the same day the WHO declared COVID-19 to be a pandemic. 3,4 As the COVID-19 pandemic continued to spread rapidly around the world, health care professionals and researchers sought to determine the best policies and procedures for delivering proper treatment and preventing recurrent waves. The management of patient clinics, intensive care units (ICUs), and operating rooms (ORs) has become increasingly important in the fight against the pandemic, and all health care professionals, especially nurses and physicians working in these units, have contributed and continue to contribute a significant amount of time and effort in this fight. 5 In the early stages of the pandemic, bed capacities increased in parallel with the need for additional ICU beds for COVID-19, and more nurses were needed to care for patients in the ICU. Therefore, OR nurses were temporarily assigned to ICUs as elective surgeries were postponed. 5,6 Staff shortages in ORs during this period, due to the assignment of OR nurses to different units at the hospital, and the long-term, close working conditions of nurses with patients have made it challenging to manage the pandemic, a particularly concerning issue considering the role OR staff play in emergency and urgent surgeries and in the intubation of patients who need anesthesia. 6 In Turkey, OR nurses are responsible for all perioperative care except for administering anesthesia and other medications. Anesthesiologists administer anesthesia, and anesthesia technicians assist them during surgery. 7 Another issue that differs from the US and European countries is nursing education in Turkey. Until 2014, people who graduated from 4-year health vocational high schools could work in ORs by obtaining an OR nursing certificate. With legal regulation in 2014, this problem was solved: only nurses who graduated from university bachelor's programs were allowed to work in ORs. However, high school graduate nurses who had started working before 2014 continued working in ORs. 8 OR nurses, who have been at the forefront of the fight against the COVID-19 pandemic, are tasked with managing the pandemic by identifying COVID-19 cases, informing the public about the best preventive measures to stop the spread of the virus, and providing continuity of care and treatment of patients. 9 They also played a key role in reorganizing ORs during the COVID-19 pandemic so that they could be used for patients who needed intubation, and contributed to the management of the pandemic by fulfilling their responsibilities in other areas, such as ICUs, patient care services, and disease diagnosis and classification in triage areas, during the period when only urgent and mandatory surgeries were permitted. 5,10 As the closed environment of ORs that use aerosol-generating procedures for airway management increases the risk of transmission of infection among OR personnel, the WHO and other scientific authorities have recommended evidence-based preventive methods for infection control and optimization of OR management during the COVID-19 pandemic. 11
These methods can serve to prevent the spread of the COVID-19 virus through contact with contaminated ambient surfaces and through aerosolization. In addition to the standard preventive measures of wearing surgical masks and caps to prevent contamination, OR nurses have taken additional protective measures, such as wearing WHO-recommended N95 masks, face shields, and protective gowns. 12 However, with these measures, OR nurses reported that they have experienced difficulties with moving during long surgeries, excessive sweating, and pressure sores from the use of the protective equipment. 13 Tabah et al 14 also stated that half of the health workers in their study complained of sweating and pressure sores, even in cases where they used their personal protective equipment for only 4 hours. On the other hand, other studies have reported that health care workers did not have enough PPE to use. [15][16][17] Arnetz et al 18 stated that nurses with insufficient equipment suffered more mental problems, reporting that most of the nurses in the study suffered from depression, anxiety, and post-traumatic stress disorder. As these challenges experienced by OR nurses warranted further exploration, the aim of this study was to determine how OR nurses' physical, mental, and social conditions changed during the COVID-19 pandemic, and to examine their personal experiences throughout the period.
Design
The content analysis method was used to analyze the qualitative data collected from the 26 OR nurses participating in this study, and their socio-demographic information was also collected. Content analysis, a method used primarily for analyzing textual and visual data, follows an inductive path and primarily focuses on developing categories relevant to the research topic. 19 This analysis method was selected for its capacity to directly analyze the actual thoughts the OR nurses had about their experiences during the COVID-19 pandemic. The study followed the Consolidated Criteria for Reporting Qualitative Research (COREQ) guidelines. 20
Sample Selection
There is no established rule for determining the sample size in qualitative studies. 21 Since qualitative studies are based on interviews, larger and broader samples are not required, as the content from the interviews begins to repeat itself at some point. 22 In qualitative studies, data collection is stopped when the data begin to repeat and no new information emerges. 21 The literature on this subject suggests that when data are collected through the in-depth interview method, the sample may include approximately 30 people. 23 In this study, 26 OR nurses formed the study sample. The inclusion criteria for participation were as follows: employment as an OR nurse during the COVID-19 pandemic, voluntary agreement to participate in the study, consent to audio and video recording of the interviews for use in the study, agreement to participate in an online interview, and ability to use the Zoom application. There were no exclusion criteria.
Data Collection
The data were collected between June 3 and June 25, 2021, through interviews conducted using the free version of the web-based Zoom video conferencing application to avoid the risks associated with in-person face-to-face interviews during the COVID-19 pandemic. Each interview lasted about half an hour (min-max: 22-38 minutes; mean: 27 minutes). The snowball sampling method was applied in this study. The researchers used their own public Instagram profiles to share a research invitation letter that included the inclusion criteria. The first participant to voluntarily agree to participate in the study communicated with the researcher via direct message on Instagram. An e-mail address was then requested to maintain communication with the participant. An informed consent form was prepared on Microsoft Forms and distributed to the participant by e-mail. Next, the Zoom invitation link was sent to the e-mail address of the first participant to obtain their voluntary consent to the interview. Before the interview, the participant was verbally informed about the study and its aim, and verbal permission to make an audio and visual recording of the interview was obtained from the participant. Thus, both written and verbal consents were obtained. The interview was held on the day and at the time specified by the participant to ensure that they could express their thoughts comfortably and without interruption.
After the first interview was completed, the participant was asked to recommend another OR nurse for the second interview. The first participant sent the researchers' contact information to the OR nurse they recommended; the second participant then contacted the researchers. The process was repeated for the second participant from the consent stage. This cycle continued until no new information was found in the data.
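To make the recruitment-and-saturation logic above concrete, the following is a minimal, schematic Python sketch of a snowball-sampling loop that stops at data saturation, i.e., when an interview yields no new codes. The participant chain, the codes, and both helper functions are hypothetical illustrations, not the study's actual materials or tooling.

# Schematic snowball sampling with a data-saturation stopping rule.
# All data and functions below are illustrative stand-ins.

def interview(participant):
    """Stand-in for an interview: returns the set of codes found in its transcript."""
    transcripts = {
        "P1": {"fatigue", "ppe_discomfort"},
        "P2": {"fatigue", "fear_of_infection"},
        "P3": {"fatigue", "ppe_discomfort"},  # nothing new -> saturation reached
    }
    return transcripts.get(participant, set())

def recommend_next(participant):
    """Stand-in for each participant recommending the next OR nurse."""
    chain = {"P1": "P2", "P2": "P3", "P3": None}
    return chain.get(participant)

seen_codes = set()
participant = "P1"  # the first volunteer who answered the invitation letter
while participant is not None:
    new_codes = interview(participant) - seen_codes
    print(f"{participant}: {len(new_codes)} new code(s)")
    if seen_codes and not new_codes:
        break  # no new information in the data: stop collection
    seen_codes |= new_codes
    participant = recommend_next(participant)

The essential design choice is that the stopping criterion is driven by the content of the data (no new codes) rather than by a fixed sample size, matching the saturation principle described in the sample-selection section.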
All the interviews were conducted by researchers who had qualitative research experience.
Data Collection Tools and Properties
An interview guide consisting of two forms prepared in accordance with the relevant literature was used to collect the data. 24,25 The first form was a personal information form containing six questions to identify the OR nurses' socio-demographic characteristics, such as age, gender, and work experience, while the second form was a qualitative data form containing five semi-structured open-ended questions directed at identifying the experiences of the OR nurses. The interview form was sent to three different expert researchers via e-mail to obtain their expert opinions. According to their expert review, the form required no revisions (Table 1).
Ethical Considerations
Before the study, ethical approval to conduct it was granted by the Non-Invasive Research Ethics Committee (Ethics Committee approval dated 25.05.2021 and numbered 11) of Bezmialem Vakif University. Each participant gave their voluntary verbal and written consent to participate in the study online. Due to the risks associated with meeting face-to-face in person during the COVID-19 pandemic, an online consent form, a documentable method whereby the participants could read and confirm their consent, was used in place of the written consent form (https://forms.office.com/r/2YTXU6XmbS).
Following the interviews, participants were informed that the recordings obtained through the Zoom application would only be used for the intended purposes of the study, that no one except the researchers would listen to them, and that their names would be replaced with a number (eg, N1, N2) in the research report to secure their anonymity, and their consent was obtained.
The consents, transcripts, interview notes, and e-mails associated with the study will be stored for two years on a password-protected computer. At the end of two years, the researchers shall destroy these documents as required by Article 11 of the Regulation on the Erasure, Destruction, or Anonymization of Personal Data. 26
Data Analysis
The researchers transcribed the audio recordings verbatim immediately after the interviews. The qualitative content analysis method was employed to analyze the data. According to Yıldırım and Şimşek, 27 the content analysis method includes the following steps: 1. Coding of data: At this stage, the researcher examines the information obtained, tries to divide it into meaningful parts, and determines what each piece means conceptually. The data coding process usually requires the researcher to read the data set several times and repeatedly work on the resulting codes. 2. Finding themes: Based on the codes that emerged in the first stage, it is necessary to find themes that can explain the data at a general level and to collect the codes under certain categories. First, the codes are brought together and examined to find their common aspects. For thematic coding, it is necessary to determine the similarities and differences between the codes and the themes that can bring together the codes that are related to each other. 3. Organizing and defining the data according to codes and themes: As a result of the detailed coding in the first stage and the thematic coding in the second stage, the researcher creates a system by which the collected data can be organized. In the third stage, the researcher organizes the data obtained according to this system; in this way, it becomes possible to define and interpret the data according to certain findings. 4. Explanation of findings: In this last stage, the researcher establishes cause-effect relationships, draws conclusions from the findings, explains the importance of the results to give meaning to the data collected, and describes the relationships between the findings. Based on the stated steps, the two researchers (HMA, ZZ), each of whom had qualitative research knowledge and experience, carried out the content analysis and determined the themes independently of one another. The researchers discussed the themes that they thought best described the findings until they reached an agreement on the data. The ATLAS.ti data analysis software was used to analyze the data.
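As a minimal illustration of steps 1-3 above, the short Python sketch below groups hypothetical codes under themes and then organizes the data by theme, tracking which participants contributed to each. The codes, themes, and assignments are invented for illustration; the study's actual codebook was built in ATLAS.ti.

from collections import defaultdict

# Step 1 (coding of data): participant -> codes assigned to their transcript.
coded_data = {
    "N19": ["headache_from_shield", "ppe_fatigue"],
    "N22": ["pressure_sores", "anxiety"],
    "N7": ["social_exclusion"],
}

# Step 2 (finding themes): related codes are collected under a common theme.
code_to_theme = {
    "headache_from_shield": "Physical effects",
    "ppe_fatigue": "Physical effects",
    "pressure_sores": "Physical effects",
    "anxiety": "Psychological effects",
    "social_exclusion": "Psychological effects",
}

# Step 3 (organizing data): group the data by theme and record contributors.
theme_contributors = defaultdict(set)
for participant, codes in coded_data.items():
    for code in codes:
        theme_contributors[code_to_theme[code]].add(participant)

for theme, contributors in sorted(theme_contributors.items()):
    print(f"{theme}: {len(contributors)} participant(s) contributed")

Counting contributors per theme mirrors how the findings report, for example, that all 26 participants contributed to a given theme.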
Trustworthiness of the Study/Rigor
Meticulousness communicates trustworthiness in a study, and results are considered trustworthy when they are confirmable and transferable. The components of trustworthiness are credibility, transferability, dependability, and confirmability.
The data compiled in this study are considered to have high credibility because they were not produced by the researchers but rather represent the thoughts and experiences of the OR nurses. Some of the participants' statements were also cited under each theme to further improve the credibility of the results. 28 The study and data analysis process was carefully performed by the research team to ensure dependability. For this purpose, the two researchers (HMA, ZZ), each of whom had qualitative research knowledge and experience, carried out the content analysis and determined the themes independently. The researchers discussed the themes that they thought best described the findings until they reached an agreement on the data. To ensure confirmability, two faculty members specializing in qualitative research confirmed the data analysis process. 29 Prior to their use in the study, the data collected from the statements made by each participant were sent back to them to confirm their statements and thereby increase the transparency of the study. Given that the participants were working as OR nurses during the COVID-19 pandemic in different regions and hospitals of Turkey, the data can be transferable to all nurses and provide insight into health care professionals in other nations. 28
Findings
The mean age of the participants in the study was 29.1 years (min-max: 21-42); 20 of the participants were female and six were male. Four of the participants were high school graduates, four had an associate degree, 13 had a bachelor's degree, four had a master's degree, and one had a PhD. The mean total nursing experience of the participants was 7.3 years, while the mean amount of time they had been working as an OR nurse was 6.1 years. Of the OR nurses participating in the study, five were working in a private hospital, 14 in a state hospital, and seven in a university hospital. During the COVID-19 pandemic, 13 participants were temporarily reassigned to another clinic: six of the nurses were assigned to the COVID-19 ICUs, three to the polymerase chain reaction (PCR) testing department, and four to the COVID-19 clinic (Table 2).
From the assessment of the experiences of the OR nurses during the COVID-19 pandemic, four main themes were identified (Table 3). The nurses stated that the pandemic had physically and mentally affected them, made it difficult to establish and maintain the safety of health care workers and patients, and made them feel inadequate in providing patient care, and that the trainings given during the pandemic were neither clear nor easy to follow.
Theme 1: The Physical Effects of the COVID-19 Pandemic on OR Nurses
All 26 participants contributed to this theme. Some nurses had such tight and tiring schedules that they had to postpone the interviews or had to participate in the interviews during their breaks between surgeries. Participants stated that they were tired even when there was no change in their shifts. The OR nurses expressed that among the various problems they experienced during this period, the use of personal protective equipment (PPE) was the primary cause of their physical issues, tiredness, and stress, which included sweating induced by the protective gowns, pressure sores from the use of masks, headaches caused by the face shields, movement constraints caused by the equipment, and difficulty seeing during surgery due to their goggles fogging up. The fogging of the goggles narrowed the field of vision, which could endanger the life of the patient. Although the work of the OR nurses remained the same, the use of extra PPE brought difficulties; the nurses therefore had to work more carefully under more challenging conditions. N19: "While wearing the protective gowns, the health care workers have to exert more effort than normal. The shields cause headaches. I couldn't even hear the doctor's questions during the surgery because of my headache. I couldn't focus." N22: "The backside of our ears definitely hurt. I have had more headaches, anxiety, and stress, and, as I said before, I have suffered hair loss. The use of PPE exhausts us, but we are at war and have to fight." N16: "I recall there being times when there was no place in my underwear that was dry." N14: "... I am a skinny person. When I wear the heavy PPE, I wobble when I walk." N26: "The equipment makes you sweat, the goggles give you a headache, and you cannot breathe with that shield. The layers of PPE, like masks, etc., obstruct our movement in the workspace, and despite all the equipment, you are still trying not to contaminate the sterile area. Although we have had success in performing operations, we have had difficulty seeing at times during operations because our goggles fog up." N21: "I have felt extra tired physically. I have had pressure sores on my nose and cheeks due to the masks."
Theme 2: The Psychological Effects of the COVID-19 Pandemic on OR Nurses
All 26 participants contributed to this theme. The OR nurses stated that, in addition to being physically impacted by the COVID-19 pandemic, they were also mentally affected. Except in cases of surgeries, nurses had to avoid getting too close to one another as a result of having to maintain social distance. Accordingly, the staff did not gather between cases and did not eat together. This hurt communication within the operating room team and, therefore, the workflow. Social exclusion was another issue that the OR nurses experienced, especially in the first months of the pandemic, as the people that the OR nurses encountered in their social lives knew that they were working in the hospital and participating in the treatment and care of COVID-19 patients. Social exclusion accelerated the nurses' exhaustion, leaving them feeling depressed and reluctant.
N7: "When I met a friend, he never took his mask off. He said that I was a super spreader. This kind of behavior is driving us out of society, which is kind of distressing." N23: "When you do not say hello to each other in the OR, the conversation breaks down, which drives you crazy. With the breakdown of communication, we become lonely." N2: "When you go into depression, you do not want to leave the house, go any place, or go to the hospital, and the worst part is that you do not want to provide care for the patient."
Fear of Being Infected With or Infecting Someone Else With COVID-19
All 26 participants contributed to this theme. Fear of being infected with COVID-19 or infecting others with it, along with depression, can be listed among the significant reasons nurses were affected psychologically. Most of the nurses stated that they were more afraid of infecting someone else with COVID-19 than of getting sick themselves. This situation caused the nurses to feel paranoid and to experience sleep disorders. In addition, these fears caused alienation from other people and emotional exhaustion.
N5: "I'm so afraid of causing someone to get sick. I don't spend time with my friends. I take public transport less often. All of this has exhausted me emotionally." N24: "I come home from the hospital after being on call feeling dirty, so I clean myself up and go to bed, but it takes me a long time to fall asleep. If I awake from my sleep with a cough, I cannot get back to sleep for one or two hours because I think about whether I was infected with COVID. What happens to people at home?" N13: "We're in a state of global depression. Nobody's doing well. We are in a miserable situation."
Theme 3: The Perceptions OR Nurses Have on the Training Provided During the COVID-19 Pandemic
A total of 25 participants contributed to this theme. During this process, some OR nurses were temporarily assigned to services where patients with COVID-19 were present. Most of the OR nurses stated that the training they received on the COVID-19 pandemic was not adequate. The primary problems they reported were that the trainings were not repeated periodically, the training content was insufficient, the trainings could not be held face-to-face, and no orientation training was provided to the nurses who had been reassigned. OR nurses assigned to wards with COVID-19 patients cited a lack of further education: they lacked knowledge about the functioning of their new workplaces and treatments, a gap that could only be closed with orientation training, the training given to nurses to help them adapt to a new workplace. According to the OR nurses, inadequate training led to mistakes in equipment use, which made them feel more anxious and stressed. In addition, the nurses stated that fighting an unknown enemy destroyed their hope. N6: "I learned different information from the news, from Google, and my doctor friends, especially from surgical oncologists. The hospital has not provided me with any training. I had to figure it out for myself." N9: "Not much information was given. A brochure was posted on the board, but we were not provided with thorough training." N19: "We have had friends that wore their protective gowns wrong or did not know how to use them effectively. It would be far more effective if we were taught on a one-on-one basis on how to wear the gowns." N13: "Ignorance kills, not disease. An uneducated person is not a threat, but if you put a gun in their hand, then they become a threat. I was assigned to the ICU, but no one gave me orientation training." N5: "Everyone has experienced great panic, as if the virus had come down from space. The unknown is the most terrifying thing." N7: "Uncertainty is something that diminishes one's hope for the future." N4: "We have all of the necessary equipment, but we are still concerned about how we can protect ourselves." N21: "Nurses were more anxious and stressed when they were unable to obtain proper information from the health care professionals because, early on, the information on COVID-19 was constantly changing and contradictory."
The Pandemic's Effect on the Safety of Health Care Workers
A total of 23 participants contributed to this theme. In their evaluation of the pandemic, the OR nurses stated that access to PCR testing and PPE was the most problematic issue concerning the safety of health care workers. The major threats to the safety of health care workers were the lack of routine PCR testing for patients, the inability to acquire or wear PPE properly, and PCR test limitations on nurses who were not showing COVID-19 symptoms. In addition, as disinfectant use and hand washing increased during this period, the nurses also reported skin irritations. The OR nurses recommended that every patient who will undergo surgery be routinely PCR tested to ensure the safety of health care workers, and that nurses be able to have a PCR test whenever needed. The nurses also stated that PPE should be easy to access and that they needed better training to use it correctly.
N5: "It is more difficult for us to get tests done than it is for the public. We are always blocked from getting tested to prevent wasting the limited number of tests available." N25: "The safety of the patients has been prioritized over the safety of the OR nurses." N3: "It used to be that every patient who came down to the OR had to be tested, but the tests on the patients were stopped. There were occasions when we learned that the patient had tested positive after surgery." N1: "Unfortunately, there was a shortage of PPE at first, which made people nervous. We could not find protective gowns or masks, and we did not enter ORs until the masks arrived, which disrupted the OR team's relationship." N21: "My skin is allergic to several substances, and disinfectant and hand washing have irritated my skin, from my hands up to my elbows."
The Pandemic's Effect on Patient Safety
A total of 23 participants contributed to this theme. The OR nurses reported that the PPE used in the surgeries was designed for general usage and not specialized for use in the OR. This resulted in difficulties in trying to secure surgical asepsis. Moreover, poor nursing care quality throughout the pandemic has led to conditions that have threatened patient safety. Nurses also drew attention to education on patient safety. Some of the nurses who had been reassigned to the ICU stated that they had received no orientation training and warned that it is dangerous to have untrained people in positions of authority.
N3: "When were wearing protective gowns, we were limited to washing our hands only, without being able to wash up to the elbow. This was such a disadvantage for the patient." N7: "The PPE we have used during the period of COVID-19 is larger, heavier, and more uncomfortable than the standard equipment we used before the pandemic. I have had a hard time moving while wearing the PPE, which has occasionally caused me to have difficulty controlling surgical asepsis." N17: "The usage of face shields during surgery causes us to contaminate our sterilized environment." N19: "With COVID, the issue of not allowing anybody into the room until the patient has been intubated has been problematic. The length of patients' anesthetization period has been somewhat prolonged. I believe this affects the patients adversely."
The Pandemic's Effect on Nursing Care
A total of 24 participants contributed to this theme. The OR nurses stated that throughout the pandemic they had been unable to sufficiently monitor patients during the perioperative period. The quality of patient care practices had diminished due to the shortening of preoperative patient preparation times and of patients' stays in postanesthesia care units, and due to the limited contact with patients. They also noted that nurses were not able to provide adequate preoperative patient care due to social distancing measures, which prevented them from preparing patients psychologically before surgery. The OR nurses reported that the limited communication they have been able to have with patients during the pandemic has impaired the level of care they provide.
N2: "Before, once we had the patient in the room, we would perform safe surgery. We are no longer able to do any of this. Now, we transport the patient into the room, and the anesthesiologist begins to administer anesthesia after asking their name, surname, and the type of procedure they are to undergo. We don't have the opportunity to psychologically prepare the patient for surgery first." N9: "As we limited communication with the patients, they became more nervous. The communication between the nurse and the patient has weakened." N7: "We cannot touch the patients because we are scared...Being afraid to touch the patient while giving care makes me feel like I am falling short of my responsibilities. I feel that my ability to communicate with patients has dwindled to the point where we cannot communicate at all. It is as though we have lost our ability to empathize." N25: "During this period, the OR has become a neglected unit. With this neglect, the quality of care we provide to patients is greatly impaired." N22: "Patients are always nervous and anxious because ORs are really terrifying and frightening places. When we take the extra measures imposed on us by the pandemic, they actually become even more concerned, since they can no longer communicate with us, even in the simple way of making eye contact. They get a bit more stressed."
Discussion
During the interviews, the nurses appeared to be exhausted, which was confirmed by their statements on feeling overwhelmed physically and psychologically due to the changing conditions brought about by the pandemic. Some of the participants regarded their condition as being in a state of war and stated that they would not give up because they were warriors, while others had such tight and tiring schedules that they had to postpone the interviews or had to participate in the interviews during their breaks between surgeries.
Gao et al 30 in their study, reported that due to the excessive workload and heavy fatigue that the nurses have experienced during the COVID-19 pandemic, their shift hours have been shortened. In the present study, it was found that although there were no changes in the shifts of the OR nurses, the fatigue they experienced stemmed from the amount of PPE they had to wear during long surgeries. The protective gowns that the nurses have been required to wear over their uniforms during the COVID-19 pandemic, in addition to the standard garments required in the OR, restricts their movement and causes them to sweat excessively during the surgery. Furthermore, the sweat produced from the PPE causes the goggles to fog up, reducing their vision. The OR nurses also expressed that they experienced headaches from having to use face shields, and that the masks caused pressure sores on the ears, nose, and cheeks. In the study by Hoernke et al 31 involving the participation of health care professionals, it was reported that the tight masks they have been required to wear during the pandemic caused facial pain, marks and bruises, rashes, dry skin, as well as difficulty in breathing, headaches, and irritability, and that the protective gowns were hot and caused them to sweat, which led to overheating and dehydration.
The study by Kelechi et al 32 reported that health workers who wore masks, especially N95 masks, for more than six hours experienced dryness and peeling of the skin. Furthermore, the nasal bridge, cheeks, and forehead were identified in that study as the areas most affected by PPE, as these were the spots against which the wire and elastic loops of the mask and the face shield press.
The pandemic has affected nurses not only physically but also psychologically. The OR nurses in the present study expressed their fear of being infected with COVID-19 and stated that the social distance they have had to maintain from their colleagues during this period has resulted in communication problems and loneliness. It is also likely that they have been excluded by friends who do not work at the hospital and have suffered from emotional exhaustion due to the social isolation they have been exposed to. Everson et al, 33 in their study, stated that OR nurses suffered from anxiety, post-traumatic stress disorder, and social isolation, while Maqbali et al 34 reported in their study that one out of every three nurses suffered from psychological disorders during the COVID-19 pandemic. In the present study, sleep disorder, one of these psychological problems, affected some of the participants, who stated that they woke up scared in the middle of the night. Leng et al 35 reported that nurses had high anxiety and post-traumatic stress disorder scores and attributed them to personal concerns, lack and misuse of protective equipment, physical and emotional fatigue, excessive workload, fear of being infected, and insufficient work experience.
The "uncertainty" and "unknown" codes assigned to the nurses' statements stood out in the present study, with the lack of knowledge being at the root of the psychological effects experienced by nurses throughout this period. Similarly, Moradi et al 36 indicated that the unknown factor of the pandemic was the most significant cause of the stress experienced by nurses during this period, and that the lack of inadequacy of training aggravated the fears of health care workers by triggering a sense of "fighting against the unknown". Most of the nurses in the present study noted that the training they received at the hospital for the COVID-19 pandemic was insufficient and did not help them to feel adequately prepared to address it. The main reasons for these inadequacies include lack of face-to-face training, failure to periodically repeat the trainings, and poor assessment of the effectiveness of the training. In their study assessing the knowledge, attitudes, and practices of nurses on COVID-19, Wen et al 25 determined that nurses who had working experience of less than 10 years had a lower level of knowledge and suggested that to address this issue measures be taken to improve training.
In the present study, some of the nurses who had been reassigned to the ICU stated that they had received no orientation training and warned that it is dangerous to have untrained people in positions of authority. Likewise, in the study by Fagerdahl et al 37 involving operating room nurses who had been assigned to the ICU during the COVID-19 pandemic, the nurses reported that they experienced anxiety due to unknowns. Tan et al, 38 in their study, highlighted that despite the nurses' lack of qualifications to work in the units to which they had been reassigned, insufficient training was provided to them, and they had no familiarity with their new work units. Furthermore, the nurses who were assigned to the ICU from different units noted that they did not know how to operate devices often used in the ICU to treat COVID-19, such as ventilators. 38 When OR nurses do not receive adequate training, there is a stronger probability that they will make mistakes in equipment use and thereby put at risk the safety of health care workers and patients alike.
The failure to supply proper equipment to health care workers in sufficient numbers and on time, the lack of sufficient PCR testing or the failure to perform PCR testing, and the misuse of PPE due to a lack of training are all factors that threaten the safety of health care workers. Sadati et al, 24 in their study, indicated that a sufficient amount of equipment for nurses could not be supplied at the beginning of the pandemic. The nurses in that study also mentioned that they did not believe that everyone needed N95 masks, as they lacked sufficient knowledge about this issue and had conflicting views on it. They further added that patients were not tested, and it was therefore unclear whether a raised body temperature was due to hot weather or COVID-19. In the study by Gül et al, 39 the participating nurses reported that some of the patients who underwent surgery did not undergo PCR testing. The study by Mohammadi et al 40 found that many patients did not have PCR tests before undergoing surgery but were determined to be COVID-positive after the surgery. The same study reported that even simple masks were not available for the OR nurses. The increasing incidence of dermatitis on the hands of nurses due to the use of hand sanitizers in the ORs, as well as the increased frequency of handwashing during this process, are among the notable problems that were commonly experienced by the OR nurses in the present study.
Some of the nurses in the present study expressed that the PPE threatened their ability to ensure the asepsis of the OR and that they could not practice proper surgical handwashing. Furthermore, the nurses reported that the length of the patients' anesthesia period was prolonged as a result of the modifications made to some hospital procedures in response to the COVID-19 pandemic. They also noted that nurses were unable to provide adequate preoperative patient care due to social distancing measures, which prevented them from preparing patients psychologically before surgery. It is likely that there have been other shortcomings in pre- and postoperative patient care during this period. In their study, Murat et al 41 reported that nurses with less than five years of work experience felt inadequate in providing patient care, while Karimi et al 42 attributed nurses' inability to provide adequate care to patients to the shortage of PPE.
Conclusion
From the results obtained in this study, we concluded that the COVID-19 pandemic has had an adverse physical and psychological effect on OR nurses. Insufficient PPE and a lack of training not only generate anxiety in health care workers but also jeopardize the safety of patients and health care workers. The social distancing measures practiced by nurses adversely affect patient care, insofar as they limit communication with patients. During the COVID-19 pandemic, OR staff should have ease of access to necessary equipment, and orientation training should be given and evaluated to ensure correct and proper equipment use. In the training content, the significance of sustaining patient care by taking the necessary measures for patient safety and health care worker safety should be highlighted. It should be incumbent upon OR nurses to learn the latest safety precautions to be taken during the perioperative patient care period, not only by following the hospital orientation trainings but also by following the current literature. Lastly, psychological support should be provided by the health care organization to OR nurses who have been identified as suffering from psychological problems during this period of COVID-19. | 2022-07-18T13:04:22.765Z | 2022-07-01T00:00:00.000 | {
"year": 2022,
"sha1": "204645f8e716305ac4156f11ba4df407e818a505",
"oa_license": null,
"oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9289127",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "ecb8993b1bd2db4c18f86e9fcb12e3107f023800",
"s2fieldsofstudy": [
"Medicine",
"Business"
],
"extfieldsofstudy": []
} |