Expedition 369 summary
The tectonic and paleoceanographic setting of the Great Australian Bight (GAB) and the Mentelle Basin (adjacent to the Naturaliste Plateau) offered an opportunity to investigate Cretaceous and Cenozoic climate change and ocean dynamics during the last phase of breakup among remnant Gondwana continents. Sediment recovered from sites in both regions during International Ocean Discovery Program Expedition 369 will provide a new perspective on Earth's temperature variation at subpolar latitudes (60°-62°S) across the extremes of the mid-Cretaceous hot greenhouse climate and the cooling that followed. Basalts and prebreakup sediments were also recovered and will provide constraints on the type and age of the Mentelle Basin basement and on processes operating during the breakup of Gondwana. The primary goals of the expedition were to

• Investigate the timing and causes for the rise and collapse of the Cretaceous hot greenhouse climate and how this climate mode affected the climate-ocean system and oceanic biota;
• Determine the relative roles of productivity, ocean temperature, and ocean circulation at high southern latitudes during Cretaceous oceanic anoxic events (OAEs);
• Investigate potential source regions for deep-water and intermediate-water masses in the southeast Indian Ocean and how these changed during Gondwana breakup;
• Characterize how oceanographic conditions at the Mentelle Basin changed during the Cenozoic opening of the Tasman Gateway and restriction of the Indonesian Gateway; and
• Resolve questions on the volcanic and sedimentary origins of the Australo-Antarctic Gulf and Mentelle Basin and provide stratigraphic control on the age and nature of the prebreakup successions.

Hole U1512A in the GAB recovered a 691 m thick sequence of black claystone ranging from the lower Turonian to the lower Campanian. Age control is primarily based on calcareous nannofossils, but the presence of other microfossil groups provided consistent low-resolution control. Despite the lithologic uniformity, long- and short-term variations in natural gamma radiation and magnetic susceptibility show cyclic alternations that suggest an orbital control on sediment deposition, which will be useful for developing an astrochronology for the sequence.

Sites U1513, U1514, U1515, and U1516 were drilled in water depths between 850 and 3900 m in the Mentelle Basin and penetrated 774, 517, 517, and 542 meters below seafloor, respectively. Under a thin layer of Pleistocene to upper Miocene sediment, Site U1513 cored a succession of Cretaceous units from the Campanian to the Valanginian, as well as a succession of basalts. Site U1514 sampled an expanded Pleistocene to Eocene sequence and terminated in the upper Albian. The Cenomanian to Turonian interval at Site U1514 is represented by deformed sedimentary rocks that probably represent a detachment zone. Site U1515 is located on the west Australian margin at 850 m water depth and was the most challenging site to core because much of the upper 350 m was either chert or poorly consolidated sand. However, the prebreakup Jurassic(?) sediments interpreted from the seismic profiles were successfully recovered. Site U1516 cored an expanded Pleistocene, Neogene, and Paleogene section and recovered a complete Cenomanian/Turonian boundary interval containing five layers with high organic carbon content.

Study of the well-preserved calcareous microfossil assemblages from different paleodepths will enable generation of paleotemperature and biotic records that span the rise and collapse of the Cretaceous hot greenhouse (including OAEs 1d and 2), providing insight into resultant changes in deep-water and surface water circulation that can be used to test predictions from earth system models. Measurements of paleotemperature proxies and other data will reveal the timing, magnitude, and duration of peak hothouse conditions and any cold snaps that could have allowed growth of a polar ice sheet.

The sites contain a record of the mid-Eocene to early Oligocene opening of the Tasman Gateway and the Miocene to Pliocene restriction of the Indonesian Gateway; both passages have important effects on global oceanography and climate. Advancing understanding of the paleoceanographic changes in a regional context will provide a global test on models of Cenomanian to Turonian oceanographic and climatic evolution related both to extreme Turonian warmth and the evolution of OAE 2. The Early Cretaceous volcanic rocks and underlying Jurassic(?) sediments cored in different parts of the Mentelle Basin provide information on the timing of different stages of the Gondwana breakup. The recovered cores provide sufficient new age constraints to underpin a reevaluation of the basin-wide seismic stratigraphy and tectonic models for the region.
Introduction
Understanding the mechanisms, feedbacks, and temporal relationships that link climate dynamics of polar regions and the tropics is of fundamental importance for reconstructing past climate changes, including rapid shifts, and hence for improving predictions of future change. High-resolution stratigraphic records from strategic locations around the globe, especially from the high-latitude oceans, are essential to achieve this broad goal. Within this context, past periods of extreme warmth, such as the Cretaceous hot greenhouse and the initial Eocene thermal maximum, have attracted increasing research interest over recent years, which has resulted in often spectacular and sometimes contradictory insights into the mechanisms of natural short-term changes in climate, biogeochemical cycling, and ocean oxygenation. International Ocean Discovery Program (IODP) Expedition 369 targeted these fundamental objectives with the specific goals of providing samples from high-latitude Southern Ocean sites with expanded late Mesozoic and Cenozoic sections and improving constraints on the tectonic history of the region.
Expedition 369 recovered sediments from the Great Australian Bight (GAB) and Mentelle Basin that will provide new insights into the evolution of Southern Hemisphere, high-latitude Cretaceous climates. The high paleolatitude (60°-62°S) location of the sites (Figure F1) is especially important for global climatic studies because of the enhanced sensitivity to changes in ocean temperature. Study of the recovered sections will enable generation of high-resolution stratigraphic records across the rise and collapse of the Cretaceous hot greenhouse climate and concomitant changes in Earth's latitudinal thermal gradients and deep ocean circulation that continued through the Cenozoic. The well-resolved age framework of the pelagic carbonate sequences will enable more precise correlation between global climatic shifts and tectonic history, especially major volcanic episodes in the region. These aspects are crucial to improve our understanding of Earth's climate system and to inform the scientific modeling community about high-latitude Southern Hemisphere Cretaceous (and possibly older) records in a currently underexplored region.
Geological setting
Following the collision of Gondwana with Laurasia at 330-320 Ma, which formed the supercontinent of Pangaea, the breakup of eastern Gondwana commenced during the Jurassic (Veevers, 2006). At this time, the Naturaliste Plateau was located near the junction of the Australian, Antarctic, and Greater India plates ( Figure F1) at 60°-62°S. The first stage of breakup occurred in the Early Cretaceous as India separated from Australia. Then later, in the early Eocene, Australia separated from Antarctica.
The Mentelle Basin and rifting of Greater India
The Mentelle Basin is part of a sequence of basins along the western margin of Australia that formed during the breakup with India, and it separates the Naturaliste Plateau from Australia (Bradshaw et al., 2003). The basin is the southernmost in a series of en echelon basins formed during a rifting episode that initiated in northwestern Australia, in what is now the Argo Abyssal Plain, and propagated southward (Maloney et al., 2011); it is separated from the Perth Basin by the basement high of the Leeuwin Block and the Yallingup Shelf (Figure F2). Despite its water depth (2000-4000 m), the basement under the Mentelle Basin is believed to be continental and to contain Jurassic and possibly older sediments that were deposited in extensional basins. The Naturaliste Plateau is classed as a volcanic margin, as evidenced by the onshore Bunbury basalt, which recent dating places at 137-130 Ma (Direen et al., 2017). This date is contemporaneous with the final stages of rifting but is older than the basalt found on the Kerguelen Plateau, even though the geochemistry is similar (Olierook et al., 2016). Basalt is widely dredged around the Naturaliste Plateau, and its extent throughout the Mentelle Basin is interpreted from seismic data as a high-amplitude reflection used to denote the top of the Valanginian (Borissova, 2002). The younger sediments were deposited in a thermally subsiding basin and margin. Deep Sea Drilling Project (DSDP) Site 258 previously drilled this postrift section on the western margin of the Mentelle Basin but stopped short of the interpreted basalt horizon. Seismic data acquired in 2004 and 2009 by Geoscience Australia (tied to Site 258) provided a regional survey to appraise the stratigraphic, structural, and depositional history of the Mentelle Basin (Maloney et al., 2011; Borissova et al., 2010). Relatively young Neogene carbonate oozes unconformably overlie Paleogene deep marine chalk. Occasional bright reflection events in this sequence are likely caused by thin chert bands. The Paleogene chalk unconformably overlies Cretaceous coccolith-rich chalk that is underlain by Albian/Aptian claystones and a glauconitic sandstone. The unconformity at the base of the ooze and at the top of the glauconitic sandstone can be identified and interpreted from the seismic data; the other boundaries do not create distinct reflections. The glauconitic sandstone sequence is floored by a high-amplitude reflection (not sampled at Site 258) interpreted to be caused by Valanginian volcanics.

Figure F1. Late Cenomanian (bottom) and middle Eocene (top) paleogeographic reconstructions after Hay et al. (1999) with locations of Expedition 369 sites (yellow circles) in the Mentelle Basin (MB; adjacent to Naturaliste Plateau), the GAB, and selected deep-sea sites (DSDP Sites 327 and 511; ODP Sites 689 and 690) at southern high latitudes.
Hence, this horizon coincides with the onset of breakup and separation of Greater India from Australia and a series of subsequent volcanic episodes related to the continuing breakup on the northern and western margins of the Naturaliste Plateau. Prior rifting of the area is recorded by Early Cretaceous, Jurassic, and Permian/Triassic sequences, although the interpretation is somewhat speculative because the sequences lack borehole control.
Opening of the Australo-Antarctic Gulf
The breakup on the southern margin of Australia is thought to have started in the Cenomanian to Turonian and proceeded at a very slow rate. In contrast to the rifting history of Greater India, this margin is believed to have been nonvolcanic. Plate tectonic reconstructions that correspond to the early stages of rifting are poorly constrained and controversial (White et al., 2013). The ~15 km thick post-Middle Jurassic sedimentary sequence that accumulated in the GAB contains the largest continental margin deltaic sequence deposited during the Late Cretaceous greenhouse period. Accelerated subsidence commencing in the late Albian and continuing through the Cenomanian to Santonian led to the deposition of a thick sequence of marine shales (Totterdell et al., 2000). During the Cretaceous, the GAB was situated at the eastern tip of a partial seaway, the Australo-Antarctic Gulf (AAG), with the Naturaliste Plateau in the open ocean at the western gateway that connected the AAG with the southern Indian Ocean.
An overall transgressive phase of sedimentation in the early Paleogene was followed by the establishment of open-marine carbonate shelf conditions from the early Eocene onward. The AAG eventually widened to create the Southern Ocean, with a switch to rapid spreading after 45 Ma (White et al., 2013). An industry well, Jerboa-1, in the Eyre subbasin on the continental shelf provided a stratigraphic tie along Geoscience Australia seismic Profile s065-06 (Bradshaw et al., 2003).
Opening of the Indian Ocean and Tasman Gateway and closure of the Indonesian Gateway
After initial rifting, India drifted north-northwest from Australia with strike-slip motion along the Wallaby-Zenith Fracture Zone. This juxtaposed their continental shelves until ~120 Ma and isolated the nascent Indian Ocean from global deep-water circulation (Gibbons et al., 2013). Subsequently, the northward drift of Australia through the Cenozoic led to changes in two important ocean gateways to the Indian Ocean: the opening of the Tasman Gateway between Australia and Antarctica in the middle Eocene to early Oligocene and the restriction of the Indonesian Gateway between Australia and Southeast Asia in the Miocene to Pliocene. Both passages have important effects on global oceanography and climate, and the Naturaliste Plateau/Mentelle Basin region is well situated to monitor their opening history and resultant effects on ocean circulation (Figure F3).
The Antarctic Circumpolar Current (ACC) in the Southern Ocean has an important role in present-day overturning circulation and surface heat redistribution (Lumpkin and Speer, 2007). Although there is no significant restriction on the ACC in the modern Tasman Gateway, the Tasman Gateway and Drake Passage both opened during the middle Eocene to Oligocene, and the flow restriction at the Tasman Gateway would have had a significant effect on ocean circulation, especially in the late Eocene to early Oligocene and earlier. During the Paleocene to Eocene, southern Australia would have been influenced more by subpolar (rather than the modern subtropical) gyres as a consequence of the closed or restricted Tasman Gateway and the more southerly position of Australia (e.g., Huber et al., 2004). During the Early Eocene Climatic Optimum, however, temperatures in the southwest Pacific Ocean near Australia seem to have been warmer than climate models predicted (Hollis et al., 2012). Cooling of the Antarctic margin relative to the Australian margin near Tasmania occurred early in the opening of the Tasman Gateway, which is dated at 49-50 Ma (Bijl et al., 2013), and a separate southern deep-water source can be identified from the late middle Eocene (Cramer et al., 2009; Borrelli et al., 2014). However, plate tectonic reconstructions (Müller et al., 2000) indicate that separation of Australia and Antarctica near Tasmania occurred at ~43 Ma, later than the oceanographic changes. Recovering material spanning this time interval, therefore, provides an important opportunity to reconcile these tectonic and paleoceanographic interpretations.

Figure F2. A. Regional context of the Naturaliste Plateau and Mentelle Basin, including location of the major reflection seismic profiles, nearby DSDP sites (red numbers), and Expedition 369 sites (yellow circles).
The restricted surface flow through the Indonesian Gateway is essential to the surface heat flux in the Pacific and Indian Oceans and has been linked to El Niño Southern Oscillation (ENSO) dynamics and the global ocean overturning circulation (Gordon, 1986; Godfrey, 1996; Lee et al., 2002). The gradual restriction of the Indonesian Gateway from deep-water throughflow in the late Oligocene to early Miocene to variable shallow flow in the Pliocene to Pleistocene is thought to have strongly affected surface heat distribution, with potential links to the late Neogene cooling and Northern Hemisphere glaciation (Cane and Molnar, 2001; Kuhnt et al., 2004; Karas et al., 2009).
Scientific objectives

1. Investigate the timing and causes for the rise and collapse of the Cretaceous hot greenhouse and how this climate shift affected the climate-ocean system and oceanic biota.
Compilations of deep-sea benthic foraminiferal and bulk carbonate δ18O data reveal that the world ocean experienced long-term warming from the late Aptian through middle Cenomanian; maintained extremely warm temperatures from the late Cenomanian through Santonian, with peak warmth (>20°C at midbathyal depths) during the Turonian; and gradually returned to cooler values (~6°-8°C at midbathyal depths) during the Maastrichtian (Huber et al., 1995, 2002, 2011; Clarke and Jenkyns, 1999; Friedrich et al., 2012; O'Brien et al., 2017). Recent benthic and planktonic δ18O values obtained from the Turonian at Site 258 support extreme high-latitude Turonian warmth (Huber et al., 2018). Still, these δ18O values are problematically low and seem to defy straightforward explanation (Bice et al., 2003). Compared with existing stable isotope data (Huber et al., 1995, 2002), the new analyses showed large changes at times of known climatic shifts. Although the Cretaceous has long been characterized as too warm to sustain continental ice sheets (e.g., Barron, 1983; Frakes et al., 1992; Huber et al., 2002; Hay, 2008), coincidences between sea level variations (deduced from sequence stratigraphy) and δ18O records have been proposed by some authors as evidence for the occasional existence of polar ice (e.g., Barrera et al., 1997; Miller et al., 1999, 2005; Stoll and Schrag, 2000; Gale et al., 2002; Bornemann et al., 2008) and winter sea ice (Bowman et al., 2012).
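As a rough illustration of how such foraminiferal δ18O values translate into temperature, the sketch below applies one widely used quadratic calcite-water calibration (after Shackleton, 1974). The input δ18O value and the ice-free seawater value of -1.0‰ are illustrative assumptions for this sketch, not Expedition 369 measurements.

```python
def d18o_to_temperature(d18o_calcite, d18o_seawater=-1.0):
    """Calcification temperature (degC) from foraminiferal calcite d18O (permil).

    Quadratic calibration after Shackleton (1974). The default seawater value
    of -1.0 permil is a common assumption for an ice-free ocean; both inputs
    here are illustrative, not expedition data.
    """
    delta = d18o_calcite - d18o_seawater
    return 16.9 - 4.38 * delta + 0.10 * delta ** 2

# A Turonian benthic d18O near -2.0 permil with ice-free seawater implies
# midbathyal temperatures above 20 degC, consistent with the peak warmth
# described in the text.
print(round(d18o_to_temperature(-2.0), 1))  # -> 21.4
```

Note how the calibration makes explicit why a seawater δ18O assumption is required: the same calcite value read against a glacial (more positive) seawater value would yield a cooler temperature, which is the crux of the "greenhouse glaciers" debate discussed below.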
The "greenhouse glaciers" hypothesis has been countered by evidence for diagenetic influence on bulk carbonate oxygen isotope records and by stable tropical planktonic and benthic foraminiferal δ18O data across several of the proposed cooling intervals (Huber et al., 2002; Moriya et al., 2007; Ando et al., 2009; MacLeod et al., 2013). Furthermore, TEX86 values from DSDP Site 511 suggest sea-surface temperatures of 25°-30°C during the Hauterivian to Aptian interval (Jenkyns et al., 2012; O'Brien et al., 2017).
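TEX86 index values are commonly converted to sea-surface temperature with a logarithmic calibration. The sketch below uses the TEX86-H calibration of Kim et al. (2010); the input index value is chosen only to illustrate the 25°-30°C range quoted above and is not a Site 511 measurement.

```python
import math

def tex86h_sst(tex86):
    """Sea-surface temperature (degC) from a TEX86 index value using the
    logarithmic TEX86-H calibration of Kim et al. (2010):
    SST = 38.6 + 68.4 * log10(TEX86)."""
    return 38.6 + 68.4 * math.log10(tex86)

# An index value of 0.65 (illustrative) falls in the warm range quoted above.
print(round(tex86h_sst(0.65), 1))  # -> 25.8
```

Because TEX86 is measured on archaeal lipids rather than carbonate, it sidesteps the seawater δ18O and diagenesis problems above, which is why it serves as an independent check on "greenhouse glaciers" arguments.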
High-resolution isotopic studies of samples from Expedition 369 sites should advance our understanding and improve geographic documentation of major global climatic warming and cooling transitions during the Cretaceous. Recovery of more complete sections will lead to biostratigraphic refinements and improved regional to global correlations.
2. Determine the relative roles of productivity, ocean temperature, and ocean circulation at high southern latitudes during Cretaceous oceanic anoxic events (OAEs).
OAEs are defined as short-lived (<1 My) episodes of enhanced deposition of organic carbon in a wide range of marine environments (Schlanger and Jenkyns, 1976) and are associated with prominent carbon isotope excursions in marine and terrestrial sequences (Jenkyns, 1980, 2010; Arthur et al., 1988; Gröcke et al., 1999; Jahren et al., 2001; Ando et al., 2002; Jarvis et al., 2006). Triggering of OAEs has been attributed to a rapid influx of volcanogenic and/or methanogenic CO2 leading to abrupt temperature rise and an accelerated hydrological cycle, increased continental weathering and nutrient discharge to oceans and lakes, intensified upwelling, and increased organic productivity. Globally expressed Cretaceous OAEs occurred during the early Aptian (OAE 1a; ~120 Ma) and at the Cenomanian/Turonian (C/T) boundary (OAE 2; ~94 Ma), and regionally recognized events occurred during the early Albian (OAE 1b; ~111 Ma) and late Albian (OAE 1d; ~100 Ma). Cretaceous OAEs are best known from the Atlantic/Tethyan basins and surrounding continents, and records from the Indian Ocean are limited. The presence of black shales with as much as 6.9% total organic carbon (TOC) in the GAB (Totterdell et al., 2008), 11% at Site U1513, and as much as 14% at Site U1516 suggests waters in the region may even have been euxinic during deposition of OAE 2. OAE deposits should have been present at the Kerguelen (Ocean Drilling Program [ODP] Site 1138) and Exmouth Plateaus and adjacent basinal areas (primarily ODP Site 763), but drilling strategies and poor recovery resulted in all of the cores missing the OAE record.
Recovery of a continuous record of the C/T boundary OAE 2 was anticipated at GAB Site U1512, western Mentelle Basin Site U1513, and northern Mentelle Basin Site U1514. Actual recovery at Expedition 369 sites was different (see Principal results) but included apparently excellent records of OAE 2. At all sites, the anticipated OAE 2 interval is at a shallow burial depth (~260-460 meters below seafloor) where sediments are thermally immature and preservation of biogenic material is good to excellent. Future studies will compare the Expedition 369 sites and other high-latitude OAE 2 sites to establish (1) whether significant changes in ocean circulation were coincident with OAE 2 (e.g., MacLeod et al., 2008; Martin et al., 2012), (2) over what depth ranges these changes, if any, occurred (Zheng et al., 2013), and (3) whether OAE 2 in the high-latitude Southern Hemisphere was coincident with major changes in sea-surface temperatures (Jarvis et al., 2006). We are particularly interested in establishing whether the C/T succession contained evidence for the "Plenus cold event," an important cooling (~4° to >5°C) event within OAE 2 known from the Northern Hemisphere. This event is associated with changes in surface water circulation (e.g., Zheng et al., 2013) and reoxygenation of bottom water, but it remains unclear whether the Plenus cold event was a global or regional phenomenon. Data from the high southern latitudes are currently lacking (Jenkyns et al., 2012); records from these sites would address this critical gap in our model for OAE 2.
3. Identify the main source regions for deep-water and intermediate-water masses in the southeast Indian Ocean and how these changed during the Gondwana breakup.
Over the past few years, study of Cretaceous intermediate and deep-water circulation patterns has been galvanized by an increase in published neodymium (Nd) isotopic data (e.g., Jiménez Berrocoso et al., 2010; Robinson et al., 2010; MacLeod et al., 2011; Martin et al., 2012; Murphy and Thomas, 2012; Jung et al., 2013; Moiroud et al., 2013; Voigt et al., 2013; Zheng et al., 2013). Nd isotopic values (expressed as εNd) have emerged as a promising proxy for reconstructing past circulation and are applicable across a wide range of water depths, including abyssal samples deposited below the carbonate compensation depth.
Typically measured on either phosphatic fossils (fish teeth, bones, and scales) or oxides leached from bulk samples, εNd values record a depositional to early diagenetic bottom water signature generally resistant to later diagenetic overprinting (Martin and Scher, 2004). The bottom water signature, in turn, reflects the εNd value of the source region of that water mass because Nd enters the ocean largely as riverine or eolian input, has a residence time shorter than the mixing time of the oceans, and behaves quasi-conservatively in seawater. Because εNd values in likely source regions vary by 10-15 units compared to an analytical precision of ~0.3 units, stratigraphic trends in εNd can be used to infer changes in circulation and mixing patterns through time. However, εNd values of samples are also influenced by local and global volcanic inputs, and the bottom water εNd signature of a water mass can be modified during circulation by a high particle flux or boundary exchange, especially near detrital sources.
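Concretely, measured 143Nd/144Nd ratios are expressed in epsilon units as parts per 10^4 deviation from the chondritic uniform reservoir (CHUR). The sketch below uses the standard present-day CHUR value; the two sample ratios are invented end-members for illustration, not expedition data.

```python
CHUR_143ND_144ND = 0.512638  # present-day chondritic uniform reservoir ratio

def epsilon_nd(ratio_143_144):
    """Express a measured 143Nd/144Nd ratio in epsilon units:
    parts per 10^4 deviation from CHUR."""
    return (ratio_143_144 / CHUR_143ND_144ND - 1.0) * 1.0e4

# Invented end-member ratios: old continental crust is strongly unradiogenic
# (negative epsilon-Nd), young volcanic sources are radiogenic (positive).
# The resulting spread is far larger than the ~0.3 unit analytical precision
# noted in the text, which is what makes the proxy usable.
print(round(epsilon_nd(0.51212), 1))  # continental-like source -> -10.1
print(round(epsilon_nd(0.51290), 1))  # volcanic-like source -> 5.1
```

A downcore shift of even 1-2 epsilon units between such end-members can therefore be read as a change in the mix of water mass sources, subject to the particle flux and boundary exchange caveats above.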
Cretaceous εNd data have been used to test, refine, and revise earlier circulation hypotheses that were based largely on carbon and oxygen isotopes. Nd isotopic data have also documented correlation between εNd shifts and both long-term climate trends and shorter bioevents (e.g., OAE 2) and demonstrated a degree of complexity within and among sites not predicted by early studies. The latter is particularly true for the Southern Ocean, where circulation changes, water column stratification changes, volcanic inputs, and establishment of a widespread source of Southern Component Water have all been invoked to explain observed patterns (e.g., Robinson and Vance, 2012; Murphy and Thomas, 2012; Jung et al., 2013; Voigt et al., 2013). Nd studies of samples from sites representing a range of depths within the Mentelle Basin/Naturaliste Plateau, when combined with parallel paleotemperature estimates from δ18O and TEX86 and documentation of calcareous microfossil assemblages, should help reduce uncertainty in interpretation of previous studies.
4. Characterize how oceanographic conditions changed at the Mentelle Basin during the Cenozoic opening of the Tasman Gateway and restriction of the Indonesian Gateway.
The Mentelle Basin sites are well positioned to monitor paleoceanographic variations in the Leeuwin Current/Undercurrent system. The surface Leeuwin Current is unique in that it flows poleward along the eastern boundary of the Indian Ocean (Figure F3). The current is caused by the north-south gradient between cooler waters to the south and warm surface waters along the northwestern Australian coast. These warmer waters are derived from the Indonesian Throughflow (ITF), which overrides the prevailing wind stress and results in the poleward flow (Pattiaratchi, 2006; Godfrey, 1996; Domingues et al., 2007; Waite et al., 2007). The strength of the Leeuwin Current varies seasonally with ITF strength and interannually with ENSO dynamics, strengthening in winter and under La Niña conditions. The intermediate Leeuwin Undercurrent is derived from an eddy system associated with the Flinders Current near the Naturaliste Plateau (Middleton and Cirano, 2002; Waite et al., 2007; Meuleners et al., 2007; Divakaran and Brassington, 2011). The Flinders Current and Leeuwin Undercurrent are conduits of the Tasman leakage, a pathway for return flow to the North Atlantic Ocean of deep water upwelled in the Pacific and a component of the Southern Hemisphere supergyre that links the subtropical gyres of the Atlantic, Pacific, and Indian Oceans (Speich et al., 2002, 2007; Ridgway and Dunn, 2007; van Sebille et al., 2012). The Tasman leakage allows interconnection of Antarctic Intermediate Water (AAIW) in the Pacific, Atlantic, and Indian Oceans, and the Flinders Current-Leeuwin Undercurrent system seems to play a role in the conversion of Subantarctic Mode Water to AAIW (Ridgway and Dunn, 2007).
Deep water in the Mentelle Basin is derived from Antarctic Bottom Water (AABW) and lower Circumpolar Deep Water that enter the Perth Basin between the Mentelle Basin and Broken Ridge, with substantial upwelling of AABW in the southern portion of the Perth Basin (Sloyan, 2006; McCartney and Donohue, 2007). Coring in the Mentelle Basin complements previous drilling around Australia, especially off Western Australia (DSDP Leg 27, ODP Legs 122 and 123, and IODP Expedition 356) and southern Australia and Tasmania (ODP Legs 182 and 189). Coring recovered Paleocene to Eocene and upper Miocene to recent sequences at deep-water and intermediate-water locations. Recovered material from this expedition will contribute to investigations of (1) the early Paleogene greenhouse climate at a high-latitude (~60°S) site, (2) oceanographic changes in the early stages of the opening of the Tasman Gateway, and (3) oceanographic changes during the late stages of restriction of the Indonesian Gateway.
5. Resolve questions about the volcanic and sedimentary origins of the basin and provide stratigraphic control on the age and nature of the prebreakup succession.
The interlinked aspects of the geology and evolution of the Naturaliste Plateau and Mentelle Basin suggest that recovering the volcanic rocks at the Valanginian/late Hauterivian unconformity will further our understanding of this region. Drilling the unconformity provides information on

• The timing and position of the breakup (both on the western and southern margins, using paleomagnetic studies and 40Ar/39Ar dating of lavas);
• The nature of the various phases of volcanism (core description, petrophysics, and geochemical and isotopic study);
• Geographic and environmental reconstructions; and
• The depositional history of the basin.
The high-amplitude, discontinuous reflectors overlying the interpreted Valanginian breakup unconformity are attributed principally to extrusive volcanics that flowed into the Mentelle Basin at the time of the breakup with Greater India. Drilling into the Valanginian volcanics and pre-Valanginian sediments (Sites U1513 and U1515) provides stratigraphic control on the age and nature of the prebreakup succession and early rifting in the Mentelle Basin. Dating and analysis of the postbreakup sediments will provide needed information about the development of the Mentelle Basin and particularly the influence of the later rifting with Antarctica. The results from this expedition will address a number of key tectonics questions for the region.
Site background and objectives
The objective for coring Site U1512 (Table T1) was to obtain a continuous Upper Cretaceous record of marine black shales in the GAB across OAE 2, which straddles the C/T boundary. Our plan was to compare the Site U1512 sediment record with coeval Expedition 369 sequences cored in the Mentelle Basin to characterize the geochemical and biological responses to extreme global carbon cycle perturbations in different paleoceanographic settings at high southern latitudes. Site U1512 lies in the GAB in ~3000 m of water on the continental slope. During the Cretaceous, the GAB was situated in the central to eastern end of a partial seaway (the AAG), with the Mentelle Basin and Naturaliste Plateau in the open ocean at the western gateway that connected the AAG with the southern Indian Ocean. Although the OAE 2 target was not reached because it is significantly deeper than anticipated, studies of the recovered sequence of early Turonian to late Santonian marine claystone will provide new insight into the evolution of Late Cretaceous climate and oceanography in the region of the AAG.
Lithostratigraphy
The sedimentary sequence in Hole U1512A is divided into two main lithostratigraphic units ( Figure F4). Unit I is a 10.06 m thick Pleistocene sequence of pinkish to white calcareous ooze with sponge spicules. The unit extends from the beginning of the hole to 10.06 m core depth below seafloor, Method A (CSF-A) (Sections 369-U1512A-1R-1 through 2R-1; 0-10.2 m CSF-A). The unit consists of medium and thick beds that are massive and lack physical and biological sedimentary structures. In this unit, biogenic grains are the major constituent, and they comprise dominant calcareous nannofossils, abundant foraminifers, and common sponge spicules. Unit II is a 690.32 m thick sequence of silty clay that gradationally transitions into silty claystone (Sections 2R-1 through 73R-CC at the bottom of the hole; 10.2-701.4 m CSF-A). This unit is black to dark gray mottled silty claystone composed of quartz, clay minerals, pyrite, siderite, and dolomite with varying degrees of bioturbation. Micropaleontological analysis indicates a Late Cretaceous (Santonian to Turonian) age for this unit. Unit II is further divided into Subunits IIa (silty clay) and IIb (silty claystone) based on the degree of sediment lithification. Subunit IIa is 75.10 m thick and composed of very dark greenish gray to black unlithified silty clay. It is characterized by the presence of pyrite, both as nodules and in a disseminated form within the silty clay. Zeolite, foraminifers, calcareous nannofossils, and sponge spicules are present in trace amounts throughout the subunit. Inoceramid bivalve fragments and alteration halos occur frequently throughout the subunit. Subunit IIb is 615.22 m thick and composed of lithified black silty claystone. Included in this subunit are 23 thin to medium beds of glauconitic and sideritic sandstone that are no thicker than 32 cm and are mostly massive with normal grading. 
Bioclast traces present in this subunit include foraminifers, calcareous nannofossils, radiolarians, sponge spicules, and organic matter.
Biostratigraphy and micropaleontology
Samples from all Hole U1512A core catchers were analyzed for calcareous nannofossils, planktonic foraminifers, and benthic foraminifers. In addition, calcareous nannofossil assemblages were evaluated from split core sections. Observations of other distinctive and potentially age or environmentally diagnostic microfossil groups, such as organic-walled dinoflagellate cysts (dinocysts), radiolarians, fish debris, and inoceramid prisms, were also made in all core catcher samples. Calcareous nannofossil datums form the chronologic framework for Hole U1512A because they are most consistently present. In contrast, planktonic foraminifers are rare. Similarly, rare dinocyst taxa from Cores 369-U1512A-47R through 70R (~440-672 m CSF-A) and radiolarians from Cores 5R through 35R (~38-333 m CSF-A) provide valuable additional age confirmation of Upper Cretaceous sediments.
Stratigraphic positions of calcareous nannofossil biozones and age assignments for Site U1512 are presented in Figure F4. The calcareous plankton indicate Pliocene to Pleistocene-age sediments from 0 to 11.63 m CSF-A, but no ages were assigned to samples from this level to 42.40 m CSF-A because either no microfossils were found or studied samples include mixed Cretaceous and Neogene ages. Calcareous nannofossils occur throughout most of the underlying Santonian to Albian sequence, although they are mostly rare with generally moderate to poor preservation. Cretaceous planktonic foraminifers are rare and poorly preserved. However, where present, the ages of these foraminifers are consistent with those of calcareous nannofossils. No calcareous microfossils were found in an upper Turonian to Coniacian interval from 211 to 296 m CSF-A. Tubular agglutinated forms dominate benthic foraminiferal assemblages at Site U1512 and indicate either a lower to midbathyal environment or a marginal/restricted environment throughout the Late Cretaceous.
Paleomagnetism
The natural remanent magnetizations (NRMs) of all archive-half core sections and 21 discrete samples collected from the working halves of Hole U1512A were measured. The archive halves were stepwise demagnetized with alternating fields (AF) up to 30 mT and measured with the pass-through superconducting rock magnetometer (SRM) at 5 cm intervals. The NRM intensity of the sections is relatively weak, varying from 1.5 × 10⁻⁵ to 7.8 × 10⁻² A/m with a mean of 5.5 × 10⁻⁴ A/m. Drilling-induced magnetic overprints can generally be removed by AF demagnetization at 10-20 mT. Inclinations of the characteristic remanent magnetizations (ChRMs) are predominantly negative, ranging from around −70° to −20° and indicating predominantly normal polarity. The uppermost ~80 m display a very noisy signal because of the significant coring disturbance (biscuiting) introduced by the rotary coring process. Positive inclination values occur between 0 and 75, 175 and 190, and 256 and 259 m CSF-A. The intervals from ~0 to 75 and 175 to 190 m CSF-A also exhibit sporadic or consecutive negative ChRM inclinations mixed with dominantly positive inclinations, making it impossible to assign magnetic polarity. The interval between 256 and 259 m CSF-A exhibits consistent downward-pointing paleomagnetic inclinations, defining a zone of reversed polarity probably associated with a short geomagnetic excursion.
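As a rough plausibility check on these inclination values (not part of the shipboard workflow), the geocentric axial dipole (GAD) model relates magnetic inclination I to site latitude λ via tan(I) = 2 tan(λ). A minimal sketch, assuming the ~60°S paleolatitude quoted elsewhere in this summary for the region; at that latitude, normal polarity predicts an inclination near −74°, close to the steep end of the observed −70° to −20° range:

```python
import math

def gad_inclination(latitude_deg):
    """Expected magnetic inclination (degrees) under the geocentric
    axial dipole (GAD) model: tan(I) = 2 * tan(latitude)."""
    return math.degrees(math.atan(2.0 * math.tan(math.radians(latitude_deg))))

# At ~60 degrees S, normal polarity predicts a steeply negative inclination.
print(round(gad_inclination(-60.0), 1))  # -> -73.9
```

Shallower observed inclinations would then reflect overprints, coring disturbance, or incomplete demagnetization rather than the time-averaged field.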
Shipboard micropaleontological studies suggest that Core 369-U1512A-5R (38.4-47.4 m CSF-A) is Santonian and the base of Core 73R is early Turonian. Therefore, the majority of the sedimentary cores from 38.4 to 700 m CSF-A document the uppermost segment of the 41.5 My Cretaceous Normal Superchron (CNS) C34n.
Petrophysics
Physical property data were obtained with the Whole-Round Multisensor Logger (WRMSL), Natural Gamma Radiation Logger (NGRL), P-wave velocity caliper (PWC), Section Half Multisensor Logger (SHMSL), and discrete samples. WRMSL P-wave measurements were discontinued deeper than Core 369-U1512A-11R because of poor contact between the core sections, their liners, and the caliper. Natural gamma radiation (NGR) values (Figure F4) average 32.8 counts/s, and bulk density estimates from gamma ray attenuation (GRA) average 1.7 g/cm³. GRA bulk density values in siltstone/claystone do not exceed 2.2 g/cm³, whereas in siderite nodules and glauconitic sandstones they increase to 3.28 g/cm³. WRMSL magnetic susceptibility values average 9.35 instrument units (IU) and do not exceed 16 IU in claystone and siltstone, but magnetic susceptibility increases to 253.58 IU in glauconitic sandstone and siderite; point measurements from the SHMSL agree with these trends. At scales longer than 10 m, the NGR and magnetic susceptibility records do not correlate over silty/clayey intervals from Cores 3R through 15R, possibly because of the high abundance of pyrite differentially influencing the magnetic susceptibility values. The NGR and GRA bulk density records display parallel trends in this interval. From Core 16R to Core 62R, pyrite abundance markedly decreases, and all three data types (magnetic susceptibility, NGR, and GRA bulk density) display similar trends. From Core 63R to Core 73R, both NGR and magnetic susceptibility decrease, but GRA bulk density remains stable. At shorter scales (<10 m), magnetic susceptibility, GRA bulk density, and NGR show high-amplitude, 3-5 m thick cycles in Cores 10R through 19R, 34R through 43R, and 62R through 73R.

Table T1. Expedition 369 hole summary. DSF = drilling depth below seafloor. APC = advanced piston corer, HLAPC = half-length APC, XCB = extended core barrel, RCB = rotary core barrel.
P-wave velocity in the silty claystone ranges from 1670 to 2346 m/s, although faster velocities (3397-5774 m/s) were obtained for the discrete layers of sideritic sandstone. High-resolution (2 cm) reflectance spectroscopy and colorimetry data from archive-half core sections display high-amplitude variability.
On average, three discrete moisture and density (MAD) samples were taken from each core. Overall, the MAD results show that bulk density increases with depth, whereas grain density and porosity decrease. The bulk density of the dark silty claystone is 1.54-2.37 g/cm³, and the density of the sideritic sandstone intervals ranges from 3.21 to 3.49 g/cm³. The porosity of the silty claystone is 28%-65%, with most measurements between 40% and 48%. The porosity of the sideritic sandstone ranges from 5% to 13%.
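The MAD quantities above derive from three measurements per sample: wet mass, dry mass, and dry (pycnometer) volume. A simplified sketch of the calculation, assuming a pore water density of 1.024 g/cm³ and omitting the pore water salt correction applied in the full shipboard method; the input values in the example are hypothetical:

```python
# Simplified sketch of a shipboard-style moisture and density (MAD)
# calculation; the full IODP method also corrects for salt precipitated
# from pore water during drying.
PW_DENSITY = 1.024  # assumed pore water density, g/cm^3

def mad_properties(wet_mass_g, dry_mass_g, dry_volume_cm3):
    """Return (bulk density g/cm^3, grain density g/cm^3, porosity fraction)."""
    pw_mass = wet_mass_g - dry_mass_g        # mass of evaporated pore water
    pw_volume = pw_mass / PW_DENSITY         # pore space volume
    wet_volume = dry_volume_cm3 + pw_volume  # total (wet) sample volume
    bulk_density = wet_mass_g / wet_volume
    grain_density = dry_mass_g / dry_volume_cm3
    porosity = pw_volume / wet_volume
    return bulk_density, grain_density, porosity

# Hypothetical sample: 20 g wet, 15 g dry, 6 cm^3 dry volume.
bulk, grain, phi = mad_properties(20.0, 15.0, 6.0)
```

With these inputs the sketch yields a bulk density of ~1.84 g/cm³, a grain density of 2.5 g/cm³, and a porosity of ~45%, i.e., values in the range reported for the silty claystone.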
Potassium (K), uranium (U), and thorium (Th) content were deconvolved from the NGR data. U/Th ratios are <0.2 throughout the entire hole, indicating oxic conditions during deposition. K/Th ratios sharply decrease in the uppermost 100 m, possibly because of the presence of salt in this interval, which may increase the K content.
One downhole logging run measured NGR, density, sonic velocity, and resistivity values of the borehole wall using a modified triple combination (triple combo) tool string with an added Dipole Shear Sonic Imager (Quambo). Excellent borehole stability and favorable low-heave weather conditions permitted logging of the entire open borehole. Inclinometer readings progressively increase from roughly 0° shallower than 210 m wireline log matched depth below seafloor (WMSF) to 27° near the base of the hole, indicating that borehole orientation deviated from vertical during coring. Background trends in the density, NGR, and resistivity logs are relatively stable in the upper 300 m of the borehole. Below that interval, each of these three logs records minimum values near 325 m WMSF, which increase downhole to plateaus approaching maxima for the hole. Additionally, the density and resistivity logs preserve thin spikes in values that likely correspond to the thin sideritic and glauconitic sandstone beds commonly observed (see Lithostratigraphy). In general, downhole measurements record trends similar to those observed in the physical properties measured from the cores, such as meter-scale cyclicity in NGR, and provide a continuous petrophysical stratigraphy that spans occasional gaps in core recovery.

See Table T5 in the Site U1512 chapter (Huber et al., 2019a) for calcareous nannofossil event definitions.
Geochemistry
The geochemistry program was designed to characterize the composition of interstitial water and bulk sediments and to assess the potential presence of volatile hydrocarbons for routine safety monitoring. A total of 73 headspace gas samples were taken; hydrocarbons were detected in 68 samples. In Cores 369-U1512A-1R through 9R, headspace samples were free of gas or had very low concentrations. Below this, methane was the dominant gas detected (as high as 104,000 ppmv), with very minor ethane and occasional propane (as high as 653 and 148 ppmv, respectively). Methane/ethane ratios (Figure F4) suggest a transition from biogenic production shallower than ~473 m CSF-A to possible thermogenic sources below that depth.
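The biogenic-versus-thermogenic interpretation above rests on the methane/ethane (C1/C2) ratio: very high ratios are conventionally read as microbial gas and low ratios as a thermogenic contribution. A minimal sketch using commonly quoted rule-of-thumb thresholds (illustrative only, not the shipboard criteria; the example concentrations are hypothetical):

```python
def gas_origin(c1_ppmv, c2_ppmv):
    """Crude classification of hydrocarbon gas source from the
    methane/ethane (C1/C2) ratio. Thresholds (~1000 and ~100) are
    widely used rules of thumb, not the shipboard cutoffs."""
    if c2_ppmv == 0:
        return "biogenic (no ethane detected)"
    ratio = c1_ppmv / c2_ppmv
    if ratio > 1000:
        return "likely biogenic"
    if ratio < 100:
        return "possible thermogenic contribution"
    return "mixed/indeterminate"

# Hypothetical headspace results:
print(gas_origin(50000, 10))   # ratio 5000 -> likely biogenic
print(gas_origin(5000, 100))   # ratio 50   -> possible thermogenic contribution
```

A downhole trend from very high toward lower C1/C2 ratios, as reported here near ~473 m CSF-A, is the usual signature of increasing thermogenic input with depth.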
For interstitial water analyses, 46 samples were recovered from whole-round squeezing of sediment intervals. As a result of sediment lithology, interstitial water yield was low for the majority of the cores recovered deeper than Core 369-U1512A-12R (~115 m CSF-A). The final interstitial water sample was taken from Core 59R (~560 m CSF-A). Interstitial water salinity generally decreases with depth because of decreases in sulfate (SO₄²⁻), magnesium (Mg), and K concentrations and general decreases in sodium (Na), bromide (Br⁻), and chloride (Cl⁻) possibly caused by low-salinity water present at greater depths. The dissolved Mg, K, and boron (B) concentration profiles reflect alteration of volcanic material and clay mineral formation. Sulfate is readily depleted in the upper ~93 m (Core 10R) of the sedimentary column (Figure F4) because of intense bacterial SO₄²⁻ reduction and is accompanied by synchronous increases in ammonium (NH₄⁺), alkalinity, and lithium (Li), along with high barium (Ba) values in samples below the depth where SO₄²⁻ is exhausted. Alkalinity ranges from 4.4 to 16.52 mM with a maximum at 93.55 m CSF-A, and pH ranges from 7.76 to 7.97 with a slight decrease downhole; both measurements were limited to the uppermost ~130 m because of the small interstitial water volumes obtained from deeper samples. Dissolved calcium (Ca) and to a lesser degree strontium (Sr) increase toward 300 m CSF-A, most likely due to carbonate diagenesis. Decreasing Ca and Sr concentrations deeper than 300 m CSF-A indicate that carbonate dissolution/recrystallization may prevail at depth. Dissolved silicon (Si) shows a short positive excursion between 310.68 and 329.27 m CSF-A (Figure F4), reflecting the presence of biogenic opal-A in the sediment; lower values from deeper than Core 35R may be due to the opal-A to opal-CT (cristobalite/tridymite) transformation.
The elevated manganese (Mn) concentration demonstrates the reducing character of the entire sedimentary sequence.
CaCO₃ content from Cores 369-U1512A-1R through 73R varies from 0.06 to 6.66 wt%, except for Core 1R, which is composed of calcareous ooze with 90.97 wt% CaCO₃. The low carbonate percentages in Cores 2R through 73R reflect the very low contribution of calcareous nannofossil and foraminiferal components to the sediment. TOC, which is predominantly terrestrially derived, ranges from 0.20 to 1.31 wt%, and total nitrogen (TN) ranges from <0.01 to 0.10 wt%. TOC/TN ratios generally decrease with depth, ranging from 6.30 to 31.19. This trend is likely caused by decomposition of N-containing terrestrial organic matter that released carbon as methane but retained the produced NH₄⁺ in clays. Eight freeze-dried bulk sediment samples with TOC >1 wt% were analyzed on the source rock analyzer. The results show very low hydrogen index values, and the kerogen in the samples was classified as Type III, suggesting a predominantly terrestrial origin for the organic carbon.
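The hydrogen index reported by a source rock analyzer is the pyrolysis S2 yield normalized to TOC, and low values point to hydrogen-poor Type III (terrestrial) kerogen. A hedged sketch of that classification, using commonly quoted HI boundaries rather than the shipboard cutoffs, with hypothetical input values:

```python
def hydrogen_index(s2_mg_hc_per_g_rock, toc_wt_pct):
    """Rock-Eval-style hydrogen index: HI = 100 * S2 / TOC (mg HC/g TOC)."""
    return 100.0 * s2_mg_hc_per_g_rock / toc_wt_pct

def kerogen_type(hi):
    """Very coarse kerogen classification from HI; the boundaries are
    commonly quoted rules of thumb, not the shipboard criteria."""
    if hi > 600:
        return "Type I (lacustrine algal)"
    if hi > 300:
        return "Type II (marine)"
    if hi > 50:
        return "Type III (terrestrial)"
    return "Type IV (inert)"

# Hypothetical sample: S2 = 0.8 mg HC/g rock at 1.0 wt% TOC gives HI = 80.
print(kerogen_type(hydrogen_index(0.8, 1.0)))  # -> Type III (terrestrial)
```

A low HI like this, combined with high TOC/TN ratios, is consistent with the terrestrial organic matter source inferred above.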
Stratigraphic correlation
Only one hole was cored at Site U1512 with the rotary core barrel (RCB) system. Recovery was excellent, exceeding 100% in 24 of the 73 cores recovered, and total recovery was 90%. Distinctive features for correlation included large-scale (>10 m) trends and changes of variable amplitude on shorter scales (1-5 m) in NGR and magnetic susceptibility data, as well as distinct peaks in these measurements corresponding to sandstone layers. Although coring a single hole precluded hole-to-hole correlation, recognition of matching features in NGR records from the Hole U1512A cores and wireline logs permitted correlation from the CSF-A scale to the WMSF scale.
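Matching core NGR to wireline NGR amounts to finding the depth shift that best aligns the two series. A toy sketch of that idea using lagged correlation (illustrative only; the shipboard CSF-A to WMSF mapping is done feature by feature, and all names and data here are hypothetical):

```python
import numpy as np

def best_depth_shift(core_ngr, log_ngr, max_lag):
    """Return the lag (in samples) of log_ngr relative to core_ngr that
    maximizes the Pearson correlation of the overlapping parts.
    Both series are assumed to share the same depth sample spacing."""
    n = len(core_ngr)
    best_lag, best_r = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = core_ngr[lag:], log_ngr[:n - lag]
        else:
            a, b = core_ngr[:n + lag], log_ngr[-lag:]
        r = np.corrcoef(a, b)[0, 1]
        if r > best_r:
            best_lag, best_r = lag, r
    return best_lag

# Synthetic check: a series offset by 5 samples is recovered as lag -5.
x = np.sin(np.linspace(0.0, 20.0, 220))
log_ngr, core_ngr = x[:200], x[5:205]
print(best_depth_shift(core_ngr, log_ngr, 10))  # -> -5
```

Multiplying the recovered lag by the depth sample spacing gives the core-to-log depth offset for that interval.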
Age-depth model and sedimentation rates
Sedimentation rates are 36 m/My for Santonian Zones CC17 and CC16 and 19 m/My for Zone CC15 (Figure F4). The interval from the uppermost Coniacian to middle Turonian, which encompasses Zones CC14 and CC13, has an average sedimentation rate of 63 m/My. Sedimentation rates accelerate markedly in the lower to middle Turonian Zone CC12 to 272 m/My. This estimate does not correct for the hole deviation from vertical (see Petrophysics).
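The vertical-deviation caveat is straightforward to quantify: for a borehole inclined at angle θ from vertical, true stratigraphic thickness is the cored length times cos(θ), so apparent rates shrink by the same factor. A sketch using the 27° maximum deviation reported near the base of the hole (pairing that angle with the 272 m/My interval is purely illustrative; the actual deviation over that interval may be smaller):

```python
import math

def corrected_rate(apparent_rate_m_per_my, deviation_deg):
    """Scale an apparent sedimentation rate by cos(deviation) to correct
    cored thickness for a borehole inclined away from vertical."""
    return apparent_rate_m_per_my * math.cos(math.radians(deviation_deg))

print(round(corrected_rate(272.0, 27.0), 1))  # -> 242.4
```

Even at the maximum deviation, the correction reduces the rate by only ~11%, so the marked Turonian acceleration is robust.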
Background and objectives
The objectives for coring Site U1513 (Table T1) on the western margin of the Mentelle Basin were to (1) obtain a continuous Late Cretaceous sediment record to better document the rise and fall of the Cretaceous hot greenhouse climate at southern high latitudes (~60°S paleolatitude), (2) characterize how oceanographic conditions changed during the Cenozoic opening of the Tasman Gateway and the restriction of the Indonesian Gateway, and (3) obtain basalt from the base of the sedimentary sequence to provide stratigraphic control on the age and nature of the pre-Gondwana breakup succession. A particularly important goal was to obtain a complete OAE 2 sequence across the C/T boundary to characterize associated biotic, oceanographic, and climatic changes. The Site U1513 sequence will be compared with coeval Expedition 369 sections cored elsewhere in the Mentelle Basin and with other ocean drilling (e.g., DSDP Site 258) and industry data from the Western Australia margin and in the GAB to identify any regional differences in the geochemical and biological responses to the OAEs and Cretaceous and Neogene ocean circulation history.
Lithostratigraphy
The Site U1513 cored section is divided into six lithostratigraphic units, five sedimentary and one igneous, based on a combination of data from Holes U1513A, U1513B, U1513D, and U1513E. Lithostratigraphic units and boundaries are defined by changes in lithology identified by macroscopic core description, microscopic examination of smear slides and thin sections, and X-ray diffraction (XRD) and X-ray fluorescence (XRF) analyses. Each extrusive sequence is generally bounded by chilled margins but also by faults or textural and color changes. Most discrete flows appear to be massive, thin sheets of olivine ± pyroxene- or plagioclase-phyric (some megacrystic) basalt. The least-altered portions of the lowermost sequence (Unit 7) show a higher degree of vesicularity and highly angular vesicles that may indicate subaerial to very shallow eruption depths. A xenolith-bearing diabase dike intrudes the flow sequences. The contact between the xenolith-bearing diabase dike and the extrusive flows is defined by either faulted or chilled margins with alteration halos. These flows show a lesser degree of alteration in Hole U1513E than in Hole U1513D, and preliminary megascopic and thin section analyses reveal the original porphyritic, microcrystalline, or vesicular textures, with some of the lowermost flows showing crosscutting lineation features and an absence of minor intrusion intervals.
Biostratigraphy and micropaleontology
Samples from all core catchers from Holes U1513A and U1513D and selected samples from Hole U1513B were analyzed for calcareous nannofossils, planktonic foraminifers, and benthic foraminifers. In addition, samples from split core sections were also evaluated for calcareous nannofossils and/or planktonic foraminiferal assemblages, as necessary. Observations of other distinctive and potentially age or environmentally diagnostic microfossil groups such as dinocysts, radiolarians, ostracods, fish debris, bryozoans, small corals, and inoceramid prisms were also made for all core catcher samples. Calcareous nannofossil and planktonic foraminiferal events form the chronologic framework for Site U1513 shallower than 450 m CSF-A.
Stratigraphic positions of calcareous nannofossil biozones and age assignments for Site U1513 are presented in Figure F5. Abundant planktonic foraminifers and calcareous nannofossils form the biostratigraphic framework of Site U1513 and indicate recovery of Pleistocene to Miocene strata unconformably overlying a lowermost Campanian to Albian sequence. Preservation of nannofossils is good to excellent throughout the sequence, and preservation of planktonic foraminifers varies throughout the sequence from very poor to excellent, with samples showing minimal or no evidence of recrystallization at several intervals in the Cretaceous. Sediments sampled from deeper than 372 m CSF-A are predominantly barren of planktonic and benthic foraminifers, and samples from deeper than 437 m CSF-A are barren of calcareous nannofossils.
Benthic foraminifers indicate a bathyal water depth throughout Site U1513 and are dominated by calcareous taxa above the middle Turonian and by agglutinated taxa in lower Turonian through upper Albian samples.
Paleomagnetism
The NRMs of all archive-half core sections and 98 discrete samples collected from the working halves of Holes U1513A, U1513B, U1513D, and U1513E were measured. The archive halves were stepwise treated with up to 20 or 30 mT AF demagnetization and measured with the pass-through SRM at 5 cm intervals. Discrete samples were progressively demagnetized up to 60 or 80 mT and measured with the spinner magnetometer or the SRM. The NRM intensity of the recovered cores is 10⁻⁵ to 1 A/m and broadly co-varies with lithology. The calcareous ooze and chalk in the upper part and the basalt in the basal part of Hole U1513D display the weakest and the strongest NRM intensity, respectively. Despite the weak NRM of the calcareous ooze/chalk, the demagnetization results after 20 mT show inclination zones of dominant positive and negative values, defining a magnetic polarity sequence from Chron C1n to Chron C2An.3n for the uppermost ~65 m. The inclinations in the ~65-455 m CSF-A interval are mostly scattered, and dominant negative values from 200 to 450 m CSF-A indicate a normal polarity that is assigned to Chron C34n based on shipboard biostratigraphy. The inclinations deeper than 455 m CSF-A exhibit a distinct pattern of zones of either positive or negative values, establishing a well-defined magnetic polarity sequence (Figure F5). The polarity sequence between 455 and ~690 m CSF-A is tentatively correlated with Chrons M0r-M10n, indicating the absence of most of the Aptian strata and increasing sedimentation rates between ~530 and ~690 m CSF-A. The well-defined reversed and normal polarities deeper than ~690 m CSF-A occur in the basalt unit and cannot be correlated with the geomagnetic polarity timescale (GPTS) without age constraints from the basalt.
Petrophysics
Physical property data were obtained with the WRMSL, NGRL, PWC, and SHMSL and on discrete samples. Cores from the uppermost 35 m exhibit cyclicity in NGR (~15 counts/s amplitude; ~5 m thickness), and the measurements were deconvolved into U, Th, and K concentrations. The C/T boundary interval shows a distinct NGR plateau of ~40 counts/s at ~240-245 m CSF-A. Additionally, NGR values preserve a broad trend to higher counts throughout a mudstone interval spanning from 230 to 455 m CSF-A with a trough near 320 m CSF-A (Figure F5). Below a contact with underlying volcaniclastic sandstone at 455 m CSF-A, NGR decreases by nearly an order of magnitude from 75 to 10 counts/s, and magnetic susceptibility increases by two orders of magnitude from ~10 to ~1000 IU. Similarly, both grain and bulk density step to higher values across this transition. NGR values, more specifically U content, spike across an interval near 675 m CSF-A, possibly signifying abundant terrestrial organic matter. The indurated breccia and crystalline rocks in lithostratigraphic Unit VI show spikes in magnetic susceptibility and density and have nearly undetectable NGR counts. In the overlying sedimentary sequence (Units I-V), porosity and PWC measurements show a generally gradual but punctuated change to lower and higher values, respectively.
Downhole logging was conducted in Holes U1513A, U1513D, and U1513E using several downhole tool configurations, including the Quambo, which measures NGR, density, sonic velocity, and resistivity; the traditional triple combo; and the Formation MicroScanner (FMS).
Geochemistry
The Site U1513 geochemistry program was designed to characterize interstitial water and bulk sediment composition and to assess the potential presence of volatile hydrocarbons for routine safety monitoring. Samples were taken from Holes U1513A and U1513D. All 90 headspace gas samples showed only low concentrations of methane (≤60 ppmv) and trace levels of ethane and propane.
For interstitial water analyses, 60 samples were recovered from squeezing 10 cm whole rounds from 0-366.4 and 471.8-687.3 m CSF-A. Salinity was generally constant, with the exception of distinctly fresher interstitial water between 281.8 and 303.0 m CSF-A. This interval of low salinity is also apparent in the Br⁻ and Cl⁻ profiles. Mg, K, and Na concentration profiles reflect the alteration of volcanic material found in lithostratigraphic Units IV and V. No evidence for significant sulfate reduction was detected; sulfate is present in all samples, and Ba concentrations are correspondingly low. Ca (Figure F5) and Sr concentration profiles primarily reflect the release of these elements during alteration of volcanic material. Li appears to have been released in Unit IV and then incorporated into alteration products in Unit V. Dissolved Si reflects the presence of biogenic opal-A in Units I and II; lower concentrations in Units III and V may reflect the opal-A to opal-CT and opal-CT to quartz transitions, respectively. Elevated Mn concentration demonstrates the reducing character of the sedimentary sequence below Unit I. In addition, 129 bulk sediment samples were collected downhole to ~690 m CSF-A (Core 369-U1513D-65R), the contact with igneous material.

Figure F5. Site U1513 summary. Recovery and data from Hole U1513E are not included, but data are consistent with the bottom of Hole U1513D. Hole U1513C (~17 m) was sampled completely on the catwalk. NGR: green = Hole U1513A, light blue = Hole U1513B, dark blue = Hole U1513D, yellow shading = seafloor-anchored and floating spliced intervals. RGB green: dark green data curve = 50-point moving average. See Tables T5 and T6 in the Site U1513 chapter (Huber et al., 2019b) for calcareous nannofossil and planktonic foraminifer event definitions.
Additional samples were measured at a higher resolution through the putative OAE 2 and 1d intervals (between Sections 17R-4 and 19R-CC; black shales with high TOC between 246.32 and 247.34 m CSF-A). CaCO 3 content varies from 0 to 93 wt%, reflecting variations in lithology. TOC is broadly <1 wt% except in the thin black shales, where TOC reaches 10.5 wt%. TN is generally below detection. A total of 57 samples with TOC ≥0.8 wt% from the possible OAE 2 and 1d intervals were analyzed with the source rock analyzer. Samples with a higher TOC content (>3 wt%) were found to contain dominantly marine organic matter, whereas the source of the organic matter in low-TOC samples could not be determined.
Stratigraphic correlation
Recovery in any one hole at Site U1513 ranged from poor to excellent, but when combined, overall recovery was excellent for most of the penetrated interval, which spans the Valanginian through the present day. Splices were constructed for the 0-95 m core composite depth below seafloor (CCSF) intervals (Holes U1513A and U1513B) and for the 220-295 m CCSF intervals (Holes U1513A and U1513D) (Figure F5). These splices cover the late Miocene through recent and the middle Cenomanian through middle Turonian, respectively, as estimated from bio- and magnetostratigraphy. Portions of both splices were formed by appending subsequent cores from the same hole because of aligned core breaks or poor recovery in the other hole (i.e., there was no bridge across core breaks in these intervals). However, correlation to downhole logging data minimized the uncertainty introduced by this approach. The 95-220 m CSF-A interval was recovered in Holes U1513A and U1513D. Despite this, no splice was attempted in this interval because recovery was too low to meaningfully correlate at the meter scale, but pooled data suggest recovery should be sufficient to generate good records with 1 My resolution. The interval from 295 to 757.4 m CSF-A was only cored in Hole U1513D, but recovery was generally very good to excellent, averaging 82% across ~70 m of basalt and basaltic breccia (lithostratigraphic Unit VI) and 75% over the 395 m of overlying sandstones and claystones (Unit V) between the basalt and the lower splice. The oldest biostratigraphic date for these overlying sediments is middle Albian, although magnetostratigraphy suggests portions could be older.
Age-depth model and sedimentation rates
Sedimentation rates are presented in Figure F5 for the Albian through Campanian portion of Site U1513. Sedimentation rates averaged ~12 m/My from the Albian through Coniacian but dropped appreciably during the Santonian and lower Campanian to only 8 m/My, with an apparent rate of only 3 m/My in the Santonian. Alternatively, part of the Santonian may be missing due to a hiatus in sediment accumulation. Sediment accumulation rates deeper than 450 m CSF-A are based on the paleomagnetic record. Sedimentation rates for the Barremian to upper Hauterivian (Chron M0 to the base of Chron M8r) averaged approximately 10 m/My, whereas estimated rates for the lower Hauterivian and Valanginian (Chron M9 to Chron M10) are ~132 m/My.
Background and objectives
Site U1514 (Table T1) is the northernmost and deepest site targeted during Expedition 369. The greater paleodepth of the site relative to other sites cored in the Mentelle Basin provides the opportunity to characterize the evolution of deep-water circulation in this region during the final phase of breakup among the Gondwana continents. Because Site U1514 is located at a high paleolatitude (~60°S), the sediments there preserve a paleoclimate record that serves as a highly sensitive monitor of global climatic changes. The site was expected to sample a series of Cenozoic and possibly Late Cretaceous sedimentary drifts and erosional features that would enable greater insight into the early and later phases of the opening of the Tasman Gateway and restriction of the Indonesian Gateway. The current seabed is composed of Paleogene/Neogene/Quaternary oozes that sit unconformably on the Cretaceous (Maloney et al., 2011).
The primary objectives for coring Site U1514 were to (1) obtain a continuous Cenozoic sediment record in the Mentelle Basin to characterize how oceanographic conditions changed during the Cenozoic opening of the Tasman Gateway and the restriction of the Indonesian Gateway; (2) reconstruct middle through Late Cretaceous paleotemperature changes to document initiation of the Cretaceous hot greenhouse climate, the duration of extreme warmth, and the timing of the switch to a cooler climate; and (3) obtain a complete and well-preserved sediment record across mid-Cretaceous OAEs to better understand their cause and accompanying changes in the climate-ocean system and the marine biota.
Lithostratigraphy
The Site U1514 cored section is divided into three main lithostratigraphic units based on data from Holes U1514A and U1514C, and Units I and III are further divided into two subunits ( Figure F6). Lithostratigraphic units and boundaries are defined by changes in lithology identified by macroscopic core description, microscopic examination of smear slides, and XRD and XRF analyses.
Lithostratigraphic Unit I is an 81.20 m thick Pleistocene to Eocene sequence of very pale brown to pale yellow nannofossil ooze, foraminiferal ooze, and sponge spicule-rich nannofossil ooze. The unit is divided into Subunits Ia and Ib at 30.38 m CSF-A in Hole U1514A. Subunit Ia is Pliocene to Pleistocene in age, whereas Subunit Ib spans the Miocene to Eocene and differs from Subunit Ia by an increased abundance of sponge spicules. Furthermore, the color of Subunit Ib is yellow-brown and distinctively darker than Subunit Ia. Unit II is a 308.01 m thick Eocene to Paleocene sequence of light greenish gray clayey nannofossil ooze, sponge spicule-rich clay, and nannofossil-rich clay that grades into clayey nannofossil chalk and nannofossil-rich claystone. Unit III is a 126.43 m thick sequence of greenish gray, brown, and black claystone that is Paleocene to Albian in age. Unit III is divided into Subunits IIIa and IIIb at 454.33 m CSF-A in Hole U1514C. Subunit IIIb was deposited during the Cenomanian/Albian to Albian and is distinguished from overlying Subunit IIIa (Paleocene to Cenomanian/Albian) in that it is a darker greenish gray/black claystone. Soft-sediment deformation, possibly including slumping, is indicated by intervals of convoluted and overturned bedding in Subunits IIIa and IIIb.
Biostratigraphy and micropaleontology
Samples from core catchers in Holes U1514A and U1514C were analyzed for calcareous nannofossils, planktonic foraminifers, and benthic foraminifers. As necessary, additional samples from split-core sections were evaluated for calcareous nannofossils and/or planktonic foraminiferal assemblages. Observations of other distinctive and potentially age or environmentally diagnostic microfossil groups, including calcispheres, diatoms, radiolarians, fish debris, sponge spicules, and inoceramid prisms, were also recorded.
Stratigraphic positions of calcareous nannofossil biozones and age assignments for Site U1514 are presented in Figure F6. Calcareous nannofossils occur throughout the succession cored at Site U1514, except for a few barren samples in the Cenomanian to early Turonian. Specimens are moderately to well preserved throughout. Planktonic foraminiferal assemblages recovered at Site U1514 are generally rare, with poor to moderate preservation, although discrete samples in the Pleistocene, Paleocene, Turonian, and Albian contain seemingly unrecrystallized specimens. Planktonic foraminiferal assemblages in Hole U1514A span Pleistocene Subzone Pt1a through lower Eocene Zone E4. Assemblages in Hole U1514C range from middle Eocene Zones E8-E9 to the Thalmanninella appenninica/Pseudothalmanninella ticinensis Zones of the upper Albian. An apparently complete (at least to biozone level) though bioturbated Cretaceous/Paleogene (K/Pg) boundary section was recovered in Core 369-U1514C-23R.
Benthic foraminiferal assemblages are dominated by epifaunal, calcareous-walled taxa that indicate bathyal to abyssal paleowater depths throughout the recovered interval.
Paleomagnetism
The NRMs of all archive-half core sections and 82 discrete samples collected from the working halves in Holes U1514A and U1514C were determined as part of the paleomagnetism measurement program (Figure F6). The archive halves were stepwise treated with up to 20 or 30 mT AF demagnetization and measured with the pass-through SRM at 5 cm intervals. Discrete samples were progressively demagnetized up to 60 mT and measured with the SRM. The NRM intensity of the recovered cores is 10⁻⁶ to 1 A/m and broadly co-varies with lithology. Inclinations after the 20 mT demagnetization step exhibit intervals dominated by positive and negative inclination values, defining an almost complete magnetic polarity sequence with 74 identified and dated reversals from Chron C1n (Brunhes) to Chron C34n (the CNS). The magnetic data are of excellent quality in the advanced piston corer section (0-95 m CSF-A) and exhibit larger scatter caused by drilling disturbance in the extended core barrel and RCB cores. The sequence is interrupted by four hiatuses (11, 18, 30, and 41 m CSF-A) placed at sharp lithologic boundaries and confirmed by biostratigraphic observations.

Figure F6. Site U1514 summary. Hole U1514B (~15 m) was sampled completely on the catwalk. NGR and carbonate: blue = Hole U1514A, green = Hole U1514C. Yellow shading = floating spliced interval. See the Site U1514 chapter (Huber et al., 2019c) for additional details.
Petrophysics
Magnetic susceptibility, GRA bulk density, NGR, thermal conductivity, P-wave velocity, color reflectance spectroscopy and colorimetry (RSC), and MAD were measured on whole-round sections, split core sections, and discrete samples from Site U1514. Several unique features were identifiable using the physical property data ( Figure F6). Notable features include distinct signals in the NGR, magnetic susceptibility, and GRA bulk density data near the Chron C19r event (~152 m CSF-A), the Paleocene to Eocene interval (~275-280 m CSF-A), the K/Pg boundary (382-415 m CSF-A), and the Cenomanian to Turonian interval (415-445 m CSF-A). However, the Cenomanian to Turonian interval is within a zone of extensive sediment deformation and is unlikely to reflect paleoceanographic events.
Magnetic susceptibility varies between 1.76 and 50.48 IU, and measurements consist of sections of high- and low-frequency variations downhole. GRA bulk density ranges from 1.6 to 1.9 g/cm³. NGR ranges from 0 to 105 counts/s with high-amplitude cyclic fluctuations downhole that are coincident with changes in sediment RSC. The bulk density, grain density, and porosity of cored material were measured on discrete samples (MAD). These data show several deviations from the expected trend. In several sections, porosity increases with depth. These increases may reflect lithologic changes and/or be associated with soft-sediment deformation that may have left several packages of material more over- or undercompacted than the surrounding beds. P-wave velocity ranges from ~1500 m/s near the seafloor to ~2100 m/s at ~290 m CSF-A. Velocity tends to decrease below this depth to 1800-1900 m/s at the bottom of the hole (515.7 m CSF-A), except for the 390-470 m CSF-A interval, where velocities are scattered between 1800 and 2300 m/s. The latter interval spans the zone with soft-sediment deformation.
Downhole logging was conducted in Hole U1514C using the Quambo tool string. The measurements yielded similar results for the overlapping depth intervals where core recovery was good. The downhole tools provided continuous coverage of the borehole and filled several coring gaps. The most striking features include several peaks in NGR at ~395, ~425, and ~445 m WMSF and between 455 and 480 m WMSF. Interestingly, the two peaks in the NGR log at ~395 and ~425 m WMSF correspond to a decrease in bulk density, sonic velocity, and resistivity, as well as more clay-rich lithofacies. Slower sonic velocities are also notable between 420 and 440 m WMSF, which could (at least partially) reflect a thick zone of soft-sediment deformation. Magnetic susceptibility data were collected in Hole U1514C, but the signal quality was poor. In addition, in situ temperature measurements were obtained in Hole U1514A and were combined with the thermal conductivity data to determine a heat flow of 45-49 mW/m².
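The heat flow estimate combines the in situ temperature gradient with thermal conductivity via Fourier's law, q = k·dT/dz. A minimal sketch of that calculation follows; the depths, temperatures, and conductivity are illustrative assumed values, not the actual APCT-3 measurements from Hole U1514A.

```python
# Minimal conductive heat flow sketch (Fourier's law, q = k * dT/dz).
# All numbers below are assumed for illustration only.
import statistics

depths_m = [30.0, 60.0, 90.0, 120.0]   # hypothetical measurement depths (m)
temps_c = [6.2, 7.4, 8.5, 9.8]         # hypothetical in situ temperatures (degC)

# Least-squares thermal gradient (degC/m) from depth-temperature pairs
mean_z = statistics.mean(depths_m)
mean_t = statistics.mean(temps_c)
gradient = sum((z - mean_z) * (t - mean_t) for z, t in zip(depths_m, temps_c)) \
    / sum((z - mean_z) ** 2 for z in depths_m)

k = 1.2  # representative thermal conductivity, W/(m*K) -- assumed value
heat_flow_mw = k * gradient * 1000.0  # convert W/m^2 to mW/m^2
print(f"gradient = {gradient * 1000:.1f} degC/km, "
      f"heat flow = {heat_flow_mw:.0f} mW/m^2")
# prints: gradient = 39.7 degC/km, heat flow = 48 mW/m^2
```

With these assumed inputs the result happens to fall inside the 45-49 mW/m² range quoted above, which illustrates the scale of the measurement rather than reproducing it.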
Geochemistry
The Site U1514 geochemistry program was designed to characterize the composition of interstitial water and bulk sediments and to assess the potential presence of volatile hydrocarbons for routine safety monitoring. Samples were taken from Holes U1514A and U1514C. A total of 56 headspace gas samples were taken, with only low concentrations of methane (≤90 ppmv) and trace levels of ethane detected.
For interstitial water analyses, 54 samples were recovered from whole-round squeezing of samples from sediment intervals in Holes U1514A (0-247.7 m CSF-A) and U1514C (255.0-515.7 m CSF-A). Sample salinity is generally constant, with the exception of distinctly fresher interstitial water in lithostratigraphic Subunit IIIa ( Figure F6). This low-salinity interval reflects decreased concentrations of many elemental profiles, particularly Br − and Cl − . Mg, K, Ca, Li, Sr, and Na concentration profiles reflect alteration of volcanic material from depths below the cored interval at Site U1514. Moderate sulfate reduction was inferred because sulfate is present but decreases with depth. Ba concentrations are correspondingly low. Si concentrations reflect the presence of biogenic opal-A in Units I and II and the top part of Subunit IIIa; lower concentrations at the bottom of Subunit IIIa and in Subunit IIIb may reflect the opal-A/CT transition. Elevated Mn and Fe concentrations demonstrate the reducing character of the sedimentary sequence at certain intervals at this site.
A total of 64 bulk sediment samples were collected downhole to ~513 m CSF-A (Core 369-U1514C-35R). CaCO₃ content varies from 0 to 90 wt%, reflecting variations in lithology (Figure F6). TOC is generally <0.3 wt% except in the possible OAE 1d interval, where TOC reaches 1.2 wt%. TN is generally below detection. A total of 8 working-half samples from the disturbed interval that likely spanned the Turonian and Cenomanian and the possible OAE 1d interval (Cores 32R-34R) were analyzed on the source rock analyzer. Although the lower TOC content samples were generally inconclusive, kerogen in samples with higher TOC content (>1 wt%) was found to have a dominantly terrestrial source.
Stratigraphic correlation
Recovery in Hole U1514A was excellent (near 100%), and the total recovery of Holes U1514A and U1514C was 65%. Target depths were recommended before and during the coring of Hole U1514C, which aided the bridging of coring gaps in Hole U1514A. A splice was created for the overlapping portion of the lower Eocene, spanning from 195.6 to 266.1 m CCSF in Holes U1514A and U1514C. This splice was established by identifying similar trends in NGR and subsequently comparing high-resolution physical property data. Recognition of sharp peaks in NGR enabled correlation of core data to wireline logging results and confirmed the accuracy of the splice.
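The correlation step behind the splice — aligning the two holes by matching trends in NGR — amounts to sliding one hole's record past the other and choosing the offset with the highest correlation. The sketch below demonstrates that logic on synthetic data; it is not the shipboard correlation software, and real splicing also relies on visual inspection of core images and other physical property data.

```python
# Hedged sketch of lag correlation between two holes' NGR records.
# Synthetic data: a random-walk "downhole trend" sampled by both holes,
# with one hole offset by a known number of samples plus noise.
import random

random.seed(0)
signal = []
total = 0.0
for _ in range(200):                 # random-walk stand-in for an NGR trend
    total += random.gauss(0, 1)
    signal.append(total)

hole_a = signal[:150]                # record from one hole
true_offset = 20                     # e.g., 20 samples x 5 cm = 1 m shift
hole_c = [v + random.gauss(0, 0.3)
          for v in signal[true_offset:true_offset + 150]]

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def best_lag(a, b, max_lag=50):
    """Offset of b relative to a that maximizes correlation."""
    scores = []
    for lag in range(max_lag + 1):
        n = min(len(a) - lag, len(b))
        scores.append((pearson(a[lag:lag + n], b[:n]), lag))
    return max(scores)[1]

print("estimated offset (samples):", best_lag(hole_a, hole_c))
```

Sharp NGR peaks, like those used to tie the core data to the wireline logs, act as high-amplitude features that make this correlation robust.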
Together, Holes U1514A and U1514C span from the end of the Albian to the present, with good coverage over much of the Paleogene and Upper Cretaceous, including a seemingly complete record over the K/Pg boundary in Core 369-U1514C-23R. Downhole, a multicolored interval of deformed sediments spanning Cores 25R through 29R is consistent with the downslope motion of the upper portion of the sequence at this site.
Age-depth model and sedimentation rates
Sedimentation rates vary throughout the section, with the lowest rates recorded in the Neogene and Cretaceous (3-9 m/My) and the highest rates (13-15 m/My) recorded in the Eocene and Paleocene ( Figure F6). Major unconformities are present in the lower Pleistocene, Pliocene, Miocene, and Oligocene.
Background and objectives
Site U1515 (Table T1) is the easternmost Mentelle Basin site and the shallowest site targeted during Expedition 369. The primary objective at this site was to provide evidence of the prebreakup rifting history in the region prior to the final separation of Greater India and Antarctica. The site location was chosen based on seismic evidence of dipping strata below what is interpreted to be the eastward extension of the Valanginian unconformity cored at Site U1513 in the western Mentelle Basin. The extrusive basalts that cover this unconformity in the western Mentelle Basin are not present at Site U1515. Structural interpretations suggest that depocenters in the eastern Mentelle Basin are older (Permian? to Jurassic) than those in the western Mentelle Basin (Jurassic?) (Borissova et al., 2002). Site U1515 is the first site to sample this eastern depocenter and will test hypotheses concerning early Mesozoic rifting. Cores recovered from this site will enable investigation of the tectonic and structural relationships with similarly aged rifts along the western margin of Australia, in particular the adjacent Perth Basin (Bradshaw et al., 2003), and rift structures in Antarctica (Maritati et al., 2016). Finally, the cored record will ascertain the provenance of the earlier (Jurassic?) sediments. Candidates include the Pinjarra orogen or the Albany-Fraser province.
Lithostratigraphy
The cored section in Hole U1515A, the only hole at Site U1515, is divided into two main lithostratigraphic units (I and II; Figure F7), which are further divided into five subunits (Ia, Ib, and IIa-IIc). Lithostratigraphic units and boundaries are defined by changes in lithology identified by macroscopic core description, microscopic examination of smear slides and thin sections, and XRD and XRF analyses. Unit thicknesses are not given because of the overall low core recovery. Lithostratigraphic Unit I (Cores 369-U1515A-1R through 15R) is a sequence of calcareous ooze/chalk with sponge spicules, silicified limestone, bioclastic limestone, chert, sandy limestone, and sandstone (arkose). Subunit Ia (Cores 1R through 8R) consists of light greenish gray calcareous ooze with sponge spicules, whereas Subunit Ib (Cores 9R through 15R) is generally more lithified and consists largely of calcareous chalk and 10-40 cm thick silicified limestone with frequent chert beds. Because of poor recovery in this interval, the contact between Units I and II was not recovered. Unit II (Cores 24R through 55R) largely consists of gray to black silty sand and glauconitic sandstone/silty sandstone. Subunit IIa (Cores 24R through 36R) is characterized by abundant glauconite and consists of black to greenish gray silty sand and sandstone. Subunit IIb (Cores 37R through 41R) consists largely of fine- to coarse-grained sandstone with interbedded siltstone and claystone. This subunit differs from Subunit IIa in that it contains less glauconite and more abundant pyrite nodules. Subunit IIb grades into organic-rich silty sandstone and claystone with coal and plant debris, which are characteristic components of Subunit IIc (Cores 44R through 55R). The sediments recovered in Unit II are possibly terrestrial in origin.

Figure F7. Site U1515 summary. Red wavy line = unconformity inferred from seismic and physical property data. See the Site U1515 chapter (Huber et al., 2019d) for additional details.
Biostratigraphy and micropaleontology
Hole U1515A core catcher samples were analyzed for calcareous nannofossils, planktonic foraminifers, and benthic foraminifers. Observations were recorded for other distinctive and potentially age or environmentally diagnostic microfossil groups, including calcispheres, radiolarians, pollen grains and spores, fish debris, sponge spicules, and inoceramid prisms.
Microfossils occur in the upper part of the hole (Cores 369-U1515A-1R through 15R), whereas the lower part (Cores 16R through 49R; no samples were taken below Core 49R) is barren of all calcareous and siliceous microfossil groups. However, a spore found in a smear slide of Sample 39R-1, 97 cm, was identified as Contignisporites sp. (likely Contignisporites glebulentus), which could indicate a Pliensbachian age or younger. Most of the Neogene and Paleogene samples (Cores 1R through 14R) indicate reworking of Pliocene, Miocene, Oligocene, and Eocene species. The nannofossil biostratigraphy in Hole U1515A spans from upper Pleistocene Subzone CN14b to upper Campanian Zone CC22. Planktonic foraminiferal assemblages are in good agreement with this stratigraphic determination, spanning from upper Pleistocene Subzone Pt1b through the late Campanian/late Santonian Globigerinelloides impensus Zone. Benthic foraminiferal assemblages indicate an outer neritic to upper bathyal paleodepth throughout the analyzed interval (Figure F7).
Paleomagnetism
The NRMs of most of the archive-half core sections and 19 discrete samples collected from the working halves of Hole U1515A were determined (Figure F7). The archive halves were stepwise treated with up to 20 mT AF demagnetization and measured with the pass-through SRM at 5 cm intervals. Discrete samples were progressively demagnetized up to 60 mT and measured with the SRM. The NRM intensity of the recovered cores is 10⁻⁵ to 1 A/m and broadly co-varies with lithology. Inclinations after the 20 mT demagnetization step exhibit intervals dominated by positive and negative inclinations, defining a brief magnetic polarity sequence from Chron C1n (Brunhes) to Subchron C1r.2r. Although the magnetic record is noisy and the core recovery is poor, intervals of predominantly normal and reversed polarity can be discerned in the remainder of the sections deeper than 20 m CSF-A. However, a correlation to the GPTS is not possible, mainly because of the poor core recovery and the lack of biostratigraphic control.
Petrophysics
Site U1515 had low core recovery (18%), so physical property data are sparse and discontinuous, particularly between ~130 and ~270 m CSF-A. Despite the quality of the record, the data show very broad trends from the top of the hole to the bottom, and some comparisons can be made between physical property data and lithology, including a general increase in P-wave velocity and thermal conductivity that corresponds to a change from unlithified to weakly lithified glauconitic sand, sandstone and interbedded siltstone, and claystone (lithostratigraphic Subunits IIa and IIb) to silty sandstone and claystone with coal and plant debris (Subunit IIc). This change in velocity also corresponds to an unconformity identified in seismic images at 364-373 m CSF-A. Other broad trends include an overall increase in thermal conductivity, an increase in bulk and grain densities, and an overall decrease in porosity. The color reflectance and bulk density data are noisy, but they show some trends that can be correlated with lithostratigraphic units. Similarly, the NGR (Figure F7) and magnetic susceptibility data also show broad trends and potentially highlight zones where changes in lithology occur (e.g., the highest magnetic susceptibility values, high bulk density, and low NGR at ~270 m CSF-A correspond to glauconitic sandstone).
Geochemistry
The Site U1515 geochemistry program was designed to characterize the composition of interstitial water and bulk sediments and to assess the potential presence of volatile hydrocarbons for routine safety monitoring. Effectively, no gas was detected in the 38 headspace gas samples that were taken.
For interstitial water analyses, 17 samples were recovered from whole-round squeezing of sediment samples from intervals at 2.9-77.1 and 287.8-441.60 m CSF-A. Sampling was restricted due to low core recovery at Site U1515, which limits interstitial water interpretation. Sample salinity is generally constant, and alkalinity generally decreases downhole. Mg, K, and Ca concentration profiles possibly reflect alteration of volcanic material from depths below the cored interval for this site. Increasing Sr concentration with depth in lithostratigraphic Unit I may indicate carbonate recrystallization. Low levels of sulfate reduction were detected; sulfate is present but decreases with depth. Dissolved Si reflects the presence of biogenic opal-A in Unit I; lower concentrations in Unit II indicate the interval falls below the opal-A/CT transition ( Figure F7). Elevated Mn and Fe concentrations in Unit II demonstrate the reducing character of the sedimentary sequence in that interval.
A total of 33 bulk sediment samples were collected downhole to ~511 m CSF-A (Core 369-U1515A-55R). Within the intervals with carbon-rich layers, small chips were taken for analysis. Carbonate content is very high (~80-90 wt%) in the upper part but drops to nearly 0 wt% deeper than 160 m CSF-A (Figure F7). In samples with sufficient TOC content (>1 wt%), kerogen was found to be predominantly terrestrial in origin, except for the ~430-460 m CSF-A interval, where a more significant algal contribution is suggested.
Site U1516
Background and objectives

Site U1516 (Table T1) is located in the south-central Mentelle Basin. Objectives for drilling at Site U1516 were to (1) obtain a continuous and expanded Cenozoic and Upper Cretaceous pelagic carbonate sediment record in the Mentelle Basin to reconstruct climatic history across the rise and fall of the Turonian and early Eocene hot greenhouse climates; (2) determine the relative roles of productivity, ocean temperature, and ocean circulation in the climate evolution at high southern latitudes during Cretaceous anoxic events; and (3) characterize how oceanographic conditions changed during the Cenozoic opening of the Tasman Gateway and the restriction of the Indonesian Gateway. The Site U1516 sequence will be compared with coeval Expedition 369 sections cored elsewhere in the Mentelle Basin and with other IODP and industry data from the southern and western Australia margins to correlate recovered lithologies with seismic lines across the Mentelle Basin and to identify regional differences in the geochemical and biological responses to the OAEs and the Cretaceous, Paleogene, and Neogene ocean circulation history.
Lithostratigraphy
Site U1516 is divided into four main lithostratigraphic units (I-IV; Figure F8), with Unit I divided into three subunits (Ia-Ic). Lithostratigraphic units are defined by changes in lithology identified by macroscopic core description, microscopic examination of smear slides and thin sections, and XRD and XRF analyses. Lithostratigraphic Unit I is a Pleistocene to Paleocene sequence of calcareous/foraminiferal/nannofossil oozes and chalks with sponge spicules. Subunit Ia consists of Pleistocene to Miocene pinkish white, pinkish gray, and very pale orange sponge spicule-rich calcareous oozes. Subunit Ib consists of sponge spicule-rich calcareous chalks and calcareous chalks with sponge spicules and spans the Miocene to Eocene. The transition between Subunits Ia and Ib is defined by a shift to higher NGR and bulk density values and a decrease in L* values. Subunit Ic, which is Paleocene in age and consists of claystones, is likely to be a condensed interval. An unconformity between the Paleocene and the Turonian marks the boundary between Units I and II. Unit II is calcareous chalk interbedded with chert and grades into light greenish gray and greenish gray nannofossil chalk with clay that is also interbedded with chert. The boundary between Units II and III is placed at the C/T boundary and is marked by the first occurrence of black laminated claystone at the top of Unit III. Unit III is an alternating sequence of black, greenish gray, and gray claystone (sometimes with abundant nannofossils) and clayey nannofossil chalk with occasional parallel laminations.

Figure F8. Site U1516 summary. Hole U1516B (~16 m) was sampled completely on the catwalk. Light blue = Hole U1516A, green = Hole U1516C, dark blue = Hole U1516D. Yellow shading = floating spliced interval. See the Site U1516 chapter (Huber et al., 2019e) for additional details.
Unit IV ranges from the Cenomanian to the Albian and is a sequence of black and dark greenish gray nannofossil-rich claystone and claystone with nannofossils that has subtle alternations in color throughout.
Biostratigraphy and micropaleontology
Coring at Site U1516 recovered a succession of sediments from the Albian through the Pleistocene ( Figure F8). Calcareous nannofossils, planktonic foraminifers, and benthic foraminifers occur throughout this succession, and preservation and abundance are sufficient to provide biostratigraphic and paleoecologic information for the entire section.
Calcareous nannofossils are abundant to common throughout the section, with barren samples present only in the middle Albian and associated with the C/T boundary. Preservation is generally good to moderate, with poor preservation associated only with a condensed Paleocene sequence. Reworking of Paleogene taxa into the Neogene assemblages is common. Preservation of planktonic foraminifers is generally good at Site U1516, with some samples in the upper Albian ranked as excellent. Abundance is more variable; the Neogene, Paleogene, and Turonian generally contain abundant planktonic foraminifers, whereas the Albian contains only rare specimens. Benthic foraminiferal abundance and preservation are also variable. In general, examination of benthic foraminifers indicates a bathyal paleodepth during the Albian through Cenozoic.
Paleomagnetism
We measured the NRM of all archive-half core sections from Holes U1516A, U1516C, and U1516D (Figure F8). The archive halves were stepwise treated with up to 20 mT AF demagnetization and measured with the pass-through SRM at 5 cm intervals. The NRM intensity of the recovered sedimentary cores is 10⁻⁶ to 10⁻¹ A/m, and lithostratigraphic Unit I, which consists of mainly calcareous oozes and chalk, generally displays weak magnetism. Despite the weak NRM of the calcareous oozes/chalk of Unit I, inclinations after 20 mT demagnetization show zones of dominantly positive and negative values, defining a magnetic polarity sequence with a total of 84 identified and dated reversals for the upper ~430 m interval that spans Chrons C1n (Brunhes) through C22r. The magnetic polarity sequence is interrupted by a sedimentary hiatus at ~270 m CSF-A based on biostratigraphic constraints. Deeper than ~430 m CSF-A, inclinations in Units II-IV, which mainly consist of claystones, exhibit predominantly negative values, indicating a normal polarity. The normal polarity zone spans from ~430 to 525 m CSF-A and is assigned to Chron C34n, the CNS, based on shipboard biostratigraphic analysis.
Petrophysics
Site U1516 physical property data were collected from Holes U1516A, U1516C, and U1516D. Thermal conductivity generally increases to slightly higher values downhole, whereas porosity and P-wave velocity show a minor overall decrease downhole. In comparison, bulk and grain density vary very little downhole. Exceptions were observed within the interval between 380 and 460 m CSF-A, which contains a pronounced excursion toward higher bulk density, thermal conductivity, and P-wave velocity values; a minor excursion toward higher grain density values; and a strong excursion to lower porosity values. This interval also corresponds to an interval of relatively high magnetic susceptibility and the top of an interval of increasing NGR (Figure F8). Despite the strong correlation among physical properties, this interval does not correlate to any of the lithostratigraphic unit boundaries. NGR and magnetic susceptibility show similar overall trends throughout Site U1516, increasing where the lithology becomes richer in detrital components. At the transition between lithostratigraphic Units II and III (~470 m CSF-A), both NGR and magnetic susceptibility increase. After deconvolution of the NGR signal, enrichment in U is notable in the black claystones possibly related to OAE 2 (Cores 369-U1516C-30R through 32R; 465-470 m CSF-A). In Unit IV, both proxies show features similar to those at Site U1513, allowing correlation between the two sites.
Geochemistry
The Site U1516 geochemistry program was designed to characterize the composition of interstitial water and bulk sediments and to assess the potential presence of volatile hydrocarbons for routine safety monitoring. No gas was detected in the 57 headspace gas samples that were taken.
For interstitial water analyses, 52 samples were recovered from whole-round squeezing of samples from Holes U1516A (0-223.6 m CSF-A) and U1516C (244.0-541.6 m CSF-A). Sample salinity is generally constant, with the exception of distinctly fresher interstitial water in lithostratigraphic Unit IV ( Figure F8). This low-salinity interval reflects decreased concentrations in many elemental profiles, particularly Br − and Cl − . The Mg, K, and Ca concentration profiles possibly reflect alteration of volcanic material from depths below the cored interval at this site. The Sr profile likely reflects carbonate diagenesis. Low levels of sulfate reduction were detected; sulfate is present but decreases with depth. Dissolved Si reflects the presence of biogenic opal-A in Subunit Ia; decreasing concentrations deeper than Subunit Ib indicate the opal-A/CT transition. Elevated Mn ( Figure F8) and Fe concentrations in parts of Subunits Ia and Ib and Units II-IV demonstrate the reducing character of the sedimentary sequence in these intervals.
A total of 43 bulk sediment samples were collected downhole to ~540 m CSF-A. Additional samples were measured from the possible OAE 2 interval. CaCO₃ content varies from 0 to 94 wt%, reflecting variations in lithology. TOC is 0-1.2 wt% except in the black shale interval, where it reaches 14 wt%. TN is generally below detection.
Eleven samples, including one from a putative OAE 2 black shale at ~469.5 m CSF-A, were also analyzed using the source rock analyzer. The sample from the top of the 8 cm thick black interval indicates Type II kerogen, whereas samples with low (<2 wt%) TOC from the OAE 2 interval and lithostratigraphic Unit IV are composed primarily of Type IV kerogen. T max values indicate thermal immaturity.
Stratigraphic correlation
Cores from Hole U1516A provide a 225 m thick, continuous record of middle Miocene to recent deposition, and the sequence seems to be biostratigraphically and magnetostratigraphically complete. In Hole U1516C, coring gaps limit knowledge of the lower Miocene, much of the Oligocene, and portions of the Eocene at this site, but both the Oligocene/Miocene boundary interval and a 30 m long interval of the upper Eocene were well recovered. In addition, much of the Upper Cretaceous, all of the Paleocene, and much of the lower and middle Eocene are either missing or represented in a 15 m thick interval of condensed deposition and/or erosion and nondeposition spanning from Section 369-U1516C-26R-4, 106 cm, to the top of Core 25R. In contrast, an excellent record of the upper Albian to the middle Turonian was recovered between Holes U1516C and U1516D, including a seemingly complete splice across the possible OAE 2 interval.
Age-depth model and sedimentation rates
The Neogene has an average sedimentation rate of ~18 m/My from the Pleistocene through the upper Miocene ( Figure F8). Much of the middle and lower Miocene are missing at a disconformity with an estimated 8 My hiatus. The lowermost Miocene and uppermost Oligocene are present at this site, separated from the lower Oligocene by a disconformity with ~4 My missing. The lower Oligocene through middle Eocene has an average sedimentation rate of ~8 m/My. This sequence is separated from the Turonian by a condensed interval containing several biostratigraphic units of the middle Paleocene. The lower Paleocene through upper Turonian is missing at a disconformity, with a hiatus of at least 29 My. The middle Turonian through upper Albian has an average sedimentation rate of ~8 m/My.
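Sedimentation rates like those quoted above follow from linear interpolation between dated tie points in the age-depth model. A minimal sketch, using entirely hypothetical depth-age pairs rather than the actual Site U1516 datums:

```python
# Illustrative age-depth sketch: sedimentation rates (m/My) between
# biostratigraphic tie points. Depth-age pairs below are hypothetical.

tie_points = [  # (depth in m CSF-A, age in Ma) -- assumed values
    (0.0, 0.0),
    (180.0, 10.0),   # ~18 m/My, like the Neogene section
    (200.0, 30.0),   # slow rate across a condensed/disconformable interval
    (280.0, 40.0),   # ~8 m/My, like the Paleogene section
]

def sedimentation_rates(points):
    """Linear sedimentation rate (m/My) between successive tie points."""
    rates = []
    for (d0, a0), (d1, a1) in zip(points, points[1:]):
        rates.append((d1 - d0) / (a1 - a0))
    return rates

print(sedimentation_rates(tie_points))  # -> [18.0, 1.0, 8.0]
```

Disconformities appear in such a model as tie-point pairs with large age differences but little or no depth difference, which is how the ~8 My and ~4 My hiatuses above are expressed.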
Preliminary scientific assessment
Expedition 369 met all of the proposed science objectives and exceeded many of them during its investigation of the tectonic, paleoclimatic, and paleoceanographic history of the GAB and the Mentelle Basin. Sediment recovered from sites cored in both regions will provide a new perspective on Earth's temperature variation at subpolar latitudes (60°-62°S) during the rise and fall of the mid-Cretaceous and early Eocene hothouse climates, as well as the consequent paleoceanographic and biotic changes. The recovered sediment and basalt will also provide constraints on the timing of rifting and basin subsidence during the last phase of breakup among remnant Gondwana continents.
The following is a discussion of how the scientific objectives of the expedition were met and additional discoveries attained for each of the primary goals:
1. Investigate the timing and causes for the rise and collapse of the Cretaceous hot greenhouse and how this climate shift affected the climate-ocean system and oceanic biota.
Recovery of Cretaceous sediments yielding foraminifers that show minimal diagenetic alteration was a major goal of Expedition 369 because these samples are essential for reliable Cretaceous climate reconstructions. We achieved this objective at Sites U1512-U1514 and U1516 (Figures F9, F10, F11, F12). The sequence that will yield the most continuous Cretaceous climate record ranges from the middle Albian through the early Campanian (~28 My) at Site U1513, adjacent to where Site 258 was drilled with only 22% Cretaceous sediment recovery (Luyendyk and Davies, 1974). Importantly, analysis of Cenomanian sediments yielding good microfossil preservation at Sites U1513, U1514, and U1516 will fill a critical temporal gap in the climate record at southern high latitudes. Moreover, good core recovery and microfossil preservation in portions of the Maastrichtian to Campanian (Site U1514), Santonian to Turonian (Site U1512), Santonian to early Campanian (Site U1513), Turonian to Cenomanian (Sites U1513 and U1516), and late Albian (Sites U1513, U1514, and U1516) (Figures F4, F5, F6, F8) will significantly improve reconstructions of the climatic and oceanographic changes that occurred across the rise and fall of the hot Cretaceous greenhouse climate.
2. Determine the relative roles of productivity, ocean temperature, and ocean circulation at high southern latitudes during Cretaceous OAEs.
Complete and well-preserved microfossil assemblages were recovered from above, below, and within the likely OAE 2 and 1d intervals, and the lithologies include beds of laminated black claystone with high TOC content (Figure F11). Some authors have suggested that OAE 2, which spans the C/T boundary (~94 Ma), was triggered by CO₂ outgassing during a widespread pulse of volcanism (Turgeon and Creaser, 2008; Du Vivier et al., 2014). The 97% composite recovery across the likely OAE 2 interval in Holes U1513A and U1513D and 100% recovery in Holes U1516C and U1516D provide a unique opportunity to study this event in greater detail than any OAE 2 sequence in the world because of the abundance and good preservation of calcareous microfossils and the expected presence of organic biomarkers across the interval. Osmium isotope measurements through the cored sequence will determine the timing of eruptions prior to, during, and after the event. Parallel oxygen isotope analyses of well-preserved benthic and planktonic foraminifers will provide an important test of whether oceanic warming was triggered by a volcanic event and whether predicted cooling followed the burial of organic carbon during the peak of the OAE. Measurement of additional chemical proxies and study of the microfossil assemblages for both OAEs will characterize changes in carbon chemistry, nutrient flux, types and amount of organic carbon burial, and changes in microfossil assemblages. Results from study of the OAE intervals cored at Sites U1513, U1514, and U1516 will provide a significant advance in our understanding of the causes and effects of these global anoxic events. A final bonus was the apparent recovery at Sites U1513 and U1516 of the relatively little studied Mid-Cenomanian Event.

Figure F9. Age-depth plots constrained by biostratigraphy, Sites U1512-U1514 and U1516. Horizontal lines = unconformities.

Figure F10. Stratigraphic summary and correlation of NGR records, Sites U1512-U1516 (west to east). Yellow stars = critical intervals that will be the focus of intensive shore-based study. (This figure is also available in an oversized format.)
3. Identify the main source regions for deep-water and intermediate water masses in the southeast Indian Ocean and how these changed during Gondwana breakup.
Several intervals cored during Expedition 369 will be investigated using εNd to trace sources and circulation patterns of deep-water masses (and thus changing connections between basins), as well as local weathering inputs and potential global influences such as hydrothermal input from large igneous province volcanism. For the Cenomanian in general and the OAE 2 interval in particular, εNd patterns obtained from sediments cored at Sites U1513 and U1516 will provide a geographically distant test between competing volcanic and circulation models developed for the North Atlantic. Integration of deep circulation among basins and the increasing importance of the Southern Ocean as a deep-water source can be temporally constrained by comparing εNd values and trends in the Mentelle Basin cores to values documented elsewhere. Finally, the timing and regional importance of the opening of the Tasman Gateway and the evolution of Antarctic circulation patterns across the Eocene/Oligocene boundary can be determined from εNd values obtained from sediment cores at Site U1514 in the northern Mentelle Basin.
Characterize how oceanographic conditions changed at the Mentelle Basin during the Cenozoic opening of the Tasman Gateway and restriction of the Indonesian Gateway.
The opening of the Tasman Gateway and restriction of the Indonesian Gateway were major factors that influenced the evolution of global climate during the Cenozoic, and both oceanic gateway changes profoundly affected the climate of Australia and Antarctica. The Eocene opening of the Drake Passage and the Tasman Gateway led to development of the cold ACC that isolated Antarctica from warm equatorial currents, resulting in the buildup of a continental ice sheet in Antarctica (Bijl et al., 2013; Scher et al., 2006). Northward movement of Australia toward equatorial waters during the Miocene caused substantial reorganization of ocean current pathways in the Indian Ocean and major shifts in the climate of Australia (Gallagher et al., 2017; Groeneveld et al., 2017). Continued northward movement restricted current circulation across the Indonesian Gateway during the Pliocene, which then reduced the influence of the warm-water ITF in the Indian Ocean and initiated the arid climate that characterizes modern western Australia.
Because of its mid-latitude location, Cenozoic sedimentation in the Mentelle Basin has been particularly sensitive to northward and southward movements of Antarctic waters and to changes in the oceanic gateways that connect the western equatorial Pacific Ocean with the Indian Ocean. Study of Eocene deposits recovered at Sites U1514 and U1516 (Figure F10) will further our understanding of the oceanographic and climatic consequences of the opening of the Tasman Gateway. An unexpected success with respect to the Paleogene record was the discovery, during programmatic XRF core scanning, that the Paleocene/Eocene Thermal Maximum interval was recovered at Site U1514 (Figure F10).
IODP Proceedings 21 Volume 369
High-resolution studies of Miocene and Pliocene sediments recovered from Sites U1513, U1514, and U1516 will establish the timing, magnitude, and rates of climate and ocean circulation changes that affected the Australian continent and the southeast Indian Ocean region as the seaway between Australia and Antarctica widened and deepened and the Indonesian Gateway became more restricted.
Resolve questions about the volcanic and sedimentary origins of the basin and provide stratigraphic control on the age and nature of the prebreakup succession.
Prebreakup sediments were successfully sampled at Site U1515 (Figures F7, F10, F11). The margin-wide unconformity was crossed at 364 m CSF-A, and coring sampled a series of carbon-rich claystones interspersed with poorly cemented sandstone in a fault-bounded segment of the eastern Mentelle Basin. The claystone is believed to be of Early Jurassic age and to have been deposited during the early stages of rifting within Gondwana, which was undergoing a period of thermal subsidence following an earlier Permian rifting event (Bradshaw et al., 2003). Tilting of these sediments is indicative of a later stage of rifting and fault reactivation in the mid- to Late Jurassic.
Our deepest hole (U1513E) cored ~84 m of volcanic material and recovered ~54 m (Figures F5, F10). Onboard analysis identified separate extrusive flow sequences intercalated with sedimentary breccia beds that were later intruded by a younger diabase dike. The older extrusive volcanic rocks appear to be a mix of subaerial and marine flows, which suggests they were emplaced close to sea level. Isotopic dating of the volcanic rocks was not possible on board, although stratigraphic relations indicate that the extrusive flows are older than the overlying mid-Valanginian sediments dated by magnetostratigraphy. Volcanic activity, evidenced in the intersecting seismic profiles as isolated bright reflectors interpreted as sills or as volcanic cones, appears to be present throughout lithostratigraphic Unit V, which is Valanginian to Barremian in age. Although the basalt sequences are highly altered, we anticipate that enough material has been collected for Ar/Ar analysis to date at least some of the flows and the dike. Results will be compared with a recent compilation of basalt samples from both nearby dredge sampling and onland sampling of the Bunbury Basalt (Direen et al., 2017; Olierook et al., 2016).
Although only one of the sites in the Mentelle Basin managed to sample the basalt that marks the onset of the breakup between Greater India and Australia/Antarctica, all Expedition 369 sites contribute significantly to improving the stratigraphic control of the regional reflection seismic data. Site U1512 recalibrates the current seismic interpretation and therefore the role of the Wallaroo Fault System as an active fault synchronous with the initial phase of seafloor spreading between Antarctica and Australia. Sediments cored at sites around the Mentelle Basin enable dating of key stratigraphic units that record the rifting of both Greater India and Antarctica from Australia and that can be correlated to the regional seismic reflection data. Erosional hiatuses and faults in the sedimentary succession can now be dated and linked with episodes of uplift, erosion, and subsidence, which in turn can be linked to the wider tectonic and thermal histories of this margin.

Figure F11. Bulk sediment geochemical summary, Sites U1512-U1516 (west to east).

Figure F12. Interstitial water geochemical summary, Sites U1512-U1516 (west to east).
Evaluation of upconverting nanoparticles towards heart theranostics
Restricted and controlled drug delivery to the heart remains a challenge, with frequent off-target effects and limited retention of drugs in the heart. There is a need to develop and optimize tools that allow improved design of drug candidates for the treatment of heart disease. Over the last decade, novel drug platforms and nanomaterials have been designed to confine bioactive materials to the heart. Yet the research remains in its infancy, not only in the development of tools but also in the understanding of the effects of these materials on cardiac function and tissue integrity. Upconverting nanoparticles are nanomaterials that have recently accelerated interest in theranostic nanomedicine technologies. Their unique photophysical properties allow for sensitive in vivo imaging that can be combined with spatiotemporal control for targeted release of encapsulated drugs. Here we synthesized upconverting NaYF4:Yb,Tm nanoparticles and show for the first time their innocuity in the heart when injected into the myocardium or the pericardial space in mice. Nanoparticle retention and upconversion in the cardiac region altered neither heart rate variability nor cardiac function, as determined over a 15-day time course following a single injection. Altogether, our nanoparticles show innocuity, particularly in the pericardial region, and can be safely used for controlled spatiotemporal drug delivery. Our results support the use of upconverting nanoparticles as potential theranostic tools overcoming some of the key limitations associated with conventional experimental cardiology.
Introduction

Cardiovascular diseases (CVD), an umbrella term for a number of different pathologies including arteriosclerosis, coronary artery disease, arrhythmia, hypertension and heart failure, are the leading cause of morbidity and mortality in the world [1]. Heart failure is a major cause of late morbidity and mortality after myocardial infarction (reviewed in [2]). Therapeutic interventions to prevent heart failure are an area of considerable active research. For example, treatment of the infarcted heart through experimental strategies has involved the delivery of growth factors, cytokines, and drugs to the infarcted cardiac tissues (reviewed in [3][4][5]).
Principal drug delivery methods target the heart only indirectly. Despite the efficiency of the usual routes of administration, they often result in off-target effects as well as limited concentrations and low retention of the delivered factors in the desired area. To circumvent the low cardiac bioavailability and systemic effects observed with peripheral administration routes, targeted drug delivery approaches are currently being considered. For instance, direct injections into the cardiac muscle, the myocardium, have been used to localize drugs or stem cells to infarcted regions [6]. Drug vectorization to the heart can also be achieved by injection into the pericardial space, which has large potential for localized drug delivery [7]. Both injection approaches, whether into the intramyocardial (intraMY) or intrapericardial (intraPE) space, are used in experimental and clinical settings and are highly adaptable to the study design and expected endpoints. To date, despite significant advances in the development of effective cardiac therapeutic agents and in drug delivery technologies, enduring drug availability in the heart remains a complex and pervasive conundrum in research and clinical settings. As such, there is a need to develop drug-delivery and repeat-release approaches that deliver and maintain therapeutic materials in cardiac tissue with minimal adverse effects.
The need for spatiotemporal control over the targeted release of various drugs, coupled with the capacity for imaging, has accelerated interest in theranostic technologies. Theranostics is the integration of imaging (diagnostics) and drug delivery (therapy) for application in personalized medicine [8]. In recent years, lanthanide-doped upconverting nanoparticles (UCNPs) have emerged as suitable tools for use in theranostics because of multiple features associated with the lanthanide ions, in particular their unique optical properties originating from the electron configuration of the 4f shell [9]. Indeed, the various lanthanide ions typically present a great number of energy levels, and their excited states are usually long-lived [10]. Owing to these spectroscopic features, and in contrast to other chromophores, these rare earth ions accept energy in the near-infrared range (NIR; 980 nm when doped with ytterbium as the sensitizer) and emit at shorter wavelengths than the excitation. When the nanocrystals are doped with thulium as the activator, the main emission is observed in the biologically favourable 800 nm range [10]. Illumination in the NIR and anti-Stokes luminescence [11] reduce the autofluorescent background, allowing for an improved signal-to-noise ratio. The narrow emission bandwidths of UCNPs allow for straightforward multiplexed imaging, and their very limited to nonexistent photobleaching makes them relevant for long-term and repetitive imaging [12]. In addition, UCNPs permit deep tissue penetration, for instance reported to be between 1.6 cm and 3.2 cm with minimal background [13,14], because the excitation in the NIR lies within the optical transparency window [15,16].
Given their advantageous photophysical properties, UCNPs have been exploited in a variety of ex vivo [17] and in vivo situations, ranging from luminescence-based imaging [18] to anti-cancer drug delivery [19] and mouse behavior control via optogenetics [20]. In the context of targeted drug delivery, UCNPs are promising nanocarriers for minimizing the drawbacks of conventional drug delivery through more specific and efficient target delivery [21,22]. A few recent heart-specific studies employing various nanomaterials are revealing a promising role for nanoparticles in unraveling the complexity of, and helping treat, CVDs [23][24][25]. While the field is still in its infancy, there is an imperative need to optimize the use of nanoparticles in the heart, with particular attention to the long-term preservation of cardiac tissue and cardiac function in vivo.
The aim of the present study was to determine the innocuity of UCNPs in the heart and then explore their potential for time- and space-controlled drug delivery. NaYF4:Yb,Tm NIR-emitting UCNPs were synthesized and used to measure the differential accumulation and distribution of the nanoparticles over a 15-day course following intraperitoneal, intraMY, or intraPE injections. Indeed, the pericardium, a thin layer that separates the heart from the thoracic cavity and provides structural support while also having a substantial hemodynamic impact on the heart, was a focal point of this study. Notably, we examined heart rate (HR), as determined by real-time electrocardiogram (ECG) monitoring, during the upconversion excitation phase, which was repeated throughout the entire time course, to identify whether the UCNPs and/or the laser beam localized to the chest area have an impact on cardiac rhythm.
Materials
Hydrated rare earth chlorides (99.9%), octadecene, and oleic acid were from Alfa Aesar (Karlsruhe, Germany). NH4F and NaOH were from Sigma-Aldrich (Saint Quentin Fallavier, France). PEO6000-PAA6500 was from Polymer Source (Montreal, Canada). All other solvents were from Sigma-Aldrich and of HPLC grade.
Preparation of NaYF4:Yb,Tm Upconverting Nanocrystals
Upconversion nanocrystals were synthesized by a well-established high-temperature liquid-phase protocol [26]. YCl3·6H2O (315 mg, 1.03 mmol), YbCl3·6H2O (174 mg, 0.45 mmol), and TmCl3·6H2O (3 mg, 7.8 μmol), dissolved in 500 μL of water, were added to 24.5 mL of octadecene and 4.5 mL of oleic acid. The suspension was heated to 160˚C under argon using a heating mantle and maintained at 160˚C for 1 h, which yielded a clear light-yellow solution. After cooling to room temperature, 10 mL of a MeOH solution containing NH4F (222 mg) and NaOH (150 mg) was added dropwise while stirring. The suspension was heated and maintained at 50˚C under a continuous argon flow for 30 min. The temperature was then increased and maintained at 100˚C for 1 h to allow evaporation of the methanol. The flask was then stoppered and three vacuum/argon cycles were applied using a high-vacuum pump. The heating mantle temperature was then gradually increased to 310˚C (the temperature reached 300˚C after 15 min and 310˚C in 30 min) for a total of 90 min. After cooling to room temperature, the crude reaction mixture was poured into 8 mL of ethanol. The particles were centrifuged at 9000 g and rinsed twice with ethanol. The solvent was then removed in a vacuum oven at 80˚C, yielding 324 mg of a white powder.
Preparation of Polyethylene oxide-polyacrylic acid (PEO-PAA)-coated UCNPs
Five milligrams of the above-described particles were dispersed in 3 mL of toluene together with 5 mg of PEO6000-PAA6500. Three drops of 1 M NaOH were added to aid polymer dissolution. This solution was transferred to a sealed vial and heated to 100˚C for 1 h using a monomode microwave oven (Monowave 300, Anton Paar, Les Ulis, France). Deionized water (1 mL) was then added and the aqueous phase was collected. Purification was achieved by three centrifugation steps at 9000 g. Following the last centrifugation, the particles were suspended in 2 mL of deionized water.
Animals
Wild-type, 9-11 week old male C57BL/6J mice (Envigo) were used. Animals were housed with food and water available ad libitum under standard 12 h light/dark cycles. Animal procedures were approved by the national Animal Care and Ethics Committee (CE2A122, protocol number 2017092913349468) following Directive 2010/63/EU.
Injections
For intraperitoneal injections, animals were briefly anaesthetized by gas induction with 2% isoflurane inhalant mixed with 1 L.min-1 of 100% O2, injected with 5 μL of a 1 μg.μL-1 solution of nanoparticles, and allowed to regain consciousness in their respective cages. For cardiac injections, anesthesia was induced with ketamine/xylazine (125/5 mg.kg-1) by intraperitoneal injection and maintained with 2% isoflurane inhalant mixed with 1 L.min-1 of 100% O2. The analgesic buprenorphine (100 μg.kg-1) was administered subcutaneously. Using a Zeiss OPM1 FC operating microscope, tracheal intubation was performed for ventilation with the mini-wind ventilator (Harvard Apparatus). An incision of the 4th intercostal space was performed to provide adequate exposure of the thoracic cavity. The left atrioventricular block was exposed, and for intraMY injections a catheter was used to inject 5 μL of the nanoparticle solution into the myocardial wall at the apex of the left ventricle. For intraPE injections, a catheter was gently introduced within the pericardium and 5 μL of nanoparticle solution was injected. Evans Blue was used as a tracking dye to ensure lack of spillage. Controls consisted of phosphate-buffered saline (PBS) injections into the respective areas. The intercostal space and skin surface were successively sutured using Ethilon 6/0 thread (Ethicon).
BioImaging and Image processing
The setup consisted of a dark chamber supplied with an inlet for a precisely blended mixture of oxygen and isoflurane anesthetic. The compartment was also equipped with the Small Animal Physiological Monitoring system (Harvard Apparatus), which permitted rectal temperature control of the animal and continuous ECG readings. For upconversion and imaging, the chamber was fitted with a CCD camera (iKon-M 934, Andor) paired with an optical fiber and a beam expander (BE05M-B, Thorlabs), allowing for a 2.8 cm circular illumination area for excitation centered at 980 nm. All imaging sessions were performed with an excitation laser beam power density of 290 mW.cm-2 (controlled with a power detector; PM130D, Thorlabs), with 5 or 10 s exposure times, at a controlled room temperature of 21˚C. Hearts, including the pericardium, were microdissected, rinsed quickly in PBS, and imaged with an excitation laser beam power density of 290 mW.cm-2.
A multicolor LUT was applied to the upconversion luminescence images, using a constant display range across all images. Each upconversion image was then overlaid with an image recorded under white light illumination.
Transthoracic Echocardiography
Mice were anesthetized with 2% isoflurane inhalant mixed with 1 L/min 100% O2 to maintain light sedation throughout the procedure. They were immobilized ventral side up on a heating platform to maintain body temperature at 37˚C ± 0.5˚C. The mice's chests were shaved and warmed ultrasound gel was applied to the area of interest. Transthoracic echocardiography was performed by a trained user using a Vevo 2100 system (VisualSonics) with a 40-MHz transducer. Images were captured as cine loops at the time of the study, and measurements were performed off-line afterward. Cardiac ventricular dimensions were measured in M-mode and B-mode images four times for each animal. Left ventricular ejection fraction (LVEF) was calculated using parameters automatically computed by the Vevo 2100 standard measurement package. Measurements were obtained by an examiner blinded to the treatment of the animals. All measurements were performed excluding the respiration peaks and obtained in triplicate; mean values were used for data analyses.
Heart rate variability (HRV) and power spectral analysis
For the acquisition of ECGs, mice were anesthetized by inhalation of isoflurane at a concentration of 3% during the induction phase. Anesthesia was maintained at 1.5%, and continuous recordings were obtained consisting of a minimum 10 min basal state, followed by laser usage (5 s) and a further 10 min of recording, using the Small Animal Physiological Monitoring system (Harvard Apparatus). The ECG signal was obtained with three limb leads. The ECG signals were digitized at 4 kHz, processed, and monitored (LabChart v7, AD Instruments). The R-R interval was derived from the ECG signal. Sections of stable HR (5 min before and after laser induction), free of noise and artifacts, were analyzed. HRV was assessed in both the time and frequency domains.
Time domain measurements included the following metrics: the standard deviation of the R-R intervals (SDNN) and the root-mean-square of successive R-R interval differences (RMSSD). The SDNN captures all the cyclic components responsible for the overall variability, whereas the RMSSD provides information about high-frequency variations in HR. Overall, these metrics reflect the autonomic status [27].
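As a concrete illustration of these two time-domain metrics, they can be computed from an R-R interval series in a few lines. This is a minimal sketch, not the authors' analysis code (the LabChart pipeline is not reproduced here), and the R-R values below are illustrative:

```python
import numpy as np

def time_domain_hrv(rr_ms):
    """Time-domain HRV metrics from a series of R-R intervals (ms).

    SDNN: standard deviation of all R-R intervals (overall variability).
    RMSSD: root mean square of successive R-R differences
    (beat-to-beat, high-frequency variability).
    """
    rr = np.asarray(rr_ms, dtype=float)
    sdnn = np.std(rr, ddof=1)                   # sample standard deviation
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))  # successive differences
    return sdnn, rmssd

# Illustrative R-R series (ms) for a mouse heart (~100 ms, i.e. ~600 bpm)
rr = [100, 102, 98, 101, 99, 103, 100, 97]
sdnn, rmssd = time_domain_hrv(rr)  # sdnn = 2.0 ms, rmssd ≈ 3.09 ms
```

Note that SDNN is computed here with the sample (ddof=1) convention; some software uses the population form, which shifts values slightly for short records.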
Frequency domain analysis was performed by fast Fourier transform on 1,024-point spectral series using Welch's periodogram with a 50% overlapping window. The analysis considered two separate spectral components: low frequency (LF: 0.15-1.50 Hz) and high frequency (HF: 1.50-5.00 Hz) bandwidths. Very low frequency (VLF: < 0.15 Hz) was excluded from the analysis. These spectral components were expressed in absolute values of power (ms2) and in normalized units (n.u.). The LF spectrum and LF (n.u.) [100 × LF power / (total power − VLF power)] were considered an index of cardiac sympathetic tone, whereas the HF spectrum and HF (n.u.) [100 × HF power / (total power − VLF power)] reflected cardiac parasympathetic tone. The LF/HF value was obtained from the ratio LF (ms2) / HF (ms2) and represents the sympathovagal balance [27].
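The frequency-domain computation described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: `scipy.signal.welch` stands in for the Welch periodogram, the tachogram is synthetic, and band powers are approximated by rectangle integration of the PSD:

```python
import numpy as np
from scipy.signal import welch

def frequency_domain_hrv(tachogram_ms, fs):
    """Band powers from an evenly resampled R-R tachogram (ms).

    Bands as defined in the text: LF 0.15-1.50 Hz, HF 1.50-5.00 Hz;
    VLF (< 0.15 Hz) is excluded from the normalized-unit denominator.
    """
    # Welch's periodogram: 1,024-point segments with 50% overlap
    f, psd = welch(tachogram_ms, fs=fs, nperseg=1024, noverlap=512)
    df = f[1] - f[0]

    def band_power(lo, hi):
        mask = (f >= lo) & (f < hi)
        return psd[mask].sum() * df           # absolute power in ms^2

    vlf = band_power(0.00, 0.15)
    lf = band_power(0.15, 1.50)
    hf = band_power(1.50, 5.00)
    total = psd.sum() * df
    norm = total - vlf                        # denominator for n.u.
    return {"LF": lf, "HF": hf,
            "LF_nu": 100 * lf / norm, "HF_nu": 100 * hf / norm,
            "LF/HF": lf / hf}

# Synthetic 5 min tachogram sampled at 20 Hz: a dominant HF (2.5 Hz)
# oscillation plus a weaker LF (0.5 Hz) one
fs = 20.0
t = np.arange(0, 300, 1 / fs)
tachogram = (100 + 2 * np.sin(2 * np.pi * 2.5 * t)
             + 0.5 * np.sin(2 * np.pi * 0.5 * t))
res = frequency_domain_hrv(tachogram, fs)     # HF dominates, so LF/HF < 1
```

With the synthetic input above, the HF band power dominates and the LF/HF ratio is well below 1, consistent with a parasympathetically dominated spectrum under these band definitions.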
Analysis of cardiac tissue necrosis
After 15 days, mice were euthanized by cervical dislocation following isoflurane exposure and hearts were dissected. Hearts were quickly rinsed in cold PBS and imaged by photo-upconversion to locate nanoparticles. To assess necrosis, cross sections were obtained using a Zivic Mouse Heart Matrix (Zivic) and 200 mm slices were incubated in 1% triphenyltetrazolium chloride (TTC) in a 37˚C incubator for 10 min. Slices were gently removed from the TTC and placed in 4% paraformaldehyde (Sigma) at 4˚C for 24 h. The sections were rinsed gently in saline and placed within clear plastic sheets, and images of the TTC-stained sections were captured using a digital scanner. Both sides were scanned, and the digital photomicrographs were analyzed for white (damaged/necrotic) versus red (live) tissue using ImageJ, quantified, and expressed as a percentage: the sum of necrotic areas from all sections divided by the sum of all section areas, multiplied by 100 [28].
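The quantification step amounts to a pooled ratio over all scanned slices. A minimal sketch follows; the area values are hypothetical, since the ImageJ measurements themselves are not reproduced here:

```python
def percent_necrosis(sections):
    """Percent necrotic tissue pooled across all TTC-stained sections.

    `sections` holds (necrotic_area, total_area) pairs, one per scanned
    slice side (white = damaged/necrotic, red = live). The result is
    100 * sum(necrotic areas) / sum(total areas), as described in the text.
    """
    necrotic = sum(n for n, _ in sections)
    total = sum(t for _, t in sections)
    return 100.0 * necrotic / total

# Hypothetical ImageJ area measurements (arbitrary units) for four slices
slices = [(0.0, 12.5), (0.4, 11.0), (0.2, 10.8), (0.0, 9.7)]
pct = percent_necrosis(slices)   # 0.6 / 44.0 * 100 ≈ 1.36%
```

Pooling the areas before dividing (rather than averaging per-slice percentages) weights each slice by its size, which matches the formula given above.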
Statistical analysis
To compare results between injection groups, two-way ANOVA was used. Paired t-tests were used for within-group comparisons. Pearson's correlation was used to assess associations. A p-value < 0.05 was considered significant.
Synthesis of the upconverting nanoparticles
Upconverting nanoparticles were synthesized using a well-established high-temperature wet chemical synthesis [26]. Fig 1 shows the characterization of their main features. TEM (Fig 1(A)) was used to establish that the synthesized particles are highly homogeneous in size and have a slightly elongated oval shape, with dimensions of 34 nm along the minor axis and 38 nm along the major axis (Fig 1(B)). Further insight into the crystal phase was obtained by powder X-ray diffraction, and an exact match with hexagonal β-phase NaYF4 reference data was observed (Fig 1(C)). The upconversion spectrum was recorded following 980 nm excitation (Fig 1(D)). Besides a minor emission in the blue range (475 nm, 1G4 → 3H6 transition) and two in the red range (647 nm, 1G4 → 3F4; 695 nm, 3F3 → 3H6), the emission spectrum is dominated by a very strong emission centered at 800 nm (3H4 → 3H6 transition), which is typical of particles doped with thulium as the emitter.
The particles, initially covered in oleic acid ligands, were rendered hydrophilic and biocompatible using a hydrophilic polymer coating (polyethylene oxide-b-polyacrylic acid, PEO-PAA), via the direct interaction of the OA-capped UCNPs with the macromolecules at 100˚C [29].
Experimental set-up
In this study, we investigated the fate of hydrophilic upconverting nanoparticles following injection at three sites in mice: the peritoneum, the myocardium, and the pericardium. The study is conceptually described in Fig 2. In order to track the nanoparticles, we assembled a bioimager comprising a 980 nm laser source for excitation, passed through a beam expander to achieve an irradiated area of approximately 2.8 cm in diameter. The animal was anaesthetized using isoflurane and positioned according to the region of the body being examined. A physiological monitoring apparatus was incorporated into the imager, which allowed the ECG to be recorded simultaneously with laser irradiation, and a rectal probe allowed for temperature control. The upconverted light was collected with a highly sensitive CCD camera with peak quantum efficiency centered on 800 nm.
Persistence of nanoparticles in the heart
Evans Blue dye and bioimaging by upconversion immediately following the injection of the PEO-PAA-coated nanoparticles were used to control for targeted placement and for any residual spillage of the injected solutions from the injection site. No nonspecific leaks were detected in the experimental groups receiving intraMY or intraPE injections immediately following surgery. Time-course emission intensity measurements were performed over the heart to determine the persistence of the nanoparticles within this region. Clearance of the nanoparticles was determined based on emission intensity at two regions: the kidney and the liver. Following injection by the intraperitoneal route, which was included as a control, we were unable to detect the presence of the nanoparticles in the heart at any of the time points measured. By 7 days post-injection there was no trace of nanoparticles in the subjects (Fig 3(A)). Injection into the myocardium permitted retention of the nanoparticles within the heart for 15 days; however, a reduction of the upconversion luminescence emission at the heart was observed in the first 7 days post-injection. While nanoparticles were retained in the heart muscle in all animals receiving intraMY injections, the dispersal of the UCNPs in these subjects was heterogeneously localized over time. Fig 3(B) shows a representative animal in which nanoparticles persist in the intraMY space for 7 days but are cleared from the heart within 15 days. IntraPE injections demonstrated retention in the heart region for 2 weeks with slow clearance (Fig 3(C)). Altogether, these data demonstrate that long-term nanoparticle retention in the heart is possible after intraMY and intraPE administration but is optimally conserved in the cardiac region when localized to the pericardial space.
Effects on Body Weight, Cardiac Function and Morphometry
Body weight was monitored throughout the experiment. As shown in Fig 4(A), the body weights of the control and experimental groups were similar, showing good macroscopic tolerance of the UCNPs. To further evaluate the in vivo effects of cardiac-targeted nanoparticles on cardiac function, we implemented the Small Animal Physiological Monitoring System (Harvard Apparatus) along with the imaging setup. Real-time ECG monitoring during laser-induced upconversion showed no effect on heart rate in any subject (Fig 4(B)), even in the presence of UCNPs in the myocardium or pericardium (as confirmed in Fig 3). No discernible variation was observed over the course of the 2 weeks post-injection following repeated upconversion on the same subjects (Fig 4(C)). Therefore, the presence of the UCNPs, together with a brief (5 s) exposure of the chest region to a laser beam power density of 290 mW.cm-2, did not alter the ECG profile in UCNP-treated animals or controls.
Concerning in vivo endpoints, transthoracic echocardiography showed no effect of the interventions on the LVEF, a measure of LV muscle function, between the groups (Fig 4(D)). ECG analyses showed no significant difference between experimental groups and controls in HR (Fig 4(E)) or in any time domain measurement of HRV, either SDNN (Fig 4(F)) or RMSSD (Fig 4(G)). Moreover, no significant change was noted in frequency domain measurements of HRV, including total power, LF (Fig 4(H)), LF (n.u.) (Fig 4(I)), HF (Fig 4(J)), and HF (n.u.) (Fig 4(K)); accordingly, the sympathovagal balance (LF/HF ratio) was unmodified in the presence of nanoparticles (Fig 4(L)). In an attempt to sublocalize the UCNPs within the heart following intraMY and intraPE injections, the relevant hearts were dissected and the pericardia were reflected as shown in Fig 4(M). IntraMY injection led to localization of the UCNPs within the heart itself (Fig 4(M), left), while UCNPs injected intraPE remained within the pericardium and did not bind to the heart (Fig 4(M), right). Macroscopic examination of cardiac tissue necrosis at 15 days post-injection of the nanoparticles and quantification of percent tissue death, shown in Fig 4(N) and 4(O) respectively, evidence the absence of cardiac tissue necrosis following pericardial sac injections, even though some tissue damage is perceptible in animals receiving intraMY injections.
Discussion
We have synthesized NaYF4:Yb,Tm upconverting nanocrystals with an average size of 38 nm that exhibit spectrally sharp emission with a large anti-Stokes shift. For the persistence study, the particles, which are initially covered in oleic acid ligands, were rendered hydrophilic by coating with a layer of PEO-PAA copolymer. This double hydrophilic copolymer was anchored to the nanoparticles through multiple chelation by the polyacrylate segment. The second block, composed of PEO, ensured aqueous dispersibility and biocompatibility [30].
In an effort to identify the potential application of these UCNPs for heart-specific photodynamic imaging, we decided to target the nanoparticles to the myocardium and/or the pericardial space. First, we assessed the potential of the nanoparticles to be retained within the cardiac region. UCNPs injected into the myocardium were released from the heart, yet there was no detectable accumulation in the liver or kidney. While intraMY injections permit highly localized delivery for clinical and research applications, incomplete retention, and even immediate leakage, from the heart has been reported for intraMY administration [31][32][33]. This phenotype is attributed to the vascularization and dynamic environment of the myocardium. Improved, consistent, and longer-term retention was observed for nanoparticles injected into the pericardial space: we found that nanoparticles injected into the pericardial sac showed the highest retention. Pericardial sac injections are considered a minimally invasive method to optimally localize therapeutics to the heart [7,34]. UCNPs were retained in the pericardial region and evaded lymphatic clearance even though they measured ~38 nm (Fig 2), smaller than the reported circular fenestrations of the parietal pericardium (diameter up to 50 μm [35]), opening their potential for several new and exciting applications. Our findings are in agreement with a recent study showing that BODIPY-containing PLGA nanoparticles administered intrapericardially were retained with a half-life of roughly 7 days and showed no microscopic detrimental consequence to the heart [36]. Moreover, in terms of nanoparticle clearance, our findings correspond to other studies showing that the biodistribution and clearance of most nanoparticles in vivo result in their accumulation in the liver or kidney [37][38][39].
Nevertheless, studies are needed to identify whether acute exposure of the heart to UCNPs elicits detrimental long-term consequences. In addition, future studies will need to center on the development of UCNPs with improved excretion properties for potential clinical application.
Despite favorable localization of the nanoparticles to the heart via intraMY or pericardial space injections, their influence on cardiac rhythm in vivo and over time was unknown. Here we performed simultaneous and continuous ECG recordings with time- and space-controlled repetitive upconversion in the cardiac region and showed no adverse effects on HR. The frequency of the cardiac cycle is reflected as HR, one of the most important physiological parameters for a correct assessment of heart function [40]. Overall, nanoparticles in the myocardium or pericardial space did not alter the cardiac rhythm as detected by ECG analyses. Endpoint experiments included echography, to visualize and assess cardiac function via calculation of the LVEF [41], and HRV analysis, to identify cardiac rhythm abnormalities and assess the autonomic nervous system [42]. No difference between UCNP-injected and control subjects was noticed. Though a slight decrease in the LVEF was observed in mice receiving direct intraMY injections of UCNPs as compared to pericardial space injections, neither administration route resulted in cardiac muscle necrosis. Altogether, these data show that UCNPs can be localized and maintained in the heart for 15 days and can be subjected to repeated excitation and upconversion with no effects on cardiac function or tissue integrity. Future work now aims to use the UCNPs for drug delivery to the heart in models of hypertension and heart failure, for which several strategies have already been proposed, including photo-uncaging of the drug itself or release of the drug from loaded mesoporous carriers [43]. This spatiotemporal delivery to the myocardium will allow us to resolve on-target effects, including pharmacodynamic dose responses.
Conclusions
Altogether, we provide the first evidence for multifunctional UCNPs that show strong upconversion luminescence under 980 nm excitation, retention in the cardiac region over the course of 15 days, biocompatibility, and a high signal-to-noise ratio in vivo. The lengthy exposure of the heart to the UCNPs and their repeated excitation had no discernible effect on cardiac function. Future work is now aimed at applying our UCNPs to improve and multiplex with other imaging techniques, such as cardiac MRI. Our strategy has the potential to optimize targeted delivery of materials to the heart for biomarker identification in order to stratify heart failure and help enhance therapeutic strategies.
PKC Dependent p14ARF Phosphorylation on Threonine 8 Drives Cell Proliferation
ARF's role as a tumor suppressor has been challenged in recent years by findings from several groups ultimately showing that its functions can be strictly context dependent. We previously showed that ARF loss in HeLa cells induces spreading defects, evident as a rounded morphology of depleted cells, accompanied by a decrease in phosphorylated Focal Adhesion Kinase (FAK) protein levels and by anoikis. These data, together with the previous finding that a PKC dependent signalling pathway can lead to ARF stabilization, led us to the hypothesis that ARF functions in cell proliferation might be regulated by phosphorylation. In line with this, we show here that upon spreading ARF is induced through PKC activation. A constitutively phosphorylated ARF mutant on the conserved threonine 8 (T8D) is able to mediate both cell spreading and FAK activation. Finally, ARF-T8D expression confers a growth advantage to cells, leading to the intriguing hypothesis that ARF phosphorylation could be a mechanism through which pro-proliferative or anti-proliferative signals are transduced inside cells in both physiological and pathological conditions.
On the basis of this evidence, we sought to investigate whether ARF's role in cell spreading and its functional relation with FAK could be regulated by PKC activity. Here we show that during cytoskeleton remodelling induced by cell spreading, ARF protein levels increase in the cytoplasm through a PKC dependent mechanism. Mimicking the phosphorylation status of the protein is sufficient to drive its localization in the cytoplasm and to rescue both the spreading defect and the reduced FAK phosphorylation caused by ARF silencing in HeLa cells, resulting in an increased proliferative ability. Taken together, these data indicate that PKC activation can prime ARF involvement in cell spreading, leading to increased FAK activation and cell proliferation.
Results
Threonine-to-aspartate mutation of threonine 8 is sufficient to affect ARF localization. Threonine 8, lying in the most conserved region of the protein, is also highly conserved within the ARF protein sequence of different species. To analyse the relation between this site and the other PKC consensus sites (serine residues in positions 52 and 127 15 ), we constructed double (T8-S52) and triple (T8-S52-S127) mutants in which each single potential PKC site was replaced either with an alanine ("A" series), which cannot be phosphorylated, or with an aspartic acid ("D" series), which mimics the phosphorylated status of the protein. The ARF protein displays various degrees of accumulation in nucleoli and/or scattered throughout the nucleoplasm 24,25 . We then tested whether the inserted mutations could affect ARF subcellular localization by evaluating the subcellular localization of tagged WT and mutant ARF proteins transfected in U2OS cells by IF with an anti His antibody. For each mutant, we counted the number of transfected cells displaying nuclear (Fig. S1, nucleolar + diffuse nuclear, left and middle panels) and nucleo-cytoplasmic localization (Fig. S1, right panel), and these data are plotted in Fig. 1A. Immunofluorescence experiments showed that both the double and triple mutations mimicking the un-phosphorylatable status of the protein display a localization pattern similar to the WT and to the T8A mutant used as control (Figs 1A and S1). In contrast, the double and triple mutants of the "D" series localize both in the nucleus and in the cytoplasm in almost 50% of transfected cells (Fig. 1A), as previously reported for the T8D mutant 15 . These results suggest that the Thr8 mutation alone is sufficient to determine ARF localization. This allowed us to statistically analyse the role of T to A mutations vs T to D mutations. Comparing the percentages of nucleo-cytoplasmic localization of the mutants of the A series with those of the D series, we obtained the plot shown in Fig. 1B.
We could observe that the localization of the un-phosphorylatable ARF proteins (as well as of the WT) is significantly different from that of the "D" mutants. Collectively, these results suggest that mimicking phosphorylation on threonine 8 alone is sufficient to induce ARF accumulation in the cytoplasm.
ARF is induced in the cytoplasm during cytoskeleton reorganization through PKC activation.
As the recently identified ARF role in cell spreading correlates with its cytoplasmic localization, we wondered whether ARF phosphorylation could be the signal priming this new function. We first analysed whether PKC activation could be detected during cytoskeleton remodelling in HeLa and H1299 cells (lung carcinoma). To this aim, we monitored levels of pPKC during cell spreading by western blot using the anti pPKC pan antibody, which recognizes all PKC isoforms phosphorylated at a carboxyl-terminal residue homologous to serine 660 of PKC β II (activated pPKC). Cytoplasmic and nuclear protein extracts were collected from untreated (nt) and detached cells, as well as from replated cells five hours after seeding, when the spreading process is almost completed. As shown in Fig. 2, upon cell detachment pPKC and ARF protein levels increase in the cytoplasm (Fig. 2A and B) of both cell lines. Interestingly, the ARF increase is restricted to the cytoplasmic compartment. Real time quantification of ARF mRNA levels in untreated, detached and spreading cells shows no increase of ARF transcription in any of the tested conditions in either HeLa or H1299 cells (Fig. S2), suggesting that a post-translational mechanism is involved in ARF stabilization.
To better analyse whether the observed PKC activation is required to induce the ARF increase, we knocked down PKC expression by RNA interference and analysed ARF cytoplasmic levels by western blot in HeLa cells. As a control, cells were also treated with ARF specific siRNA. Control (siLuc and siSCR), PKC alpha and ARF depleted cells were collected 72 hrs after transfection or subjected to trypsinization and divided into two aliquots. One was subjected to direct lysis (Fig. 3A, detached cells), while the other was replated and cells were collected after 24 hours. Subcellular fractionation was performed in order to analyse ARF levels in the cytoplasm. In agreement with our hypothesis, ARF levels in PKC alpha depleted cells decrease upon detachment with an efficiency similar to that of ARF depleted cells, remaining low until 24 hrs post plating (Figs 3A and S3). Western blot with the anti-PKC antibody shows efficient silencing, although a small fraction of PKC molecules is resistant to silencing, in line with the notion that HeLa cells also express the epsilon and delta PKC isoforms, both recognized by this antibody and not targeted by the siRNA used. Pharmacological inhibition of PKC with bisindolylmaleimide I, an inhibitor of the catalytic subunit of PKC, confirms the requirement of catalytically active PKC in this process (Fig. 3B). Collectively, these experiments show that during cytoskeleton remodelling PKC is activated, resulting in increased cytoplasmic ARF protein levels.
ARF T8D mutant rescues spreading defect of ARF depleted cells and promotes cell viability.
We previously showed that ARF silencing induces an evident morphological defect in different cell lines, including HeLa and H1299 cells. Due to the inability to properly organize cytoskeletal structures, ARF depleted cells display a striking rounded morphology 14 . Interestingly, upon PKC downregulation by siRNA or by bisindolylmaleimide I treatment, cells showed spreading defects similar to those of ARF depletion (Fig. S4). We thus explored the hypothesis that ARF phosphorylation could have a role in this mechanism. To this aim, we tested whether reintroduction of mutant ARF proteins is able to fulfil the function of the endogenous protein through a rescue experiment. HeLa cells were transfected with either T8A or T8D expressing vectors, or with empty vector as control, by electroporation. Twenty-four hours later, cells were treated with scrambled or ARF-specific siRNA targeting the endogenous ARF transcript (Fig. S5a). To monitor the spreading process, cells were detached by trypsinization and replated, and images of spreading cells were collected 5 hours post plating. The ability to spread was quantified and plotted as shown in Fig. 4A. Less than 40% of ARF-depleted cells are able to properly spread in both empty vector and T8A expressing cells. In line with our hypothesis, T8D transfection fully rescues the morphology defect caused by ARF depletion. As shown previously, the rounded phenotype caused by ARF depletion is also accompanied by a decrease of FAK activation 14 . During cellular adhesion and upon integrin binding to the extracellular matrix (ECM), FAK is activated through auto-phosphorylation of tyrosine 397 (Y397). This activation is followed by increased Src binding to FAK, resulting in its massive phosphorylation on several tyrosine residues within the FAK sequence 26,27 . We thus performed a preliminary check of whether and which ARF mutant is also able to rescue FAK activation, by western blot of crude extracts with anti pFAK antibodies.
Results showed that in T8A expressing cells devoid of ARF expression, lower levels of both total FAK and pFAK on tyrosine 397 are achieved, while no difference between siSCR and siARF could be detected in T8D expressing cells (Fig. S5b). The failure of T8A to induce FAK phosphorylation could be the cause of the reduced spreading ability. On the other hand, this could reflect the inability of T8A to stabilize FAK. To discriminate between these two hypotheses, we overexpressed WT and mutant ARF in HeLa cells and analysed FAK levels during spreading by immunoprecipitation followed by phospho-FAK immunodetection (both pFAK-Y397 and pTyr). The experiment showed that all ARF proteins are able to positively induce FAK activation. Notably, T8D has the highest efficiency (Fig. 4B), as it is expressed at higher levels in the cytoplasm due to its increased stability. Our previous studies showed that FAK activation correlates with ARF's ability to confer pro-proliferative properties to cells. We thus analysed whether the T8D mutant, in virtue of its ability to activate FAK phosphorylation, could also confer a growth advantage when expressed in cells. To this aim, HeLa cells, which endogenously express the p14ARF protein and are characterized by inactivation of the p53 pathway, were transfected with plasmids encoding the WT, T8A and T8D ARF proteins or with empty vector (CMV), and cell growth was evaluated by comparing residual cell numbers 72 hrs after transfection. Comparable numbers of viable cells were found in both empty vector and ARF transfected samples, meaning that ARF expression in these cells has no effect on cell proliferation, as expected [28][29][30] . Similar behaviour was observed in T8A expressing cells. Interestingly, in line with our hypothesis, we consistently found an increased number of cells in T8D transfected samples (Fig. 4C). In contrast, a similar experiment performed in H1299 cells showed no differences between WT and mutants in cell proliferation (Fig. S6).
Discussion
Although to date the majority of studies on ARF have focused on its tumor suppressor roles, new evidence is paving the way to the hypothesis that ARF might promote survival. Here we presented evidence showing how p14ARF, upon PKC activation, is able to positively influence cell growth. The PKC family consists of at least ten serine/threonine kinases playing a central role in cell proliferation, differentiation, survival and death 31,32 . We have previously shown a novel and time-dependent ARF localization at focal adhesions upon cell spreading, mirroring ARF's role in cytoskeleton organization 14 . We have now further characterized this aspect, showing that this function is controlled by PKC activity and can account for some of ARF's pro-proliferative capabilities. While mimicking a dephosphorylated status of the protein does not alter its nuclear localization, mimicking phosphorylation of threonine 8 is sufficient to induce an increase of ARF protein levels in the cytoplasm. More interestingly, while the non-phosphorylatable mutant appears to retain the tumor suppressor properties of the WT protein, the T8D mutant instead behaves as a constitutively active mutant conferring pro-survival properties to the cells.
The evidence that ARF functions can thus be modulated by phosphorylation events taking place on the conserved threonine 8 leads us to the concept that ARF's role in cell survival may more realistically depend on the cellular and/or tissue type and context, thus accounting for the controversy surrounding the topic. It is interesting to underline how the cyclin-dependent kinase inhibitor p21, in addition to its well-known growth inhibitor function, also displays cancer promoting features in some cell contexts, as recently reported 33,34 . This behaviour, also shared by p14, suggests how the cell environment/status can act in an epistatic manner in directing tumor suppressor functions. In line with this, in H1299 cells, WT and mutant ARF proteins behave similarly as regards cell proliferation. Nevertheless, we found that T8D expression induces an increase of pFAK-Y397 phosphorylation (preliminary data not shown), suggesting that the effect of ARF phosphorylation on FAK activation can be disengaged from the effect on cell proliferation in these cells.
By means of pharmacological studies, protein kinase C has been implicated as a key molecule involved in cell spreading and migration, in part through interaction with beta1-integrin [35][36][37] . Given the context dependency of ARF's role within the cell, we focused on ARF's function in cell spreading in a cell context in which its tumor suppressor functions are blocked thanks to the inactivation of the p53 pathway, such as HeLa cells. Our data show that during cytoskeleton remodelling induced by detachment, both the activated form of PKC and ARF protein levels increase in the cytoplasm. We previously showed that ARF's ability to favour cell spreading is accompanied by the transduction of growth signals arising from the integrin/FAK functional interaction 14 . We have now added data showing that T8A can only partly rescue FAK activation, and that this is not enough to allow cell spreading. This can be explained by the decreased stability of the T8A mutant with respect to the T8D protein 15,38 . Our data depict a scenario in which, during cell spreading or uncontrolled PKC activation, ARF stabilization and thus FAK activation could allow pro-proliferative signal transduction within the cells. The effect of ARF and its mutants on cell proliferation suggested to us that T8D behaves as a constitutively active mutant. As the WT has no effect on cell proliferation, this could mean that in HeLa cells something blocks this ARF function. Interestingly, in H1299 cells we did not observe differences in the cell growth profiles of cells transfected with WT and mutant proteins. We previously demonstrated that the ARF-FAK pro-proliferative axis is interrupted in this cell context due to undetectable levels of the Death Associated Protein Kinase (DAPK). DAPK is a serine/threonine kinase playing important roles in tumor suppression and apoptosis. We previously showed that ARF expression prevents DAPK mediated anoikis.
It has been shown that FAK activation can be counteracted by DAP kinase expression 39,40 , which disrupts signal transduction between integrin and FAK upon ECM interaction during spreading. This leads to the interesting hypothesis that ARF and/or T8D could be able to protect FAK from the negative effect of DAPK, with T8A being less efficient. Interestingly, when we analysed transfected cells 24 hrs after transfection, we observed that while both controls and ARF expressing cells displayed a certain percentage of dying cells, T8D expressing cells did not (data not shown). This suggested to us that T8D expression could protect cells from cell death and thus result in the observed increased cell number.
Our data led us to conceive a model in which, during cell spreading and PKC activation, p14ARF protein levels increase in the cytoplasm. It has been widely reported that PKC mediated phosphorylation regulates the composition and turn-over of focal adhesions, where assembly and disassembly of actin fibers take place upon different environmental inputs. In particular, cellular motility and invasiveness, two of the worst signs of cancer progression, are sustained by this highly dynamic phenomenon 20,41,42 . PKCs have thus gained recognition as potential therapeutic targets for the treatment of various malignancies. A deeper understanding of the PKC dependent ARF functions, both in physiological and pathological contexts, may provide useful information about the environmental cues that determine ARF's functions as tumor suppressor or tumor promoter.
Materials and Methods
Cell cultures, transfection and treatments. HeLa cells were purchased from SIGMA. U2OS and H1299 were purchased from the American Type Culture Collection (ATCC) and authenticated by STR DNA Profiling Analysis. Cells were grown as described 15 and were routinely tested for mycoplasma contamination by a PCR based method and kept in culture for no more than 6 weeks after resuscitation.
The ARF mutants used in this study were generated as described 15 . The cells were transfected with Lipofectamine 2000 reagent (Invitrogen) or by electroporation (Neon Transfection System, Life Technologies, Carlsbad, CA, USA) as described in 43 .
For RNA interference experiments, the ARF siRNA (harbouring the stealth modification), which anneals in exon 1β of the p14ARF transcript, and the scrambled siRNA (negative control) sequences have been reported in Vivo et al., 2009. PKC alpha and luciferase siRNAs are available from Qiagen (Hilden, Germany). For the rescue experiment, the cells were transfected with an ARF siRNA that anneals in the 5′UTR region of the endogenous ARF transcript, as previously described in Vivo et al., 2017 and Kobayashi. All siRNAs were transfected using RNAiMAX reagent (Invitrogen). Rescue was performed as described in Vivo et al., 2017. Treatments in this study were performed as follows: treatment with bisindolylmaleimide I (from Calbiochem): 24 hrs after plating, HeLa cells were treated either with DMSO or with bisindolylmaleimide I at 5 μM or 2.5 μM final concentration for 2 hours and then detached by trypsin to synchronize the adhesion/spreading process. An aliquot of cells was harvested and total extracts prepared for subsequent analysis as described. Another aliquot of cells was replated in the presence of bisindolylmaleimide I at 2.5 μM final concentration and, at 5 hrs and 24 hrs post plating, the cells were harvested and total extracts prepared as described. Live phase-contrast images were acquired using a Nikon Eclipse microscope (Tokyo, Japan) with a 20x objective. 5 fields were randomly selected in the plates for each experimental point and images acquired with the Image Pro Plus software (Media Cybernetics). Cell spreading was quantified as described in Vivo et al. 2017. Cell proliferation assay. 4 × 10 6 HeLa cells were transiently transfected by electroporation as described 14 .
For H1299, 4 × 10 6 cells were transiently transfected by electroporation with the indicated plasmid at 500 ng. At 72 hrs after transfection (48 hrs for H1299), the cells were counted using the Scepter™ automatic cell counter (Millipore) following the manufacturer's protocol. The data obtained were analysed using Scepter™ Software Pro (from Millipore).
Immunoprecipitation assays were performed with cytoplasmic extracts (from 5 × 10 6 cells per sample). Subcellular fractionation was done as previously described and cytoplasmic extracts were incubated with anti-FAK C-20 antibody as described in 14 and in 44,45 . WB and antibodies. Western blot (WB) analysis was performed as previously described (Vivo et al., 2013).
IF and localization were performed as described in 15 , using anti His antibody to detect WT and mutant ARF proteins.
Spreading efficiency. HeLa cells were detached by trypsinization and replated at a density of 1 × 10 5 /ml. Live phase-contrast images were acquired using a Leica DMi8 inverted microscope (Wetzlar, Germany) with a 20x objective. 5 fields were randomly selected in the plates for each experimental point and images acquired with the LAS X Life Science software (Leica). To quantify the percentage of rounded cells, for each transfection point we counted rounded (and adherent) cells and pooled data from three to five experiments as described in 14 .
Real time experiment was performed as described 46 .
Statistical analysis. Data presented in this work derive from experiments performed at least in triplicate (biological replicates), except when stated otherwise. The sample size of each experimental point is reported in the relevant figure legend, as is the specific statistical analysis performed. In all the experiments in which single cells were analysed, 5 to 10 fields were randomly selected in the coverslip for each experimental point. t-tests and ANOVA were performed using GraphPad Prism 5.0 software.
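As a sketch of the kind of two-group comparison described here, a Welch t statistic can be computed directly; the group values below are hypothetical percentages for illustration, not data from the study.

```python
import statistics as st

def welch_t(a, b):
    """Welch's two-sample t statistic and approximate degrees of freedom
    (Welch-Satterthwaite), suitable for groups with unequal variances."""
    ma, mb = st.mean(a), st.mean(b)
    va, vb = st.variance(a), st.variance(b)   # sample variances (n - 1 denominator)
    na, nb = len(a), len(b)
    se2 = va / na + vb / nb                   # squared standard error of the difference
    t = (ma - mb) / se2 ** 0.5
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical per-experiment percentages of nucleo-cytoplasmic localization:
a_series = [8, 12, 10]    # "A" series (illustrative values)
d_series = [48, 55, 50]   # "D" series (illustrative values)
t, df = welch_t(a_series, d_series)
```

The t statistic would then be compared against the t distribution with df degrees of freedom, which is what a statistics package such as GraphPad Prism does internally.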
Data availability. All data generated or analysed during this study are included in this published article (and its Supplementary Information files).
Beyond misperception–two types of mental model errors in a dynamic decision task
This article contributes to research on mental models and how they underpin decision policies. It proposes a framework for the joint use of mental models of dynamic systems and the theory of mental models initiated by Johnson-Laird, and defines two types of errors: (1) misrepresentation of the system's structure, and (2) failure to deploy relevant mental models of possibilities. We use a dynamic decision task based on Moxnes' "reindeer experiment" to formulate three intuitive policies, their underlying mental models, and the reasoning, and evaluate the policies under varying initial conditions. Each of the policies generates problematic behaviors like dependence on initial conditions, underperformance because of flawed goal setting, and oscillation due to leaving the delay in a feedback loop out of account. We identify errors of both types in the mental models and relate them to the behavioral problems. Limitations and questions for further research conclude the paper.
Introduction
This article contributes to mental model research in dynamically complex situations. It introduces the combined use of mental models of dynamic systems (Doyle and Ford, 1998, 1999; Groesser and Schaffernicht, 2012; Lane, 1999) and a theory of reasoning - the theory of mental models (Johnson-Laird, 1983, 2010) - to describe two types of mental model errors that lead to detectable flaws in policies. Roughly two decades ago, Erling Moxnes's "reindeer experiment" showed that individuals with professional experience underperformed in a comparatively simple dynamic decision task (Moxnes, 2004). Poor decisions seemed to come from failing to perceive relevant feedback relationships. In Moxnes's words, the data "suggests, however, that a vast majority had highly inappropriate mental models" (p.151, emphasis added). But his research did not aim at eliciting or analyzing these mental models - it was a contribution to the literature on the misperception of feedback, which had emerged a decade earlier (Sterman, 1989a, b). What followed was a series of experimental studies focused on developing mathematical models that replicate the subjects' decisions (see Gary and Wood, 2016), without the aim of studying the underlying reasoning of the participants.
In management and organization studies, mental models are the way people understand the structure and the working of a system (Rouse and Morris, 1986). Compatible with this general definition, the system dynamics field has developed a conceptual definition of mental models of dynamic systems (MMDS in the singular and MMDSs in the plural). This definition introduced specific features for the study of feedback-rich systems, in particular that an MMDS contains the perceived structure of a system (Doyle and Ford, 1998, 1999; Lane, 1999). Groesser and Schaffernicht (2012) then proposed an operational definition with a data structure for representing MMDSs. However, only a few studies with a detailed examination of MMDSs have been published so far (for a discussion, see Schaffernicht, 2017, 2019). One reason for this scarcity may be that researchers consider self-reported language assertions as not sufficiently reliable for scientific research (Arango A., Castañeda A., and Olaya M., 2012). Another reason may be that an MMDS only represents structure (not the reasoning process), and researchers find the effort to elicit, represent and analyze this type of mental model disproportionate compared to the insights to be gained from the comparison of MMDSs.
Cognitive psychologists elicit and analyze assertions articulated by individuals when studying reasoning. There are diverse theories of human reasoning, like probabilistic approaches (Baratgin et al., 2015; Chater and Oaksford, 1999) and formal inference rules (Braine and O'Brien, 1991; Braine and O'Brien, 1998; O'Brien, 2014; Rips, 1994). The theory of mental models (Johnson-Laird, 1983, 2001, 2010; Ragni and Johnson-Laird, 2020) proposes that reasoning is based on mental models of possibilities (we use MMP for the singular and MMPs for the plural), which are consistent with the reasoner's mental image (the so-called iconic mental model) of the situation in which a decision has to be made.
Cognitive scientists and management scholars focusing on the perceived structure use the same term, mental model, but give it different meanings - albeit the historic roots of the term are the same (Johnson-Laird, 2004). Nevertheless, the definitions of mental models are complementary: the mental models of possibilities which an individual mentally deploys are drawn from the underlying iconic mental model of the situation. We propose to consider an MMDS as a particular form of iconic model. This creates a link between both types of mental models and leads to two types of mental model error: (1) MMDS errors are when the structure of the system is misrepresented; (2) MMP errors are when a relevant mental model of possibilities is not deployed and processed.
One can only think of possibilities in terms of recognized features. Therefore, some MMP errors may happen because of MMDS errors. Of course, unconsidered possibilities can lead to policies with "surprising" effects.
To test this combination, and the possibility of identifying mental model errors that lead to flawed policies, we have used a dynamic decision task inspired by the "reindeer experiment" (Moxnes, 2004).
Designed for naïve decision-makers without specific domain knowledge or training in systems thinking, the "herd management game" differs superficially from the original task but maintains its causal structure.
Decision-makers must maximize the production drawn from the animals without compromising the herd's sustainability. Intuitive reasoning steps based on the briefing information lead to three intuitive decision policies. The implied mental models (MMPs and MMDSs) are derived from the reasoning steps. Simulation shows that each of the policies generates problematic behaviors like dependence on initial conditions and oscillations; production performance is mostly poor. We identify several MMDS and MMP errors in the mental models underlying each of the policies; we also pinpoint the links between misperceiving structural features of the system and the flaws in each policy.
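The oscillation mechanism can be sketched with a toy model (not the actual herd management game; all parameter values, the linear harvest policy, and the first-order perception delay are our own illustrative assumptions): a herd grows at a fractional rate, while the decision-maker harvests in proportion to how far the perceived herd size exceeds a target, and perception lags the true herd size.

```python
def simulate(h0=1200.0, p0=1000.0, target=1000.0,
             growth=0.1, gain=1.0, delay=4.0,
             dt=0.25, years=60):
    """Euler simulation of a herd stock with a delayed-perception harvest policy.

    h: actual herd size (the stock); p: perceived herd size, a first-order
    information delay of h. The harvest decision reacts to p, not h, so the
    balancing feedback loop contains a delay the policy leaves out of account.
    """
    h, p, series = h0, p0, []
    for _ in range(int(years / dt)):
        harvest = max(0.0, gain * (p - target))  # policy ignores the perception lag
        dh = growth * h - harvest                # net change of the stock
        dp = (h - p) / delay                     # perception catches up slowly
        h += dh * dt
        p += dp * dt
        series.append(h)
    return series

herd = simulate()
# Count direction changes of the herd trajectory: more than one reversal
# indicates oscillation rather than smooth adjustment to equilibrium.
diffs = [b - a for a, b in zip(herd, herd[1:])]
reversals = sum(1 for a, b in zip(diffs, diffs[1:]) if a * b < 0)
```

With these assumed parameters the policy produces damped oscillation around an equilibrium herd of gain*target/(gain-growth), about 1111 animals; removing the delay (setting p equal to h) makes the adjustment smooth.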
The ability to identify specific types of mental model errors that explain flaws in decision policies is useful for advancing the understanding of how people make deliberate decisions. Hence, the combined use of both types of mental model concepts and methods has the potential to facilitate mental model research.
The remainder of the article is structured as follows: the second section briefly introduces the theory of mental models and its link with mental models of dynamic systems. Then, the herd management game section elaborates the three policies, their underlying mental models, and the reasoning steps, followed by a discussion of their behavior and performance in the simulations. The subsequent discussion section introduces a diagnostic of MMDS errors and MMP errors and then outlines important research questions emerging from our results. The conclusions summarize our findings and their limitations, together with a call for further cumulative research.
Mental models of dynamic systems as iconic representations of decision situations
The conceptual definition of a mental model of dynamic systems by Doyle and Ford (1998, 1999) established the analogous relationship between the actual situation and the mental model of it. Groesser and Schaffernicht (2012) then proposed an operational definition. They argued that if the causal structure of the external situation contains feedback loops with stocks and flows, an accurate mental model of this system will contain the same types of elements. This definition is conceptually compatible with Rouse and Morris's influential elaboration of mental models (1986) and allows researchers to build on mainstream methods in organizational and management studies (Langfield-Smith and Wirth, 1992; Markóczy and Goldberg, 1995; Schaffernicht, 2017; Schaffernicht and Groesser, 2011, 2014) while putting interdependence into focus (Schaffernicht, 2019).
MMDSs represent the causal structure of a system; they do not include the reasoning or the mentally simulated behaviors.
The theory of mental models: reasoning with mental models of possibilities
The theory of mental models (see also Khemlani and Johnson-Laird, 2019) offers one explanation for the different aspects involved in human reasoning. The structural elements of MMDSs-variables, causal links, and feedback loops-constitute a vocabulary for assertions concerning what can happen.
For instance, a positive causal link from a variable motivation to another variable effort will give rise to assertions like "when there is a higher motivation, then there will be more effort". Such assertions imply several possibilities, and the theory analyzes and explains how and when humans process them or fail to do so.
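A minimal way to make this concrete is to encode an MMDS fragment as a set of signed causal links and generate the corresponding assertions; the variable names and this encoding are our own illustrative assumptions, not the operational definition of Groesser and Schaffernicht (2012).

```python
# Hypothetical MMDS fragment: (cause, effect) -> polarity of the causal link.
mmds = {
    ("motivation", "effort"): "+",
    ("effort", "fatigue"): "+",
    ("fatigue", "motivation"): "-",   # closing a balancing feedback loop
}

def assertion(cause, effect, sign):
    """Turn one signed causal link into the kind of assertion reasoners articulate."""
    direction = "more" if sign == "+" else "less"
    return f"when there is higher {cause}, then there will be {direction} {effect}"

statements = [assertion(c, e, s) for (c, e), s in mmds.items()]
```

Such assertions are the raw material from which mental models of possibilities are deployed, which is where the two mental model concepts meet.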
To do that, it proposes several theses and general principles. Three of the principles are relevant here: (1) Representation; (2) Dual process; and (3) Modulation (see e.g. Khemlani, Byrne, and Johnson-Laird, 2018). To introduce them, we focus on the way the theory accounts for the conditional. Sentential connectives refer to models (Johnson-Laird and Ragni, 2019). People deem those models as possibilities and link them to conjunctions (Khemlani, Hinterecker, and Johnson-Laird, 2017).
The conditional is a sentential connective relating an antecedent p to a consequent q in the manner expressed by the following assertion (identified by roman numerals to avoid confusion with the assertions discussed in section 3):
[I] If p then q
Here, p and q often represent assertions describing events like "it rains" and "I'll be wet"; however, they can also refer to behaviors like "I work more hours per day" and "I will feel more fatigue." When models of possibilities are deployed, they correspond to an assertion like [I] as the following set (Johnson-Laird, 2012):
[II] Possible (p & q)
Possible (¬p & q)
Possible (¬p & ¬q)
The symbol "¬" stands for negation. There is one combination missing from [II], which is shown in [III]:
[III] (p & ¬q)
The reason for this is clear: as in classical propositional logic, [III] represents the situation in which a sentence such as [I] is false.
The principle of representation supports all of this. It claims that sentential connectives are usually understood as sets of "conjunctions of possibilities" (Khemlani et al., 2018). Hence it is this principle that provides that [II] is the set of possibilities related to [I].
The second general principle is the principle of dual process. The theory of mental models is a dual-process theory (Khemlani et al., 2018) arguing that two systems work in the human mind. System 1 is quick, intuitive, and effortless. System 2 is slow, reflective, and needs effort (Byrne and Johnson-Laird, 2020; Stanovich, 2012). The theory of mental models follows this approach, distinguishing between the intuitive mental models of possibilities (MMPs) visible to the first system and the so-called Fully Explicit Model (FEM; while we do not capitalize "mental models of dynamic systems" or "mental models of possibilities", we follow the convention in the literature of the theory of mental models to capitalize this term), which requires the second system to step in.
If something is asserted in the form of a conditional, only one mental model is deployed: system 1 identifies only one model. That model matches the first conjunct in [II]; this conjunct is:
[IV] Possible (p & q)
On the other hand, the Fully Explicit Models of the conditional are all of those in [II]. The theory suggests that Fully Explicit Models are harder to note because they include negations: p is negated in the second conjunct in [II], and both p and q are negated in the third conjunct in [II] (Johnson-Laird, 2012). This is an essential point of the theory of mental models: many reasoning mistakes happen because of insufficient mental effort, leading to taking only part of the possibilities into account.
Humans tend to use the quick system and only think about MMP [IV]. If they made a further effort and used the reflective second system, they could access [II], and the error rate should be lower (Byrne and Johnson-Laird, 2009).
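The split between the single intuitive model and the Fully Explicit Models can be sketched in a few lines of code. The helper below is our own illustration, not part of the theory's formal apparatus: it enumerates all truth-value combinations of p and q and removes the ones an assertion rules out as impossible.

```python
from itertools import product

def fully_explicit_models(excluded):
    """All truth-value combinations of (p, q) except those the
    assertion rules out as impossible."""
    return [m for m in product([True, False], repeat=2) if m not in excluded]

# The basic conditional "if p then q" excludes only (p & not-q).
conditional = fully_explicit_models(excluded=[(True, False)])
print(conditional)        # [(True, True), (False, True), (False, False)]

# System 1 attends only to the first, salient model: Possible (p & q).
salient = conditional[0]
```

The three tuples printed correspond to the three conjuncts of [II]; the excluded tuple corresponds to [III].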
According to the third general principle, the principle of modulation, the content of sentences can change their models of possibilities (Khemlani et al., 2017; Quelhas, Johnson-Laird, and Juhos, 2010). Assertions [V] through [VIII] demonstrate this. Their structure is typical for the theory of mental models (Orenes and Johnson-Laird, 2012).
[V] If they come from Germany, then they come from Berlin.
Taking "they come from Germany" as p and "they come from Berlin" as q, the set of possibilities of The reason for this is clear. Possibility [VII] is the second possibility implied by [II]; it cannot be admitted for [V] because it is not possible that they do not come from Germany, and they come from Berlin.
[VII] Possible (¬p & q)
However, the case in which the conditional is false in classical logic, that is, [III], has to be added in this case. It is the second conjunct in [VI]. The reason is apparent: they can come from Germany and not from Berlin, but another German city.
Still another example is [VIII].
[VIII] If it is cloudy, then it may be warm.
This last conditional admits all the combinations, that is, [II] plus [III]. Therefore, its set of possibilities is [IX]:
[IX] Possible (p & q)
Possible (p & ¬q)
Possible (¬p & q)
Possible (¬p & ¬q)
Here p indicates that it is cloudy, and q refers to the fact that it is warm.
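Under the same enumeration idea, modulation amounts to changing which combinations an assertion excludes. The sketch below (our illustration, with the three example conditionals encoded by hand) shows how content changes the resulting sets.

```python
from itertools import product

def models(excluded):
    # Truth-value combinations of (p, q) not ruled out by the assertion.
    return [m for m in product([True, False], repeat=2) if m not in excluded]

# [I]    "If p then q":            excludes (p & not-q)
basic = models(excluded=[(True, False)])
# [V]    Germany/Berlin example:   modulation excludes (not-p & q) instead
germany_berlin = models(excluded=[(False, True)])
# [VIII] "If cloudy, may be warm": excludes nothing, all four are possible
cloudy_warm = models(excluded=[])

print(len(basic), len(germany_berlin), len(cloudy_warm))  # 3 3 4
```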
The four possible scenarios can happen when [VIII] is true. Nonetheless, up to ten combinations of models can be linked to the conditional (Johnson-Laird and Byrne, 2002). Although the literature on the theory of mental models does not use causal diagrams, such diagrams and the definition of polarity can be used to represent assertions like those in [II]. Consider an example in which p and q represent statements about the behavior of variables rather than statements about facts or events: p stands for "I work fewer hours" and q for "I feel more fatigue." Then assertion [I], if p then q, appears to be a very simple causal diagram (black printed variable and link in Figure 1), involving only one positive link from p to q. As will be intuitive for most individuals, the positive polarity implies the mental model "Possible (p & q)." Note that the possibility (¬p & ¬q) is also implied by the positive polarity; however, most untrained individuals will find it harder to imagine, as predicted by the theory of mental models. The remaining two mental models are not intuitive, but they make sense once one considers other factors of influence. The causal diagram states that there are also "other causes of fatigue," which explains why it is reasonable to state Possible (¬p & q); this part of the diagram is dark gray because it is not obvious. The last component of the diagram is even less salient: there are also energizing causes, like stimulating substances, that will relieve fatigue. While (p & ¬q) is altogether possible, coming to this thought is more effortful. It is easy to overlook.
Figure 1: The relationship between mental models and causal diagrams
If the additional variables are not included in the MMDS, possibility (p & ¬q) would not only be overlooked because of its lack of salience: what is not in the MMDS is not available for reasoning. If there is only one causal link from "work hours" to "fatigue," this actually states that this causal relationship is always there and there is no other causal influence stemming from other variables (Pearl, 2009;Pearl and Mackenzie, 2018). Recognizing less salient factors as relevant and accounting for them in the MMDS takes more mental effort. According to the theory of mental models, if there is no reason to make such additional efforts, they will be overlooked.
Conceptual framework
The conceptual framework proposed here consists of four layers. The first layer is the situation: an unstructured cloud of features that may or may not be relevant to achieve a given goal. At level 2, the MMDS is someone's attempt at identifying the situation's causal structure inside a conceptual boundary. Any MMDS may have two types of boundary mismatch: (a) relevant features may have been left out, and (b) irrelevant or even illusory features may have been included. Boundary mismatches can be revised and corrected later on (Sterman, 2002), but while they exist, they will preclude the recognition of relevant possibilities or induce the recognition of irrelevant ones: every possibility for every relationship between two variables in the MMDS is part of the Fully Explicit Model. We refer to MMDS boundary mismatches as error type 1.
The third layer corresponds to the MMPs. A second type of error is found here: some of the possibilities included in the Fully Explicit Model may not be considered in mental models of possibilities. A possibility that is unaccounted for is equivalent to "this is not possible". But if it is actually possible in the situation (layer 1), then this is an MMP error. We refer to MMP errors as error type 2. Just like type 1 errors, errors of type 2 lead to flawed decision policies.
Policies are the fourth layer. When designing a decision policy, leaving possible circumstances unconsidered opens the door to decisions that provoke undesired outcomes. Possibilities may have remained unconsidered because of either type of error. Hence, avoiding, or identifying and correcting, such errors is important for policy development.
The four layers situation, MMDS, MMP, and policy are illustrated in Figure 2, which also expresses that each layer depends on the previous one. Since people use their reasoning abilities to figure out decision policies, the ability to identify MMDS errors and MMP errors is helpful. The following section shows this in an exemplary decision task.
The decision situation and its mental model of a dynamic system
In contrast to Erling Moxnes's interest in individuals with specific domain knowledge, here the aim is to study the thinking of naïve individuals: people who (1) do not have specialized knowledge in the domain of the decision situation, and who (2) do not have specific training in analytical or other reasoning methods. The "reindeer experiment" is easy to convert into a "herd management game" (referred to simply as the "game" hereafter) replacing "reindeer" and "lichen" by "animals" and "food," and avoiding references to "slaughtering" because they can trigger emotional reactions in some individuals. The causal structure of the "game" is analogous to the "reindeer experiment".
Decision-makers in the game are briefed with the same information as in the "reindeer experiment".
Their goal is to maximize the production based on the animals in their herd in a sustainable way, without diminishing or even annihilating the food, over a span of 15 years. The only decision they take each year is setting the desired herd size. All animals have a constant rate of reproduction, independent of food availability. But if food becomes insufficient, some animals will starve. Food (measured in mm) has a yearly rate of regeneration that depends on the current thickness. If there is little food, there will be little regeneration. And if food approaches the maximum level of 60 mm, the rate of regeneration also drops. We adopt the computation used by Moxnes. Decision-makers are not shown this equation but are told that regeneration is highest midway between 0 mm and 60 mm of food; in that case, the annual food regeneration would be 5 mm. They get historical data from a fictitious predecessor. All animals are equal in their annual food consumption of 0.004 mm. The briefing information allows a systems modeler or systems thinker to construct a sufficiently complete MMDS to figure out a successful herd management policy; we, however, work with naïve players.
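The game mechanics described above can be sketched as a simple yearly update. The regeneration function below is an assumption: a parabola that is zero at 0 mm and 60 mm and peaks at 5 mm/year at 30 mm, which matches the briefing figures but is not necessarily Moxnes's exact equation; the within-year ordering of consumption and regeneration is likewise assumed.

```python
MAX_FOOD = 60.0      # mm, maximum food thickness
CONSUMPTION = 0.004  # mm of food per animal per year

def regeneration(food):
    """Annual food regeneration in mm. ASSUMED parabola: zero at 0 mm and
    at 60 mm, peaking at 5 mm/year when food = 30 mm (consistent with the
    briefing, but not necessarily Moxnes's exact equation)."""
    return food * (MAX_FOOD - food) / 180.0

def step(food, herd):
    """One simulated year (a sketch of the mechanics described in the text;
    the within-year ordering of consumption and regeneration is assumed)."""
    demand = herd * CONSUMPTION
    if demand > food:                        # not enough food: starvation
        herd = int(food / CONSUMPTION)
        demand = herd * CONSUMPTION
    food = min(MAX_FOOD, food - demand + regeneration(food))
    return food, herd

print(regeneration(30.0))  # 5.0, the stated maximum
```

At a food level of 30 mm, a herd of 1,250 animals consumes exactly the 5 mm that regenerate, so the state is stationary under this sketch.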
The situation is summarized in the following Figure 3, where stock variables are shown as boxes.
Variable names appearing in the diagrams are printed in italics when used in the text. Feedback loops are labeled by R for reinforcing and B for balancing, but no names are assigned because these loops will play no role in the mental models discussed later. The polarity of the loop from food to food regeneration depends on the level of food: for values smaller than 30 mm, the link is positive, implying a reinforcing loop. However, for values greater than 30 mm, the link is negative, and the loop is balancing. The varying polarity is symbolized by a "v".
Basic assumptions and common elements
In this subsection, we introduce three naïve policies for steering the number of animals in the herd to maximize production without sacrificing sustainability. These policies have some commonalities in terms of the underlying mental model of the situation. However, some elements of the mental model are interpreted in diverging ways, leading to different reasoning steps and possibilities, and eventually to different policies. We describe these structural features, assumptions, reasoning steps, and decision rules using a common set of variable names and typographic conventions. Variable names are in italics and the description of values or behaviors is underlined. The symbols =, < and > are used to compare values of variables or the results of calculations; ← stands for "is assigned the value of," and → is used for "if … then" in conditional statements. Some statements include references to the year before or the year after, and in these cases, the variables are printed with sub-indices y, y-1 and y+1. Since the three policies do not differentiate between stocks and flows and do not consider the feedback loops, the causal diagrams representing them do not show stock variables in boxes, and no loop labels are included.
The following assertions describe arguments that are common to the three policies, and the corresponding mental models of possibilities. We label assertions with A and sequential numbers, using dots to represent hierarchical relationships. The mental models of possibilities have a suffix -Ps (for possibility) followed by a sequential number starting with zero for the first one (the salient possibility), and then the possibilities in the FEM.
A1) I have more animals, → I will have more production.
A1-Ps0 - Possible (more animals & more production)
A1-Ps1 - Possible (more animals & ¬ more production)
A1-Ps2 - Possible (¬ more animals & more production)
A1-Ps3 - Possible (¬ more animals & ¬ more production)
A1-Ps0 is the salient MMP. A1-Ps1 is impossible in the game when one considers only one year.
However, an increased number of animals in a given year may lead to overconsumption in later years, which eventually diminishes the accumulated production at the end of the 15 years. Individuals who overlook this possibility can compromise sustainability and the overall performance.
Concerning A1-Ps2, the productivity of real animals can be increased by, for instance, selecting highly productive individual animals and possibly additional nutrition. The game excludes this: individuals who overlook this possibility are not in danger of making flawed decisions.
In real life, A1-Ps3 can be problematic because one might achieve more production without increasing the number of animals by, for instance, boosting productivity with a food complement.
However, since this is impossible in the game, A1-Ps3 is always true, and there is no risk in omitting it from consideration.
The next step asserts:
A2.1) I have more animals, → there will be more food consumption.
Assertion A2.1 leads to:
A2.2) there is more food consumption, → food will decrease.
A2.2-Ps0 is an intuitive possibility, but the game's structure also allows the less obvious possibilities to be true. Consumption drains food, but there is also the inflow of natural food regeneration, which depends on the previous food level. Whenever food regeneration exceeds consumption, food will not decrease (A2.2-Ps1). For instance, if there was very little food and animals have been drastically reduced in previous years, food has increased during the preceding year. This may encourage an increase in the herd size. If food had been smaller than 30 mm before, the increased food stock caused food regeneration to increase. In that case, there will be an additional amount of food regeneration.
If the additional consumption is not greater than the additional food regeneration, the food net regeneration cannot become smaller than it was in the prior year. However, food is not always less than 30 mm.
A2.2-Ps2 addresses the opposite case: if the food level has been greater than 30 mm and food had been increasing over the past years, then next year's food regeneration will be smaller than before.
So, even if the herd size remains constant and consumption does not increase, this amount of consumption may now be greater than food regeneration, leading to a negative food net regeneration: a decrease in food. Individuals who do not heed A2.2-Ps1 and A2.2-Ps2 risk keeping their herd too small and being surprised when food starts increasing or decreasing.
It is possible to have a constant or decreased consumption and equal or more food (A2.2-Ps3): if there is either (a) an equilibrium between food and animals or (b) natural food regeneration increases beyond consumption, there will not be more consumption and more food. However, failure to consider this possibility cannot lead to problematic effects, except in the very special case that the positive net food regeneration pushes food from just below 30 mm to just over 30 mm. Then the same consumption would produce a negative net food regeneration greater than the previous positive net food regeneration. This is very unlikely, so overlooking this possibility would be an inconsequential type 2 error.
A2.3) food decreases each year → this violates sustainability.
This assertion represents a piece of general knowledge concerning sustainability. In the game's context, sustainability is the ability to keep up operations for the human who wants to extract production from animals, for the animals who want to stay alive, and for the plants that serve as food.
Without food, there would be no animals and no production. Therefore, this assertion is not a conditional; it is rather a prohibition rule: the food must not decrease each year. It is included in the chain of reasoning steps because it is the context in which the following steps take place.
A2.4)
Therefore: I have more animals, → I will need more food (to be sustainable).
One may then think that having a larger stock of food enables one to sustain more animals: A3) I have more food, → I can sustain more animals.
A3-Ps0 is a very intuitive possibility, but it is only true if food < 30 mm: whenever food > 30 mm, it is false because food regeneration will decrease and therefore food will decrease. When food > 30 mm, A3-Ps1 is true, and so is A3-Ps2: if food decreases, it will approach a thickness of 30 mm, which yields the maximum food regeneration. A3-Ps3 can be safely neglected. Assertion A3 is intuitive but disregards the dynamic nature of the situation; this will later have a consequence on the choice of a policy. The immediate consequence is:
A4) I have the most food, → I can sustain the most animals.
Based on the discussion of the possibilities of A3, clearly A4-Ps0 is false, albeit intuitive. The food regeneration for food = 60 mm is less than for food = 30 mm. This implies that A4-Ps1 and A4-Ps2 are true (A4-Ps3 can be neglected). However, naïve decision-makers who pay attention to the salient possibilities will come to the following conclusion:
A5)
From A4) and A1) it follows that: I have the most food, → I will have the largest production.
The salient MMP A5-Ps0 is false, and the less obvious possibilities A5-Ps1 and A5-Ps2 are true.
From here on, the reasoning steps for each policy are distinct. The possibilities A3-Ps0, A4-Ps0 and A5-Ps0 are the salient ones. Consider next the consequences for the ensuing reasoning steps. Only the decision-maker's policy can set the values for animal target and food target. Since the animals depend on food, the food target must be set first. The information provided in the briefing is sufficient in principle to figure out the correct value. It explicitly states that maximum food net regeneration is reached at half of the maximum food level. Decision-makers must infer by themselves that food decreases due to the animals' food consumption (assertions A2.1, A2.2, A2.3, and A2.4). They also must understand the need to compensate for this decrease of food without being told so. Intuitively, the reasoning steps A3-Ps0, A4-Ps0, and A5-Ps0 will come to mind. A very naïve decision-maker may not think this through to define a food target, but instead settle with the thought "I do not know":
A6.1) food target ← unknown.
The conclusion implied by the intuitive reasoning steps would be to use the highest possible food level:
A6.2) food target ← 60 mm.
A more effortful line of reasoning brings food net regeneration into focus: the largest food consumption must not exceed the largest food regeneration to maintain sustainability. This implies that "half of the maximum food" leads to the highest sustainable consumption, and therefore:
A6.3) food target ← 30 mm.
The three specific versions of assertion A6 assign a particular value to a variable and are not conditionals. The value assigned will have a consequence for decisions taken, not for the mental processing of the possibilities. This leads to three different policies for driving the animal target, and each of them will be introduced in turn.
Policy P1
This policy assumes that the decision-maker cannot specify a value for food target. This means that the decision-maker cannot directly identify a value for animal target but will instead observe and interpret the development of food "since last year." But it is important for the decision-maker to interpret the meaning of the herd size with respect to sustainability and the meaning of the food level in that context.
There are two ways to think about how to recognize a sustainable situation, and both are based on the detection of a stable food level: A7.1) foody = foody-1 → animalsy-1 is the sustainable number of animals.
A7.2) foody = foody-1 → animalsy-1 is the sustainable number of animals given foody-1, but for other food levels, the sustainable number of animals might be different.
In the case of A7.1, we have the following explicit possibilities:
A7.1-Ps0 - Possible (foody = foody-1 & animalsy-1 is the sustainable number of animals)
A7.1-Ps1 - Possible (foody = foody-1 & ¬ animalsy-1 is the sustainable number of animals)
A7.1-Ps2 - Possible (¬ foody = foody-1 & animalsy-1 is the sustainable number of animals)
A7.1-Ps3 - Possible (¬ foody = foody-1 & ¬ animalsy-1 is the sustainable number of animals)
As before, A7.1-Ps0 immediately appears as a representation of assertion A7.1. If things were simple, the other possibilities would be impossible. But the intricate structure of the relationship between (a) the current number of animals, (b) their collective consumption, (c) the impact of consumption on food on one side, and (d) the influence of the previous food level on regeneration and (e) the impact of regeneration on the food level requires us to be mindful of the nonlinear relationship between food and its regeneration. There is, in fact, one sustainable herd size for each food level. While they may imply different amounts of production, they all keep the current food level constant. Whenever food increases or decreases, the herd size has not been sustainable for the previous food level, and consequently A7.1-Ps3 will frequently happen in the game. The only exception would be that the current herd size is sustainable for a food level of 30 mm, but the previous food level was greater than 30 mm. However, in this case, A7.1-Ps2 must be considered.
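The claim that there is one sustainable herd size for each food level can be illustrated numerically. The sketch below assumes the parabolic regeneration consistent with the briefing figures (zero at 0 mm and 60 mm, maximum 5 mm/year at 30 mm); dividing regeneration by the per-animal consumption of 0.004 mm yields the herd size that keeps the current food level constant.

```python
CONSUMPTION = 0.004  # mm of food per animal per year

def regeneration(food):
    # ASSUMED parabola: zero at 0 and 60 mm, maximum 5 mm/year at 30 mm.
    return food * (60.0 - food) / 180.0

def sustainable_herd(food):
    """Herd size whose yearly consumption exactly matches regeneration,
    keeping the current food level constant."""
    return int(regeneration(food) / CONSUMPTION)

for f in (10, 20, 30, 40, 50):
    print(f, sustainable_herd(f))
# The largest sustainable herd occurs at 30 mm: 5 / 0.004 = 1,250 animals.
```

Note the symmetry: 20 mm and 40 mm sustain equally large herds, but only 30 mm sustains the maximum.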
To avoid getting trapped in a suboptimal but sustainable situation, it is preferable to follow A7.2.
Heeding the sustainability of the herd size does not implicitly assume that there is only one sustainable herd size: different sustainable herd sizes are possible. This thought leads to the second assertion and a distinct set of mental models. Now, A7.2-Ps1 and A7.2-Ps2 are impossible; A7.2-Ps2 also means that the negation "¬" only applies to the specific food level. Of course, if food is greater than 30 mm, any number of animals which causes food to decrease would be unsustainable for the current food level. But food would only decrease until the natural regeneration is large enough to replace the consumed food: food would become constant exactly at the level for which the herd size is sustainable. Consequently, production would be greater. A7.2-Ps3 is always true, and decision-makers cannot make a mistake by overlooking it. According to this deliberation, policy P1 is as follows:
Policy P1:
If food has changed (foody <> foody-1) → I will change animal target in the same direction.
Otherwise, I will slightly increase animal target.
The policy statement is a decision rule; decision-makers could in principle decide differently and do the contrary. However, we assume that this is highly unlikely and therefore do not discuss the various logical possibilities. Following this policy, a decrease in food will lead to a decrease in animals when the herd size is adjusted to the animal target, and an increase of food will lead to an increase of animals. A multiplier is used to modulate the strength of the reaction, since there is no reason to assume that animal target ought to be changed in the same proportion as the observed food change. The second part of the rule is intended to let a suboptimal animal target converge: if a slight increase in the number of animals does not lead to a decrease of food between years y and y+1, then the decision-maker has found a larger sustainable number of animals. Otherwise, the first part of the policy would be triggered for the following year, and the number of animals would be corrected downwards, back to the sustainable number of animals identified one year earlier.
The MMDS beneath policy P1 is shown in its causal diagram representation in Figure 4. The assertions omit feedback loops or stock variables; therefore, there is no loop symbol, and no difference is made between the variable types. The balancing loop food-food net change-animal target-excess animals or animal deficit-production or animals purchased-animals-food consumption-food adjusts the number of animals in the herd to reduce the food net change progressively and thus find a herd size which-in the reasoning of the decision-maker-will optimize the accumulated production. However, it is not in the mental model, and therefore it is not labeled in the diagram. This policy articulates a hill-climbing logic that allows the decision-maker to set the animal target without referring to a food target.
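Policy P1 can be written as a small decision function. The multiplier scaling the reaction and the size of the "slight increase" are illustrative assumptions; the text fixes neither value.

```python
def policy_p1(food_now, food_prev, target_prev, multiplier=50.0, nudge=10):
    """Hill-climbing policy P1. The multiplier and the size of the
    'slight increase' (nudge) are illustrative assumptions."""
    change = food_now - food_prev
    if change != 0:
        # food changed: move the animal target in the same direction
        return max(0, int(target_prev + multiplier * change))
    # food stable: probe for a larger sustainable herd size
    return target_prev + nudge

print(policy_p1(28.0, 30.0, 1300))  # food fell by 2 mm -> 1200
```

With stable food, the function returns a slightly larger target (the probe); if that probe causes food to fall, the first branch corrects the target downwards the following year.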
Policy P2
Policies P2 and P3 are based on the idea that one can specify a value for food target, derive a value for animal target, and then apply control logic to keep the gap between the target value and the actual number of animals small enough: balancing feedback for systems thinkers, but not for naïve decision-makers. Consider first the setting of the food target in P2:
A6.2) food target ← 60 mm.
The following assertion expresses the belief that the appropriate food level leads to maximum production: A8.1) If food = food target → production will be maximized.
An individual who sets the food target at 60 mm and uses only A8.1-Ps0 to come to a decision will get disappointing outcomes: if food = food target, then A8.1-Ps1 holds true, and production will not be maximized. At the same time, the food level will differ from the food target that maximizes production. The origin of this error lies in the incorrect belief concerning the food target. The decision rule compares food with the food target: if food < food target, the animal target is decreased (A8.2a); if food > food target, it is increased (A8.2b). This rule prescribes what the decision-maker will do in response to each described condition. We assume people will not do the contrary and do not discuss the logical possibilities. Figure 5 summarizes the MMDS structure behind this reasoning.
Figure 5: Causal diagram representation of the MMDS beneath policy P2
In Figure 5, the food target gets a value according to assertion A6.2 and the reasoning behind it, which depends on food. The dotted arrow shows the logical dependency. Note that this is not an actual feedback loop in this situation because the food target is constant during the 15 simulated years.
Policy P3
Unlike policy P2, policy P3 follows from:
A6.3) food target ← 30 mm.
The reasoning now focuses on food regeneration: A9.1) food regeneration is maximized → my accumulated production will be maximized.
A9.1 recognizes that rather than the food level, it is the food regeneration that must be the highest possible to achieve maximum production. As individuals have been informed, the maximum food regeneration will be 5 mm per year when the food level is equal to 30 mm. At that level, the highest possible number of animals has enough food, which in turn leads to maximum production. This also means that the third possibility can never happen, so not considering these mental models cannot have a detrimental consequence in the game. Note that A9.1-Ps0 is even true when one erroneously sets a food target of 60 mm, but in that case, food regeneration will not be maximized.
A9.1-Ps1 could only happen if there were other events or influences decreasing production, which cannot happen in the simulated situation. A9.1-Ps2 is impossible in the game, and failing to think of it cannot have a consequence.
If one believes to have set the correct food target, it is logical to think:
A9.2) food = food target → food regeneration will be maximized.
In the light of the previous discussion of food target, clearly A9.2-Ps0 is true if the food target = 30 mm. Otherwise, both A9.2-Ps1 and A9.2-Ps2 will be true. For instance, food target = 60 mm will drive the system towards a food level at which natural food regeneration is not the highest possible one, which also means that for at least one food level that is unequal to the food target, regeneration will be the highest possible one.
The previous reasoning steps lead to a different decision rule connecting a recognized situation to an action:
A10a)
If food < food target → I should decrease animal target.
A10b)
If food > food target → I should increase animal target.
We assume that decision-makers will not act counter to the rules they have elaborated through all the reasoning steps. Therefore, we do not discuss the logical possibilities of this assertion. Consider now how the intensity of adjustments is determined:
A11)
If food approaches the food target as quickly as possible → accumulated production will be maximized.
A11-Ps0 - Possible (food approaches the food target as quickly as possible & accumulated production will be maximized)
A11-Ps1 - Possible (food approaches the food target as quickly as possible & ¬ accumulated production will be maximized)
A11-Ps2 - Possible (¬ food approaches the food target as quickly as possible & accumulated production will be maximized)
A11-Ps3 - Possible (¬ food approaches the food target as quickly as possible & ¬ accumulated production will be maximized)
The idea that food can approach the food target at varying speeds implies that, if there is too little food, "something" needs to be done to enable food to reach the desired level. It is necessary to reduce consumption, and this leads to the need to decrease the number of animals, accepting that this will also decrease production. A11-Ps0 is true, and A11-Ps1 and A11-Ps2 are impossible in the game, only when the food target = 30 mm. This is not the case for policy P2, where A11-Ps0 is false and A11-Ps1 as well as A11-Ps2 are true.
The causal diagram in Figure 6 shows a relevant difference compared to policy P2: the food target depends on food regeneration, and its value is assigned according to assertion A6.3. Decision-makers might reconsider this rule in response to surprising outcomes. This would happen over the reiterated decisions and would likely result from revisions to the previous reasoning steps.
The MMDSs beneath the three policies have many common elements, as illustrated in Figure 7. They all aim at driving production so as to maximize accumulated production, and they all account for the possibility of having so many animals that there will be starvation due to a food deficit. However, they go different ways in driving the animal target. Policy P1 does not use the food target but uses the yearly food net change to determine the animal target, thereby depending on information already revealed by the system's behavior.
Policies P2 and P3 use inferences drawn from the briefing information to mentally "jump" to the final food level. They then set the food target to different values because they use a reasoning which pays attention to different variables: P2 relies on food, whereas P3 considers food regeneration. Arriving at P3 takes some reasoning that is not directly framed by the salient MMP, implying an increased mental effort.
It is important to see how the different degrees of salience of the possibilities in assertions A3, A4, and A5 lead to different policies. We summarize this in Figure 8, which presents the respective sequences of assertions (referenced by their respective identifiers) from assertion A1 to the decision rules. Figure 8 shows that policy P1 is a line of reasoning that has the same origin as P2 but takes a markedly different direction as compared to policies P2 and P3. The differences between P2 and P3 are less blatant. One difference is the value assigned to food target. The other difference is that P2 is based on the food level (assertions A8.1, A8.2a, and A8.2b), whereas P3 accounts for food regeneration (A9.1, A9.2, A10a, and A10b). Note that in P2 and P3, the same decision rules process different values of the food target; of course, two distinct food targets can lead to distinct decisions. Food levels between 31 and 59 mm will trigger the condition "food < food target" for policy P2. In contrast, policy P3 will classify the same food levels as "food > food target". One should expect different performances of these policies. The decision rules are easy to carry out by basic arithmetic operations.
Figure 8: A decision tree to determine which policy will be applied
To reduce animal target as quickly as possible: animal target(y+1) ← 0.
To increase animal target as quickly as possible: food surplus ← food - food target; animal target increase ← food surplus / food consumption per animal; animal target(y+1) ← animal target(y) + animal target increase.
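As a concrete illustration, the decision rule shared by P2 and P3 can be sketched in Python. The per-animal consumption value is my assumption, derived from the game's optimum (1,250 animals sustained by 5 mm/year of regeneration, i.e. 0.004 mm per animal per year); only the rule's structure comes from the text.

```python
def adjust_animal_target(animal_target, food, food_target, consumption_per_animal=0.004):
    """Decision rule of policies P2 and P3 (illustrative sketch).

    P2 and P3 differ only in the value of food_target they feed into this rule.
    """
    if food < food_target:
        # reduce the animal target as quickly as possible
        return 0.0
    # increase the animal target as quickly as possible
    food_surplus = food - food_target
    increase = food_surplus / consumption_per_animal
    return animal_target + increase
```

For instance, with P3's target of 30 mm, 34 mm of food and 1,000 animals yield a new target of 2,000 animals; under P2's higher target the same food level would instead trigger a cut to zero.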
The behavior and performance of the policies

The context of the simulations
These policies have been inserted in a system dynamics model (see the supplementary material for the model documentation; interested readers may also interact with the model through a simple user interface at https://exchange.iseesystems.com/public/martin-schaffernicht/herd-management-model). The trajectories of the herd (animals) and food, and the performance in terms of accumulated production, have been simulated under various initial conditions (foodinit, animalsinit) because the behavior and performance of the policies can be sensitive to the initial conditions.
This ensures that all initial endowments of food are simulated with the number of animals that is optimal when food has the optimum thickness (1,250 animals), and that all initial herd sizes are tested with the optimum food level (30 mm).
Concerning the assessment of performance, accumulated production is problematic. In the "reindeer experiment", players had to maximize production by maximizing the number of reindeer slaughtered.
Whenever the initial herd size exceeds the optimum, this generates a windfall benefit because the downward correction of the herd size increases production. Production does not capture sustainability, except in the special case when an excessive herd size annihilates the food, and then all animals starve. We therefore measure performance based on the relationship between food regeneration and food consumption per animal over time:
accumulated performance = ∫ (food regeneration / food consumption per animal) dt
Whatever quantity of food is added to the stock after the animals have consumed their part at the end of a year defines how many additional animals are sustainable in the beginning period. This performance indicator combines both aspects of the decision-maker's goal: to maximize production while remaining sustainable. The following box chart shows each policy's range of performances, computed as the number of animals that could graze without starving given the food regeneration:

Figure 9: Box chart of the performance of all policies under 12 different initial conditions

The first policy in Figure 10 represents the policy discussed in the original "reindeer experiment" as a benchmark policy: "if food < 30 mm → animal target ← 0, else animal target ← 1,250." Note that this policy was intended for cases where the initial stock of food < 30 mm. However, when food > 30 mm, a herd of 1,250 animals will consume more food than the net regeneration compensates. The reasons for these differences in performance become clear when looking at the behavior of animals under these policies. To maximize production, any policy should steer the number of animals so that food quickly converges to 30 mm, which assures the maximum food net regeneration of 5 mm/year and allows the highest sustainable number of animals, 1,250, regardless of the initial conditions. Consequently, one can assess the policies' respective goodness by how quickly the herd size approaches 1,250 from varying initial conditions and how stable this development is over time.
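The performance integral can be approximated numerically as a yearly sum. This sketch assumes one-year time steps and a per-animal consumption of 0.004 mm/year (my derived value from the 1,250-animal / 5 mm-per-year optimum; it is not stated explicitly in the text):

```python
def accumulated_performance(regeneration_by_year, consumption_per_animal=0.004, dt=1.0):
    """Discrete approximation of the performance integral: each year's food
    regeneration, divided by per-animal consumption, gives the number of
    animals that could graze sustainably that year; these are summed."""
    return sum((regen / consumption_per_animal) * dt for regen in regeneration_by_year)
```

A herd held at the optimum for 15 years (regeneration of 5 mm/year throughout) scores 15 × 1,250 = 18,750 animal-years.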
Consider nine initial conditions combining animalsinit (650, 1,250, and 1,850) and foodinit (20, 30, and 40). Table 1 shows the nine combinations together with the resulting relationship between food and animals. The first column displays the three possible values for food, followed by the implied net regeneration in the second column. The top row shows the initial values for animals, followed by the total yearly consumption each value implies. The nine cells tell us what the relationship between food net regeneration and consumption means for the immediate future behavior of food: a plus sign means that food will increase, and a minus sign means that food will decrease. We have equilibrium at the start of the simulation when there are 30 mm of food and 1,250 animals. In conclusion, the initial combinations ensure that policies are tested with all three possible food environments for the herd manager: food may increase, decrease, or keep its current value given the initial number of animals. Figure 13 shows that policy P3 also makes animals oscillate. The policies' reaction to the initial food levels is the same: starting with 40 mm of food, the herd size first increases to almost 4,500, then overshoots (food < food target) and is decreased, from where it increases back and then oscillates. An initial food level of 20 mm leads to the opposite movement but then turns into very similar oscillations. A start with 30 mm of food entails one year of stability for all initial herd sizes; however, only if the herd has 1,250 animals at the beginning does the curve stay flat at the optimum until the end.
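The signs in Table 1 can be reproduced from the model's regeneration curve. The parabolic regeneration form comes from the model documentation quoted in the supplement; the per-animal consumption of 0.004 mm/year is my derivation from the stated optimum:

```python
def food_trend(food, animals, consumption_per_animal=0.004, eps=1e-9):
    """Sign of the net food change for one cell of Table 1:
    '+' if regeneration exceeds consumption, '-' if it falls short, '0' at equilibrium."""
    regeneration = 5.0 * (1.0 - ((food - 30.0) / 30.0) ** 2)
    net = regeneration - animals * consumption_per_animal
    if abs(net) < eps:
        return "0"
    return "+" if net > 0 else "-"
```

At 30 mm and 1,250 animals the trend is "0" (equilibrium), matching the text; 40 mm with 650 animals gives "+", and 20 mm with 1,850 animals gives "-".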
In contrast to the behavior under policy P2, this time the average (and goal) value is the correct one: 1,250.
Figure 12: The behavior of the herd size under policy P3
Policy P3 outperforms policy P2 because of the correct food target, which it received from previous reasoning steps. Both policies generate oscillations because they do not account for the delay between food regeneration and herd size adjustment: both consumption and food regeneration happen in the year prior to detecting a food gap, and only then does adjusting the animal target cause the desired herd size adjustment (see Figure 7). The balancing feedback structure is a second-order negative loop between food and animals. By driving decisions as if it were a first-order negative loop, policies P2 and P3 cause the oscillations. This failure to perceive a feature of the situation is a type 1 error that makes decision-makers overlook the implied possibilities (type 2 error).
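The oscillation mechanism can be reproduced with a minimal re-implementation of the model. The parabolic regeneration curve follows the model documentation quoted in the supplementary material; the one-year adjustment delay and the per-animal consumption of 0.004 mm/year are my assumptions.

```python
def simulate_policy(years=40, food=40.0, animals=650.0, food_target=30.0, cpa=0.004):
    """Simulate the herd under a P2/P3-style rule; returns the herd trajectory.

    Because the rule treats the food-animals loop as first order while the
    real loop is second order (a one-year delay), the herd oscillates.
    """
    trajectory = []
    for _ in range(years):
        # parabolic net regeneration: maximal (5 mm/yr) at the 30 mm optimum
        regen = max(0.0, 5.0 * (1.0 - ((food - 30.0) / 30.0) ** 2))
        food = max(0.0, food + regen - animals * cpa)
        # first-order rule: jump to the implied target with a one-year delay
        if food < food_target:
            animals = 0.0
        else:
            animals = animals + (food - food_target) / cpa
        trajectory.append(animals)
    return trajectory
```

Instead of settling at 1,250, the simulated herd repeatedly overshoots, collapses to zero, and rebuilds.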
Two types of mental model error in the three policies
Consider now the mental model errors in certain assertions leading to the three policies (a summary table is included in the supplementary material). Three possibilities following from assertions A1 and A2, shared by all policies, are not salient but are possible in the game: • A1-Ps1-Possible (more animals & ¬ more production) facilitates the error of overlooking the danger of overpopulation. Two other MMP errors are found in A7:
herd size is sustainable for that specific food level)
Failure to consider this would lead policy P1 to avoid jumps and therefore not to search for a sustainable situation with higher production.
The use of mental models of dynamic systems and possibilities
Policies P1-P3 are artificial. They exemplify how naïve individuals can analyze the herd management situation and formulate a policy. This limitation notwithstanding, the ability to identify these errors and classify them is a step beyond detecting underperformance in end results and behaviors with undesired consequences that suggest "misperception of feedback"; it opens the way to directly linking articulated mental models and articulated policies to observed behaviors and end results. This allows researchers to study specific mental errors and develop mitigating interventions.
The concepts and methods of the theory of mental models are well established and add to the conceptual and methodological toolset developed so far in system dynamics: empirical studies with human participants come into view.
Several research questions can be addressed using this combined mental model approach. • Stress: there can be a tension between the experimental decision situation and the personal knowledge of participants. Experimental decision tasks simplify reality, and some simplifications may contradict the MMDSs of participants who have domain-specific knowledge. This can trigger negative affect and emotional resistance, which may reduce the willingness to make a cognitive effort or divert cognitive resources from reasoning to impulse control. The question then arises whether specific mental errors should be attributed to the individual or to the system consisting of the individual plus the experimental situation. This will certainly require additional discussion.
• Cognitive load, working memory and cognitive dissonance: the numerous simplifying assumptions needed for a relatively simple decision situation are usually introduced at the outset.
Participants must keep them in their working memory during the experiment. Since the brain has only limited resources, it should be expected that demands exceeding working memory capacity diminish the attention given to reasoning (Brunyé and Taylor, 2008). If this can be confirmed, does making such assumptions salient just-in-time during the iterations reduce this phenomenon? When participants have prior MMDS of similar situations (see the previous paragraph), it will be more demanding for them to retain contradictory assumptions in their working memory. This may lead to MMDS errors that are induced by the decision task. If this is empirically confirmed, one could argue that some experimental situations trigger artificial mental model errors, and improve the experimental settings to avoid such problems.
• Dynamic complexity: the herd management game is arguably the simplest situation involving a dynamic system. Other games like fish banks or versions of the "market growth and underinvestment" situations include more feedback loops and more delayed relationships. Thus, results observed in studies dealing with the previous questions can be examined in increasingly complex decision tasks.
• Transferability of insights: decision tasks may vary in their superficial features, but they may also vary in the complexity of the underlying causal structures (see the previous point). To the extent that participation in experimental games makes individuals learn something, the question arises whether this new knowledge can be transferred from one decision task to another. Would some kinds of MMDS errors become less frequent? Would some MMP errors decrease?
There remain questions regarding elicitation.
• Elicitation methods: the authors' experience suggests that elicitation methods like questionnaires, comprehension tests, or card sorting are useful for eliciting the most accessible parts of recognized MMDS. However, some less accessible aspects and the MMP are only articulated when participants are confronted with an unexpected problem during the game or with probing questions from an interviewer. Therefore, recording a briefing and a debriefing semi-structured interview, together with thinking aloud during the experiment, appears to be the adequate elicitation approach.
• Prompts: would decision-makers commit fewer MMDS errors when the eliciting researcher includes questions about non-salient possibilities in the debriefing interview?
Such research will provide insights into the cognitive reasons behind phenomena like the misperception of feedback, and thereby contribute to the system dynamics literature. At the same time, cognitive scientists gain access to a type of integrative decision-making and reasoning that concentrates on dynamic behaviors rather than on assertions concerning particular states, particular events, and one-off decisions.
Conclusions
This article introduces a way to analyze the structure of, and the reasoning with, mental models of dynamic decision situations, leading to the identification of mental errors belonging to two different but interrelated types. Two different kinds of mental models are used in combination: (1) mental models of dynamic systems (MMDS), well known in the system dynamics field but seldom applied, contain the mental representation of the decision situation, and (2) mental models from the theory of mental models represent the possibilities reasoned about (MMP). The dynamic decision situation used to test this is a variant of the well-known "reindeer experiment".
This article identifies specific mental errors of both types in the MMDS and MMP underlying three naïve policies. The two main behavioral flaws were that (1) any sustainable constellation of food and animals is taken as "the" solution (policy P1), or (2) overshooting corrections lead to unproductive oscillations. These flaws could be avoided by overcoming the identified mental errors.
Our results suggest that combining mental models of dynamic systems with the theory of mental models is fruitful. It provides the possibility to represent how decision-makers reason with their MMDSs, and to pinpoint the errors committed due to, for instance, the misperception of feedback.
These errors are what makes a flawed policy seem correct to a decision-maker. We hope this perspective may motivate researchers to incorporate the articulation and analysis of mental models in experimental studies.
To our knowledge, no empirical studies have been carried out to test this claim. It is now time to include real individuals as decision-makers. Some directions for empirical research have been delineated, and we hope that this article may encourage empirical studies in this area.
Summary of the policies and their mental model errors
The main text discusses the assertions beneath each policy in detail, including the diverse possibilities which may be relevant for the game but remain unprocessed. This section provides a summary of the assertions and then presents a synoptic table with the respective mental model errors.
Assertions made by all policies:
A1) I have more animals → I will have more production.
A2.3) food decreases each year → this violates sustainability.
A2.4) Therefore: I have more animals → I will need more food (to be sustainable).
A3) I have more food → I can sustain more animals.
A4) From A3) it follows that: I have the most food → I can sustain the most animals.
A5) From A4) and A1) it follows that: I have the most food → I will have the largest production.
A7.2) food(y) = food(y-1) → animals(y-1) is the sustainable number of animals given food(y-1); for other food levels, the sustainable number of animals might be different.
Decision rule:
If food has changed (food(y) ≠ food(y-1)), then I will change the animal target in the same direction.
Otherwise, I will slightly increase animal target.
Assertions made by policy P3: A9.1) food regeneration is maximized → my accumulated production will be maximized.
A10b) If food > food target → I should increase animal target.
A11) If food approaches the food target as quickly as possible → accumulated production will be maximized.
Both policies P2 and P3 follow the same rule when it is possible to increase the herd size because food > food target.
Decision rule of policies P2 and P3: If I should change the number of animals → I should change them as quickly as possible:
To reduce animal target as quickly as possible: animal target(y+1) ← 0.
To increase animal target as quickly as possible: food surplus ← food - food target; animal target increase ← food surplus / food consumption per animal; animal target(y+1) ← animal target(y) + animal target increase.
A11.2) If food approaches the food target as quickly as possible → accumulated production will be maximized.
A11.2-Ps1-Possible (food approaches the food target as quickly as possible & ¬ accumulated production will be maximized) 1 1
A11.2-Ps2-Possible (¬ food approaches the food target as quickly as possible & accumulated production will be maximized) 1 1
The simulation model
The model has been implemented using STELLA Architect 1.9.5 and can be freely used through a simulator interface at: https://exchange.iseesystems.com/public/martin-schaffernicht/herd-management-model This model has been developed for a thought experiment: to test the hypothetical decision policies in a dynamic decision task. Therefore, there is no reference data to compare simulation data against: no exogenous variables, and no pseudo-random streams.
Important parameters for testing the policies: • Policy switch: values from 1 to 4 to select which policy will be simulated. 1 through 3 correspond to the policies discussed in the article. The fourth possibility is to use a policy following Erling Moxnes' discussion of the "reindeer experiment".
• Animals INIT: INITial number of animals in the herd (between 0 and 1,900).
• Food INIT: INITial stock of food (between 10 and 60).
• Policy multiplier: values from 0 to 1 for adjusting how strongly the policy is applied.
• The discussion in the article used data from 12 distinct simulation experiments based on combining diverse INITial stock levels for animals and food, generated by STELLA's "sensitivity runs" for policies P1, P2 and P3 with the following combinations: o Animals INIT: incremental in 3 steps from 650 to 1,850.
o Food INIT: incremental in 4 steps from 10 to 50.
Other parameters to adjust the simulation: • Animals knockout switch: allows simulating the model without the herd and its management (0|1).
• Herd management switch: allows to simulate the model with the herd but without its management (0|1).
Accumulated_performance(t) = Accumulated_performance(t -dt) + (annual_performance) * dt
INIT Accumulated_performance = 0
UNITS: Animal
DOCUMENT: Sum of the yearly "sustainable" herd sizes; the more, the better given the goals of the game.
UNITS: unitless
DOCUMENT: The term 1-((Food-optimum_food)/optimum_food)^2 expresses the distance of the current food level from the optimum level in relative terms, squares it to avoid negative values, and then subtracts the square from 1. Effect: when food = optimum, the multiplier will be 1, but the further food is away from the optimum, the smaller the value becomes. This is used as a multiplier in the regeneration flow equation.
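The stock equation above is plain Euler integration; a generic sketch (function names are mine):

```python
def euler_integrate(stock, flow, dt, steps):
    """STELLA-style stock update: stock(t) = stock(t - dt) + flow(stock) * dt.

    `flow` is a function of the current stock value so state-dependent flows
    (like food regeneration) can be expressed too.
    """
    for _ in range(steps):
        stock += flow(stock) * dt
    return stock
```

With a constant annual_performance flow of 1,250 animal-years and dt = 1, 15 steps accumulate 18,750.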
USED BY: food_regeneration
food_regeneration_max = 5
UNITS: mm/Years
DOCUMENT: When the food level is optimal, this will be the yearly food net regeneration.
DOCUMENT: Same as policies 2 and 3: the distinct decisions taken by these two policies are the consequence of the distinct values of the food target. This variable only serves readability: it is fed into animal target by policy, and I think it is important to avoid confusion by using names like "policies 2 and 3".
Endnotes
i A conditional is a "sentential connective" with two clauses: the if-clause or antecedent, and the then-clause or consequent. In classical logic, a conditional can only be false under one circumstance: when the if-clause is true and the then-clause is false (see, e.g., Jeffrey, 1981). Its usual structure is "if p, then q". A "sentential connective" is a connective linking two clauses or sentences; for example, the conditional (if…then…), conjunction (…and…), or disjunction (either…or…).
ii A conjunction is a "sentential connective" with two clauses named "conjuncts." In classical logic, a conjunction is true only when its two conjuncts are true at once. Its usual form is "p and q" (see, e.g., Jeffrey, 1981).
iii Sometimes only the first model is identified. However, in those circumstances, individuals can know that more models can be deployed. This can be represented, for example, in this way: "Possible (p & q)…", where the dotted line points out the possibility to display more models.
iv Decision-makers will also wonder if the goal of maximizing the production over 15 years requires them to maximize each year's production or if there is a better alternative. In the remainder of this article, they are assumed to believe that the production of one year can affect the largest possible production for the following year, and therefore a "sacrifice" (a sub-maximum production) in one year may be more than offset by the maximum production in the following year.
Believing that each year's production must be maximized leads to slightly different policies, but these differences have only little impact on the variables' behaviors and the performance in the game (the reader will find a discussion of the corresponding policies in the supplementary material).
v The following operations describe the implementation of the decision rule: To reduce animal target as quickly as possible: animal target(y+1) ← 0.
To increase animal target as quickly as possible: food surplus ← food - food target; animal target increase ← food surplus / food consumption per animal; animal target(y+1) ← animal target(y) + animal target increase.
"year": 2021,
"sha1": "c449c7cf92b517e53c00dda7b0dd68676b122aae",
"oa_license": "CCBY",
"oa_url": "https://www.preprints.org/manuscript/202107.0004/v1/download",
"oa_status": "GREEN",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "c449c7cf92b517e53c00dda7b0dd68676b122aae",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
D-scan measurement of the ablation threshold and incubation parameter of optical materials in the ultrafast regime
Machining with ultrafast laser pulses demands the selection of correct conditions to obtain precise yet efficient material extraction, in which control of the volume and physical state of the matter being etched is fundamental. Usually, the production of volumetric structures requires overlapping many pulses, so incubation effects and their dependence on process parameters are of prime importance. Hence, in this work this parameter and the damage thresholds for many different pulse overlaps were measured for several optical glasses with an alternative method that is faster than the traditional one and closer to the real machining condition. © 2011 Published by Elsevier Ltd. Selection and/or review under responsibility of Bayerisches Laserzentrum GmbH
Introduction
Sculpting complex structures with femtosecond laser pulses is becoming commercially feasible mainly due to the development of new systems with improved average power. In such a commercial framework, not only does batch production need high throughput, but the first prototypes also need to be quickly produced and optimized. This is only possible when a process chart has already been developed for that specific material and design. It is fundamental to know the exact response of the material to the process parameters being used, not only to obtain the best dimensional accuracy, but also to improve cosmetic appearance and decrease collateral effects.
Hence, the damage threshold in the high- and low-fluence regimes must be known as a function of laser parameters, and also related to the effects caused to the part being processed. Due to incubation effects, however, these relations change as laser shots are accumulated on a single spot. This is the case when milling surfaces, where pulse superposition along with pulse density determines the area and depth of the ablated volume. Moreover, this incubation effect may vary for different process parameters, such as the temporal pulse width and energy density.
With the aim of studying the influence of some laser parameters on the incubation factor, and at the same time obtaining fast acquisition of the related data, this work developed a method suitable for measuring the ablation threshold F_th as a function of the number N of overlapped pulses, closely reproducing the conditions of real machining.
Ultrashort pulse ablation is a nonlinear process [1], [2], [3] that also depends on the presence of impurities, defects, excitons, etc. [4], [5], [6], which either create intermediate levels in the bandgap or modify the local electronic density, thereby lowering the ablation threshold fluence F_th. These defects are frequently produced when processing solids with superimposed shots, which lowers F_th for subsequent pulses. These are cumulative phenomena, known as incubation effects [7], [8], [9], and the modifications in F_th they induce change the ablated volume as a function of N and therefore must be taken into account when machining a material.
Experimental
This work used the Diagonal Scan (D-Scan) technique [10], [11], instead of the traditional "zero damage" method [12], to measure the ablation threshold as a function of the pulse superposition N. In this case, the sample is placed with its surface orthogonal to a TEM00 Gaussian beam and is moved in two directions simultaneously, parallel and perpendicular to the beam axis. In a position close to the beam waist, an etched profile like the one shown in Fig. 1 appears. The ablated track exhibits a minimum width at the focus position and two maximum lobes of width 2ρmax, symmetrically located before and after it. If there are no significant heat effects, and the experiment is performed above a certain intensity, it can be shown [10] that for a TEM00 Gaussian beam the ablation threshold is directly related to the ρmax dimension and the laser pulse energy E0 through a very simple expression (Eq. 1). To account for incubation effects, the superposition of N different shots is considered as the ratio between the summation of the intensities produced at ρmax by every pulse that hits the sample during its movement, and the intensity generated by the pulse centered at that position. Under this assumption, it can be shown [13] (Eq. 2) that the superposition is governed by θ3, the Jacobi elliptic theta function of the third kind, where f is the laser repetition rate and v_y is the sample transversal translation speed. For high repetition rates and low translation speeds, Eq. 2 can be approximated (Eq. 3). The number N only accounts for shots in the immediate vicinity of ρmax, since shots relatively far from this position do not contribute to the ablation process. Due to the relative movement, the superposition N differs from the one used in the traditional method, but it reproduces much more closely the situation occurring during real machining.
Results and Discussion
The experiments reported here were performed with a CPA Ti:Sapphire laser system (Femtopower Compact Pro CE-Phase HP/HR from Femtolasers), continuously generating 25 fs (FWHM) pulses centered at 775 nm with 40 nm of bandwidth (FWHM), at a maximum repetition rate of 4 kHz. These pulses were used to measure F_th and the incubation parameter k for sapphire and the optical glasses BK7 and Suprasil. All irradiations were done in air, and after etching the samples were cleaned with isopropyl alcohol in an ultrasonic cleaner to remove redeposited ablation debris. After cleaning, the samples were observed and photographed on an optical microscope, and the ablation dimensions were measured in the micrographs. Fig. 2 (a) shows D-scan profiles in a BK7 sample surface for three different energies (109, 184 and 61 μJ, from top to bottom). The pictures show top and lateral views of the etched tracks; in this case it is possible to observe the depth as a function of the focus position. It is clear that the deepest etch is not produced when the focus is on the surface, but at a position slightly below it. Features typical of ablation by Coulombic explosion (Fig. 2) [14] evidence that thermal effects were absent at this position. Fig. 4 shows data for F_th as a function of N measured on the BK7 surface by the traditional and by the D-Scan methods; in both cases, the laser beam was focused by a lens of 38 mm focal length. The number of superposed shots ranged from 1 to 1020; in the still-sample case, the energies used were 2.9, 5, 7, 8.5, 12, 14 and 18.5 μJ, while for the D-Scan three energies were used: 31, 71 and 134 μJ. Different superposition conditions were obtained by combining the repetition rate with the sample transversal displacement speed; the frequencies were 50, 100, 500, 1000, 2000 and 4000 Hz, and v_y was 6, 12, 25, 50 or 100 mm/min. The longitudinal displacement speeds v_z were chosen according to v_y to produce an elongated etched profile.
The results shown in Fig. 4 for the ablation thresholds for many pulses measured by the two methods show good agreement, validating the D-Scan method for F_th(N) measurements. An incubation model that considers a saturation of the defect accumulation in dielectrics is given in [15], where k is the sample incubation parameter, F_th(N) and F_th,1 are the ablation thresholds for N and single pulses, respectively, and F_∞ is the ablation threshold for infinite pulses, when saturation occurs. During the accumulation of the first pulses, fewer than 10 in both cases, a decrease in the damage threshold is not evident, but after that, a strong incubation effect takes place and F_th decreases steadily with the number of overlapped shots. The picture is the same for both materials up to the appearance of saturation, which happens after approximately 80 pulses for Suprasil and after more than 200 pulses for sapphire, reflecting a lower k parameter for the latter. It is evident from these numbers that machining involving overlapping pulses, mainly in the range N < ~100, must take into account the variation of F_th(N). This not only assures higher accuracy, but also helps to keep the process far from the high-fluence condition. Based on these concepts, we proposed a method to "mill" dielectrics [16] in which engraving is performed in layers of different depths, where in each layer the machining parameters are adjusted according to the number N of pulses previously struck on the surface. Fig. 6 below shows an example of machining microchannels in BK7 where this method was used. In conclusion, we demonstrated that D-Scan is a valid and useful method for measuring the ablation thresholds and incubation parameters for the superposition of ultrashort pulses on dielectrics. The use of these values to adjust process parameters during machining progression helps to avoid unwanted heat effects and to obtain increased dimensional accuracy.
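The saturation model referenced above can be implemented directly. The exponential form below is my assumption about the equation that did not survive extraction; it matches the described behavior (threshold decaying from the single-pulse value F_th,1 toward the infinite-pulse value F_∞ at a rate set by the incubation parameter k):

```python
import math

def ablation_threshold(n, fth_1, fth_inf, k):
    """Assumed saturation incubation model: F_th(N) decays exponentially from
    the single-pulse threshold toward the infinite-pulse (saturated) value."""
    return fth_inf + (fth_1 - fth_inf) * math.exp(-k * (n - 1))
```

A material with a smaller k (sapphire here) approaches saturation only after more pulses than one with a larger k (Suprasil), consistent with the ~200 versus ~80 pulse figures quoted above.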
Enumerating consistent sub-graphs of directed acyclic graphs: an insight into biomedical ontologies
Abstract Motivation Modern problems of concept annotation associate an object of interest (gene, individual, text document) with a set of interrelated textual descriptors (functions, diseases, topics), often organized in concept hierarchies or ontologies. Most ontologies can be seen as directed acyclic graphs (DAGs), where nodes represent concepts and edges represent relational ties between these concepts. Given an ontology graph, each object can only be annotated by a consistent sub-graph; that is, a sub-graph such that if an object is annotated by a particular concept, it must also be annotated by all other concepts that generalize it. Ontologies therefore provide a compact representation of a large space of possible consistent sub-graphs; however, until now we have not been aware of a practical algorithm that can enumerate such annotation spaces for a given ontology. Results We propose an algorithm for enumerating consistent sub-graphs of DAGs. The algorithm recursively partitions the graph into strictly smaller graphs until the resulting graph becomes a rooted tree (forest), for which a linear-time solution is computed. It then combines the tallies from graphs created in the recursion to obtain the final count. We prove the correctness of this algorithm, propose several practical accelerations, evaluate it on random graphs and then apply it to characterize four major biomedical ontologies. We believe this work provides valuable insights into the complexity of concept annotation spaces and its potential influence on the predictability of ontological annotation. Availability and implementation https://github.com/shawn-peng/counting-consistent-sub-DAG Supplementary information Supplementary data are available at Bioinformatics online.
Introduction
Ontologies have become a common means of concept annotation in computational biology and related fields (Robinson and Bauer, 2011). A protein's molecular function (Ashburner et al., 2000), an effect of a genetic variant (Vihinen, 2014), or a patient's diagnosis (Robinson and Mundlos, 2010) are typical examples wherein biomedical entities such as macromolecules, mutations, or individuals are associated with sets of mutually dependent descriptors. The dependencies between these descriptors are often hierarchical, leading to the use of directed acyclic graphs (DAGs) as concept space representations.
A DAG is a pair (V, E), where V is a set of vertices (nodes) and E is a set of directed edges (links) between vertices such that no cycles can be formed. Each vertex in the graph is associated with a unique concept (term, description) and each edge is associated with a particular type of relational tie. For example, when annotating proteins as biomedical entities using the Gene Ontology (GO) graph (Ashburner et al., 2000), the terms 'nucleic acid binding' and 'DNA binding' are linked by edges of the type is-a asserting that DNA binding is a more specific form of nucleic acid binding. Other types of relational ties include part-of, regulates and so on.
A typical biomedical entity is associated with a set of terms determined experimentally such as through a molecular assay or a diagnostic procedure. A protein, for example, may be assigned terms 'DNA binding' and 'RNA binding', neither of which is a generalization of the other. To avoid annotation inconsistencies, this protein must also be annotated by terms such as 'nucleic acid binding' and all others that generalize either of the experimentally determined terms. More broadly, this implies that a biomedical object can only be annotated by a set of terms that respect the hierarchy--a consistent sub-graph of the ontology. Unfortunately, (manual) experimental annotation is resource-intensive and often incomplete (Poux and Gaudet, 2017), giving rise to an entire field of computational prediction (Jiang et al., 2016; Radivojac et al., 2013).
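The propagation rule described here can be sketched as a small closure computation. The toy term graph and helper names below are our own illustration, not a fragment of any real ontology release:

```python
# Consistent closure: an annotation set must also include every term
# that generalizes (is an ancestor of) any of its terms.

def closure(terms, parents):
    """Return the consistent closure of a set of terms.

    `parents` maps each term to the set of terms that directly generalize
    it; edges point from general to specific, so we walk them backwards.
    """
    result = set(terms)
    stack = list(terms)
    while stack:
        term = stack.pop()
        for p in parents.get(term, ()):
            if p not in result:
                result.add(p)
                stack.append(p)
    return result

# A toy fragment of a molecular-function-like hierarchy (illustrative only)
parents = {
    "DNA binding": {"nucleic acid binding"},
    "RNA binding": {"nucleic acid binding"},
    "nucleic acid binding": {"binding"},
    "binding": {"molecular function"},
}

annotated = closure({"DNA binding", "RNA binding"}, parents)
```

Annotating with 'DNA binding' and 'RNA binding' thus pulls in 'nucleic acid binding', 'binding' and the root term, exactly as described above.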
The development of computational prediction methods presents its own challenges. Although it can be performed by building a separate binary classifier for each concept in the ontology, this approach is currently competitive only for specialized ranking tasks; e.g. disease-gene prioritization (Moreau and Tranchevent, 2012), since it does not exploit relationships between the terms. On the other hand, a more complete characterization is via learning structured outputs (Sokolov and Ben-Hur, 2010) in which a method takes an object (e.g. a protein) and is asked to provide the totality of concepts with which this object might be associated (i.e. a consistent sub-graph). However, the structured-output formulation generally falls under the extreme classification umbrella because the size of the output space is often exceedingly large. This poses problems in measuring similarity between annotations, evaluating accuracy of classification models and optimization when solving the 'argmax problem' (Clark and Radivojac, 2013; Joachims et al., 2009; Joslyn et al., 2004; Lord et al., 2003; Pesquita, 2017; Verspoor et al., 2006).
We identify now what we believe is an open problem in computational biology and computer science; that is, efficiently determining the exact number of consistent sub-graphs in a given ontology. This problem has a linear-time solution for rooted trees (Ruskey, 1981), but to our knowledge no such algorithm exists for DAGs. This paper therefore proposes a practical solution to this enumeration problem, proves its correctness, analyzes run-time complexity and introduces various computational speedups. Using this new approach, we analyze four often-used ontologies from the biomedical domain and explore the space of possible annotations. We believe that the algorithms, software and analysis carried out in this work will lead to better insights into concept annotation spaces and facilitate ontology quality assurance.
A motivating example
A growing number of concept annotation problems are formulated as the manual or computational assignment of a set of mutually related textual descriptors to some objects of interest. One such problem is the computational prediction of protein function (Friedberg and Radivojac, 2017), which can be broadly operationalized as follows: Given: (i) an amino acid sequence with auxiliary data such as structure, expression, interactions, etc. of a protein p with unknown or incomplete function; (ii) training data that includes sequences, structures, or systems data corresponding to a (large) set of proteins, some of which have their true biological functions available; (iii) a GO; i.e. a concept hierarchy used to represent biological functions of proteins in a structured and easy-to-compute-on form.
Objective: provide a set of GO terms that are most likely to be the true (experimental) annotation of p.
The objects of interest here are proteins and the set of textual descriptors of protein function is given by GO--an ontology with a DAG structure where each node represents a textual descriptor and each edge represents a particular type of a relational tie between two descriptors (Ashburner et al., 2000).
An example of such an annotation is shown in Figure 1, where eight terms from the molecular function domain have been assigned to this protein. Due to the hierarchical organization of GO, both the set of experimentally determined terms and the set of computationally predicted terms must respect this hierarchy. As shown in this example, the annotation by the term 'DNA binding', implies the annotation by all the other GO terms that conceptually generalize it; e.g. 'nucleic acid binding', 'binding', etc. Similarly, 'sequence-specific DNA binding TF activity' further adds 'nucleic acid binding TF activity' to its annotation graph. Typically, the ontology used to represent the annotation space of proteins contains thousands to tens of thousands of terms, whereas the true annotation of a protein consists of tens to at most hundreds of terms. Because the task of a prediction algorithm is to find the most likely annotation, it must devise an efficient procedure to search through the space of all possible annotations.
Most biomedical ontologies have grown over the years to contain a large number of terms. Computationally selecting a single 'winning' annotation; i.e. a set of terms, or providing a short list of most likely annotations, is a significant challenge (Joachims et al., 2009;Sokolov and Ben-Hur, 2010). This prediction problem belongs to a so-called extreme classification scenario because the number of possible (discrete) annotations the algorithm must consider is astronomically large. In fact, we noticed that it is not possible to give an exact number of annotations available for a protein, even when the ontology is restricted to a fixed low depth. Therefore, an answer to a simple question 'What is the number of possible GO annotations a protein can be assigned?' requires the development of a practical counting algorithm. The resulting counts can, in turn, give insight into the nature and the difficulty of the computational function annotation of biological macromolecules (Reasonable approximations can be provided by calculating the lower and upper bounds, as we have done later in Section 6. Neither of those, however, provides a full intellectual satisfaction when an exact count can be computed).
It is important to mention that the annotation of biological macromolecules is one of the most interesting examples of concept annotation, primarily because of its biomedical significance but also because of the sizes and the complexity of the available ontologies. Similar situations, however, arise beyond computational biology, as in the fields of text mining (Grosshans et al., 2014) and computer vision (Movshovitz-Attias et al., 2015).
Basic concepts and notation
Let G = (V, E) be a directed graph, where V is a set of vertices representing concepts and E ⊆ V × V is a collection of ordered pairs (u, v) representing directional relationships, u → v, between two concepts. A sequence of vertices u_1, u_2, ..., u_k is called a walk if (u_i, u_{i+1}) ∈ E for i = 1, 2, ..., k - 1. A walk of distinct vertices except for the identical starting and ending vertices is called a cycle. A directed graph that does not contain cycles is referred to as a DAG.
Given two vertices u, v ∈ V in a DAG, u is said to be an ancestor of v and v is said to be a descendant of u if there exists a walk from u to v. We denote the set of all ancestors of v as A(v) and the set of all descendants of u as D(u). We next define A+(v) = {v} ∪ A(v) as the set of extended ancestors of v and D+(u) = {u} ∪ D(u) as the set of extended descendants of u. Finally, if (u, v) ∈ E, the vertex u is said to be a parent of v, whereas v is said to be a child of u. We denote the set of all parents of v as P(v) and the set of all children of u as C(u).
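These set operations can be sketched as simple graph traversals; the function and variable names below are ours, not from the paper's implementation:

```python
def reachable(start, adjacency):
    """All vertices reachable from `start` by following `adjacency` edges."""
    seen, stack = set(), [start]
    while stack:
        v = stack.pop()
        for w in adjacency.get(v, ()):
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen

def descendants(u, children):
    # D(u): everything reachable by walking child edges
    return reachable(u, children)

def ancestors(v, parents):
    # A(v): everything reachable by walking parent edges backwards
    return reachable(v, parents)

def extended(v, relatives):
    # A+(v) or D+(v): the vertex together with its ancestors/descendants
    return {v} | relatives
```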
Transitivity of relational ties
When an object is annotated with ontological concepts, it is often considered that all ancestors of those annotated concepts should be automatically assigned to the object. For example, annotating the function of a protein with 'enzyme binding' also implicitly annotates it with 'protein binding', 'binding' and, finally, the root term 'molecular function'. This type of reasoning requires all involved relationships between concepts to be transitive. Biomedical ontologies, however, usually contain various types of relationships between concepts, some of which are not transitive. Therefore, we only consider is-a and part-of relationships, both of which maintain transitivity and permit reasoning about ancestral concepts. It is also worth noting that we define the direction of edges to be pointing from general terms to specific ones so that the depth of a node aligns with the increasing resolution of the descriptors. We show in Section 5.2 that the directionality of edges has no impact on the total count. Throughout this work, we consider an ontology O = (V, E) to be a DAG, where edges represent transitive relationships.
Consistent sub-graphs
Let O = (V, E) be an ontology and S ⊆ V a set of vertices. A sub-graph (S, E_S) is said to be induced from the original graph O by S if E_S is the largest subset of pairs (u, v) from E such that both u, v ∈ S. We denote such a vertex-induced sub-graph as O[S]. We also use O[-S] = (V - S, E_{V-S}) to denote the sub-graph induced by the vertices other than S. A vertex-induced sub-graph O[S] is consistent if S contains all ancestors of each of its vertices; that is, A(v) ⊆ S for every v ∈ S.
Problem specification
Given an ontology O = (V, E), our goal is to develop a practical algorithm that enumerates all consistent sub-graphs of O. We allow the graph to have more than a single root (a vertex with no incoming edges) as well as to be disconnected. An example of the enumeration problem is shown in Figure 2.
We generally observe that the number of consistent sub-graphs is bounded from below by 2^ℓ, where ℓ is the total number of leaf vertices (those with no outgoing edges), and from above by 2^|V|. The structure of the graph, however, determines the exact count and its proximity to either of the bounds. If the input graph is a chain of |V| vertices (ℓ = 1), the total number of consistent sub-graphs equals |V| + 1. On the other hand, if the original graph is a set of |V| disconnected vertices (ℓ = |V|), there are 2^|V| = 2^ℓ consistent sub-graphs. This analysis suggests that enumerating consistent sub-graphs has a straightforward intractable solution of listing all 2^|V| vertex-induced sub-graphs of the ontology and checking the consistency of each such sub-graph.
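The straightforward intractable solution can be written down directly. This brute-force sketch (our own illustration) also reproduces the chain and disconnected-vertices counts discussed above:

```python
from itertools import chain, combinations

def is_consistent(subset, edges):
    # A subset is consistent if, whenever it contains a vertex, it also
    # contains every parent of that vertex (and hence, by induction up
    # the graph, every ancestor).
    return all(u in subset for (u, v) in edges if v in subset)

def brute_force_count(nodes, edges):
    # Enumerate all 2^|V| subsets and keep only the consistent ones.
    nodes = list(nodes)
    subsets = chain.from_iterable(
        combinations(nodes, r) for r in range(len(nodes) + 1))
    return sum(1 for s in subsets if is_consistent(set(s), edges))
```

For a chain a → b → c → d this returns 5 = |V| + 1, and for four disconnected vertices it returns 2^4 = 16, matching the two bounds.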
We use cdag(O) to denote the desired function that takes a DAG O as input and returns the number of consistent sub-graphs in that graph. We use ctree(T) and cforest(F) for the special cases where the input graph is a rooted tree T or a forest F, respectively.
Counting sub-trees of trees
We first discuss a special case where the input graph is a rooted tree; that is, when each non-root vertex has a single parent. In this case, there exists a linear algorithm in the number of vertices; see Lemma 1 in Ruskey (1981). We provide this solution in Algorithm 1 with a minor modification resulting from the fact that our algorithm includes an empty tree in the total count. This algorithm naturally extends to collections of rooted trees. One can enumerate sub-trees for each tree and take the product as the total count. We refer to this extended algorithm as cforest (not shown).
Algorithm 1 recursively traverses a tree in a pre-order manner. For any sub-tree rooted at vertex v, the number of consistent sub-trees that contain v equals the product of all sub-counts from its sub-trees rooted at each child. Additionally, we add 1 for the only consistent sub-tree that does not contain v; i.e. the empty tree. The recursion terminates at the empty tree whose count is one.
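The recurrence just described can be sketched as follows; this is our own rendering of the description, not the paper's code. The count for the sub-tree rooted at v is the product of its children's counts, plus one for the empty tree:

```python
def ctree(root, children):
    """Number of consistent sub-trees of the tree rooted at `root`,
    including the empty tree (hence the + 1)."""
    product = 1
    for child in children.get(root, ()):
        product *= ctree(child, children)
    return product + 1

def cforest(roots, children):
    # For a collection of rooted trees, counts multiply across trees,
    # since the choice made in each tree is independent.
    total = 1
    for r in roots:
        total *= ctree(r, children)
    return total
```

For a chain of n vertices, ctree returns n + 1, in agreement with the bound analysis above.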
Counting consistent sub-graphs in DAGs
DAGs generalize trees in that they allow for multi-parent vertices. Such vertices, however, break Algorithm 1 because the recursive branches are no longer independent. Algorithm 2 circumvents this problem by recursively decomposing a graph into two strictly smaller sub-graphs according to a selected pivot vertex. We will show in the next section that the numbers of consistent sub-graphs in the two smaller graphs add up to the number for the original graph (Line 6, Algorithm 2). The algorithm continues recursive enumeration until the graph becomes a forest, in which case it calls cforest. Figure 3 illustrates the process of graph decomposition with respect to the pivot vertex u. We note that any vertex can serve as pivot and will discuss the selection of pivots and how they impact the run time in Sections 5.3 and 6.1.

Fig. 2. Consistent sub-graphs of an ontology O = (V, E) with |V| = 7 vertices and |E| = 7 edges, shown in the upper left-hand corner. There are 15 consistent sub-graphs of O, as shown by coloring the appropriate groups of vertices in blue (the first graph represents the ontology and the empty sub-graph at the same time). Observe that the reversal of all edges in the graph would lead to a reversed graph with the same number of consistent sub-graphs (white vertices; Theorem 5.1)
Correctness and complexity of the algorithm
We first observe that the size of the problem in the number of vertices is guaranteed to decrease during recursive calls, thus ensuring that the algorithm terminates after a finite number of iterations. Next, we justify the equation corresponding to Line 6 in Algorithm 2.

Lemma 4.1. Let cdag(O|¬u) be the number of consistent sub-graphs in O that do not contain u. We have cdag(O|¬u) = cdag(O[-D+(u)]).

Proof. The equal cardinality of the two sets of consistent sub-graphs is demonstrated by showing that both sets are contained in each other. For any consistent sub-graph of O[-D+(u)] induced by S, the same set S induces a consistent sub-graph of O that does not contain u. Conversely, for any consistent sub-graph induced by S such that u ∉ S, we have ∀v ∈ D+(u): v ∉ S by the definition of consistency. Therefore, S also induces a consistent sub-graph of O[-D+(u)].

Lemma 4.2. Let cdag(O|u) be the number of consistent sub-graphs in O that contain u. We have cdag(O|u) = cdag(O[-A+(u)]).

Proof. As in Lemma 4.1, for any consistent sub-graph of O[-A+(u)] induced by S, the set S ∪ A+(u) induces a consistent sub-graph of O that contains u. Also, for any consistent sub-graph induced by S with u ∈ S, we have A+(u) ⊆ S by the definition of consistency. Note that the uniqueness of S implies the uniqueness of S - A+(u). We can see that the sub-graph induced by S - A+(u) is a consistent sub-graph of O[-A+(u)].

Theorem 4.1. Given an ontology O = (V, E) and any u ∈ V, the number of consistent sub-graphs in O equals the sum of the numbers of consistent sub-graphs in O[-D+(u)] and O[-A+(u)]; that is,

cdag(O) = cdag(O[-D+(u)]) + cdag(O[-A+(u)]).   (1)

Proof. Equation (1) follows directly from Lemmas 4.1 and 4.2, because every consistent sub-graph of O either contains u or does not.

To analyze the complexity of the algorithm, let n be the number of vertices in the graph and m be the number of multi-parent vertices. Assuming a multi-parent vertex is always selected as pivot, we can express the run-time complexity T(n) via the recurrence T(n) = T(n1) + T(n2) + f(n), where n1, n2 < n are the sizes of the two sub-problems and f(n) incorporates the time to select the pivot, split the graph and add two large integers. Let us further assume that the larger of the two graphs after decomposition contains n - n/k elements, where 2 ≤ k ≤ n. Since every decomposition removes at least one multi-parent vertex from each sub-problem, it is straightforward to show that T(n) = O(2^m · f(n)). We can now see that the algorithm is exponential in the worst case; however, it reduces to a polynomial algorithm when m = O(log n). Assuming linear time to conduct graph decomposition and a constant time for addition/multiplication, we obtain T(n) = O(n^2).
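Under our reading of the decomposition (sub-graphs omitting the pivot u are counted on O with D+(u) removed, and those containing u on O with A+(u) removed), the recursive counter can be sketched as follows. This is an illustrative implementation, not the authors' released code, and it uses a naive forest base case:

```python
def cdag(nodes, edges):
    """Count consistent sub-graphs of the DAG (nodes, edges) by recursive
    decomposition on a multi-parent pivot; forests are the base case."""
    nodes = set(nodes)
    edges = [(u, v) for (u, v) in edges if u in nodes and v in nodes]
    parents = {v: {u for (u, w) in edges if w == v} for v in nodes}
    children = {v: {w for (u, w) in edges if u == v} for v in nodes}

    pivots = [v for v in nodes if len(parents[v]) > 1]
    if not pivots:  # a forest: multiply per-tree counts (Algorithm 1)
        def ctree(r):
            p = 1
            for c in children[r]:
                p *= ctree(c)
            return p + 1
        total = 1
        for r in (v for v in nodes if not parents[v]):
            total *= ctree(r)
        return total

    def reach(start, adj):
        # extended ancestors/descendants, including the start vertex
        seen, stack = {start}, [start]
        while stack:
            for w in adj[stack.pop()]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return seen

    u = pivots[0]                  # any multi-parent vertex can serve as pivot
    d_plus = reach(u, children)    # D+(u): absent from sub-graphs without u
    a_plus = reach(u, parents)     # A+(u): present in sub-graphs with u
    return cdag(nodes - d_plus, edges) + cdag(nodes - a_plus, edges)
```

On the diamond a → b, a → c, b → d, c → d this yields 6, matching a direct enumeration of its consistent sub-graphs.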
Accelerations
The run-time of the algorithm heavily depends on the structure of the ontology and the selection of pivots. Here, we discuss several practical considerations aimed at accelerating Algorithm 2. Once we conclude this discussion, the full method will be presented in Algorithm 3 (Section 6).
Pruning branching components
It is easy to observe that when the ontology consists of multiple connected components, these components can be independently and, if needed, simultaneously processed. We take this reasoning a step further to consider a special scenario of nearly disconnected graphs where (i) the two components are connected via a single vertex and (ii) all vertices in one component are descendants of this vertex.
Given a graph O = (V, E), suppose there exists a vertex u whose removal disconnects its descendants D(u) from the rest of the graph; that is, every parent of a vertex in D(u) belongs to D+(u). Vertex u is called a branching vertex. Figure 4a gives an example in which u is a branching vertex, since the removal of u disconnects D(u) (i.e. the branching component, O_br) from the rest of the graph. We refer to the remaining part of the graph as the stem component, O_st. More generally, Figure 4b shows a graph with a component-wise tree structure, where branching vertices serve as hinges of branching components to their corresponding stems. We will use (O_st, O_br, u) to denote this structure.
Algorithm 2 Counting the number of consistent sub-graphs in DAGs.
Given (O_st, O_br, u), we demonstrate that cdag(O) can be decoupled into two sequential sub-problems: (i) cdag(O_br) and (ii) cdag(O_st). We use φ(u) for the sub-total of consistent sub-graphs in the branching component O_br. We also notice that the entire branching component can be pruned once φ(u) is computed, making u a leaf vertex in O_st. Therefore, we modify the algorithm so as to allow a sub-total count φ(u) for every vertex, as if a branching component has been pruned from u, and the sub-tree recurrence carries this factor (Equation (2)). Similarly, Equation (1); i.e. Line 6 in Algorithm 2, must be modified so that the term counting sub-graphs that contain u carries the factor φ(u) (Equation (3)), which accounts for the fact that for any consistent sub-graph S_i in the pruned O_br and any consistent sub-graph of O_st containing u, their union is a consistent sub-graph of O. The approach naturally extends to multiple (hierarchical) branching components such that we compute the sub-total of consistent sub-graphs within each component and agglomerate them in a reversed topological order.
The pruning operation is preferred before each instance of decomposition for two main reasons: (i) it divides the problem into smaller non-overlapping sub-problems, while a direct decomposition usually results in substantially overlapping sub-problems; (ii) although full parallelization over components is restricted, since stem components can be computed only after all of their branching components are finished, the unordered components can be computed simultaneously.
Reverse graphs
Lemma 5.1. Let O^R = (V, E^R) denote the reverse graph of O, obtained by reversing the direction of every edge, and let A^R(u) denote the set of ancestors of u in O^R. If S induces a consistent sub-graph of O, then V - S induces a consistent sub-graph O^R[-S] of the reverse graph.

Proof. Suppose that O^R[-S] is not consistent. Then there exist u ∈ V - S and v ∈ A^R(u) such that v ∉ V - S; i.e. v ∈ S. In O, however, v ∈ A^R(u) means that u is an ancestor of v, so the consistency of O[S] implies u ∈ S. This contradicts u ∈ V - S. Therefore, the assumption v ∉ V - S is false and we have ∀u ∈ V - S, ∀v ∈ A^R(u): v ∈ V - S. That is, O^R[-S] is consistent.
This Lemma demonstrates that all complementary white vertices in Figure 2 form consistent sub-graphs in the reverse graph.
Theorem 5.1. Given an ontology O = (V, E) and its reverse graph O^R, the two graphs have the same number of consistent sub-graphs; that is, cdag(O) = cdag(O^R).

Proof. Given Lemma 5.1, we see that the mapping f(O[S]) = O^R[-S] is a bijection between the two sets of consistent sub-graphs. Therefore, the two sets are of equal cardinality.

This result permits graph reversal at any point during the algorithm, depending on which of the graphs is more likely to terminate first. For example, we can always choose the one with fewer multi-parent vertices so as to greedily reduce the upper bound on the number of recursive calls. It is worth noting that all the leaves become roots in the reverse graph. Therefore, in the final algorithm that incorporates both the pruning and reversing modules, we generalize the algorithm to allow φ > 1 on roots (branching vertices in the reverse sense) in order to ensure compatibility.
Having φ(r) > 1 on a root r indicates that all the ancestors of r have been pruned out. For trees (after pruning), we have O[-D+(r)] = O[A(r)], so with Lemma 4.1 and Theorem 5.1 the sub-total of consistent sub-graphs that do not contain r can still be computed. On the other hand, for any consistent sub-graph S containing r, S - A+(r) induces a consistent sub-graph in O[D(r)] and vice versa. Hence, these two sub-totals sum to the total count and Equation (2) remains unchanged. However, if a root r with φ(r) > 1 is selected to be the pivot, the decomposition is adjusted according to Theorem 5.1 and Equation (3), whereas Equation (3) remains unchanged for non-root vertices.
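Theorem 5.1 can be checked empirically with a small brute-force counter; this is our own sketch, and the graph below is an arbitrary test case:

```python
from itertools import chain, combinations

def count_consistent(nodes, edges):
    # Brute force: a subset is consistent if it contains every parent
    # (and hence every ancestor) of each of its members.
    subsets = chain.from_iterable(
        combinations(nodes, r) for r in range(len(nodes) + 1))
    return sum(
        1 for s in subsets
        if all(u in s for (u, v) in edges if v in s))

nodes = "abcde"
edges = [("a", "c"), ("b", "c"), ("c", "d"), ("b", "e")]
reversed_edges = [(v, u) for (u, v) in edges]

# The count is invariant under reversing every edge (Theorem 5.1).
assert count_consistent(nodes, edges) == count_consistent(nodes, reversed_edges)
```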
Pivot selection
As alluded to before, the selection of vertices used for partitioning has the potential to significantly change the computation time. It is therefore reasonable to devise a strategy for pivot selection. Besides a random selection of multi-parent vertices (mpv's), which aims at directly converting DAGs into trees one step at a time, we also consider three other pivot heuristics. The first strategy is to pick a vertex with the maximum degree, with random selection in case of ties, because decomposing the graph according to such vertices may increase the chance of having either disconnected components or branching components. The second strategy selects the pivot so as to minimize e - n + r over the two sub-problems, where e, n and r are the numbers of edges, vertices and roots in the two components. We refer to this quantity as 'bound' since it is an upper bound on the number of mpv's in the graph (see Supplementary Material for the proof). Note that it is closely related to the cyclomatic number of the graph. Finally, the third strategy simulates a unit network flow for all vertices running in the direction from the leaves to the roots and selects the 'bottleneck' vertex; i.e. the one that maximizes the ratio of the flow in the vertex and the number of its descendants (see Supplementary Material for this pivot selection algorithm). These strategies will be empirically compared in Section 6.
Hashing
It can occur during the recursive procedure that certain sub-graphs require repeated enumeration. In Figure 3, for example, the sub-graph h-i-j is present in both sub-problems shown in Figure 3b-c. Computing the count for this sub-graph would emerge in the Figure 3b sub-problem if the ensuing decomposition were based on vertex d, although it would not emerge if the partitioning were based on vertex j. Interestingly, the sub-graph k-l would be counted twice in the Figure 3b sub-problem; i.e. when both A+(d) and D+(d) are removed, and it would then appear one more time in the Figure 3c sub-problem.
To avoid repeated enumeration, whenever a solution to a subproblem is obtained, the count for this sub-problem is stored. Then, during the recursive calls, we first check if the result is already available before further calculation. To hash a result, we use the sorted IDs of all vertices in the sub-graph as a key. Obviously, this key is unique because it corresponds to a vertex-induced sub-graph of O. For the pruned sub-graph, we store the key of the sub-graph along with the branching vertex. Whenever the ID of the branching vertex is used to generate a key, the stored key of the corresponding sub-graph is appended to the vertex's ID with parentheses around it.
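The keying scheme can be sketched as follows; the helper names are ours, and the actual implementation lives in the linked repository:

```python
def subgraph_key(vertex_ids, pruned_keys=None):
    """Build a hash key for a vertex-induced sub-graph.

    `pruned_keys` maps a branching vertex ID to the stored key of the
    component pruned from it; that key is appended in parentheses.
    """
    pruned_keys = pruned_keys or {}
    parts = []
    for vid in sorted(vertex_ids):
        if vid in pruned_keys:
            parts.append("%s(%s)" % (vid, pruned_keys[vid]))
        else:
            parts.append(str(vid))
    return ",".join(parts)

memo = {}

def cached_count(count_fn, vertex_ids, pruned_keys=None):
    # Look a sub-problem up by its key before recomputing it.
    key = subgraph_key(vertex_ids, pruned_keys)
    if key not in memo:
        memo[key] = count_fn(vertex_ids)
    return memo[key]
```

Because the sorted vertex IDs uniquely identify a vertex-induced sub-graph, two recursive branches that reach the same sub-graph share one stored count.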
Experiments and results
We empirically evaluate the enumeration procedure from Algorithm 3 and various practical speedups using randomly generated graphs.
Algorithm 3 The advanced version of Algorithm 2 with optimization modules.
We then apply this algorithm to four biomedical ontologies to gain insight into the sizes of their concept annotation spaces.
Run-time evaluation
We generated two sets of graphs to investigate the efficacy of our algorithm. Each set contained 1000 graphs with either 25 or 100 vertices. To construct each graph, the vertices were added sequentially, with the proposed in-degree in-deg(v) of the k-th vertex v generated according to a Poisson distribution with parameter λ. This vertex then became a child of min(in-deg(v), k - 1) previously generated vertices that were themselves selected uniformly at random. The parameter λ was selected according to the Γ(2.0, 1.0) prior for each new graph and kept constant until the graph was completed.
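A sketch of this generator, under our reading of the procedure, follows; the names are ours, and we use Knuth's Poisson sampler built on the standard library's uniform generator:

```python
import math
import random

def sample_poisson(lam, rng):
    # Knuth's algorithm: multiply uniforms until the product drops below e^-lam.
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def random_dag(n_vertices, seed=0):
    rng = random.Random(seed)
    lam = rng.gammavariate(2.0, 1.0)  # one lambda per graph, shape 2.0, scale 1.0
    edges = []
    for k in range(1, n_vertices + 1):              # the k-th vertex
        n_parents = min(sample_poisson(lam, rng), k - 1)
        parents = rng.sample(range(1, k), n_parents)  # uniform among predecessors
        edges.extend((p, k) for p in parents)
    return list(range(1, n_vertices + 1)), edges
```

Because parents are always drawn from previously added vertices, every edge points from a smaller to a larger ID, so the result is acyclic by construction.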
With these two sets of simulated graphs, we ran our algorithm with different modules and pivot selection strategies. In particular, we evaluate pivot selection based on (i) random selection of vertices, (ii) random selection of multi-parent vertices, (iii) the degree criterion, (iv) the bound criterion and (v) the bottleneck criterion. For each pivoting strategy, we subsequently add the pruning component, then hashing and finally graph reversal. The criterion for graph reversal was the number of multi-parent vertices; i.e. a graph will be reversed at any point during the recursive process if the reversed graph contains fewer multi-parent vertices.
We report the average wall-time and average number of recursive calls over the two sets of 1000 graphs (|V| = 25 in Table 1; |V| = 100 in Table 2). For the smaller graphs, we also ran a brute-force algorithm, which was further convenient for empirically verifying the correctness of our algorithm. The brute-force algorithm generates each of the 2^|V| subsets of nodes and then performs a consistency check. We see that the simpler schemes perform better on small graphs, where the number of recursive calls per graph does not exceed a few hundred. On the other hand, the advanced techniques show tangible benefits on the larger graphs, reducing the number of recursive calls and the total computation time by orders of magnitude. It is possible to envision other variations that could result in further speedups; e.g. selecting multi-parent pivots with the highest degree. These refinements, however, were beyond the scope of this paper.
Consistent sub-graphs in biomedical ontologies
We use the 02/2017 versions of GO and the Human Phenotype Ontology (HPO) as the target ontologies and compute the number of consistent sub-graphs in each of them. The algorithm is applied to each of the three domains of GO (Ashburner et al., 2000): (i) the molecular function ontology (MFO; 10 789 terms), (ii) the biological process ontology (BPO; 29 575 terms) and (iii) the cellular component ontology (CCO; 4085 terms). Together with HPO (12 167 terms), these four ontologies are widely used in annotating functional terms of gene products (Jiang et al., 2016; Radivojac et al., 2013). We further define the annotation level for each term in the ontology to be the length of the longest path to the root. Starting from the root term, we add more specific terms level-by-level to understand how the potential annotation space grows with increased granularity of functional concepts.
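The level definition (length of the longest path from a root) can be sketched with a memoized helper of our own:

```python
from functools import lru_cache

def term_levels(nodes, edges):
    """Level of each term = length of the longest path from a root to it."""
    parents = {v: [] for v in nodes}
    for u, v in edges:
        parents[v].append(u)

    @lru_cache(maxsize=None)
    def level(v):
        ps = parents[v]
        # roots sit at level 0; otherwise take the deepest parent plus one
        return 0 if not ps else 1 + max(level(p) for p in ps)

    return {v: level(v) for v in nodes}
```

Note that a multi-parent term takes the longest of its incoming paths, so a shortcut edge from the root does not lower its level.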
In addition to the level-wise full ontologies, we also investigate the 'used' ontologies in which each term was retained only if at least one protein in the UniProt-GOA (Huntley et al., 2015) and HPO (Robinson and Mundlos, 2010) databases has been confidently assigned that term (confident annotations include all experimental evidence codes as well as 'traceable author statement' and 'inferred by curators'). Protein function annotations were extracted from the 02/13/2017 release of the UniProt-GOA database, which contains 64 362 proteins with confident MFO annotations, 84 413 proteins with BPO annotations and 79 630 proteins with CCO annotations. HPO annotations were extracted from the 02/24/2017 release of the HPO database, from which 6411 genes with confident annotations were extracted. Figure 5 shows the computed counts for both full and used level-wise ontologies. For each ontology, we additionally compute the lower bound (generally the larger of 2^ℓ and 2^r, where ℓ is the number of leaves and r is the number of roots) and estimate the upper bound (we convert the graph into a forest by keeping only one randomly selected incoming edge for each multi-parent vertex and then call cforest). The counts of consistent sub-graphs grow rapidly as more specific terms are included and later plateau.
Although we were not surprised by the astronomical sizes of concept annotation spaces (e.g. MFO terms up to level 9 create 2.036 × 10^2616 consistent sub-graphs), it was rewarding to provide exact counts whenever feasible, as well as to observe an increasing difference between the lower and upper bounds (in the hundreds to thousands of orders of magnitude) with the level of the ontology. We also find it interesting that a large number of ontological terms have never been used to annotate a gene or a protein; i.e. 31% of terms in GO and 44% of terms in HPO (Supplementary Material). Finally, using the number of recursive calls of our algorithm (Supplementary Material) as a measure of graph complexity, we observe an inverse relationship between the graph complexity and the accuracy of the top function prediction algorithms in the Critical Assessment of Functional Annotation experiments (Jiang et al., 2016; Radivojac et al., 2013). Although some complexity of the available ontologies can be attributed to the level of biological abstraction they are intended to describe (e.g. Biological Process), it is reasonable to consider that the structure of the ontology itself is a contributing factor to a lower prediction accuracy. As an example, we note that both Molecular Function and Cellular Component annotations correspond to relatively straightforward biological concepts, yet MFO is significantly simpler than CCO; e.g. it contains a smaller fraction of multi-parent vertices, it has a lower graph edge density and, correspondingly, it required fewer recursive calls by our algorithm. In agreement with this consideration, the accuracy of concept prediction in MFO exceeds the accuracy currently observed in CCO, even when data biases are accounted for (Jiang et al., 2016).

Notes to Table 1: Each field in the table summarizes the per-graph wall-time over a set of 1000 graphs as well as the per-graph number of recursive calls, except for the brute-force method. The columns represent pivot selection strategies: (i) random, (ii) random multi-parent vertex (mpv), (iii) minimum bound, (iv) maximum degree and (v) bottleneck. The rows represent successive additions of practical modules for speedups: basic approach from Algorithm 2; pruning; pruning and hashing; pruning, hashing and graph reversal.

Notes to Table 2: The entry with an asterisk indicates that a sample of three graphs was considered (instead of a full set of 1000) due to the long run-time. The brute-force algorithm was not considered, as it was not feasible to compute the count for even a single graph.
Entropy of concept annotation spaces
The ability to enumerate sub-graphs in relatively large ontologies presents an opportunity to contrast the space of actual ontological annotations in biological databases with the space of possible ontological annotations. To investigate this, we first computed the entropy of actual annotations at different levels in the ontology, where O lvl is the truncated ontology as in Section 6.2, O lvl S i ½ corresponds to a distinct consistent sub-graph annotation observed at that level and P O lvl S i ½ ð Þis the probability that a protein is assigned annotation O lvl S i ½ . We first enumerated all observed sub-graphs from the UniProt-GOA or HPO database truncated to a particular level, calculated their relative frequencies, and then plugged these relative frequencies into the entropy formula above. On the other hand, the maximum entropy was computed as log 2 cdag O lvl ð Þ by assuming equal probability for every possible consistent sub-graph. Figure 6 shows the ratio between the two quantities for levels greater than 1, suggesting that the world of protein functions, despite great diversity, has low entropy relative to the possible maximum. Although the currently observed functional annotations are incomplete, noisy and biased (Jiang et al., 2014;Schnoes et al., 2009Schnoes et al., , 2013, this suggests considerable departure from the uniform distribution. Supplementary Table S1 for exact counts). In each panel, black þ symbols mark the exact counts for 'full' sub-graphs and grey  symbols mark the exact counts for 'used' sub-graphs. Colored boxes indicate the estimated upper/lower bounds of the actual counts, with darker boxes corresponding to 'full' ontologies and lighted boxes corresponding to 'used' ontologies at a particular level. The exact integer counts are available upon request Fig. 6. Ratio of entropies in the four ontologies. Colored circles show the ratio of the observed entropy to the maximum entropy for each level in the evaluated ontologies. 
Dotted lines correspond to the estimated ratios as the average of the two ratios calculated by the lower/upper bound of the counts. The error bars suggest a possible placement for the actual ratio.

7 Related work

There exists a body of literature in enumerative combinatorics related to our work. One of the most relevant problems is the enumeration of DAGs with n distinct (labeled) nodes (Robinson, 1971). The resulting count reflects the size of the structure space of Bayesian networks with n random variables and, surprisingly, also corresponds to the number of matrices in {0, 1}^(n×n) with all eigenvalues real and positive (McKay et al., 2004). The number of labeled DAGs with n nodes does not have a closed-form solution and is instead available as the A003024 sequence in the On-Line Encyclopedia of Integer Sequences (OEIS); https://oeis.org/A003024. The construction was originally proposed by Robinson (1971) and was further investigated by others (Gessel, 1996; Rodionov, 1992; Stanley, 1973).
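For illustration, Robinson's inclusion-exclusion recurrence behind A003024 can be implemented in a few lines. This is our own sketch (the function name is arbitrary), relying on Python's arbitrary-precision integers:

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def labeled_dags(n):
    """Number of DAGs on n labeled nodes (OEIS A003024).

    Robinson's recurrence performs inclusion-exclusion over the k
    source nodes (nodes with no incoming edges):
        a(n) = sum_{k=1}^{n} (-1)^(k+1) * C(n,k) * 2^(k(n-k)) * a(n-k)
    """
    if n == 0:
        return 1
    return sum((-1) ** (k + 1) * comb(n, k) * 2 ** (k * (n - k)) * labeled_dags(n - k)
               for k in range(1, n + 1))

print([labeled_dags(n) for n in range(6)])  # [1, 1, 3, 25, 543, 29281]
```

The super-exponential growth of these counts mirrors the explosion of consistent sub-graph counts with ontology depth noted above.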
Previous findings on rooted labeled trees include both the enumeration of the possible number of trees and the enumeration of sub-trees for a given tree. There are n^(n-1) labeled rooted trees with n nodes (Gross and Yellen, 2004), which provides the integer sequence A000169 in OEIS; https://oeis.org/A000169. The expansion to forests gives (n + 1)^(n-1) using Cayley's formula (Cayley, 1889), as a single root can be added to connect a forest of unrooted labeled trees into a rooted labeled tree. The recurrence for the number of sub-trees of a given tree was proposed by Ruskey (1981); see Algorithm 1. The generalization to weighted sub-trees was given by Yan and Yeh (2006). Both algorithms are linear in n assuming constant-time addition and multiplication.
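To make the tree recurrence concrete, here is a minimal sketch (our own, not a transcription of Ruskey's Algorithm 1) of the standard product rule for counting the sub-trees of a rooted tree that contain the root; it runs in time linear in the number of nodes given constant-time arithmetic:

```python
def rooted_subtrees(tree, root):
    """Count sub-trees of a rooted tree that contain the given root.

    tree: dict mapping each node to a list of its children.
    Recurrence: s(v) = prod over children c of (1 + s(c)), since each
    child's branch is either excluded entirely or included in any one
    of its s(c) admissible sub-tree forms.
    """
    s = 1
    for child in tree.get(root, []):
        s *= 1 + rooted_subtrees(tree, child)
    return s

# A root with two leaf children: the sub-trees containing the root
# are {r}, {r,a}, {r,b} and {r,a,b}.
print(rooted_subtrees({"r": ["a", "b"]}, "r"))  # 4
```

Summing s(v) over all nodes v then counts all sub-trees of the tree, which is the quantity Ruskey's recurrence targets.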
The research in ontology quality assurance is another related problem. These efforts typically include the analysis of irregularities and redundancy in concept descriptors and graph structure (Bodenreider, 2003;Verspoor et al., 2009;Xing et al., 2016). Our work, primarily the software we developed, contributes to this area by facilitating the analysis of the annotation space.
Conclusions
This work presents a practical algorithm for enumerating consistent sub-graphs of DAGs. We build upon the work of Ruskey (1981) and Yan and Yeh (2006), who solved the substructure enumeration problems in trees, by providing a nontrivial extension to DAGs. Beyond this theoretical contribution, we believe that our algorithm has practical utility for the studies of ontological annotation spaces that have recently gained popularity in structured-output learning in computational biology and other fields (Grosshans et al., 2014; Joachims et al., 2009; Movshovitz-Attias et al., 2015; Radivojac et al., 2013; Sokolov and Ben-Hur, 2010). Another related problem is workflow enumeration, which may have implications for code analysis and debugging in distributed computing environments (Sadiq and Orlowska, 2000; Zaharia et al., 2010).
The observed outcomes on biomedical ontologies raise important questions regarding the predictability of ontological annotations because most modern algorithms are asked to provide accurate deep annotations to have practical utility. However, annotation spaces become exceedingly large almost instantaneously with the depth of the ontology, which presents an immense computational and statistical challenge for any prediction algorithm. We therefore believe that the balance between ontology size/complexity and term granularity should become an important topic for future discussions among biocurators and function prediction researchers.
Funding
This work has been supported by the National Science Foundation grant DBI-1458477 and the Indiana University Precision Health Initiative. The authors thank Kymberleigh Pagel for helpful comments.
Conflict of Interest: none declared.
The Testimonios of System-Impacted Daughters of Color on Healing from Parental Incarceration
1 in every 25 children in the United States currently has a parent incarcerated in jail or prison. Black and Latinx children make up the majority of this population, as their parents are overrepresented in local jails and state and federal prisons. Parental incarceration affects a child’s behavior, emotional and mental health, social interaction, and financial stability. Daughters of incarcerated parents are particularly affected. This research investigates testimonios (testimonies), a narrative form of counter-storytelling, as a tool to address the traumatic effect of parental incarceration on female children of color. Testimonios give a person agency and allow them to share their unique and nuanced experiences in detail. In-depth interviews demonstrated that testimonios can be an effective healing tool for women who have been impacted by parental incarceration and can improve social service organizations directed towards families affected by incarceration. Testimonios provided space in which daughters of incarcerated parents were able to express their emotions and make sense of their experiences. The interviews also revealed shared themes in the experiences of multiple interviewees.
Despite having 5% of the world's population, the United States currently holds over 25% of the world's incarcerated population, with 2.3 million people currently in jail or prison (American Civil Liberties Union [ACLU], 2020). Data shows that 52-63% of individuals who are incarcerated have children, with the number of mothers rapidly increasing in recent years (Thomson et al., 2018). Nationwide, one in every 25 children currently has a parent incarcerated in jail or prison, and an estimated five to eight million children have experienced parental incarceration in their lifetime (Haskin & Turney, 2018). Parental incarceration affects entire families, but children experience higher instances of trauma and adversity as a result of parental incarceration (Arditti & Savla, 2015).
Children who have experienced the incarceration of a parent, family member, or community member are often referred to as being "system-impacted" (Cerda-Jara et al., 2019, p. 2). In this paper, "system-impacted" specifically refers to a child's experience of parental incarceration. This research focuses on female system-impacted children, referred to as daughters, because previous literature has demonstrated that daughters experience higher instances of antisocial behavior, anger, impulsivity, low self-esteem, and delinquency than sons as a result of parental incarceration (Burgess-Proctor et al., 2016). This research also focuses on system-impacted daughters of color because Black and Latinx parents are disproportionately represented in state and federal prison populations. For example, Black people make up 13% of the U.S. population, but 40% of the incarcerated population (Sawyer & Wagner, 2020). These numbers are a reflection of the disproportionate incarceration rates for the Black and Latinx populations (Western & Pettit, 2010).
Current literature has found that parental incarceration has both short- and long-term negative effects on children (Miller, 2006). In the short term, system-impacted children experience traumatic separation, loneliness, unstable childcare arrangements, and the effects of reduced family income (Murray et al., 2012). In the long term, system-impacted children are at higher risk of experiencing intergenerational incarceration, antisocial behavior, stigmatization, poor educational performance, and stress (Murray, 2015), as well as general anger and additional mental health problems (Wakefield, 2007).
Existing research, largely based on quantitative analyses, fails to capture the voices of system-impacted children and the nuances of their unique experiences with parental incarceration. Academics have too often lumped all system-impacted children together when researching their experiences. For example, Burgess-Proctor et al. (2016) studied the effects of parental incarceration on both daughters and sons, but failed to analyze the impact of race and ethnicity on the lived experiences of both genders. It is important that an intersectional lens is applied to fully capture experiences of children with incarcerated parents. Testimonios are intended to capture the intersectional and nuanced experiences of system-impacted children with regard to gender, race, ethnicity, socioeconomic status, and so forth.
RESEARCH QUESTIONS
This paper focuses on the traumatic effects of parental incarceration on daughters of color and demonstrates how testimonios, a form of counter-storytelling, can be used as an effective healing tool. Conversations around parental incarceration are limited due to immense stigma and shame. Family members often tell children that their incarcerated parent is on vacation, rather than in jail or prison; however, children discover their parent's incarceration through other social means, such as friends (Burgess-Proctor et al., 2016). When children are told about the incarceration, they often proceed to conceal their parent's incarceration from friends and others (Burgess-Proctor et al., 2016) due to the stigmatization that will follow them into adulthood (Sykes & Pettit, 2014). To counter the stigma and shame around parental incarceration, this research shed light on the following research questions:

1. How have daughters of color with incarcerated parents expressed themselves through storytelling?
2. Can counter-storytelling be used as an effective healing tool for daughters who have experienced parental incarceration?
Counter-storytelling is a framework used to elevate the voices of populations who are often forgotten and long silenced, making it an ideal method for addressing the needs of system-impacted daughters of color (Yosso, 2013). Counter-storytelling occurs when a person tells their life story or shares a particular experience, either informally in a conversation with another person or formally as a culturally responsive tool in a therapeutic setting. It has been found to be an effective tool for healing after trauma. For example, Native Americans who experienced forced boarding school reported emotional release and healing when sharing their stories through counter-storytelling (Charbonneau-Dahlen et al., 2016). Counter-storytelling promotes resiliency by showcasing how a person has adapted and built skills in order to overcome the systemic barriers and oppression they have faced (Hess, 2019). For instance, in response to an environment where there was an absence of nurturing roles in boarding schools, Native American fifth and sixth graders developed survival skills by becoming caregivers themselves for younger children. Most important, counter-storytelling shifts and challenges the white supremacist paradigm by illuminating patterns of racialized inequality through recounting experiences of individualized and shared racism (Yosso, 2013).
In this paper, Critical Race Theory (CRT) will be used in conjunction with counter-storytelling to elevate the voices of marginalized, underserved, and silenced system-impacted daughters of color. CRT is a theoretical framework used in the social sciences that examines the relationship between society and race, law, and power (Crenshaw et al., 1995). Using this framework will provide an in-depth look at how race and power impact populations who experience parental incarceration. CRT and counter-storytelling have been used in a variety of situations to help individuals heal from trauma and have been shown to acknowledge the resilience and survival skills of marginalized populations (Solorzano & Yosso, 2001). CRT is important for this research because most children who experience parental incarceration are people of color, creating an increase in future class and racial inequality through the negative consequences of mass incarceration on children (Wildeman & Western, 2010).
Testimonios are used strategically to give agency to daughters of color. Agency gives people the power to negotiate their needs and identify what they feel in spaces of inequality (Cushing & Lewis, 2009). This form of storytelling has been used in feminist research methodologies as a form of resistance, a tool for resilience building, and a source of hope in the midst of challenging systemic oppression (Huber & Cueva, 2012). Testimonios decolonize storytelling by giving a person agency to highlight power and oppression, and can be viewed as a genre within counter-storytelling (Medina, 2018).
METHODOLOGY
Previous research on the experiences of system-impacted children has reduced their experiences to statistics using quantitative methods. As such, through the practice of counter-storytelling with a CRT lens, this research provides a more in-depth representation of the experiences of system-impacted daughters of color. The qualitative data comes from in-depth interviews with two women who had incarcerated parents and one employee from Homeboy Industries' Legal Services department who had worked with the interviewees for over a year. Homeboy Industries, based in Los Angeles, CA, is a nonprofit organization that assists former gang members, previously incarcerated individuals, and their families to become positive contributing members of society through providing access to job placements, tattoo removals, therapy, and legal services (Leap et al., 2011). The organization is considered a good fit for this research because of their work with system-impacted families.
The women interviewed were from Los Angeles, CA, Mexican-American, in their late twenties, and both experienced the incarceration of their fathers when they were adolescents. Respondents were asked 18 questions during the interview about how they navigated their parent's incarceration, how they communicated with others, and what resources they deemed necessary for healing. The interview questions included: "Looking back, how would you say being system-impacted affected your trajectory?"; "As of today, do you share your narrative of being system-impacted with others?"; "How do you feel when you talk about your mother's/father's incarceration?"; and "What services do you feel are necessary for daughters to heal from parental incarceration?" The employee interviewed was asked different questions, such as, "In your role, do you experience listening to the children's narratives/stories about their experience with parental incarceration?" These questions were constructed ahead of the interview and were open-ended to promote discussion. Additional probing questions were asked during each interview when a respondent disclosed new information. For example, when an interviewee disclosed the impact her father's incarceration had on her career choice, she was asked to elaborate. Interviews were conducted in the Homeboy Industries legal office and recorded on a phone; the recordings were deleted soon after the researcher transcribed each interview.
All respondents were given consent forms and informed about the study's objective beforehand. Ethical measures were taken throughout the duration of the research project, and pseudonyms were assigned to each respondent to maintain confidentiality. Before the interviews, the researcher built rapport with each interviewee through legal assistance and everyday interactions at Homeboy Industries. Furthermore, the Columbia University Institutional Review Board (IRB) approved this research. The data from the semi-structured interviews were transcribed and thematically analyzed. Google Drive, Google Docs, and Microsoft Excel were used for coding and tracking emerging themes. After the data collection, thematic analysis was used to identify themes and patterns in responses.
RESULTS
The objectives of using testimonios are to showcase the point of view of the person being interviewed, identify what they deem important from their experiences, and make an urgent call to action based on the themes and patterns that emerge from their intentional sharing (Reyes & Rodriguez, 2012). Themes that arose across the interviews conducted in this study included a strong sense of healing from sharing testimonios, increased willingness to share, education as an escape, financial instability, and negative feelings towards individuals who did not share their struggle. In general, daughters of incarcerated parents found that telling stories of their lived experiences was a form of empowerment.
STRONG SENSE OF HEALING
The Homeboy Industries staff person who was interviewed reported observing a strong sense of healing from the women who shared their testimonios. Maria and Gabriela, who shared their testimonios, agreed and reported that sharing their narratives about their parent's incarceration with others was healing and therapeutic. A staff member who works in Homeboy Industries' Legal Services department focusing on family reunification, expungement, and other court services, stated:

"[They share] all the little details that are important to them and half the time they end up crying. It is more like a therapy session. I only end up using half of... the stuff they have already told me. Half of it is not important to the case... but it is important for me to understand where they are coming from, so I can sort of better craft those declarations for a judge that is going to read. Yeah, a lot of times them doing their legal work ends up sort of being therapeutic sessions because they get to talk to someone who is not going to judge them, who is actually doing something to help them."
The themes in the interview reveal that storytelling and full disclosure about the traumatic experience of having a parent incarcerated can be therapeutic because the speaker is given a chance to share their own experiences and emotions regarding what occurred during this vulnerable part of their lives. The legal services staff stated that when women who are impacted by the criminal justice system are given the opportunity to speak about their experiences, they find it to be therapeutic and healing, especially because they are met with no judgement. For storytelling to work as an effective strategy, the speaker must have an attentive and encouraging listener (Rosenthal, 2003). Therefore, the professional staff at Homeboy Industries fulfilled this role by creating a judgment-free environment for her participants.
AVOIDANCE BY PROFESSIONAL STAFF
Avoidance has been observed in research on parental incarceration (McGinley & Jones, 2018), as well as in this research. The employee interviewed discussed the prevalence of avoidance, or the staff member's reticence to speak of the client's parental incarceration unless they first broached the topic. When asked if she discusses with the children their experiences and feelings about having an incarcerated parent, the staff member responded, "Me no. Because the kids I usually see are five or under so they do not really understand what's going on. They will think their parents were on vacation or somewhere doing a work thing." The professional staff usually avoids mentioning the incarceration of the children's parents, allowing the children to think that their parents are away on business or vacation. This is a relatively common experience for children as their parents, teachers, and service providers shield the child from the truth of what is really happening with their parents (Burgess-Proctor et al., 2016). This is often due to the parent's shame and guilt of being incarcerated and not wanting to inflict it on their children or not knowing how to address the topic in a way that is understandable for children. However, it is important for these children to grow up and begin to ask questions about their parents. Counter-storytelling can prove beneficial for this population as it speaks directly to these issues and gives voice to them, instead of perpetuating avoidance and secrecy.
EDUCATION
Within all three interviews, education was identified as a form of healing by both staff and the daughters. When asked what is necessary for system-impacted daughters to heal, Maria stated, "I would say education. Something they can be in control of [like] college degrees." She explained that by giving girls who are dealing with their parent's incarceration something they can control, like education, they begin to feel liberated. She recalled, "I would just be at the library, reading books, or learning stuff at school. It would take me to another place, a place where you don't need money."
LONG-TERM FINANCIAL INSTABILITY
Financial instability was another common theme. Both participants shared how their parent's incarceration led to a loss of family income and an increase in financial stress. It is important to acknowledge that in addition to the economic insecurity that exists while a parent is in prison, financial instability continues beyond release. The negative consequences of having a parent incarcerated do not disappear once they return home. Gabriela, who experienced her father's incarceration in middle and high school, reflected on her dad's experience after release: "He did not have a job for five years after that. So my mom was struggling for a long time. I feel like my dad's financial instability affected my mom and our household. So I could not go to college right after high school." Gabriela's father's unemployment and inability to contribute to the family's income affected her educational trajectory by limiting her ability to seek higher education. Both Maria and Gabriela mentioned struggling with food insecurity and paying bills, as well as needing additional assistance while their parents were incarcerated and in the years following.
PRIVILEGE
Another theme that emerged was anger that the daughters had towards others whom they identified as having "privilege," or those who they saw as not having any "real" problems. Through time, however, the anger transformed into a motivation to excel. Maria expressed, "At first, it made me a bit bitter because I would see people who do not have any real problems in life… but I grew out of that." She later explained that her bitterness about her parent's incarceration turned into motivation and increased her personal resilience. The concept of resilience appeared in both of the interviews, when Maria and Gabriela discussed how they came to understand and accept their parent's incarceration and use their adversity as motivation. Previous research has shown that children who experience separation and poverty due to a parent's incarceration experience lasting negative effects. However, through the use of external resources and strength-based factors, children can showcase resiliency (Miller, 2007). Resilience and healing may arise from the practice of storytelling.
BENEFITS OF TESTIMONIOS
By sharing their narratives, Maria and Gabriela were able to open up about what they felt when having to deal with their incarcerated parents. Although there were only two interviews with system-impacted daughters and one staff interview conducted, the data supported the predicted hypotheses. Testimonios are therapeutic for children of incarcerated parents, allow for a nuanced understanding of their experiences, and provide insight for service providers about the specific needs of the people they serve.
LIMITATIONS
One limitation of this research was time: data collection was limited to less than ten weeks. There was not enough time to recruit a larger sample of participants, and it was difficult to build a strong bond with the participants in such a short period. Another limitation was the structure and sensitivity of the interview. The interviews were recorded on a device, which may have made interviewees reluctant about how much information they shared and skeptical about their privacy. A further limitation is that this research was conducted independently, without team support.
FUTURE DIRECTIONS
This research highlighted the positive impacts of daughters of color sharing their testimonios. Because testimonio sharing was shown to be a source of empowerment, this research demonstrates a need for more safe spaces where daughters of color can feel comfortable sharing their testimonios, and in doing so, address their needs and emotions. Safe spaces can include a support group or an after-school program where youth with similar experiences of parental incarceration can get to know each other and feel less alone. These spaces can also provide an outlet for system-impacted children to understand their emotions and process the complexity of their anger. Social services organizations and social work practitioners should strive to create educational programs and support groups for system-impacted children of color.
Through the themes revealed in the interviews, this research points toward the specific needs of system-impacted daughters of color, including financial and educational resources. Without the financial support of an incarcerated parent, system-impacted children should have rental assistance, food pantries, and school supplies available. It is essential to create educational programs that serve this population, as those who experienced parental incarceration have demonstrated that education can be a potential escape. Given the findings of this research, future research with a larger sample size that includes women from other marginalized populations, in particular Black women, is needed to assess the effectiveness of storytelling in healing from parental incarceration.
Program's Community Development and Social Justice (CDSJ), and the mentorship and guidance of Ms. Noemi Rivera-Olmedo and Dr. Alice Ho.
Efficient interspecies transmission of synthetic prions
Prions are comprised solely of PrPSc, the misfolded self-propagating conformation of the cellular protein, PrPC. Synthetic prions are generated in vitro from minimal components and cause bona fide prion disease in animals. It is unknown, however, if synthetic prions can cross the species barrier following interspecies transmission. To investigate this, we inoculated Syrian hamsters with murine synthetic prions. We found that all the animals inoculated with murine synthetic prions developed prion disease characterized by a striking uniformity of clinical onset and signs of disease. Serial intraspecies transmission resulted in a rapid adaptation to hamsters. During the adaptation process, PrPSc electrophoretic migration, glycoform ratios, conformational stability and biological activity as measured by protein misfolding cyclic amplification remained constant. Interestingly, the strain that emerged shares a strikingly similar transmission history, incubation period, clinical course of disease, pathology and biochemical and biological features of PrPSc with 139H, a hamster adapted form of the murine strain 139A. Combined, these data suggest that murine synthetic prions are comprised of bona fide PrPSc with 139A-like strain properties that efficiently crosses the species barrier and rapidly adapts to hamsters resulting in the emergence of a single strain. The efficiency and specificity of interspecies transmission of murine synthetic prions to hamsters, with relevance to brain derived prions, could be a useful model for identification of structure function relationships between PrPSc and PrPC from different species.
Introduction

transmission. To explore these possibilities, we determined the susceptibility of hamsters to infection with murine synthetic prions.
Interspecies transmission of murine synthetic prions to hamsters
Groups of hamsters (n = 5) were intracerebrally (i.c.) inoculated with either uninfected brain homogenate (UN) or with murine wild-type synthetic prions (MSPs). All hamsters i.c. inoculated with UN brain homogenate remained clinically normal for greater than 500 days postinfection (dpi) (S1 Table). All of the hamsters i.c. inoculated with MSPs (n = 5) developed clinical signs of prion infection at 321±3 (days±SEM) dpi (Fig 1 and S1 Table). Western blot confirmed the presence of PrPSc in the brains of all hamsters inoculated with MSPs (S1 Fig). Progression of clinical disease was extended, with 66±3 days between onset of clinical signs and sacrifice (S1 Table). Clinical signs of MSP-infected hamsters (HaMSP) included progressive lethargy and weight gain.
MSPs rapidly adapt to hamsters
Brain material from hamsters inoculated with MSPs that developed clinical signs of prion disease was serially passaged by i.c. inoculation in hamsters. All of the hamsters inoculated (n = 5) for each serial passage developed clinical signs of prion infection with an incubation period of 129±5 (5/5), 113±3 (5/5), 115±3 (5/5) and 122±3 (5/5) dpi for the four serial hamster passages, respectively (Fig 1 and S1 Table). Disease progression remained slow even as the incubation period shortened, with clinical durations of 62±5, 80±3, and 78±3 days for the initial three serial passages, respectively (S1 Table). Clinical signs in all four serial passages were characterized by a statistically significant (p<0.05) weight gain compared to mock-infected, age-matched controls (S1 Table). Onset of statistically significant weight gain occurred after onset of clinical signs until the third hamster passage, where onset of significant weight gain occurred before onset of clinical signs (S1 Table). None of the negative control i.c. mock-infected hamsters (n = 5) included for each of the four serial hamster passages developed clinical signs of prion infection (S1 Table). To investigate if HaMSP could infect hamsters by extraneural routes of infection, groups of hamsters (n = 5) were inoculated with 4th hamster passage MSP brain material by either the intraperitoneal (i.p.), extranasal (e.n.) or per os (p.o.) routes. All of the animals inoculated by either the i.p. or e.n. route developed clinical signs of prion infection, including weight gain, at 224±3 and 296±20 dpi, respectively, while three of the five hamsters p.o. inoculated developed clinical signs of prion infection, including weight gain, at 288±3 dpi (S1 Table). Overall, MSPs adapted to hamsters on the first serial hamster passage, all hamster passages had similar clinical features, and HaMSPs could establish infection by several extraneural routes of infection.
Electrophoretic mobility and glycoform ratio of PrP Sc from HaMSP-infected hamster brain homogenate
Western blot analysis of proteinase K (PK)-digested central nervous system (CNS) homogenate from the initial interspecies transmission and all subsequent serial hamster passages of HaMSPs identified PK-resistant PrP Sc consistent with the clinical diagnosis of prion infection (Fig 2A). PK-resistant PrP Sc was also identified in CNS homogenates from all clinically positive hamsters inoculated with HaMSP via the i.p., e.n., or p.o. routes. The unglycosylated PrP Sc polypeptide from HY-, 139H- or HaMSP-infected hamsters migrated at 21 kilodaltons (kDa), in contrast to PrP Sc from DY-infected hamsters, which migrated at 19 kDa (Fig 2B). The unglycosylated PrP Sc polypeptide from HaMSP-infected hamsters inoculated via the i.p., e.n., or p.o. routes migrated at 21 kDa, similar to the i.c. route of infection (S2B Fig), indicating inoculation route did not affect migration of PrP Sc . Analysis of the ratio of each PrP Sc glycoform from HY-, DY-, 139H-, and HaMSP-infected brain homogenate did not identify significant (p>0.05) differences among the strains tested, with the diglycosylated polypeptide being the most abundant glycoform in all cases (Fig 2C). The diglycosylated polypeptide was also the most abundant glycoform in HaMSP-infected hamsters inoculated via the i.p., e.n., or p.o. route (S2C Fig). Overall, PrP Sc from HaMSP-infected hamsters has similar migration and glycoform ratio properties that are not affected by the route of infection and are consistent with all currently described hamster-adapted prion strains [39,40].
Conformational stability of PrP Sc from hamsters infected with HaMSPs remains constant during adaptation
The average conformational stability [Gdn-HCl] 1/2 value of PrP Sc from CNS of hamsters infected with the brain-derived control strains HY, DY or 139H was 2.31±0.04, 1.92±0.03 and 1.86±0.01 M, respectively (Fig 3 and S2 Table). The average conformational stability [Gdn-HCl] 1/2 value of PrP Sc from CNS of hamsters infected with HaMSP was 1.84±0.02 M at initial interspecies transmission and, in the subsequent serial hamster passages two through five, was 1.96±0.02, 1.92±0.02, 1.94±0.01, and 1.95±0.01 M, respectively (Fig 3 and S2 Table). Conformational stability data are summarized as a violin plot in Fig 3. As the MSP adapted to hamsters, the conformational stability of PrP Sc from HaMSP-infected hamsters did not change. The conformational stability of PrP Sc from hamsters infected with HaMSP via the i.p., e.n., or p.o. routes was similar to that of i.c.-inoculated animals.
Similar PMCA conversion efficiency of PrP Sc during MSP adaptation to hamsters
The average PMCA conversion coefficient (PMCA-CC) of PrP Sc from the first four hamster passages of HaMSP (HaMSP1-4) was 0.53±0.13, 0.28±0.03, 0.62±0.10 and 1.02±0.0, respectively, indicating PMCA conversion efficiency did not change as MSP underwent adaptation in hamsters (Fig 4). The PMCA conversion efficiency of PrP Sc from HaMSP-infected brain homogenate is relatively less efficient at conversion than short incubation period strains (HY and 263K, both with PMCA-CC of 20 [41]), and possesses conversion efficiency in line with other long incubation period strains, such as DY and 139H (PMCA-CC of 0.02 for both [41]). Overall, conversion efficiency of PrP Sc from HaMSP-infected brain homogenate remained stable throughout adaptation of MSP to hamsters and is consistent with other long incubation period brain-derived strains.
HaMSP-infected hamsters exhibit the neuropathological hallmarks of prion disease
Hematoxylin and eosin staining of HaMSP-infected brain sections revealed characteristic spongiosis associated with prion disease (Fig 5B) in contrast to brains from mock-infected animals which lacked spongiosis ( Fig 5A). Immunohistochemistry with the anti-PrP antibody 3F4 determined HaMSP-infected brains contained abnormal prion protein deposition ( Fig 5D) compared to mock-infected animals ( Fig 5C). In contrast to brain sections from mockinfected animals (Fig 5E and 5G), HaMSP-infected brain sections (HaMSP5) also showed astrogliosis ( Fig 5F) and microgliosis (Fig 5H) when the astrocyte marker GFAP and microglia marker Iba-1 were utilized in IHC, respectively. Overall, animals infected with the synthetically-derived HaMSP prions exhibited the neuropathological hallmarks of prion disease, similar to animals infected with brain-derived prions.
HaMSP-infected hamsters clinically resemble 139H-infected hamsters
Upon evaluating the incubation period, clinical signs, and biochemical features of hamsters infected with HaMSP, we observed similarities between HaMSP-infected and 139H-infected hamsters. To explore the extent of these similarities, groups of hamsters (n = 5) were i.c. or i.p. inoculated with either HaMSP- (HaMSP4) or 139H-infected brain homogenate. A group (n = 5) of negative control hamsters was i.c. inoculated with uninfected brain homogenate. The incubation periods of HaMSP (HaMSP5)- and 139H-infected animals were similar for both the i.c. (122±3 dpi (HaMSP) vs 127±3 dpi (139H)) and the i.p. (224±3 dpi (HaMSP) vs 225±3 dpi (139H)) inoculation routes (Fig 6A and 6B). Disease progression was extended in both HaMSP and 139H i.c.-inoculated animals, with a clinical duration of 32±3 and 35±3 days, respectively. For the i.c. inoculation route, both HaMSP- and 139H-infected hamsters weighed significantly (p<0.05) more than uninfected controls starting at 59 dpi, a difference that continued throughout the duration of the incubation period (Fig 6C and 6E). For the i.p. inoculation route, HaMSP- or 139H-infected hamsters weighed significantly (p<0.05) more than uninfected controls beginning at 115 and 129 dpi, respectively, which continued throughout the time course of disease (Fig 6D and 6F). Overall, 139H- and HaMSP-infected hamsters had a strikingly similar clinical course of disease independent of the route of infection.

(Fig 5 caption: Brain sections from mock-infected (UN) and HaMSP (HaMSP5)-infected animals were stained with hematoxylin and eosin (panels A, B) to observe spongiform degeneration. Immunohistochemistry was also performed using the anti-PrP antibody 3F4 (panels C, D), the astrocyte marker GFAP (panels E, F), and the microglial marker Iba-1 (panels G, H) to observe abnormal PrP deposition, astrogliosis, and microgliosis, respectively. The white schematic inset in panel A depicts the brain region imaged in every panel. Scale bar 50 μm; inset scale bar 25 μm.)
(Fig 6 caption: For both 139H- and HaMSP-infected hamsters, onset of statistically significant (p<0.05; ANCOVA model) weight gain compared to uninfected controls occurs before the appearance of clinical signs of prion disease. 139H- and HaMSP-infected hamsters had similar (p>0.05) weights. Panels E and F display the weight data from panels C and D, respectively, as percent change in weight from day of inoculation; panels C and D display total weight. The purple square and green triangle above the graphs in panels C-F indicate onset of clinical signs for 139H- and HaMSP-infected hamsters, respectively. * indicates the dpi (59) at which both 139H and HaMSP i.c.-infected animals begin to weigh statistically significantly more than uninfected controls (panel C). ^ indicates the dpi (115) at which HaMSP i.p.-infected animals begin to weigh statistically significantly more than uninfected controls (panel D). # indicates the dpi (129) at which 139H i.p.-infected animals begin to weigh statistically significantly more than uninfected controls (panel D). The shaded region in panels C-F represents the standard deviation (SD). https://doi.org/10.1371/journal.ppat.1009765.g006)
PrP Sc deposition patterns in HaMSP-infected brains
Previous studies in our lab utilizing anti-prion antibodies whose epitopes span the length of the prion protein identified differences in PrP Sc truncation and deposition among strains [41]. To examine PrP Sc deposition patterns, immunohistochemistry was performed on mock-infected, HaMSP (HaMSP5)- and 139H-infected brain sections utilizing three anti-PrP antibodies (8B4, 3F4, and D18) whose epitopes span the length of the prion protein (Fig 9). Using these antibodies, we failed to detect PrP Sc on negative control mock-infected hamster brain sections (Fig 9A-9C). In HaMSP-infected brain sections, PrP Sc deposits were detected in the neuropil of the vestibular nuclei using all three anti-PrP antibodies (8B4, 3F4, and D18), suggesting these deposits consist of full-length PrP Sc (Fig 9G-9I). In contrast, we failed to detect intraneuronal deposition regardless of the antibody used. We found similar PrP Sc deposition patterns in the vestibular nuclei of 139H-infected brains (Fig 9D-9F), with neuropil deposition detected with all three anti-PrP antibodies. For both HaMSP- and 139H-infected brains, perivascular deposition was prominent. Overall, similar PrP Sc deposition patterns were observed between 139H- and HaMSP-infected hamsters and were comparable to those of other long incubation period strains (e.g., DY) previously investigated by our lab [41].
Pancreatic pathology shared by HaMSP-and 139H-infected hamsters
Previous studies with 139H described gross pancreatic pathology, with red-brown nodules scattered over the surface of the pancreas [42]. In the i.c. passages of 139H and HaMSP, we observed a gross pancreatic pathology similar to that described by Carp [43] (S5 Fig). Similar histopathological changes were noted in previous studies of pancreases from 139H-infected hamsters [42].
Discussion
Prion transmission that results in an incomplete attack rate with extended and variable incubation periods can be due to an inefficient establishment of infection. This is observed during interspecies transmission, where the species barrier effect can result in an extended incubation period and incomplete attack rate [22,[45][46][47][48]. Intraspecies transmission with an inoculum whose titer is near a single LD 50 similarly results in extended and highly variable incubation periods and an incomplete attack rate compared to higher-titer inocula of the same strain [49,50]. Synthetic prions, formed from non-infectious components, when inoculated into hosts with the same PrP amino acid sequence, can cause disease with highly variable, extended incubation periods and incomplete attack rates, or can completely fail to cause disease and instead establish a subclinical infection [5,34,35,51]. In contrast, the results presented here indicate that all of the hamsters inoculated with MSPs developed clinical signs of prion disease, with the onset of clinical signs occurring within 1.6% of the average incubation period. This observation suggests a relatively low species barrier exists between MSPs and hamster PrP C , similar to what has been observed with other brain-derived murine strains that were transmitted to hamsters [17]. We hypothesize that several factors may contribute to this observation.
The murine synthetic prions used in this study may contain bona fide PrP Sc . The incomplete attack rate and extended incubation period of synthetic prions is proposed to be a result of deformed templating. The deformed templating hypothesis posits that synthetic prions do not consist of authentic PrP Sc but, instead, are comprised of a fibrillar PrP conformation that, through an inefficient process of generating folding intermediates, results in the production of atypical PK-resistant PrP (i.e., PrP res ) prior to production of authentic PrP Sc [52,53]. Previous work determined that intraspecies transmission of MSPs to mice results in a 100% attack rate, with the onset of disease at approximately 130 dpi that progresses to a terminal stage by 150±2.2 dpi [6]. The efficient interspecies transmission of MSPs to hamsters reported here is consistent with the previous efficient transmission of MSPs to mice. Overall, these data are inconsistent with the hypothesis that the MSPs undergo an extended, inefficient deformed templating process that generates intermediate conformational variants, and are instead consistent with the hypothesis that MSPs are comprised of authentic PrP Sc .
Interspecies transmission of MSPs to hamsters results in the emergence of a single strain. Mixtures of strains present in an inoculum, or arising as a result of interspecies transmission, can take several serial animal passages before adaptation and emergence of a dominant strain [21,54]. Interference between strains contributes to this lengthy adaptation process [55][56][57][58]. Here we describe that adaptation of MSPs to hamsters occurred rapidly, by the second serial hamster passage (Fig 1 and S1 Table). Throughout all four serial hamster passages, the clinical presentation of disease was characterized by progressive lethargy with weight gain. Additionally, the molecular weight and glycoform ratio of PK-digested PrP Sc remained constant in all of the HaMSP-infected hamsters (Fig 2), and the conformational stability of PrP Sc of HaMSP remained remarkably similar in all of the hamster passages of MSP (Fig 3). This is in contrast to murine [59,60] or hamster [35] synthetic prions, where the conformational stability of PrP Sc decreased, corresponding with a shortening of the incubation period as the synthetic prions adapted to the host. In addition to similarities in the biochemical properties of PrP Sc between all passages of HaMSPs in hamsters, the biological activity of PrP Sc also remained constant during adaptation, as evidenced by PMCA conversion efficiency (Fig 4). Importantly, we did not observe the emergence of a short-incubation, 263K-like strain, which has been reisolated several times from diverse sources, suggesting that it may be a favored conformation of PrP Sc [46,47,54,61,62]. Overall, the extraordinarily consistent clinical and biochemical features throughout the passage history to a new host suggest that transmission of MSPs to hamsters results in the emergence of one strain and that, if other MSP strains were present in the original inoculum, they were not pathogenic for hamsters and did not interfere with the emergence of HaMSP.
Overall, these observations suggest that MSPs consist of a single, or an overwhelmingly predominant, conformer rather than a mixture of prion strains.
The strain that emerges in hamsters inoculated with MSPs is similar to 139H. The species barrier is strain dependent, with different strains in the same host having different zoonotic potential [48]. Previous work indicated that the murine strain 139A could establish infection in hamsters [16]. This hamster-adapted strain of 139A, termed 139H, emerged after three passages and is clinically characterized by a progressive gain in weight [16]. The passage history of 139A in hamsters is strikingly similar to what is reported here for the interspecies transmission of MSP to hamsters (S1 Table and S6 Fig). Studies conducted in parallel comparing hamsters infected with either 139H or hamster-adapted MSPs failed to identify differences in the onset of clinical signs, duration of clinical disease, and the progression of weight gain by two different routes of infection (Fig 6 and S1 Table). Both 139H- and HaMSP-infected hamsters share a similar pancreatic pathology that has not been described in other hamster-adapted prion strains [42,43,63] (S5 Fig). The conformational stability of PrP Sc was similar between 139H and hamster-adapted MSPs, and PMCA conversion efficiency failed to identify differences between them (Figs 3 and 4). The truncated species of PrP Sc identified in the neuropil and neurons of 139H and hamster-adapted MSPs are similar, and the two share similarities in the distribution of spongiform degeneration in the CNS in all but one location examined (Figs 7 and 8). Several possibilities exist to explain this difference. First, 139H and hamster-adapted MSP are similar, but not identical, strains. Complicating this interpretation is the operational definition and subjective categorization of strains. It is unclear what phenotypic differences are required to designate a difference between strains versus natural variation between different isolates of the same strain.
Second, the isolate of 139H used in the current study, during its passage history, may have accumulated substrains that contribute to the subtle differences compared to HaMSP [62,64,65]. Comparison of HaMSP to the original isolation of 139H could address this possibility. Importantly, this neuropathological difference, in combination with the failure of the mock-infected animals to develop clinical disease (S1 Table), the absence of 139A prions in the laboratories where the MSPs were generated and where the hamster bioassay occurred, and the consistency of onset of clinical disease of hamsters inoculated with MSPs are all consistent with HaMSP being caused by infection with MSP and not from contamination. Overall, the vast majority of clinical, biochemical and pathological observations suggest that HaMSP is a reisolation of 139H.
The system described here may serve as a model to better understand the mechanisms of interspecies transmission. The interspecies and intraspecies transmission of MSPs suggests that MSPs are bona fide PrP Sc with 139A-like strain properties. In total, these observations indicate a specificity and efficiency of an interspecies transmission event using a synthetic source of prions that is consistent with what has been observed using brain-derived prions from animals. Meaningful structure-function relationships between PrP Sc and PrP C from different species may now be possible.
Ethics statement
All procedures involving animals were approved by the Creighton University Institutional Animal Care and Use Committee and comply with the Guide for the Care and Use of Laboratory Animals.
Animal bioassay
Male Syrian hamsters were inoculated with 25 μl of murine synthetic prions [6,36,67] or a 10% (wt/vol) brain homogenate by either the intracranial (i.c.), intraperitoneal (i.p.), extranasal (e.n.), or per os (p.o.) inoculation route as previously described [68]. The 139H used in this study was a generous gift from Richard Rubenstein and originated from the 139H isolated by Richard Kimberlin [16]. Hamsters were monitored three times per week for onset of clinical signs of prion disease. Incubation period was calculated as the number of days between inoculation and onset of clinical signs of prion infection. Clinical duration of disease was calculated as the number of days between onset of clinical signs and sacrifice. Individually identified animals were weighed once per week.
Tissue collection and processing
Following euthanasia, tissues were collected for use in biochemical testing and histology. Brains were cut mid-sagittally, with one half collected for biochemical testing and the other half for histology, or were collected whole for histology, in which case spinal cord (C1-C3) was collected for biochemistry. Tissue collected for biochemical testing was immediately placed on dry ice and then stored at -80˚C. Before use in analysis, CNS tissue was homogenized to 10% w/v (100 μg/μl) in Dulbecco's Phosphate Buffered Saline (DPBS; Corning, Manassas, VA) and stored at -80˚C. Tissue collected for histological purposes was immersion fixed with paraformaldehyde-lysine-periodate (PLP) for 24 hours at RT, placed in cassettes, and then stored in 70% ethanol until paraffin processing with a Tissue-Tek VIP 6 vacuum infiltration processor (Sakura Finetek USA, Torrance, CA). Thin (7 μm) sections of tissue for histology and immunohistochemistry were mounted on 25 x 75 mm Superfrost Plus glass slides (Fisher Scientific, Pittsburgh, PA) and dried for 48 hours at 37˚C.
SDS-PAGE and western blot
Detection of PrP Sc by Western blot was performed as previously described [69]. Briefly, 5% w/v brain homogenate was incubated with proteinase K (PK; 100 μg/mL stock; Roche Diagnostics, Mannheim, Germany) for 1 hour at 37˚C with shaking. To halt PK digestion, an equal volume of 2x sample buffer (4% w/v SDS, 2% v/v β-mercaptoethanol, 40% v/v glycerol, 0.004% w/v Bromophenol blue, and 0.5 M Tris buffer, pH 6.8) was added and the samples were boiled at 100˚C for 10 minutes. Samples were size fractionated on a 4-12% Bis-Tris NuPage polyacrylamide gel (Invitrogen, Carlsbad, CA) and transferred to a polyvinylidene difluoride (PVDF) membrane (Immobilon P; Millipore Sigma, MA). The membrane was blocked with 5% w/v nonfat dry milk in 0.05% v/v Tween Tris-buffered saline (TTBS; BioRad Laboratories, Hercules, CA) for 30 minutes, and the hamster prion protein was detected with the mouse monoclonal anti-PrP antibody 3F4 (final concentration of 0.1 μg/mL; EMD Millipore, Billerica, MA). Western blots were developed using Pierce SuperSignal West Femto maximum-sensitivity substrate per the manufacturer's instructions (Pierce, Rockford, IL) and imaged on a Li-Cor Odyssey Fc Imager (Li-Cor, Lincoln, NE). Migration analysis of the unglycosylated PrP Sc polypeptide was performed using NIH ImageJ Fiji (NIH, USA) lane analysis software.
Conformational stability assay
The PrP Sc conformational stability assay was performed as described previously, with the following modifications [70]. Briefly, a guanidine hydrochloride dilution series was prepared by diluting 8 M guanidine hydrochloride (Sigma-Aldrich, St. Louis, MO) into DPBS (Corning, Manassas, VA) from 0 M to 5.5 M (in 0.5 M increments). Brain homogenate was diluted 1:10 (spinal cord homogenate 1:5) from 10% w/v brain homogenate (100 μg/μl to 10 μg/μl) and incubated in guanidine hydrochloride (1:3) with shaking for one hour at room temperature. The guanidine hydrochloride concentration was adjusted to 0.5 M for all samples prior to plating on a 96-well filter plate with a PVDF membrane bottom (Merck Millipore, Co. Cork, Ireland). Samples were dried at room temperature for one hour, followed by digestion with PK (5 μg/mL; 1:100 PK:BH; Roche Diagnostics, Mannheim, Germany) at 37˚C for one hour. PK digestion was terminated by incubation with phenylmethane sulfonyl fluoride (PMSF; MP Biomedicals, LLC, Solon, OH) for 20 minutes at room temperature. The samples were then blocked for endogenous peroxidases (0.3% H 2 O 2 in methanol) and non-specific binding (5% w/v nonfat dry milk in TTBS [BioRad Laboratories, Hercules, CA]). Hamster prion protein was detected using the mouse monoclonal anti-PrP antibody 3F4 (final concentration of 0.1 μg/mL; EMD Millipore, Billerica, MA). The membrane was developed using the Pierce SuperSignal West Femto system (Pierce, Rockford, IL) and imaged on a Li-Cor Odyssey Fc Imager (Li-Cor, Lincoln, NE). Signal intensity was analyzed using Li-Cor Image Studio Software v.1.0.36 (Lincoln, NE) and denaturation curves were generated using GraphPad Prism (GraphPad Software, San Diego, CA). The point where half of PrP Sc is in the native folded state and half is in a denatured state (i.e., [Gdn-HCl] 1/2 ) was determined by calculating the log IC 50 of the non-linear curve fitted to the normalized data (GraphPad Software, San Diego, CA).
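The [Gdn-HCl] 1/2 estimation above is done in GraphPad Prism by a nonlinear log IC 50 fit. A minimal stand-in, assuming a two-parameter logistic denaturation curve fitted by grid search, is sketched below; the data points are invented for illustration, not taken from the study:

```python
# Estimate [Gdn-HCl]1/2 from a normalized denaturation curve by fitting a
# two-parameter logistic f(x) = 1 / (1 + exp(k * (x - half))) with a grid
# search (a stand-in for the nonlinear log IC50 fit done in GraphPad Prism).
from math import exp

def logistic(x, half, k):
    return 1.0 / (1.0 + exp(k * (x - half)))

def fit_half(concs, signal):
    """Return (half, k) minimizing squared error over a coarse grid."""
    best = (None, None, float("inf"))
    for half10 in range(0, 56):          # half in 0.0 .. 5.5 M
        for k10 in range(5, 51):         # slope k in 0.5 .. 5.0
            half, k = half10 / 10, k10 / 10
            sse = sum((logistic(x, half, k) - y) ** 2
                      for x, y in zip(concs, signal))
            if sse < best[2]:
                best = (half, k, sse)
    return best[0], best[1]

# Hypothetical normalized PK-resistant PrP signal over the 0-5.5 M series
concs  = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]
signal = [1.00, 0.98, 0.90, 0.70, 0.45, 0.22, 0.08, 0.03, 0.01]
half, k = fit_half(concs, signal)
print(f"[Gdn-HCl]1/2 ~ {half} M")
```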
Protein misfolding cyclic amplification
Protein misfolding cyclic amplification was performed as previously described [57]. Briefly, 10% w/v brain homogenate (500 μg eq.) was 2-fold serially diluted in DPBS. Diluted samples were further diluted 1:20 into uninfected Syrian hamster brain homogenate in PMCA conversion buffer (containing a complete protease inhibitor tablet [Roche Diagnostics, Mannheim, Germany]), and four 100 μl aliquots were made per dilution (three replicates and one frozen, unsonicated control). Samples were loaded into a Misonix 3000 sonicator (Farmingdale, NY) and subjected to one round of PMCA (cycles of 5 seconds of sonication and 9 minutes 55 seconds of incubation, for 24 hours). Following PMCA, PrP Sc was detected and quantified via Western blot as described above. The PMCA conversion coefficient is calculated as the reciprocal of the concentration of the highest dilution of prion-infected brain homogenate that resulted in detectable amplified PrP Sc by Western blot following one round of PMCA.
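The PMCA conversion coefficient defined above, the reciprocal of the concentration of the highest dilution that still yields detectable amplified PrP Sc , can be sketched as follows; the detection outcomes below are hypothetical:

```python
# PMCA conversion coefficient: the reciprocal of the lowest brain-homogenate
# concentration (relative to undiluted) whose one-round PMCA product is still
# detectable by Western blot. The detection calls below are hypothetical.
def pmca_cc(detected):
    """detected: list of (relative_concentration, was_detected) pairs,
    ordered from most to least concentrated."""
    detectable = [conc for conc, hit in detected if hit]
    if not detectable:
        return 0.0
    return 1.0 / min(detectable)

# Hypothetical 2-fold dilution series: detection fails past the 1/32 dilution
series = [(1 / 2**n, n <= 5) for n in range(1, 10)]
print(pmca_cc(series))  # → 32.0
```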
Neuropathology analysis
Tissue analyzed for the lesion profile first underwent staining with hematoxylin and eosin. Briefly, slides were exposed to xylene (Fisher Scientific, Pittsburgh, PA), rehydrated using an alcohol series (100-70% vol/vol ethanol; Decon Labs Inc., King of Prussia, PA), and rinsed in water. Slides were next stained with hematoxylin (Thermo Fisher Scientific, Waltham, MA), followed by exposure to clarifier (Thermo Fisher Scientific, Waltham, MA) and bluing reagent (Decon Labs Inc., King of Prussia, PA), and rinsed in xylene before being cover slipped (Slip-Rite cover glass, 24x50; Fisher Scientific, Pittsburgh, PA). Uninfected and 139H-infected H&E-stained brain sections served as negative and positive controls, respectively. Images of brain sections were captured using an Infinity 2 microscope camera (Teledyne Lumenera, Ottawa, ON) attached to a Nikon Eclipse 80i compound microscope (Nikon Instruments, Melville, NY) with ImageJ software, and were coded for blind evaluation. Five anatomical locations (medial septum, red nucleus, vestibular nuclei, granule cell layer of the cerebellum, and deep cerebellar nuclei) were assessed for severity of spongiosis and given a vacuolation score ranging from 0 (no vacuoles) to 5 (confluent vacuoles) [71]. Brain sections from three different animals were assessed per strain (UN, 139H, MSP). Three reviewers evaluated the blinded slides and their scores were averaged for each anatomical location and strain.
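Averaging the blinded vacuolation scores per region, as described above, is straightforward; the scores below are invented for illustration:

```python
# Average blinded vacuolation scores (0-5) across reviewers and animals for
# each of the five brain regions scored in the lesion profile. Scores below
# are invented for illustration.
def lesion_profile(scores):
    """scores: {region: [score per (reviewer, animal) observation]} ->
    {region: mean score}"""
    return {region: sum(v) / len(v) for region, v in scores.items()}

scores = {
    "medial septum":          [1, 2, 1, 2, 2, 1],
    "red nucleus":            [3, 3, 4, 3, 4, 3],
    "vestibular nuclei":      [4, 5, 4, 4, 5, 4],
    "cerebellar granule":     [2, 2, 3, 2, 2, 2],
    "deep cerebellar nuclei": [3, 4, 3, 3, 4, 3],
}
profile = lesion_profile(scores)
print(round(profile["vestibular nuclei"], 2))  # → 4.33
```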
Statistical analysis
Differences in total body weight in grams between experimental groups were determined using separate ANCOVA models. The assumption of homogeneity of regression between baseline weight and group was tested prior to estimation of the ANCOVA models. The Tukey adjustment was used for post-hoc testing to determine the significance of differences at each weight point (p<0.05). Differences among groups in biochemical properties such as conformational stability and PMCA conversion efficiency were determined using one-way ANOVA (p<0.05). Differences between lesion profile scores were determined using Student's t-test (p<0.05).

(S5 Fig caption: The pancreases from 139H- or HaMSP-infected hamsters exhibited small red-brown nodules scattered over the surface (panels B, C) compared to mock-infected hamsters (panel A). Islets of Langerhans in pancreases of 139H- or HaMSP-infected hamsters appear enlarged (panels E, F) compared to UN hamsters (panel D), and were characterized by hemorrhages termed blood vessel cores (arrows). These findings are consistent with pancreases from 139H-infected hamsters as described by Carp, Kim, and Callahan in 1990 [42]. Scale bars are 50 μm.)

(Supplementary caption [16]: Biologically cloned 139A was passaged once in C57BL mice (118±2; n = 7) before transmission to hamsters (5% w/v inoculum). The murine synthetic prions and 139A were passaged via the i.c. inoculation route. Passage number refers to passage number in hamsters. (a) Days post inoculation±SEM. (b) Number of animals that developed clinical signs of prion disease / total number of animals inoculated. (c) Number of animals that developed clinical signs of prion disease.)
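The Student's t-test used for lesion profile scores can be sketched as a pooled two-sample t statistic (significance in the paper comes from standard statistical software; the scores below are invented):

```python
# Pooled two-sample Student's t statistic for comparing mean lesion scores
# between groups. We report t and degrees of freedom; the paper obtains
# p-values from standard software.
from statistics import mean, variance
from math import sqrt

def students_t(a, b):
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    t = (mean(a) - mean(b)) / sqrt(sp2 * (1 / na + 1 / nb))
    return t, na + nb - 2

# Hypothetical vacuolation scores for one region, strain A vs strain B
t, df = students_t([4, 5, 4, 5, 4], [2, 3, 2, 2, 3])
print(round(abs(t), 3), df)  # → 5.774 8
```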
"year": 2021,
"sha1": "554f42b7da59a3fa4461cba98e5e151865f55bf4",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plospathogens/article/file?id=10.1371/journal.ppat.1009765&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3ef0c1321d898036ef2cfcd8eaff12832a8bbffb",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Optimal Monetary Policy and Asset Prices: the case of Colombia
The unfolding of the 2007 world financial and economic crisis has highlighted the vulnerability of real economic activity to strong fluctuations in asset prices. What is the optimal monetary policy in an economy, like the Colombian one, that is exposed to swings in asset prices? What is the implication, in terms of central bank losses, of following a standard simple rule instead of the optimal monetary policy? To answer these questions we use a Dynamic Stochastic General Equilibrium (DSGE) model with physical capital and sticky wages for the Colombian economy and derive the optimal monetary policy. Then, we explore the dynamic effects of news about a future technology improvement which turns out ex post to be overoptimistic, under the optimal policy rule and alternative specifications of simple rules and definitions of the output gap.
Introduction
During the last couple of decades, many monetary authorities around the world have achieved the goal of a low and stable inflation rate. However, this price stability has not come hand-in-hand with greater asset-price stability. Borio and Filardo (2003), among others, document the emergence of asset-price, credit, and investment booms and busts, which have become a more important source of macroeconomic instability in both developed and developing countries. Financial imbalances are of great concern because, when they unwind, the real economy is exposed to a substantial economic downturn and very frequently to recession. For example, many economists attribute at least some part of the 1990 recession in the United States to the preceding decline in commercial real estate prices (Bernanke and Gertler, 1999).
The Colombian economy, like many other developing economies, has experienced very strong asset-price and output fluctuations. Figure 1 displays the cyclical component of economic activity and asset prices for the Colombian economy during 1970-2005 (asset prices correspond to a weighted average of equity prices and real estate prices). Two boom-bust episodes are evident, the first during the eighties and the second during the nineties. Since 2004 there was a boom phase that has been followed by an economic downturn triggered by the 2007 global financial crisis. The close correlation between asset-price cycles and the output cycle, and the evidence of a financial accelerator mechanism in the Colombian economy found by López, Prada and Rodriguez (2008), raise the question of whether the nature of monetary policy is able to explain the behavior of both variables. Would the boom-bust cycles be smoother if the monetary authority incorporated a response to asset prices in its simple monetary policy rule? How costly, in terms of the central bank loss function, is a monetary policy that reacts only to inflation and the output gap instead of also taking asset prices into account?
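The question of whether to add an asset-price response to a simple rule can be illustrated with a Taylor-type rule. The functional form and all coefficients below are illustrative assumptions, not the paper's estimated rule for Colombia:

```python
# A Taylor-type rule i_t = r* + pi_t + a_pi*(pi_t - pi*) + a_y*y_gap,
# optionally augmented with a response a_q to an asset-price gap q_gap.
# All coefficients are illustrative, not estimates for Colombia.
def policy_rate(pi, y_gap, q_gap=0.0, r_star=0.02, pi_star=0.03,
                a_pi=0.5, a_y=0.5, a_q=0.0):
    return r_star + pi + a_pi * (pi - pi_star) + a_y * y_gap + a_q * q_gap

# Boom: inflation on target, small output gap, 10% asset-price gap
base      = policy_rate(pi=0.03, y_gap=0.01)
augmented = policy_rate(pi=0.03, y_gap=0.01, q_gap=0.10, a_q=0.1)
print(round(base, 4), round(augmented, 4))  # → 0.055 0.065
```

With a positive asset-price coefficient the rule tightens during the boom, which is exactly the trade-off the loss-function comparison in this paper is meant to evaluate.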
To answer these questions, we set up a model for the Colombian economy where, as in Christiano, Ilut, Motto and Rostagno (2008), the boom phase is triggered by a signal that leads agents to rationally expect an improvement in technology in the future; the signal turns out to be false, and the bust phase of the cycle begins when people find this out. We explore the effects of this news about a future technology improvement, which turns out ex post to be overoptimistic, under the optimal policy rule and alternative specifications of simple rules.
By optimal monetary policy we mean policy that minimizes an intertemporal loss 1 asset prices correspond to a weighted average of equity prices and real state prices function under commitment.The intertemporal loss function is a discounted sum of expected future period losses.We choose two alternative welfare criteria.The first is a quadratic period loss function that corresponds to flexible inflation targeting and is the weighted sum of two terms: the squared inflation gap between inflation and the inflation target and the squared output gap between output and potential output.The second measure of loss that we consider is a utility-based loss function.Svenssson et al. (2008) a key issue for a flexible inflation targeting central bank is which measure of output gap should try to stabilize.We report results from three alternative concepts of gaps used in the loss functions and the simple policy rules.One concept is deviations of output and asset prices from the hypothetical level that would exist if the economy would have had flexible prices and wages.The second is deviations from steady-state values.The third concept (used only in the simple rules) corresponds to growth rates.
The model we use is a DSGE model for a small open economy like Colombia. The model distinguishes between households and entrepreneurs. Households consume and work, while entrepreneurs produce a homogeneous intermediate good using capital bought from capital producers and labor supplied by households. Entrepreneurs take bank loans and face borrowing constraints tied to the value of their collateral. In addition, there are banks that offer two types of financial assets to agents, savings and loans; retailers who set the final price of output goods; and workers who supply their differentiated labor services through a union, which sets wages to maximize members' utility, generating a nominal rigidity in wages à la Calvo. There is also a foreign sector which provides assets at the foreign interest rate, which is positively related to the net foreign asset position of the domestic economy. Finally, there are capital producers who transform output goods into capital goods, a government, and a central bank which conducts monetary policy.
The remainder of the paper is organized as follows. Section 2 describes the model. Section 3 presents the optimal policy problem, the different simple rules and the alternative results for a boom-bust episode. Section 4 concludes.
Consumption and saving decisions
The domestic economy is inhabited by a continuum of households indexed by i ∈ [0, 1]. The representative agent i maximizes the following utility function, where c^{pc}_t(i) is per-capita consumption, h^{pc}_t(i) is per-capita hours worked, and l^{pc}_t(i) is per-capita leisure time, which satisfies l^{pc}_t(i) = l − h^{pc}_t(i), with l > 0 being the total endowment of time. N_t is total population, which follows a stochastic process.
The discounted utility is given by the expression above, with σ > 0, ς > 0 and φ > 0. Parameter ς is the inverse elasticity of labor supply with respect to real wages. Parameter σ is the constant relative risk aversion coefficient. Preferences display habit formation in consumption, governed by parameter φ. χ^{u,h}_t are preference shocks that shift consumption demand and leisure, and A_t represents productivity, which follows a process in which ε_{A,t} is a white noise variable. Following Prada (2008), we assume that there exist transaction costs in the economy: the exchange process requires real resources. The more transactions, the higher the transaction cost; and the higher the deposits held by households, the lower the transaction cost, where v_t(i) is deposit velocity and d^h_{t−1}(i) are the deposits held by household i.
The cost per unit of transaction is given by ϑ(v_t(i)), an increasing, positive, twice differentiable, convex function; in particular, we assume a quadratic specification. Households' decisions have to satisfy the following budget constraint, where a_t(i) represents Arrow-Debreu assets with price p^a_t(i), d^h_t(i) deposits, τ_t lump-sum taxes, w_t the real wage, tr_t foreign transfers, Π_t total profits from ownership of firms and banks, i^d_{t−1} the interest rate on bank deposits and π^c_t the CPI inflation rate. Households choose consumption and the composition of their portfolios by maximizing (1) subject to (4). Given that we are assuming the existence of Arrow-Debreu assets, consumption is equalized across households and the first-order conditions can be expressed in terms of effective workers: (5) along with (4), where λ_t is the Lagrange multiplier on the budget constraint.
Labor supply and wage setting
Following Erceg et al. (2000), we assume that a continuum of monopolistically competitive households supply differentiated labor services to the production sector as imperfect substitutes for the labor services of other households. A set of perfectly competitive labor-service assemblers combines households' labor hours in the same proportions as firms would choose. The optimal composition of this labor-service unit is obtained by minimizing its cost, given the different wages set by different households. The demand for each differentiated variety of labor is given by the expression above, where w_t is an aggregate wage index and θ_w > 0 is the elasticity of substitution among labor varieties.
We assume that wage setting is subject to a nominal rigidity à la Calvo (1983).
The duration of each wage contract is randomly determined: in any given period, the household is allowed to reset its wage contract with probability (1 − ε_w); with probability ε_w the household is not allowed to reset it. We assume there is an updating rule for all those households that cannot re-optimize their wages. In particular, if a household cannot re-optimize during i periods between t and t + i, then its wage at t + i is given by the rule above, where n ∈ N is the indexation horizon, γ_k ≥ 0 is the weight assigned to the inflation rate k periods earlier and 1 − Σ_{m=1}^{n} γ_m ≥ 0 is the weight assigned to the inflation target set by the monetary authority, π. This adjustment rule implies that workers who do not optimally reset their wages update them by using a geometric weighted average of past CPI inflation and the inflation target set by the Central Bank, π.
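The geometric-average indexation rule for a non-reoptimized wage can be sketched as follows. This is an illustrative helper, not the paper's code; the weights `gammas` and the horizon are free parameters here, not the calibrated values.

```python
def indexed_wage(w_prev, past_infl, gammas, pi_target):
    """Update a non-reoptimized nominal wage by a geometric weighted
    average of past gross CPI inflation rates (weights gammas, with
    past_infl[k-1] the gross inflation k periods earlier) and the
    gross inflation target pi_target (residual weight 1 - sum(gammas))."""
    resid = 1.0 - sum(gammas)            # weight assigned to the target
    assert resid >= 0.0, "indexation weights must sum to at most one"
    factor = pi_target ** resid
    for g, pi in zip(gammas, past_infl):
        factor *= pi ** g
    return w_prev * factor
```

With all γ_k = 0 the rule collapses to full indexation to the target, `w_prev * pi_target`, which is the limiting case described in the text.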
In any period t in which a household is able to reset its wage contract, it solves the problem of maximizing its objective subject to the labor demand (8), the updating rule for the nominal wage (9) and the budget constraint (4).
Entrepreneurs
Entrepreneurs purchase capital in each period, k_{t−1}, and use it in combination with hired labor, h_t, to produce the intermediate product, q^s_t, following a constant-returns-to-scale technology in effective-labor units, A_tN_t. The intermediate product is sold in a competitive market at the wholesale price p^{qs}_t. Following Christiano et al. (2008), we assume that technology, χ^{qs}_t, follows the exogenous process given above, where ε_t and e_t are uncorrelated over time and with each other. This simple process allows us to incorporate a boom-bust episode in the model. Throughout the analysis, we consider the following impulse. Up until period 1, the economy is in steady state. In period t = 1, a signal occurs which suggests ln(χ^{qs}_t) will be high in period 1 + p. But, when period 1 + p occurs, the expected rise in technology in fact does not happen.
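This news-driven impulse can be simulated with a minimal sketch. Here we assume, for illustration only, the functional form ln χ_t = ρ ln χ_{t−1} + ε_{t−p} + e_t, with a signal ε_1 > 0 observed in period 1 and an offsetting surprise e_{1+p} = −ε_1 when the improvement fails to materialize; the parameter values are placeholders.

```python
def simulate_news_shock(rho=0.6, signal=0.01, p=5, T=20, realized=False):
    """Path of ln(chi_t) when a signal arrives in period 1 announcing a
    technology jump in period 1+p. If realized=False, the offsetting
    surprise e_{1+p} = -eps_1 cancels the jump (the 'false' signal)."""
    lnchi = [0.0] * (T + 1)
    for t in range(1, T + 1):
        innov = signal if t == 1 + p else 0.0   # announced innovation
        if t == 1 + p and not realized:
            innov += -signal                    # surprise cancels it
        lnchi[t] = rho * lnchi[t - 1] + innov
    return lnchi
```

Under the false signal the fundamental never moves, yet between periods 1 and 1 + p agents expect the `realized=True` path; this gap between expectation and realization is what drives the boom and the subsequent bust.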
The capital stock depreciates at the rate δ > 0. Following Gerali et al. (2008), we assume that to finance capital purchases entrepreneurs have access to loan contracts offered by banks. The amount of resources that banks are willing to lend to entrepreneurs, z^f_t, is constrained by the value of their collateral, which is given by their holdings of physical capital. The borrowing constraint is given above, where m^f_t is the loan-to-value ratio and i^{zf}_t is the interest rate paid on loans z^f_t. The entrepreneur's budget constraint follows, where Π^{qs}_t represents the flow of profits that will be transferred to households. Given labor demand, the representative firm purchases k^s_{t+1} units of capital at price p^k_t to maximize its expected sum of profit flows, using Λ as the appropriate discount factor. The optimality conditions are given below.
Retailers and Price Setting
Retailers buy output from entrepreneurs and slightly differentiate it at no resource cost. The differentiation of output gives the retailers some market power. Households and firms then purchase CES aggregates of these retail domestic goods. Retailers are introduced to motivate sticky prices, and we follow Calvo (1983) in introducing price inertia. Each retailer faces a demand for variety j given by q_t(j) = χ^{qd}_t (p^q_t(j)/p^{qd}_t)^{−θ_q} q^d_t, where χ^{qd}_t is an exogenous technological factor, p^{qd}_t is the price of the aggregate basket q^d_t and θ_q is the price elasticity of demand for variety j. This parameter also defines the flexible-price equilibrium markup charged by firms.
Following Calvo (1983), we assume that only a fraction (1 − ε_q) of sellers are allowed to reset their prices. In particular, if a firm cannot set an optimal price, it follows a non-optimal updating rule, where n ∈ N is the indexation horizon, γ_k ≥ 0 is the weight assigned to the inflation rate k periods earlier and 1 − Σ_{m=1}^{n} γ_m ≥ 0 is the weight assigned to the inflation target set by the monetary authority, π.
If the firm receives a signal to optimally adjust its price, it chooses p^q_t(j) to maximize its expected discounted profits subject to the demand for variety j, (16), using λ_t as the appropriate discount factor.
Capital Producers
Capital producers purchase consumption goods as a material input, x_t, and combine it with the existing capital stock (in effective-labor units, A_tN_t) to produce new capital. We assume that capital producers are subject to quadratic capital adjustment costs. The price of capital is determined by a q-theory of investment.
The aggregate capital stock evolves according to (18), where χ^k_t is the marginal efficiency of investment, following Greenwood et al. (1988). Capital producers' optimization problem, in real terms, consists of choosing the quantity of investment to maximize profits subject to (18). The first-order condition with respect to k_{t−1} is given below.
Banks
The banking industry is assumed to be perfectly competitive. Since economic agents require deposits and credit, banks produce financial services through a production technology that uses real resources from the economy as an input. Following Edwards and Vegh (1997), the production technology for banks is given by the cost function above, which is positive for z^f_t, d_t > 0, convex, continuously differentiable, increasing in all arguments and homogeneous of degree one. ξ_t represents an inverse measure of the total productivity of the banking intermediation sector. It is a cost scale factor exclusive to the banking sector that follows the process ln(ξ_t) = (1 − ρ_ξ) ln(ξ) + ρ_ξ ln(ξ_{t−1}) + ε_{ξ,t}, where ξ is the expected value of the cost scale factor, ρ_ξ ∈ [0, 1) and ε_{ξ,t} is a white noise variable with variance σ²_ξ. The policy of the Central Bank and the banking sector are related through the reserve requirement, a fixed proportion τ^d_t > 0 of total deposits, so that bank reserves, rb_t, satisfy rb_t = τ^d_t d_t. Banks can borrow from the central bank at a nominal rate i^{bc}_t. The net debt of a private bank with the central bank is b_t. Banks also finance themselves through foreign debt f_t, on which they pay the interest rate i^f_t set in the foreign market. It is assumed that banks are the only private agents that have access to foreign resources.
The representative bank seeks to maximize the discounted sum of profits Π^b_t. The bank's resource constraint is given above. The bank's income comes from credit interest payments at the nominal rate i^{zf}_{t−1}, among other items. The foreign interest rate satisfies the relation above, where i* is the risk-free foreign interest rate, χ^{if}_t is a foreign interest rate shock, a^{cb}_t are the foreign assets held by the central bank, F^E is the steady-state value of net foreign assets and Ω_u > 0 is a scale parameter. We close the model in this way because otherwise net foreign indebtedness may be non-stationary, complicating the analysis of local dynamics. In the steady state, (f_t − a^{bc}_t) equals F^E.
Central Bank
The monetary authority sets the nominal interest rate prevailing in the interbank market, i^{bc}_t, following a Taylor-type rule, where ρ_π and ρ_y are the weights assigned to inflation and output stabilization, respectively, ε_{i,t} is an exogenous shock to monetary policy, and y^{flex}_t represents the hypothetical output level that would prevail if the economy had flexible prices and wages.
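A rule of this family can be sketched as below. The exact placement of interest-rate smoothing is an assumption of this illustration (the rule's display equation is not reproduced in the text); the default coefficients are the baseline values reported in the calibration section.

```python
def taylor_rate(i_prev, pi_gap, y_gap, i_ss=0.0, rho_i=0.75,
                rho_pi=1.25, rho_y=0.50, eps_i=0.0):
    """Smoothed Taylor-type rule: partial adjustment of the policy rate
    toward a target that responds to the inflation gap and output gap,
    plus an exogenous monetary policy shock eps_i."""
    target = i_ss + rho_pi * pi_gap + rho_y * y_gap
    return rho_i * i_prev + (1.0 - rho_i) * target + eps_i
```

Setting `rho_i=0` recovers a static rule; raising `rho_pi` from 1.25 to 2.25 gives the "aggressive" stance compared in Section 3.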
The resource constraint of the Central Bank is given above, where a^{bc}_t is the exogenous stock of foreign net assets and Π^{bc}_t are transfers to the government.
Government
The government obtains resources from lump-sum taxes τ_t, net transfers from the central bank, transaction costs and capital adjustment costs, and uses them to finance public expenditure g_t, which follows the process ln(g_t) = (1 − ρ_g) ln(g) + ρ_g ln(g_{t−1}) + ε_{g,t}, where g is the expected value of government expenditure, ρ_g ∈ (0, 1) and ε_{g,t} is white noise with variance σ²_g.
National accounts
Real GDP, y_t, is the final domestic income of the households, from which we can define the trade balance, where tr_t represents foreign transfers.
Model Parametrization
The model is calibrated to match key steady-state ratios for Colombia. A period in the model corresponds to one quarter.
Long-run parameters
Following Mahadeva and Parra (2008), the annualized foreign steady-state real interest rate faced by the Colombian economy is set at 3.42%. This implies a discount factor of β = 0.999. Following Prada (2008), the value of n is set to match the average annual growth rate of the total population in Colombia (1.22%), and the parameter a is calibrated to obtain an annual growth rate of labour-augmenting productivity of 1.5%. A value of σ = 2 is used as the constant relative risk aversion coefficient, following Arias (2000).
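The discount factor can be reproduced as a back-of-the-envelope check from the trend-adjusted Euler equation, β = (1 + g_c)^σ / (1 + r*), in quarterly terms, assuming (as a sketch) that per-capita consumption grows at the labour-augmenting productivity rate of 1.5% per year:

```python
r_q = 1.0342 ** 0.25 - 1.0   # quarterly foreign real rate (3.42% p.a.)
g_q = 1.015 ** 0.25 - 1.0    # quarterly per-capita growth (1.5% p.a.)
sigma = 2.0                  # constant relative risk aversion
beta = (1.0 + g_q) ** sigma / (1.0 + r_q)
print(round(beta, 3))        # -> 0.999
```

This matches the β = 0.999 reported in the text.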
The steady-state foreign annual inflation rate is set at 2% and the domestic annual rate is set at 3%, the long-run target of the central bank in Colombia.The parameter ς is set at 3 to obtain a Frisch elasticity of 0.33, near the value found by Prada and Rojas (2009).
The model is calibrated to produce a steady-state value of h = 0.294, the share of time dedicated to the labour market. This implies a value of χ_h = 146.90. We assume that the banking costs are quadratic, and set ν = 2. To match the average annualized real lending rate (7.92%) and the average annualized real deposit rate (2.01%) reported in Prada (2008), we set ν_d = 6.284 × 10⁻⁵ and ν_z = 1.324 × 10⁻⁴.
The steady-state level of real GDP is normalized to unity. This is achieved by setting χ^{qs} = 0.524. The exogenous public expenditure parameter g is calibrated to obtain a steady-state ratio of government expenditure to GDP of 0.178, equal to the average of that ratio over the period 1994:1–2007:4.
Following Mahadeva and Parra (2008), the ratio of total foreign net assets to GDP is set to 1.20, and this implies a value of 1.20 for the parameter F^E. The average ratio of net foreign assets of the central bank to GDP (net foreign assets, monetary sectorization, Banco de la República) is 0.454 over the period 2005:1–2007:4, and the parameter a^{cb} is set to match this ratio.
The average ratio of net foreign transfers to GDP is 0.0351, and the parameter tr is set to this value. We assume quadratic transaction costs and set ϑ₁ = 2. The parameter ϑ₀ is calibrated to match the average ratio to GDP of deposits which generate costs to the banks (1.20). This implies a value of ϑ₀ = 0.0126. The parameter α = 0.456 is calibrated to match the average ratio of investment to GDP (0.215) reported in Prada (2008). The steady-state leverage ratio m^f is calibrated to match the average ratio of credit to GDP (2.10). This implies m^f = 0.33. Following Prada (2008), τ^d is set at 0.062 and a^{cb} at 0.454.
Short run and additional parameters
Following Arango et al. (1998), the markup over production marginal cost is set at 25%, which implies a value of θ_q = 5. The same markup is assumed for the wage-setting process. Following Bonaldi et al. (2009), the Calvo parameters that measure the degree of price stickiness are selected such that, on average, the final good price is adjusted once a year (ε_q = 0.75) and the wage rate is adjusted once every four months (ε_w = 0.25). The elasticity of substitution between labour and capital is set at ρ = 0.84, as in Bonaldi et al. (2009).
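The mapping between the markup and the elasticity of substitution follows from markup = θ/(θ − 1), which can be inverted in one line:

```python
def elasticity_from_markup(markup):
    """Invert markup = theta / (theta - 1) for the elasticity theta."""
    return markup / (markup - 1.0)

theta_q = elasticity_from_markup(1.25)   # a 25% gross markup
```

A 25% markup (gross markup 1.25) indeed gives θ_q = 5, as stated in the text.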
In the baseline calibration it is assumed that there is no monopolistic competition in the financial system, because this assumption is not needed to explain the spread between interest rates. Hence θ_d → ∞ and θ_z → ∞. The habit-persistence parameter φ is set at 0.5. The investment adjustment cost parameter Ψ_X is set at 0.7. The persistence of the exogenous processes is 0.6. The parameters of the policy rule are standard: ρ_i = 0.75, ρ_π = 1.25 and ρ_y = 0.50.
Optimal Monetary Policy and Simple Policy Rules
We find the Ramsey-optimal allocations for our economy using the computational strategy of Levin and Lopez-Salido (2004) and Levin et al. (2005). The Central Bank minimizes an intertemporal loss function at time t, where flex denotes the flexible-price equilibrium variables and ss stands for steady-state values. The first two losses are often used as a metric for capturing policymakers' preferences in studies that attempt to evaluate the trade-off between inflation variability and output variability. In addition to these losses, we consider a second measure of loss, i.e. a utility-based loss function, which we denote ℓ^{util}_t. Following Woodford (2001), we derive ℓ^{util}_t by taking a second-order log-linearization of the utility function around the steady state. We ignore the constant and first-order terms (the latter are zero in unconditional expectation) and focus on the unconditional expectation of the second-order terms.
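The quadratic flexible-inflation-targeting criterion can be evaluated on simulated gap paths as in the sketch below. The relative weight `lam` on the output gap is a free illustrative parameter, not a value calibrated in the paper.

```python
def discounted_loss(pi_gaps, y_gaps, beta=0.999, lam=0.5):
    """Discounted sum of period losses (pi_t - pi*)^2 + lam*(y_t - y_t^gap)^2,
    where the gap paths are supplied as sequences of equal length."""
    loss = 0.0
    for t, (pg, yg) in enumerate(zip(pi_gaps, y_gaps)):
        loss += beta ** t * (pg ** 2 + lam * yg ** 2)
    return loss
```

Feeding this function the gap paths generated under each simple rule (measured against flexible-price, steady-state, or growth-rate benchmarks) is how the rule comparisons of Table 1 can be reproduced in spirit.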
Results for Boom-Bust
The results in Figures 2-4 show the dynamic response of our model to an ε_t shock that occurs in period 1, followed by e_{t+p} = −ε_t for p = 5. Thus, there is a signal that technology will improve in the future, but in the end it turns out to be false. A positive signal arriving in t − p indicates to households that the economy is likely to be more productive p periods ahead. Anticipating this, they try to bring forward the future value of higher production. They increase consumption and investment in preparation for the expected future increase in productivity. To finance these activities, households increase their demand for credit and assets. The price of capital rises because of the expected need for new capital in the future. This constitutes the boom stage of the cycle, based solely on expectations. But p periods ahead, when productivity is supposed to change, a surprise shock may occur. If, for instance, e_t = −ε_{t−p}, then productivity stays put and the expected productivity change is not realized. This may happen, for instance, if a new technology turns out to be less efficient than expected, or if a production policy fails after generating good signals. Then households face the consequences of higher consumption and investment financed through credit, without real support. The economy enters a recession: consumption, investment, asset prices and general economic activity fall.
The boom has turned to bust.
We compare the dynamic properties of output, consumption, investment, asset prices, the nominal interest rate, real wages, deposits, credit and inflation in the Ramsey equilibrium with the behavior of these variables when we close the model with alternative simple policy rules. Figure 2 shows the dynamic responses of these variables for the Ramsey equilibrium and for the model closed with the simple rule that reacts to output and inflation growth rates, and with the rule that in addition reacts to the asset-price growth rate, with ρ_{pk} = 0.5. With a monetary authority that follows a simple rule, a minor fluctuation is transformed into a substantial boom-bust cycle. This happens, first, because the real wage rises during the boom in the Ramsey equilibrium, so an efficient way to achieve a higher real wage is to let inflation drop; but a monetary authority following an inflation-targeting strategy is reluctant to allow this to happen, and responds to inflation weakness by shifting to a looser monetary policy stance. Second, when the productivity shock is not realized, the central bank does not react fast enough relative to the optimal policy, causing higher volatility.
Allowing the central bank to react to the asset-price gap does not improve the dynamics of the variables very much, but, as we will see later when we compare the rules in terms of central bank losses, there is an important difference.
Figure 3 plots the results for the policy rule that takes into account deviations of output and asset prices from the flexible-price economy. The boom-bust episode is smoother in this case because the boom is shorter than under the rules shown in Figure 2. The worst scenario occurs when the monetary authority uses an instrument rule that reacts to deviations of output and asset prices from steady-state values (Figure 4). In this case, the dynamics of the series are much more volatile. In addition, when the productivity shock turns out to be false, the monetary authority reacts too slowly relative to the flexible-price rule. In terms of these responses, this is the least desirable type of rule. The most suitable policy rule, the one closest to the optimal policy, is the simple rule that reacts to the output gap and the asset-price gap measured as deviations from the flexible-price economy.
It is worth noting that if monetary policy is aggressive (ρ_π = 2.25) rather than accommodative (ρ_π = 1.25) in targeting inflation in the rule that uses deviations from the flexible-price economy, the volatility of output and inflation is reduced, as can be seen in Figure 5. We therefore compute the losses for the different types of rules in both cases, accommodative and aggressive monetary policy.
Table 1 below shows the results for the three alternative welfare criteria under the alternative simple rules, for both accommodative and aggressive policy rules. The optimal policy using deviations from flexible prices in the loss function is the one that delivers the lowest losses.
As can be seen, the lowest losses are obtained with the flexible-price rules under an aggressive monetary policy. The rules that perform worst are those where the monetary authority responds to deviations of output and asset prices from steady-state values.
When the central bank follows a policy rule, an aggressive stance against inflation appears to better contain the effects of the bubble, in terms of central bank losses. This happens because an aggressive stance delivers lower inflation variability. Tighter control of prices does not allow the bubble to build up, so the relevant asset-price gap is lower in the aggressive case. This is in turn reflected in slower growth of investment and output while the bubble is building up, and generates a deeper fall in the relevant gaps of these aggregates when the bubble bursts.
If the central bank does not follow the optimal policy, then for all three objective functions the best results are achieved when the bank follows a rule that takes into account deviations of output and asset prices from their hypothetical paths in an economy with flexible prices. Since the expectational shock is real by nature, the flexible-price economy displays similar effects: an increase in gross production, consumption, investment and domestic and foreign debt. A central bank that takes into account that the flexible-price real variables also deviate will try harder to control prices and to make real variables behave as in the flexible-price economy. It therefore delivers lower price variability and allows a faster fall in consumption, investment and credit when the productivity shock is not realized. This fast adjustment is reflected in lower variability of real GDP and generates a smaller loss.
We must note that the dynamics of the economy do not change much whether or not the central bank takes asset prices into account in the policy rule. The only case in which targeting asset prices decreases the loss of the central bank under the unrealized productivity shock is when the policy rule looks at the flexible-price economy. In this case the relative improvement from including asset prices is 32 percent when the loss function uses flexible-equilibrium variables. For all the remaining rules, targeting asset prices does not decrease the loss. Just as before, if the central bank targets deviations of asset prices from other benchmarks, it will not allow for a fast adjustment. In the flexible-price economy, by contrast, asset prices fall sharply, and a rule that follows this information will deliver a fast adjustment.
In conclusion, to minimize the central bank's loss, a fast adjustment of the economy is needed once it becomes clear that the productivity shock did not materialize.
Conclusions
We calibrated a DSGE model for the Colombian economy that incorporates features such as sticky prices and wages, a banking sector and financial fragility in the form of balance-sheet effects. We used the model to compute the optimal policy response of the economy to an expectations shock of a technology improvement that turns out to be false. The benchmark optimal Ramsey equilibrium is used to evaluate simple policy rules that monetary authorities might use in the implementation of monetary policy. We find that the simple policy rule that reacts to deviations of output from potential output, defined as the hypothetical output level that would prevail if the economy had flexible prices, is the one that delivers the lowest central bank losses. This is because a fast adjustment of the economy is needed once it is clear that the productivity shock did not happen. Adding asset-price gaps to the policy rule does not improve the dynamics of the economy much unless the central bank is able to identify asset-price misalignments. Finally, a monetary policy that is aggressive in fighting inflation reduces central bank losses, since both output and inflation variability are reduced.
Table 1 :
Welfare comparison for unrealized productivity shock (multiplied by 10^5). Column headers: Model | Optimal, Steady State | Optimal, Flexible Gaps | Optimal, utility approx.
Cosmologies from higher-order string corrections
We study cosmologies based on low-energy effective string theory with higher-order string corrections to a tree-level action and with a modulus scalar field (dilaton or compactification modulus). In the presence of such corrections it is possible to construct nonsingular cosmological solutions in the context of Pre-Big-Bang and Ekpyrotic universes. We review the construction of nonsingular bouncing solutions and the resulting density perturbations in Pre-Big-Bang and Ekpyrotic models. We also discuss the effect of higher-order string corrections on the dark energy universe and show several interesting possibilities for the avoidance of future singularities.
Introduction
String theory has continuously stimulated applications to cosmology in a number of profound ways [1,2,3]. It is very important to test the viability of string theory by extracting cosmological implications from it. In particular, string cosmology offers an exciting possibility of resolving the big-bang singularity which plagues General Relativity.
The Pre-Big-Bang (PBB) model [4,5], based on the low-energy, tree-level string effective action, is one of the first attempts to apply string theory to cosmology. In this scenario there exist two disconnected branches, one of which corresponds to the dilaton-driven inflationary stage, while the other is the Friedmann branch with a decreasing curvature. String corrections to the effective action can then be important around the high-curvature regime where the branch change occurs. Ekpyrotic/Cyclic cosmologies [6,7,8] have a similarity to the PBB scenario in the sense that the description in terms of the tree-level effective action breaks down around the collision of two branes in a five-dimensional bulk.
When the universe evolves toward the strongly coupled, high-curvature regime with a growing dilaton, it is inevitable to implement higher-order string corrections to the tree-level action. Indeed, it was found that the two branches can be smoothly joined by taking into account the dilatonic higher-order string corrections in the context of the PBB [9,10] and Ekpyrotic [11] scenarios. In the system where a (compactification) modulus field is dynamically more important than the dilaton, Antoniadis et al. showed that the big-bang singularity can be avoided by including the Gauss-Bonnet (GB) curvature invariant coupled to the modulus [12].
In order to test these string-motivated models against observations, it is important to investigate the spectra of density perturbations and to compare them with temperature anisotropies in the Cosmic Microwave Background (CMB). For example, inflationary cosmology generically predicts nearly scale-invariant spectra of density perturbations. This prediction agrees well with recent observations of CMB anisotropies [13]. While inflationary cosmology is based upon the potential energy of a slowly rolling scalar field, the kinetic energy of the dilaton or modulus field dominates in PBB and Ekpyrotic/Cyclic cosmologies. Hence it is expected that the spectrum of density perturbations in the latter case differs from the prediction of inflationary cosmology. We shall address the problem of density perturbations generated in PBB and Ekpyrotic/Cyclic cosmologies by using nonsingular bouncing solutions obtained by including second-order string corrections. We note that the effect of string corrections can also be important in constructing inflation models; see Refs. [14] for such possibilities.
The effect of such string corrections can also be important in the context of dark energy. From recent observations, the equation of state (EOS) parameter w of dark energy lies in a narrow strip around w = −1, quite likely being below this value [15,16] (see Refs. [17] for reviews of dark energy). The region where the EOS parameter w is less than −1 is referred to as a phantom (ghost) dark energy universe [18]. The phantom-dominated universe ends with a finite-time future singularity called the Big Rip or Cosmic Doomsday [19,20,21]. The Big Rip singularity is characterized by divergent behavior of the energy density and curvature invariants at the Big Rip time. Hence it is natural to account for higher-order curvature corrections in the presence of dark energy [22,23,24,25,26,27,28,29,30,31,32]. In fact, it is possible to avoid or moderate the Big Rip singularity when such corrections are present [22,24,25]. We shall review cosmological solutions in second-order string gravity in the context of dark energy and consider the avoidance of future singularities.
In what follows the effect of string corrections will be reviewed in two separate sections: (i) PBB/Ekpyrotic cosmologies (Sec. 2) and (ii) the dark energy universe (Sec. 3). It is interesting to note that such corrections can play important roles for both past and future singularities.
Pre-Big-Bang and Ekpyrotic cosmologies
The PBB scenario is based upon low-energy, tree-level effective string theory using toroidal compactifications [4,5]. The string effective action in four dimensions takes the form S = (1/2) ∫ d⁴x √−g e^{−φ} [R + (∇φ)² − V_S(φ)] + ∫ d⁴x √−g (L_c + L_m), where φ is the dilaton field that controls the string coupling parameter, g_s² = e^{φ}, and g is the determinant of the metric g_{μν}. We neglect here additional modulus fields corresponding to the size and shape of the internal space of extra dimensions. The potential V_S(φ) for the dilaton vanishes in the perturbative string effective action. The Lagrangian L_c corresponds to the higher-order string corrections which we will present later, whereas L_m is the Lagrangian of additional matter fields (e.g., fluids, kinetic components, the axion, etc.). In this section we do not consider the contribution of L_m, but in Sec. 3 we will account for it as a barotropic fluid. The above action is the so-called "string frame" action, in which the dilaton is coupled to the scalar curvature R.
The dilaton starts out in a weakly coupled regime (g_s ≪ 1) and evolves toward a strongly coupled region (g_s > 1). The Hubble parameter grows during this stage. This "superinflation" is driven by the kinetic energy of the dilaton field; this is the so-called PBB branch. There exists another, Friedmann branch with a decreasing curvature. It is possible to connect the two branches by accounting for the higher-order string corrections L_c to the tree-level action [9,10,33,34]. This is one of the main topics of this review.
In Ekpyrotic [6,7] and Cyclic [8] cosmologies the universe contracts before the bounce because of the presence of a negative potential characterizing an attractive force between two parallel branes in an extra-dimensional bulk. The collision of the two parallel branes signals the beginning of the hot, expanding big bang of standard cosmology. After the brane collision the universe connects to a standard Friedmann branch, as in the case of PBB cosmology. The origin of large-scale structure is supposed to be generated by quantum fluctuations of a field characterizing the separation of a bulk brane. It is important to construct nonsingular bouncing cosmological solutions in order to make concrete predictions for the power spectrum generated in Ekpyrotic/Cyclic cosmologies. This is actually possible by accounting for higher-order string corrections, as in the PBB case [11].
The PBB model is similar to Ekpyrotic/Cyclic cosmologies in the sense that the universe exhibits a bounce in the "Einstein frame". Making a conformal transformation

ĝ_μν = e^{−φ} g_μν ,  (2)

the action in the Einstein frame is given by Eq. (3), where V_E(φ) ≡ e^{φ} V_S(φ). Introducing a rescaled field ϕ = −φ/√2, the action (3) reduces to Eq. (4), the action of an ordinary scalar field ϕ with potential V_E. Hence it can be used to describe both the PBB model in the Einstein frame and the ekpyrotic scenario [35].
In the original version of the Ekpyrotic scenario [6], the Einstein frame is used, where the coupling to the Ricci curvature is fixed and the field describes the separation of a bulk brane from our four-dimensional orbifold fixed plane. In the second version of the Ekpyrotic scenario [7] and in the Cyclic scenario [8], the field is the modulus denoting the size of the orbifold (the separation of the two orbifold fixed planes).
The Ekpyrotic scenario is described by a negative exponential potential [6],

V_E(ϕ) = −V₀ exp(−√(2/p) ϕ),  (5)

with 0 < p ≪ 1. The branes are initially widely separated but are approaching each other, which means that ϕ begins near +∞ and decreases toward ϕ = 0. In the PBB scenario the dilaton starts to evolve from a weakly coupled regime, with φ increasing from −∞. If we want the potential (5) to describe a modified PBB scenario with a dilaton potential that is important when φ → 0 but negligible for φ → −∞, we have to use the relation ϕ = −φ/√2 between the field ϕ in the ekpyrotic case and the dilaton φ in the PBB case.
In the flat Friedmann–Robertson–Walker (FRW) metric ds² = −dt_E² + a_E² dx_E² in the Einstein frame, the background equations with L_c = 0 are the standard Friedmann and scalar-field equations (6) and (7), where a dot represents a derivative with respect to t_E and H_E ≡ ȧ_E/a_E. Here the subscript "E" denotes quantities in the Einstein frame. The exponential potential (5) has the exact scaling solution [36]

a_E ∝ (−t_E)^p ,  ϕ = √(2p) ln(−t_E) + const.  (8)

The solution for t_E < 0 describes the contracting universe prior to the collision of branes. The Ekpyrotic scenario corresponds to a slow contraction with 0 < p ≪ 1. From Eq. (8) the potential vanishes for p = 1/3, which corresponds to the PBB scenario.
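The scaling solution above can be checked numerically. The following is a minimal sketch, assuming units κ = 1 and the potential normalization V(ϕ) = p(3p−1) exp(−√(2/p) ϕ) (an assumption obtained by substituting the scaling solution into the background equations; it is negative for 0 < p < 1/3 and vanishes at p = 1/3, the PBB case):

```python
import math

# Integrate the Einstein-frame background equations
#   H^2 = (dphi^2/2 + V)/3 ,   phi'' + 3 H phi' + dV/dphi = 0
# on the contracting branch (H < 0) and check that the scaling
# solution a ∝ (−t)^p, phi = sqrt(2p) ln(−t) is reproduced.

def V(phi, p):
    return p * (3.0 * p - 1.0) * math.exp(-math.sqrt(2.0 / p) * phi)

def dV(phi, p):
    return -math.sqrt(2.0 / p) * V(phi, p)

def evolve(p, t0=-100.0, t1=-1.0, n=50000):
    # initial data taken from the exact scaling solution at t0
    phi = math.sqrt(2.0 * p) * math.log(-t0)
    dphi = math.sqrt(2.0 * p) / t0
    lna = p * math.log(-t0)        # ln a, normalized so a(-1) = 1
    dt = (t1 - t0) / n
    t = t0

    def rhs(phi, dphi, t):
        rho = dphi * dphi / 2.0 + V(phi, p)
        H = -math.sqrt(max(rho, 0.0) / 3.0)   # contracting branch
        return dphi, -3.0 * H * dphi - dV(phi, p), H

    for _ in range(n):  # fourth-order Runge-Kutta
        k1 = rhs(phi, dphi, t)
        k2 = rhs(phi + 0.5*dt*k1[0], dphi + 0.5*dt*k1[1], t + 0.5*dt)
        k3 = rhs(phi + 0.5*dt*k2[0], dphi + 0.5*dt*k2[1], t + 0.5*dt)
        k4 = rhs(phi + dt*k3[0], dphi + dt*k3[1], t + dt)
        phi += dt/6.0 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        dphi += dt/6.0 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        lna += dt/6.0 * (k1[2] + 2*k2[2] + 2*k3[2] + k4[2])
        t += dt
    return lna  # exact value is p * ln(-t1) = 0 for t1 = -1
```

The integration tracks the exact solution because the ekpyrotic scaling solution is an attractor of the contracting phase.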
In the string frame the scale factor a_S and the cosmic time t_S are related to those in the Einstein frame via dt_S = e^{−ϕ/√2} dt_E and a_S = e^{−ϕ/√2} a_E. Then we find [11,35] a super-inflationary solution, a_S ∝ (−t_S)^n with negative exponent n, with growing dilaton. Hence the contraction in the Einstein frame corresponds to superinflation driven by the kinetic energy of the dilaton. We note that there exists another branch describing an accelerated contraction [3], but this is outside our interest. The above solution needs to be regularized around t_S = 0 (or t_E = 0) in order to connect to the Friedmann branch after the bounce. In the context of PBB cosmology, it was realized that higher-order string corrections to the action (defined in the string frame), induced by inverse string tension and coupling-constant corrections, can yield a nonsingular background cosmology. A possible set of corrections takes the form [9,10]

L_c = −(α′/2) λ ξ(φ) [ c R²_GB + d (∇φ)⁴ ],  (10)

where α′ is the string expansion parameter, ξ(φ) is a general function of φ, and R²_GB is the Gauss–Bonnet (GB) term. λ is an additional parameter that depends on the type of string theory: λ = 1/4, 1/8, and 0 correspond to bosonic, heterotic, and superstrings, respectively. If we require that the full action agrees with the three-graviton scattering amplitude, the coefficients are fixed to be c = 1 and d = 1, with ξ(φ) = e^{−φ} [37].
The corrections L_c are the sum of the tree-level α′ corrections and the quantum n-loop corrections (n = 1, 2, 3, …), with the function ξ(φ) given by

ξ(φ) = Σ_n C_n e^{(n−1)φ},  (11)

where the C_n (n ≥ 1) are coefficients of the n-loop corrections, with C₀ = 1. There exist regular cosmological solutions in the presence of tree-level and one-loop corrections, but these are not realistic, in the sense that the Hubble rate in the Einstein frame continues to increase after the bounce (see Fig. 1 of Ref. [11]). Nonsingular bouncing solutions that connect to a Friedmann branch can be obtained by accounting for the corrections up to two loops with a negative coefficient (C₂ < 0).
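For illustration, the truncated loop expansion above can be evaluated directly. In this sketch the coefficient values C₁ = 1 and C₂ = −1 are assumptions, chosen only to exhibit a negative two-loop term of the kind the text requires:

```python
import math

def xi(phi, C=(1.0, 1.0, -1.0)):
    # xi(phi) = sum_n C_n e^{(n-1) phi}, truncated at two loops.
    # C[0] = 1 is the tree-level term e^{-phi}; C[2] < 0 models the
    # negative two-loop coefficient needed for bounces that settle
    # into a Friedmann branch (the values here are illustrative).
    return sum(Cn * math.exp((n - 1.0) * phi) for n, Cn in enumerate(C))
```

At weak coupling (φ ≪ 0) the tree-level term e^{−φ} dominates and ξ > 0, while at strong coupling the negative two-loop term takes over and flips the sign of ξ.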
It was shown in Ref. [11] that nonsingular bouncing solutions exist in the Einstein frame even in the presence of a negative exponential potential. When p ≪ 1 the potential is vanishingly small for ϕ ≫ 1, in which case the dynamics of the system is practically the same as that of the zero-potential case discussed in Ref. [10]. In this case the dilaton starts out from the low-curvature regime |φ| ≫ 1, which is followed by a string phase with linearly growing dilaton and nearly constant Hubble parameter, characterized by [9] φ̇_f ≃ 1.40 and H_f ≃ 0.62. In the Einstein frame this corresponds to a contracting universe. On the other hand, we can consider the scenario in which the negative Ekpyrotic potential dominates initially but the higher-order correction becomes important when the two branes approach sufficiently closely. By including the correction terms of L_c only for ϕ < 1, we have numerically confirmed that it is possible to obtain regular bouncing solutions; see Fig. 1. In this case the background solutions are described by Eq. (8) or (9) before the higher-order correction terms begin to act.
The spectrum of scalar perturbations was studied by a number of authors in the cases of PBB cosmology [38,39,40] and Ekpyrotic cosmology [41,42,43,44] (see also Ref. [45]). A perturbed space-time metric takes the standard form for scalar perturbations in an arbitrary gauge [46],

Figure 1: the correction term L_c is included only for ϕ < 1, with initial conditions ϕ = 15 and H = 1.5 × 10⁻³. Prior to the collision of branes at ϕ = 0 the universe is slowly contracting, which is followed by a bouncing solution induced by the higher-order corrections.
Here a comma denotes the usual flat-space coordinate derivative. The curvature perturbation R in the comoving gauge is defined in Ref. [47], and its power spectrum P_R(k) is defined in terms of the comoving wavenumber k. The spectral index n_R of curvature perturbations generated before the bounce is given by [45,41] (see also Refs. [42,43,44])

n_R = 4 − |(1 − 3p)/(1 − p)|.  (16)

We see that a scale-invariant spectrum with n_R = 1 is obtained either as p → ∞ in an expanding universe, corresponding to conventional slow-roll inflation, or for p = 2/3 during collapse [48,45]. In the case of PBB cosmology (p = 1/3) one has n_R = 4, which is a highly blue-tilted spectrum. The ekpyrotic scenario corresponds to a slow contraction (0 < p ≪ 1), in which case we have n_R ≃ 3.
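The four limiting cases quoted above can be checked mechanically against the closed form for n_R (the formula as written here is a reconstruction of the extraction-damaged equation, chosen to be consistent with all four cases stated in the text):

```python
def n_R(p):
    # spectral index of pre-bounce curvature perturbations for a
    # background a ∝ (−t)^p, in the closed form quoted above
    return 4.0 - abs((1.0 - 3.0 * p) / (1.0 - p))
```

It reproduces n_R = 4 at the PBB value p = 1/3, n_R = 1 at p = 2/3 and for p → ∞, and n_R → 3 in the slow-contraction limit p → 0.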
The spectrum (16) is the one generated before the bounce. In order to obtain the final power spectrum at sufficiently late times in the expanding branch, we need to connect the contracting branch with the Friedmann (expanding) one. As mentioned above, the two branches are joined to each other by including the corrections given in Eq. (11). This then allows the study of the evolution of cosmological perturbations without having to use matching prescriptions. The effect of the higher-order string corrections to the action on the evolution of fluctuations in PBB cosmology was investigated numerically in [49,11]. It was found that the final spectrum of fluctuations is highly blue-tilted (n_R ≃ 4), and the result is the same as that obtained from the analysis using matching conditions between two Einstein universes [50,51] joined along a constant-scalar-field hypersurface.
It was shown in Ref. [11], by numerically solving the perturbation equations in a nonsingular background regularized by the correction term (10), that the spectrum of curvature perturbations long after the bounce is n_R ≃ 3 for 0 < p ≪ 1. In particular, comoving curvature perturbations are conserved on cosmologically relevant scales much larger than the Hubble radius around the bounce, which means that the spectrum (16) can be used in the expanding background long after the bounce.
The authors of [7] showed that the spectrum of the gravitational potential Φ generated before the bounce is nearly scale-invariant for 0 < p ≪ 1, i.e., n_Φ − 1 = 2p/(1 − p). A number of authors argued [41,42,43,44] that this corresponds to the growing mode in the contracting phase but to the decaying mode in the expanding phase. Cartier [52] recently performed a detailed numerical analysis using nonsingular perturbation equations and found that in the case of the α′-regularized bounce both Φ and R exhibit the highly blue-tilted spectrum (16) long after the bounce. It was numerically shown that the dominant mode of the gravitational potential is fully converted into the post-bounce decaying mode. Similar conclusions have been reached in investigations of perturbations in other specific nonsingular models [53]. Arguments can be given that the comoving curvature perturbation is conserved for adiabatic perturbations on large scales under very general conditions [41,54].
Nevertheless, we caution that these studies are based on nonsingular four-dimensional bounce models, whereas in the Ekpyrotic/Cyclic model the bounce is only nonsingular in a higher-dimensional completion of the model [55]. The ability of the Ekpyrotic/Cyclic model to produce a scale-invariant spectrum of curvature perturbations after the bounce relies on this higher-dimensional physics being fundamentally different from conventional four-dimensional physics, such that the growing mode of Φ in the contracting phase does not decay after the bounce [56].
Dark energy
In the previous section we discussed the role of higher-order string corrections to the tree-level action in the context of the early universe. Recent observations show that the present universe is dominated by dark energy, responsible for an accelerated expansion. When the universe is dominated by phantom matter (w < −1), the energy density and curvature invariants grow. In such circumstances higher-order string corrections may become important as the energy scale approaches the Planck scale. We are interested in the effect of such corrections around the Big Rip singularity. In fact the Big Rip singularity can be avoided in the presence of such corrections, as we will see in this section. We shall also derive cosmological solutions for an effective string Lagrangian together with a barotropic perfect fluid.
Our starting action is the generalization of (1), Eq. (18), where φ is a scalar field corresponding either to the dilaton or to another modulus, and f is a generic function of the scalar field φ and the Ricci scalar R; ω and V are functions of φ. In this section we do not consider the cosmological dynamics in the presence of the field potential V. L_m is the Lagrangian of a barotropic perfect fluid with energy density ρ and pressure density p. We assume that the barotropic index, w ≡ p/ρ, is a constant. In general the fluid can be coupled to the scalar field φ. The α′-order quantum corrections are encoded in a term L_c that is a linear combination of the quadratic curvature invariants and (∇φ)⁴, with coefficients a_i depending on the string model one is considering. We are most interested in the Gauss–Bonnet parametrization (a₁ = a₃ = 1, a₂ = −4, and a₄ = 1) discussed in the previous section, but we keep the coefficients general in deriving the basic equations.
For a flat FRW background with scale factor a, we obtain the Friedmann equation [49,25], Eq. (20), where d = D − 1, F ≡ ∂f/∂R, and a dot denotes a derivative with respect to t. In four dimensions (d = 3) the coefficients read c₁ = 2a₁ + 3a₂ + 12a₃ and c₂ = 4(a₁ + a₂ + 3a₃), while the H⁴ term in the correction vanishes. At low energy it was shown that the unique higher-order gravitational Lagrangian giving a theory without ghosts is the Gauss–Bonnet one (a₁ = a₃ = 1, a₂ = −4, a₄ = 1). In this case c₂ vanishes identically, while c₁ = 2 + d(d − 3). With a fixed dilaton coupling (φ̇ = 0), Eq. (20) reduces to the standard Friedmann equation in four dimensions, in agreement with the fact that the GB term is topological when d = 3. In three dimensions (d = 2), the GB higher-derivative contribution vanishes identically except for the φ̇⁴ term.
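A quick consistency check of these coefficient formulas (a sketch; the argument ordering (a₁, a₂, a₃) follows the text's Gauss–Bonnet assignment a₁ = a₃ = 1, a₂ = −4):

```python
def c_coeffs(a1, a2, a3):
    # four-dimensional (d = 3) coefficient combinations from the text
    c1 = 2 * a1 + 3 * a2 + 12 * a3
    c2 = 4 * (a1 + a2 + 3 * a3)
    return c1, c2

# Gauss-Bonnet parametrization: c2 vanishes identically, and c1
# agrees with the stated closed form c1 = 2 + d(d - 3) at d = 3,
# consistent with the GB term being topological in four dimensions.
```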
The continuity equation for the dark energy fluid contains a source term given by the coupling between the fluid and the scalar field φ. We choose the covariant coupling considered in [57], proportional to a function Q(φ), which we shall later set to a constant. In the synchronous gauge the continuity equation takes the form (24), while the equation of motion for the field φ is Eq. (25), in which the Lagrangian of the quantum correction enters explicitly. Equations (20)-(25) are the master equations of the physical system under study. We note that the φ̇⁴ term can be important even in the absence of curvature corrections, as in the dilatonic ghost condensate model [59].
The massless dilaton discussed in the previous section corresponds to the tree-level coupling ξ(φ) = e^{−φ}, Eq. (27). The full contribution of n-loop corrections is given by Eq. (11). In this work we shall take only the tree-level term (27) into account.
Generally, moduli fields appear whenever a submanifold of the target spacetime is compactified, with compactification radii described by the expectation values of the moduli themselves. In the case of a single modulus (one common characteristic length) and the heterotic string (λ = 1/8), the four-dimensional coupling corresponds to [58] a function ξ(φ) built from the Dedekind η function, with an overall constant δ proportional to the 4D trace anomaly. δ depends on the number of chiral, vector, and spin-3/2 massless supermultiplets of the N = 2 sector of the theory. In general it can be either positive or negative, but it is positive for theories in which not too many vector bosons are present. Again the scalar field corresponds to a flat direction in the space of nonequivalent vacua, and V = 0. At large |φ| this expression can be approximated by its exponential asymptote, which we shall use instead of the exact one. In fact it was shown in Ref. [60] that this approximation gives results very close to those of the exact case.
Modulus driven solution
In Ref. [25] cosmological solutions based on the action (18) without a potential (V = 0) were discussed in detail for three cases: (i) fixed scalar field (φ̇ = 0), (ii) linear dilaton (φ̇ = const), and (iii) logarithmic modulus (φ̇ ∝ 1/t). In case (i) one obtains geometrical inflationary solutions only for D ≠ 4. In case (ii) pure de Sitter solutions exist in the string frame, but they correspond to a contracting universe in the Einstein frame. These solutions are not realistic when applied to dark energy. In what follows we shall focus on cosmological solutions in case (iii) with a fixed dilaton.
Introducing suitable new variables, the equations of motion for the modulus action corresponding to Eqs. (20)-(25) can be written in closed form. While only derivatives of φ appear in the equations of motion for d = 3 and c₂ = 0 (the GB case), there are non-vanishing contributions of φ itself for general coefficients c_i. When c₂ = 0, the equations of motion for x and v simplify, together with the Friedmann equation. In addition, c₁ can be set equal to 1 and absorbed into the normalization of the coupling, so that the coefficient c₂ is the only free parameter of the higher-order Lagrangian.
We search for future asymptotic solutions of the power-law form (38)-(40), where the barotropic index w is constant. We define δ̃ ≡ (1/2) c₁ δ₀ e^{u₀}, so that in any claim involving the sign of δ̃ a positive c₁ coefficient is understood. In order to find a solution in the limit t → +∞, one has to match the exponents of t to obtain algebraic equations in the parameters β, ω_i, c₂, and δ̃.
In Refs. [58,25] the following four analytic solutions were found, depending on the regime considered:
1. A low-curvature regime in which the α′ terms are subdominant at late times.
2. An intermediate regime where some terms in the equations of motion, whether coupled to φ or not, are damped.
3. A high-curvature regime in which the α′ terms dominate.
4. A solution of the form (38)-(40) valid for the full equations of motion.
Below we summarize the properties of each solution.
A low-curvature regime
In this regime the solution is given by Eq. (42), together with the associated constraints.
An intermediate regime
In this regime the solution exists only for c₂ = 0 and, for a non-vanishing fluid, is given by Eq. (44). The condition δ̃ > 0 is required in order to obtain an expanding solution, characterized by a(t) ≃ a₀ exp(−ω₁/t). This solution approaches Minkowski spacetime asymptotically.
If the fluid decays, one recovers the C₁ solution of Ref. [58] with d = 3 and z = 0.
A high-curvature regime
In this regime the solution is given, together with its constraint equations, in closed form. In the GB case (c₂ = 0) the solution corresponding to a decaying fluid (β < ω₂ − 4) contradicts the condition (45) in any dimension. Hence in the GB scenario with ω₂ ≠ 4 only the marginal case β = ω₂ − 4 is allowed.
An exact solution
An exact solution, valid at all times, is given by Eq. (49), together with constraints on ω₁ and Eq. (43).
The low-curvature solution (42) and the Minkowski solution (44) can be joined to each other if the coupling constant given in Eq. (28) is negative [58,61]. The exact solution (49) is found to be unstable in the numerical simulations of Ref. [25]. In the asymptotic future the solutions tend to approach the low-curvature one given by Eq. (42), rather than the others, irrespective of the sign of the modulus-to-curvature coupling δ.
Constraints from the recent universe
We compare the observational constraints on ω₁ for the recent evolution of the universe with the modulus solutions found in the previous section (d = 3). The situation we study is the case in which the perfect fluid vanishes asymptotically. The results are summarized in Table 1 at the 68% confidence level. We also address the cases in the presence of a cosmological constant Λ. Note that the solution (44) in the intermediate regime is discarded.
The logarithmic modulus solution with the GB parametrization and no extra fluid does not provide a viable cosmological evolution in the current universe. In the next subsection, however, we will see that the GB case in the presence of a dark energy fluid may exhibit interesting features for the future evolution of the universe. The low-redshift constraint on c₂ for the high-curvature solution (Λ ≠ 0) can be relaxed up to c₂ < 1 at the 99% confidence level. Hence there are models which can in principle explain the current acceleration without invoking a dark energy fluid.
The situation becomes more complicated in the presence of a barotropic fluid. The low-curvature solution can describe the very recent universe if Q is negative and non-vanishing. The other cases depend crucially upon the interplay among all the theoretical parameters.
Dark energy universe with modulus gravity
In a universe dominated by a phantom fluid (w < −1), the energy density of the universe continues to grow, and the Hubble rate eventually exhibits a divergence at finite time (the Big Rip). The effect of higher-order curvature corrections can then become important when the energy density grows up to the Planck scale. In fact it was shown that quantum curvature corrections coming from the conformal anomaly can moderate the future singularities [63,22].
We would like to consider the effect of the α′ quantum corrections when the curvature of the universe increases in the presence of a phantom fluid. We shall concentrate on the modulus case with ξ(φ) given by Eq. (29). Our main interest is the cosmological evolution in four dimensions (d = 3) in the presence of a GB term. The dilaton is assumed to be fixed, so that there are no long-range forces to take into account except gravity.
From the discussion in subsection 3.1, the growth of the barotropic fluid is weaker than that of the Hubble rate when the condition (43) is satisfied. This condition is not met for a phantom fluid when the coupling Q between the fluid and the field φ is absent (Q = 0). In Ref. [25] the equations of motion were solved numerically by varying the initial conditions of H, φ, and ρ. When δ < 0, we numerically found that the solutions approach a Big Rip singularity for Q = 0 and w < −1 (see Fig. 2). The condition (43) can be satisfied for negative Q provided that ω₂ is positive. In Fig. 2 we plot the evolution of H and ρ for Q = −5 and w = −1.1. In this case ρ decreases faster than t⁻², and the Big Rip singularity is avoided. Depending on the type of string theory, one can instead find a Big Crunch singularity in place of a Big Rip one for type II strings, while the Big Rip singularity is not avoided for heterotic and bosonic strings.
Apart from string corrections, a number of authors [62,63,22,64,65] have studied the effect of the quantum backreaction of conformal matter around the several types of singularity that can appear in the future. The corrections considered usually contain second-order curvature invariants such as the Gauss–Bonnet term and the square of the Weyl tensor. In Ref. [22] it was shown that quantum corrections coming from the conformal anomaly can be important when the curvature of the universe grows large, which typically moderates future singularities. Finally, we note that loop quantum cosmology leads to a modified Friedmann equation when the energy scale approaches the Planck scale, which typically yields a regular cosmological evolution without future singularities [66].
Conclusions
In this article we have discussed cosmological implications of higher-order string corrections to the tree-level effective action. In the context of Pre-Big-Bang (PBB) and Ekpyrotic cosmologies, regular bouncing solutions can be constructed by including such corrections. This allows us to evaluate the spectrum of density perturbations long after the bounce. For the correction terms given by Eq. (11) we found that the spectra of scalar perturbations are highly blue-tilted: n_R = 4 in the PBB case and n_R = 3 in the Ekpyrotic case. This is different from the nearly scale-invariant spectrum (n_R ≃ 1) observed in CMB anisotropies. As long as nonsingular bouncing solutions are constructed using the correction terms presented in this paper, we need another scalar field (e.g., a curvaton [67]) to generate nearly scale-invariant density perturbations.
We also applied second-order string corrections to dark energy. In particular, we reviewed several cosmological solutions in the presence of modulus-type corrections with a fixed dilaton. In the asymptotic future the solutions tend to approach the low-curvature one given by Eq. (42), rather than the others, irrespective of the sign of the modulus-to-curvature coupling δ. We placed constraints on the viability of modulus-driven solutions using current observational data. The Gauss–Bonnet parametrization is excluded in all of the above-mentioned regimes when the barotropic fluid vanishes; see Table 1. In the presence of a phantom dark energy fluid we discussed the effect of the modulus coupling with the Gauss–Bonnet curvature invariant. It is possible to consider a situation in which the energy density of the fluid decays when the coupling Q between the field φ and the phantom fluid (w < −1) is present. We showed that the Big Rip singularity can be avoided for a coupling Q satisfying the condition Q − ω₂ − 3(1 + w)ω₁ < −2 asymptotically. This is achieved irrespective of the sign of Q, and the asymptotic solutions are described by the low-curvature one given by Eq. (42). We also briefly mentioned the effect of other forms of higher-order string corrections on future singularities.
Thus we have shown that string corrections can be important in a number of cosmological situations. We hope that the further development of string theory will provide rich and fruitful implications for cosmology.
"year": 2006,
"sha1": "e572c2e81fe301c0a9fc91cd91ace36e58f70b4b",
"oa_license": null,
"oa_url": "https://arxiv.org/pdf/hep-th/0606040",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "e572c2e81fe301c0a9fc91cd91ace36e58f70b4b",
"s2fieldsofstudy": [
"Physics",
"Mathematics"
],
"extfieldsofstudy": []
} |
Arsenic in your food: potential health hazards from arsenic found in rice
This article was published on 9 January 2015 in the Dove Press journal Nutrition and Dietary Supplements 2015:7 1-10, under a Creative Commons Attribution-NonCommercial 3.0 License.
Introduction
It is very well known that rice (Oryza sativa) is a staple food for over half of the world population. It is also the second cereal crop after maize, with an estimated production in 2013 of 745 million tons (497 million tons of rice milled for human consumption).1 The main reason for the well-established relationship between rice and arsenic (As) is the peculiar growing conditions (flooded soils) of rice. In the past, many fertilizers containing organic As (oAs) were used as pesticides or defoliants. Now, although their use is prohibited or has been minimized, the presence of As in soils and groundwater persists to this day. Flooded soils therefore offer a unique environment for the growth and nutrition of rice, but at the same time they are the perfect scenario for the highest possible availability of As to rice plants.2 There are four elements, iron (Fe), phosphorus (P), sulfur (S), and silicon (Si), that interact strongly with As during its route from the soil to the plants. Plants take up arsenate [As(V)] through the phosphate transporters, and arsenite [As(III)] and undissociated methylated As species through the nodulin 26-like intrinsic aquaporin channels. Arsenate is readily reduced to arsenite in the plant, which is later detoxified by complexation with thiol-rich peptides such as phytochelatins and/or vacuolar sequestration.3 While the maximum limit of As in water is highly regulated worldwide (10 µg L⁻¹), the maximum level of As residues in highly consumed foods, such as rice, is still a pending matter; the main reason is political, because none of the governmental organizations wants to set up a maximum threshold for iAs that could
jeopardize their rice production. Although there is serious concern about the presence of As in this cereal and possible overexposure, no international agency dealing with food safety, such as the European Food Safety Authority (EFSA), the Food and Drug Administration of the United States, or the Food and Agriculture Organization of the United Nations (FAO), has yet established maximum limits for this pollutant in rice or rice-based products; however, all of them are working intensively on this topic as of 2014. The People's Republic of China is the country with the strictest regulation, with a maximum threshold of 150 µg iAs kg⁻¹. Very recently, toward the end of July 2014, the Codex Alimentarius Commission established a maximum limit of 200 µg iAs kg⁻¹, which is more permissive than that of the People's Republic of China. In addition, all of the previously cited organizations are compiling global data on total As (tAs) and iAs in polished rice and rice-based products, in order to establish an appropriate maximum residue level that reflects the actual content of this metalloid and does not jeopardize the international rice market.4 If an exceedingly low maximum residue limit is established for iAs in rice, a global crisis could be created and the safe supply of rice worldwide could be threatened; this situation could generate a significant international crisis, especially affecting the poor segments of countries that depend heavily on rice as the staple food.
As for the maximum safe intake of this toxic substance, in 1988 the Joint FAO/WHO (World Health Organization) Expert Committee on Food Additives proposed a provisional tolerable weekly intake of 15 µg kg⁻¹ body weight (bw). In 2011 this value was discarded, because the EFSA had concluded in 2009 that a single value was not appropriate, given that intakes between 0.3 µg kg⁻¹ bw and 8.0 µg kg⁻¹ bw per day fall within the benchmark dose lower confidence limit (BMDL₀₁) range for certain types of cancer. At the moment, there is no maximum safe intake value set by any international food safety authority.5,6
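For orientation, a simple intake estimate relates these limits: daily dose (µg per kg bw) = rice iAs concentration × rice consumed / body weight. A sketch, in which the 300 g day⁻¹ consumption and 60 kg body weight are illustrative assumptions rather than values from the text:

```python
def daily_iAs_dose(rice_iAs_ug_per_kg, rice_g_per_day, body_weight_kg):
    # dose in micrograms of iAs per kilogram of body weight per day
    return rice_iAs_ug_per_kg * (rice_g_per_day / 1000.0) / body_weight_kg

# Rice at the Codex limit of 200 ug iAs/kg, eaten at 300 g/day by a
# 60 kg adult, gives 1.0 ug/kg bw/day -- inside the 0.3-8.0 ug/kg bw
# BMDL01 range quoted above, which is why limit-setting is contentious
# for populations with high rice consumption.
```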
Potential health hazards and risks resulting from dietary As
Considered a metalloid, As possesses properties intermediate between metals and nonmetals, because it can form metal alloys but also covalent bonds with carbon, hydrogen, and oxygen. It originates naturally in the environment and occupies an important place in the list of the most abundant elements: 20th place in the earth's crust, 14th in seawater, and 12th in the human body; moreover, it is a component of more than 245 minerals.7,8 Usually, the organic compounds of metals are more toxic than the inorganic forms, eg, for mercury. However, this is not the case for As; for this particular element, the organic forms are significantly less toxic than the inorganic ones. The toxicity of As decreases from inorganic compounds containing As in the trivalent form (arsenic trioxide, sodium arsenite, arsenic trichloride, etc), to inorganic compounds of pentavalent As (arsenic pentoxide, arsenic acid, lead arsenate, calcium arsenate, etc), and finally to the organic compounds (oAs), which are the least toxic ones (dimethylarsinate [DMA] and monomethylarsonate [MMA]).9 The main routes of As exposure in humans are air, water, food, and soil. In general, when As is introduced into the body through the diet, the level of absorption in the gastrointestinal tract depends on the chemical species, oxidation state, water solubility, and complexity of the food matrix. For instance, about 95% of arsenite and arsenate from drinking water is rapidly absorbed after ingestion.10 Meharg et al11 claim that the bioavailability of As, both oAs and iAs, in the intestine through the consumption of rice is an open question.12-14 The only in vivo study was in animals, and it showed 90% bioavailability in blood monitoring.15 Generally, once absorbed, iAs enters the bloodstream and is distributed between plasma and erythrocytes. This iAs binds to the globin chains of the hemoglobin molecule.5
From the bloodstream, iAs can reach several target organs, including the liver, kidney, spleen, and lung; later on, it accumulates and can be found in hair, nails, and skin.5,16,17 In the human body, As undergoes extensive biotransformation mediated by enzymes, resulting in sequential methylation. The methylation of As occurs mainly in the liver via the arsenite methyltransferase enzyme (As3MT), which has been isolated from the cytosol of hepatocytes.18 In contrast, fibroblasts and urothelial cells do not express As3MT; thus, As accumulates in these cells. Toxicity due to As in hepatocytes is confirmed by its accumulation in the nucleus and mitochondria.19 Other methylation sites are the kidney, testes, and lungs. The most recent scheme describing As detoxification in the human body is the one elucidated by Hayakawa (Figure 1).20 The major metabolites excreted in human urine are organic As species (oAs), basically DMA and MMA, with a typical ratio of DMA (60%-80%), MMA (10%-20%), and iAs (10%-20%) in individuals who do not eat seafood
(fish, shellfish, and algae). 21 Fish consumption increases urinary excretion of arsenobetaine and DMA in the few days after consumption because fish are rich in arsenobetaine and arsenosugars. 22-25 Cascio et al 23 carried out a biomonitoring study on the effect of rice consumption on urinary arsenicals in a general population group of UK Bangladeshis and UK Caucasians, because the Bangladeshi population still represents the largest rice-consuming group in the UK, with an average rice consumption 30 times higher than that of White Caucasians. The main results showed that even though total urinary As did not significantly differ between the two groups, the sum of medians of DMA, MMA, and iAs for the Bangladeshi group was over threefold higher than that of the Caucasians. Urinary DMA and iAs were significantly higher among the UK Bangladeshis than among White Caucasians. In contrast, cationic compounds were significantly lower in the Bangladeshis as compared to the Caucasians. Significant positive correlations were found between the levels of both iAs and DMA and daily rice consumption. The higher DMA and iAs levels in the Bangladeshis were considered by the authors to be the consequence of higher rice consumption in this community. Rice in fact accumulates both iAs and oAs, and after ingestion, iAs can be metabolized through MMA to DMA by humans. 23 Furthermore, the presence of other elements affects the bioavailability of As. Zinc intake increases the concentration of metallothioneins, favoring detoxification of As. A higher As intake, as compared with that of Se, encourages competition between the elements, and As can replace Se in the Se-dependent enzymes, thereby inactivating them and increasing As toxicity. Nutritional status also influences the risk of exposure to As. Several studies indicate that high intakes of vitamin C and methionine reduce As toxicity, whereas a deficiency of vitamin A intensifies it.
26 Arsenic poisoning can be classified as acute or chronic. An oral intake of 100-300 mg (1-5 mg kg -1 bw) of iAs in humans usually leads to death within 1 hour, if untreated. 27-29 But for those who consume As-free water or water with an As content <10 µg L -1 , intoxication through rice and rice-based products is considered the main source of poisoning. 30 Regarding toxicity, iAs has been classified by the International Agency for Research on Cancer in group 1 of carcinogens for humans. 29 This classification is based on the induction of primary skin, lung, and bladder cancers. For other cancers, such as cancers of the kidney, liver, and prostate, only a very small number of studies have been conducted and the results are certainly not conclusive. Moreover, skin (dermis) lesions, such as hyperpigmentation and palmoplantar hyperkeratosis (blackfoot disease), are sensitive indicators of chronic ingestion of iAs. 5 These typically appear after 5-10 years of consuming As-contaminated water and may evolve into carcinogenic forms on the skin (nonmelanoma skin cancer) and in the internal organs. 30 Along with the carcinogenic properties of iAs, a number of noncarcinogenic effects have been proposed. Exposure to As may result in neurobehavioral and neuropathic effects in adolescence, 31 effects on memory and intellectual function, 32 reproductive effects with increased fetal loss and premature delivery, 27,33 steatosis, 34 cardiovascular diseases, 35 ischemic heart diseases, 36 carotid atherosclerosis, and respiratory system effects such as chronic cough and chronic bronchitis. 37 Even at concentrations as low as 0.4 µg L -1 , iAs has been reported to behave as an endocrine disruptor able to alter gene transcription. 38 Despite the number of in vivo and in vitro studies trying to elucidate the role of As in the development of diabetes in humans, the currently available evidence is not adequate to establish a causal role. 39,40
Dovepress
After adjustment for biomarkers of seafood intake, total urine As was found to be associated with increased prevalence of type 2 diabetes by Navas-Acien et al. 41 These authors reported that their findings support the hypothesis that low levels of chronic exposure to iAs in drinking water, a widespread exposure worldwide, may play a role in diabetes prevalence. However, more recent studies seem to suggest that there is no such link between As exposure and diabetes. 42,43

Evidence for presence of As in rice and rice-based products

Rice

First, Sun et al 44 proved that the pattern of tAs concentration in rice grain fractions was endosperm < whole grain < bran, with mean values being 560 µg kg -1 , 760 µg kg -1 , and 3,300 µg kg -1 , respectively, in rice samples from People's Republic of China and Bangladesh. This pattern of concentration in the different grain parts means that brown rice has higher contents of tAs and iAs than polished/white rice. Commercial bran can reach values as high as 1,000 µg kg -1 . 44 Lombi et al 45 reported large differences in As distribution and speciation between the husk, bran, and endosperm of rice (Figure 2). The high content of As in the bran is probably the most important result because nowadays rice bran is widely used as a food additive and as a major health food product. 44 It should be noted that while pure rice bran is used as a health food supplement, of perhaps greater concern is soluble rice bran, which is marketed as a "superfood" and as a supplement for malnourished children in international aid programs, without adequate toxicological research. Concentrations of As increased significantly from endosperm (540 µg kg -1 ), to bran (6,240 µg kg -1 ), and to husk (12,420 µg kg -1 ).
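The endosperm-versus-bran concentrations above explain arithmetically why whole-grain rice carries more As than polished rice: the grain average is a mass-weighted mean of its fractions. The sketch below uses the endosperm and bran values reported by Sun et al; the 10% bran mass fraction is an assumption for illustration only, since actual milling yields vary by cultivar and milling degree.

```python
# Illustrative mass balance for tAs in a rice grain made of endosperm + bran.
# Fraction concentrations (560 and 3,300 ug/kg) are from the text; the 10%
# bran mass fraction is an ASSUMED, hypothetical value.

def whole_grain_tas(c_endosperm, c_bran, bran_fraction):
    """Mass-weighted mean total As (ug/kg) of endosperm plus bran."""
    return (1 - bran_fraction) * c_endosperm + bran_fraction * c_bran

c_white = 560.0  # polished rice is approximately endosperm only (ug/kg)
c_brown = whole_grain_tas(560.0, 3300.0, 0.10)

print(round(c_brown))     # ~834 ug/kg under these assumptions
print(c_brown > c_white)  # any bran content pulls whole grain above white rice
```

With these assumed numbers the whole-grain estimate lands in the same range as, though not exactly at, the 760 µg kg⁻¹ reported mean; the point is only that the high-As bran fraction drives the brown-versus-white difference.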
Infant feed
Milled rice is a dominant carbohydrate source for weaning babies up to 1 year of age due to its blandness, material properties, low allergenicity, and high nutritional value. As the child develops, rice porridge is used as the basis of more complex meals, by mixing it initially (from 6 months of age onward) with puréed fruits or vegetables and later (from 8 months onward) with meat (mainly chicken) and fish (mainly hake), either as home-made mixtures with baby rice or as pre-prepared commercial products. This dependence on rice is exacerbated in infants with food intolerances.
Several studies have proved that high iAs levels are also present in rice products intended for babies and infants. First, Meharg et al 63 and, later, Carbonell-Barrachina et al 64 studied the contents of iAs in baby foods based on rice and cereals from Spain, the UK, People's Republic of China, and the US. The iAs contents were significantly higher in gluten-free rice than in cereal mixtures with gluten, placing infants with celiac disease at high risk. All rice-based products displayed a high iAs content, with values being >60% of the tAs content and the remainder being basically DMA. Pure rice samples for infants from Spain showed lower iAs content (85 µg kg -1 ) compared to samples from other countries such as People's Republic of China (148 µg kg -1 ), the USA (125 µg kg -1 ), and the UK (162 µg kg -1 ). The products with the highest contents of both tAs and iAs were those manufactured using organic brown rice, which nowadays is in huge demand among consumers wanting natural and/or ecological products. Hernández-Martínez and Navarro-Blasco 65 obtained similar results in Spain, where organic samples of infant foods had higher levels of As than conventional food samples. Juskelis et al 66 determined the contents of the main As species in 30 American infant cereals that were considered to be a potential health risk to the infant population. The results indicated
Celiac disease and lactose intolerance
Celiac disease is a digestive illness that damages the mucous membrane of the small intestine and interferes with absorption of nutrients from food. This illness is caused by intolerance to gluten proteins. The diet for people suffering from this disease basically consists of eliminating all foods containing wheat, rye, and barley. Rice is therefore essential in the manufacture of supplies for celiac disease-affected people and reaches high percentages in their formulations. 67,68 Munera-Picazo et al 69,70 conducted two studies evaluating the occurrence of As in foods intended for children and adults who suffer from celiac disease. A positive relationship between rice percentage and As content was clearly observed (Figure 4). Moreover, gluten-free products that do not contain rice in their formulations do not contain As above the detection limit. The highest values of tAs and iAs found were 256 µg kg -1 and 128 µg kg -1 , respectively, and corresponded to samples of pasta for children. In adults, tAs and iAs reached contents as high as 120 µg kg -1 (pasta) and 85.8 µg kg -1 (baking flour). The daily estimated intake of iAs from the studied rice-based products ranged from 0.61 µg kg -1 bw to 0.78 µg kg -1 bw for children and from 0.46 µg kg -1 bw to 0.47 µg kg -1 bw for adults. These levels are within the BMDL 01 values identified by the EFSA, and, consequently, a risk to this segment of consumers cannot be excluded.
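The daily intake estimates quoted above follow from a simple relation: intake per kg body weight equals product concentration times amount eaten, divided by body weight. A minimal sketch, using the 128 µg kg⁻¹ children's-pasta concentration from the text but an assumed daily consumption (100 g) and body weight (20 kg) that are purely illustrative:

```python
# Back-of-the-envelope daily iAs intake: intake = C * amount eaten / bw.
# The 128 ug/kg concentration is from the text; 100 g/day consumption and
# 20 kg body weight are ASSUMED, hypothetical values.

def daily_intake_ugkgbw(c_ugkg, grams_per_day, bodyweight_kg):
    """iAs intake in ug per kg body weight per day."""
    return c_ugkg * (grams_per_day / 1000.0) / bodyweight_kg

child = daily_intake_ugkgbw(128.0, 100.0, 20.0)
print(round(child, 2))  # 0.64 ug/kg bw/day, inside the reported 0.61-0.78 range
```

This makes it easy to see why the estimated intakes sit close to the EFSA BMDL01 range: even modest portions of the most contaminated products push a small child's per-kilogram intake toward the benchmark levels.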
Lactose intolerance is the inability to digest and metabolize lactose due to the lack of lactase, the enzyme required to break down lactose in the digestive system. 71 An alternative to breast milk and animal milk is soybean milk or rice milk. An increase in the intake of rice products can imply an increase in the intake of As. Results of two studies on rice milk showed that all samples from the EU and the US exceeded the maximum limit established for drinking water (10 µg L -1 ). 70,72
Ways to reduce arsenic in rice and rice-based products
The feasible options to reduce the intake of iAs through rice range from using rice varieties with restricted As uptake and upward transport of As, and cultivating rice in geographical areas with low contents of soil As (easily achievable through knowledge of the historical contamination of rice-growing areas), to the most drastic option of not using rice as a source of carbohydrates and proteins but using other grains, such as oat, corn, or wheat. But this last option would create a huge crisis and is not feasible at all.

Notes: Data from Australia, 46 Bangladesh, 44,46,[48][49][50][51][52] Canada, 50,53 People's Republic of China, 44,46,47,52,54 Egypt, 52 Europe, 50 France, 52 India, 50,52,56,61,62 Italy, 50,52 Japan, 52,55 Philippines, 46 Spain, 50,52,56 Taiwan, 57,58 Thailand, 46,50,52,56 USA, 50,52,56,59 Vietnam. 60
Abbreviations: tAs, total arsenic; iAs, inorganic arsenic.
Until an appropriate selection of cultivars is routinely conducted and rice plants with reduced As uptake are fully available to farmers, intermediate options have to be used. These include changing farming practices, pretreating rice before it enters the normal processing chain of the food industry, and optimizing the working conditions of key unit operations to reduce the content of As in rice where possible.
Agronomic practices
It has been shown that different growing regions of a country can produce rice with different As contents. For example, in Spain, the As contents in rice from Andalusia (Cadiz and Seville) are generally much lower than those of other areas (Valencia, Tarragona, and Calasparra). 56 Therefore, the first step in reducing the content of As in rice is to determine the levels of contamination and identify areas that have low levels of As. These comparisons will allow the identification of cultivars used in these geographical areas with reduced As content and also highlight farming practices that could be specific to these areas.
Studies by Norton et al 73 have clearly demonstrated that As uptake, transport, and accumulation in the edible rice grain are affected by cultivar. Therefore, an appropriate selection of cultivars is the first issue to be studied.
In general, French rice has a very high As content. This may be due to the specific cultivars used in the Camargue or may be due to As availability in the land and/or management practices. One option to reduce this problem through farming practices could be the addition of organic matter or the usage of organic matter-rich soils, leading to intensified As methylation, which may be a desirable process because, as previously mentioned, oAs forms are less toxic than iAs forms.
Finally, we should mention that the aerobic culturing of rice is starting to be considered around the world in order to increase efficiency in the management of nitrogen fertilizers and, mainly, because water is becoming less and less available worldwide. Compared to continuous flooding of rice fields, aerobic management significantly reduces As availability for plant uptake by conserving As-adsorbent materials such as iron (hydr)oxides, thereby reducing the final As content in the edible grain. 74 However, aerobic conditions may affect the availability of other toxic elements, such as cadmium (Cd). Moreno-Jiménez et al 75 conducted an experiment over 7 consecutive years, evaluating the impact of water management on accumulation of As and Cd in rice. Sprinkler irrigation was compared to traditional flooding irrigation. Successive sprinkler irrigation over 7 years decreased tAs to one-sixth of its initial concentration in the flooded system, while one cycle of sprinkler irrigation also reduced tAs by one-third. iAs concentration increased up to twofold under flooded conditions compared to sprinkler-irrigated fields, while oAs was also lower in sprinkler treatments, but to a lesser extent. This suggests that methylation is favored under waterlogging. However, sprinkler irrigation increased Cd transfer to the grain by a factor of 10. Sprinkler systems in paddy fields are able to mitigate excessive As accumulation, but this experiment showed that an increased Cd load in rice grain may result. In summary, it is desirable to reduce the iAs content of rice by altering agronomic practices, but it is essential that these changes do not detrimentally impact the nutritional value of rice, including essential minerals (Fe and Zn) and vitamins, or increase the content of other toxic elements, such as Cd. 76
Rice processing
Rice processing is a combination of several operations to convert paddy into well-milled, silky white rice, which has superior cooking quality attributes. 77,78 Rice can be classified, according to its different processing steps, into paddy, wholegrain brown rice, and milled white rice. During rice processing, different by-products or coproducts are produced, including hull and bran. Moreover, white rice (the final product) can also reach its final commercial stage in different forms: large broken rice, small broken rice, and rice flour. 79 Signes et al 78 compared the two rice dehusking processes (removal of the external hull or husk of the rice grain) currently in use in India: wet (soaking and boiling of rice followed by mechanical hulling, leading to parboiled rice) and dry (mechanical hulling, leading to atab rice). The dry method was recommended if As-free water was not available; however, soaking and light boiling resulted in lower As concentrations if nonpolluted water was used. Therefore, the use of high volumes of water for washing and boiling the rice could be a good way of easily and significantly reducing the As content of rice before starting the production of rice flour for rice-based infant foods. Later, brown or white rice can be cooked before entering the final manufacture of rice-based products. 78
Recommendations for limiting the final intake of As and for improving consumer confidence in rice and rice-based products
Once consumers have purchased rice or rice-based products, there are still options available at home to reduce the possible presence of As in this food. Mainly, these options concern the safety of the water used for cooking and the cooking process itself (temperature and time).
The three most common methods of cooking rice in Asia are known as 1) traditional, 2) intermediate, and 3) contemporary. In the traditional method, the rice is washed until the washings become clear, the washings are discarded, and the rice is boiled in an excess of water until cooked; finally, the remaining water is discarded. The intermediate method is similar to the traditional one, but the rice is boiled using less water, and cooking is finished when no water is left. Finally, in the contemporary method, the rice is not washed and is boiled with a low water volume until there is no water left. 77 Signes et al 80 simulated the three cooking methods in their facilities, and the use of the traditional method is recommended (using large volumes of water for the cooking and washing steps); this method significantly reduced the content of tAs to 258-387 µg kg -1 . Similar conclusions were previously reached by Sengupta et al, 77 who cooked rice using water with low As content (<3 µg L -1 ) using traditional and modern methods and found that the traditional method (wash until clear; cook with a rice-to-water ratio of 1:6; and discard the excess water) removed up to 57% of the As from the initial rice. Approximately half of the As was lost in the wash water and the other half in the discarded water.
Simultaneously, Signes et al, 80 cooking rice with different levels of As species in the cooking water (>50 µg L -1 ), concluded that the As concentration in cooked rice was always higher than that of the raw rice and varied in the range of 227-1,642 µg kg -1 . Mondal and Polya 61 reported values of As in cooked rice (170 µg kg -1 ) from two surveys of households in Nadia district, West Bengal, India; Smith et al 81 reported As values of 350 µg kg -1 in a survey of households in Bangladesh; Bae et al 82 reported values of 270 µg kg -1 for a site survey in Bangladesh; Rahman et al 83 reported As levels of 320 µg kg -1 during a field study in Bangladesh; and Roychowdhury et al 26 reported values of 370 µg kg -1 from a household survey in West Bengal, India. Finally, Raab et al 84 systematically investigated tAs and iAs in different types of basmati, long grain, polished, and whole rice samples that had been subjected to various types of cooking processes using uncontaminated water. The effects of washing-rinsing, a low volume of water (rice-to-water ratio 1:2.5), and a high volume of water (rice-to-water ratio 1:6) during cooking and steaming were investigated. Rinsing and washing were effective in eliminating about 10% of tAs and iAs in basmati rice, but were less effective for other types of rice. While steaming reduced tAs and iAs contents in rice, it did not consistently affect all types of rice investigated. The use of a large water volume for cooking effectively reduced both
tAs and iAs, by 35% and 45%, respectively, in long-grain basmati rice as compared to the raw rice. This study indicated that washing and rinsing with high volumes of clean water are effective in reducing the As (especially iAs) content of cooked rice.
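The washing-plus-discard removals described above can be framed as a simple mass balance: Sengupta et al report up to about 57% total As removal, roughly half lost in the wash water and half in the discarded cooking water. The sketch below applies that split to a hypothetical raw-rice concentration of 500 µg kg⁻¹, an assumed figure chosen only to illustrate the arithmetic.

```python
# Mass-balance sketch of As removal during traditional cooking with clean
# water. Each loss term is expressed as a fraction of the INITIAL As content
# (~28.5% + ~28.5% to mirror the reported "half in wash, half in discard").
# The 500 ug/kg raw-rice concentration is ASSUMED for illustration.

def residual_fraction(wash_removed, cook_removed):
    """Fraction of the initial As remaining after the two loss steps."""
    return 1.0 - wash_removed - cook_removed

c_raw = 500.0                                    # ug/kg, hypothetical
c_cooked = c_raw * residual_fraction(0.285, 0.285)
print(round(c_cooked))                           # 215 ug/kg retained
```

The model only works if the cooking water itself is low in As; as the surveys cited above show, cooking in contaminated water reverses the balance and the cooked rice ends up with more As than the raw grain.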
Furthermore, Signes et al 80 demonstrated that As speciation was not significantly affected by the cooking process, probably because the temperature reached during cooking of rice, 100°C, is lower than that required to promote exchange between species. Van Elteren and Šlejkovec 85 studied the effect of high temperature on As speciation in aqueous As standard solutions and concluded that temperatures above 150°C are required to establish significant changes. A later study by Devesa et al 86 agreed with this statement and concluded that such high temperatures can be achieved in some cooking treatments in which the food surface is in direct contact with the source of heat (grilling, frying, or baking); temperatures as high as 250°C can be reached. Similar results were found by Hanaoka et al 87 and Torres-Escribano et al. 88 As a final conclusion, proper labeling is essential for this type of product. It should state the percentage of rice used, in addition to the variety and its origin. This information will be useful and will improve consumer confidence in these products.
In summary, the rice industry has to know the possible options to limit this problem. Here are some objectives to be reached:
• To identify rice varieties accumulating low levels of iAs.
• To use rice cultivars that show restricted As uptake, use aerobic cultivation practices, and avoid upward As transport to the edible grain.
• To facilitate increased production of rice in regions with low contents of As.
• To optimize rice cooking, by using more water to facilitate the migration of As to the rinsing/washing/cooking water.
• To limit the use of whole grain rice (containing bran) for the segment of the population with high intake of rice, for instance, people with celiac disease.
Figure 2 Processing of rice. Note: Rice processing from paddy (A) to wholegrain or brown (B) and finally to white polished (C) rice. Abbreviations: a, hull; b, bran; c, polish; d, aleurone layer; e, starchy endosperm; f, embryo.
Figure 4 Relationship between iAs and rice percentage in samples of rice-based foods for children and adults with celiac disease. Abbreviation: iAs, inorganic arsenic.
"year": 2015,
"sha1": "65a5bdd16c298f760f6453d78c6ec576d4b17b5b",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=23180",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "8f3c0a76918d6af8442dccd72e467498ad60b541",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
Frequency of Inferior Alveolar Nerve Damage After Open Reduction and Internal Fixation in Mandibular Fractures
Surgical reduction and fixation of the fracture can result in damage to the inferior alveolar nerve, leading to sensory disturbances in the lower lip and chin area, infection, disturbed occlusion, and impaired wound healing [8,9]. Inferior alveolar nerve injuries after open reduction and fixation in mandibular fractures are the focus of this study. Fractures positioned between the mandibular foramen and mental foramen cause neurosensory variations in the inferior alveolar nerve, which may be due to the injury itself or to the open reduction and fixation [10,11]. Inferior alveolar nerve
INTRODUCTION
One of the most frequent injuries to the maxillofacial region is mandibular fracture. Fractures occur at numerous sites. The inferior alveolar nerve is often injured as a result of mandibular fractures. Objective: To ascertain how frequently patients in the oral and maxillofacial department of the Ayub Teaching Hospital in Abbottabad experienced inferior alveolar nerve injury following open reduction and fixation of a mandibular fracture. Methods: This was a descriptive case series carried out at the Oral and Maxillofacial Department, Ayub Teaching Hospital, Abbottabad, after approval from the IRB of the institution and CPSP vide number (CPSP/REU/DSG-2018-010-2532). Using the formula to estimate a proportion with absolute precision and the following premises, the sample size was determined to be 96 using the WHO software for sample size computation in health studies: the expected percentage of inferior alveolar nerve injury following fixation in mandibular fracture is 45%, the confidence level is 95%, and the absolute precision is 10%.
METHODS
injury is a common problem after surgical reduction and fixation of mandibular fracture [4]. It may be temporary or permanent in nature, affecting normal routine [12][13][14][15]. The main causes of postoperative neurosensory changes include handling of the fracture segments, cutting of tissue, retraction of appliances, and closeness of the fracture segments to the inferior alveolar nerve [16]. The factors that contribute to nerve injury include the site of the fracture, type of fracture, distance between the fragments, number of missing teeth, and treatment used for reduction [9]. Patients with inferior alveolar nerve injury complain of sensory damage that may manifest as pain, paraesthesia, dysesthesia, hypoesthesia, hyperaesthesia, and anaesthesia. Affected drinking, eating, and talking abilities and lip biting are the major complaints of these patients [1,2]. The frequency of postoperative nerve damage is 0.6% to 92.3% [1-3,7,8], while the reported frequency of permanent inferior alveolar nerve damage is up to 45% [2]. The study's goal was to ascertain how frequently patients in the oral and maxillofacial department of the Ayub Teaching Hospital in Abbottabad experienced inferior alveolar nerve injury following open reduction and fixation of a mandibular fracture.
This was a descriptive case series carried out at the Oral and Maxillofacial Department, Ayub Teaching Hospital (ATH), Abbottabad, after approval from the IRB of the institution and CPSP vide number (CPSP/REU/DSG-2018-010-2532). Using the formula to estimate a proportion with absolute precision and the following premises, the sample size was determined to be 96 using the WHO software for sample size computation in health studies: the expected percentage of inferior alveolar nerve injury following fixation in mandibular fracture is 45%, the confidence level is 95%, and the absolute precision is 10%. Patients of both genders aged between 20-50 years who had undergone open reduction were included in the study, while patients presenting with pathological mandibular fracture and those who were not keen to participate were excluded. Informed consent was taken from the patients after they fulfilled the inclusion criteria. Data were collected from the Oral and Maxillofacial Surgery department, ATH, with the help of a structured questionnaire via interview. The surgery was performed by an oral and maxillofacial surgeon. General anaesthesia was given, and a mucoperiosteal flap was raised. The nerve was identified. Reduction and fixation of the fractured segments were done as per the requirement of the situation. After completion of surgery, the patients were followed up after one week, one month, and three months. Statistical analysis was performed using SPSS version 26.0. Quantitative variables like age were described as mean ± standard deviation. Categorical variables like gender, type of anaesthesia, fragment manipulation, presence of preoperative inferior alveolar nerve injury, degree of fracture segment displacement, and type of fixation method were described as frequencies and percentages. The outcome variable was stratified by gender, age group, fragment manipulation, type of anaesthesia, degree of fracture segment displacement, and type of fixation method.
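The sample-size figure quoted above follows from the standard formula for estimating a proportion with absolute precision, n = z²·p·(1 − p)/d², which is what the cited WHO tool implements. A quick check with the study's stated inputs (p = 0.45, 95% confidence so z ≈ 1.96, d = 0.10) reproduces n = 96:

```python
# Sample size for estimating a proportion with absolute precision:
# n = z^2 * p * (1 - p) / d^2, rounded up. Inputs are those stated in the
# text: expected proportion 0.45, 95% confidence (z ~ 1.96), precision 0.10.

import math

def sample_size_proportion(p, d, z=1.96):
    """Minimum n to estimate proportion p within +/- d at the given z."""
    return math.ceil(z * z * p * (1 - p) / (d * d))

print(sample_size_proportion(0.45, 0.10))  # 96, matching the study
```

Note that the required n peaks at p = 0.5, so an expected proportion of 45% already sits close to the most conservative (largest) sample size for this precision.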
A post-stratification chi-square test was used at a 5% level of significance. Perioperative inferior alveolar nerve injury was observed in 56 (58.33%) patients, while permanent inferior alveolar nerve injury was diagnosed in 39 (40.63%) patients (Table 2). The frequency of inferior alveolar nerve injury in study participants was 40.63%. A broad range of IAN injury rates has been reported in the literature, which could be due to the demographics of the study participants. In general, the occurrence of IAN injury was 33.7% before treatment and 53.8% after treatment, according to a study from Singapore. In this investigation, 123 mandibular sides (43 bilateral) from 80 patients were examined. The most common causes of injuries were assault (33.8%), falls (31.3%), car accidents (25.0%), and sports injuries (6.3%). All condylar fractures (13.0%) lacked NSD, and 49.6% of the fractures involved the posterior mandible, which bears the IAN. Open reduction and internal fixation (ORIF; 74.8%), closed reduction and fixation (22.0%), and no treatment (3.3%) were the treatment options [1]. In contrast, the follow-up period for our study was very short, and therefore we were unable to determine recovery of the neurosensory deficit in our study population. In another investigation, the sharp/blunt differentiation method was used to assess the inferior alveolar nerve for neurological deficit following damage. The progression of neural recovery was evaluated over the observation period. This study comprised 52 patients with mandibular fractures affecting the ramus, angle, and body. The likelihood of neural injury to the inferior alveolar nerve was 42.3%; comminuted and displaced linear fractures were associated with a higher risk of neural injury to the inferior alveolar nerve and a slower rate of recovery; and 91% of patients had their inferior alveolar nerve function return.
Injuries to the inferior alveolar nerve are more common in cases of mandibular fractures affecting the ramus, angle, and body, as well as in comminuted and displaced linear fractures [17]. In contrast, we did not determine the mode/type of trauma to the mandible or its relationship with the outcome, and our investigation did not find a statistically significant correlation between fracture displacement and IAN damage. Subjects with unilateral mandibular fracture who presented within a day after injury were monitored over the course of a year in a prospective cohort study that included sixty patients treated for mandibular fracture. 52 patients (86.7%) were found to have a post-traumatic neurosensory deficit, although this number fell to 23.3% over the follow-up period. Angle fracture cases (33.3%) had abnormal postoperative neurosensory ratings substantially more often than body fracture cases (11.1%). 90% of body fracture cases had considerable recovery compared to 67% of mandibular angle fracture cases when non-recovered and recovered neurosensory scores were compared by fracture location. Neurosensory recovery scores were statistically significantly higher in cases with less than 5 mm fracture displacement (90.6%) than in cases with more than 5 mm fracture displacement (59.9%) [17]. In contrast, the current study did not find any statistically significant association between IAN injury and fracture displacement. We did not take into account the location of the mandibular fracture and its association with the outcome in our study population. The probability of IAN injury was 35% in a Lahore-based randomized controlled trial
DISCUSSION
*Chi-square test. Similarly, the difference in post-surgical inferior alveolar nerve injury by type of anaesthesia (p=.851), fragment manipulation (p=.370), degree of fracture segment displacement (p=.793), and fixation method (p=.793) was not statistically significant. The details are shown in Table 4.
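For readers unfamiliar with the stratified comparisons above, the Pearson chi-square statistic for a 2×2 table is compared against the 5%-level critical value of 3.841 (1 degree of freedom); a statistic below that value corresponds to a non-significant p-value like those reported. A minimal sketch with hypothetical counts (the paper reports only the p-values, not the underlying tables):

```python
# Pearson chi-square (no Yates correction) for a 2x2 stratification table,
# tested at the 5% level (critical value 3.841 for 1 df). The counts below
# are HYPOTHETICAL -- chosen only to illustrate a non-significant result.

def chi_square_2x2(a, b, c, d):
    """Observed counts [[a, b], [c, d]] -> Pearson chi-square statistic."""
    n = a + b + c + d
    row = [a + b, c + d]
    col = [a + c, b + d]
    obs = [[a, b], [c, d]]
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row[i] * col[j] / n
            stat += (obs[i][j] - expected) ** 2 / expected
    return stat

stat = chi_square_2x2(20, 28, 19, 29)  # e.g. injury yes/no by gender
print(stat < 3.841)  # below the critical value: not significant at 5%
```

In practice a statistics package (such as the SPSS software used in the study) computes the same statistic and converts it to a p-value directly.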
"year": 2023,
"sha1": "a306b8dcbd490c79f09458fa3b6cfa29a95d68ff",
"oa_license": "CCBY",
"oa_url": "https://www.thejas.com.pk/index.php/pjhs/article/download/916/572",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "3db0d6b7abea682a2d696373810023bda8bd8cb4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
Solid-state polymer adsorption for surface modification: The role of molecular weight
Polymer adsorption is an attractive, yet little-investigated tool for actual film deposition or practical surface modification, as the vast literature on adsorption focuses largely on fundamental phenomena [16][17][18][19][20][21][22]. It is based on the thermodynamic drive of polymer enrichment at interfaces, thereby omitting complex chemical reaction pathways, as shown in the pioneering works of McCarthy and coworkers, who suggested exploiting adsorption as an actual means of surface modification for standard substrates like silica [23][24][25][26][27][28]. Recently, our group has established generic approaches for heterogeneous surface modification of lignocellulosic surfaces by polymer adsorption from aqueous and aprotic solvents [10,29,30]. The reason for the relatively esoteric status of utilizing adsorption as a modification medium lies perhaps in the outcome: polymer layers adsorbed from a solvent often exhibit incomplete coverage, and their attachment to the substrate can be unsatisfactory in applications where long-term stability is required [29]. To tackle these deficiencies, we propose the utilization of the lesser known solid-state polymer adsorption in the form of so-called Guiselin layers, where the coverage is usually more comprehensive and the attachment firmer [31][32][33].
The formation of Guiselin layers has been well established since their existence was first proposed in 1992 [31]. A bulk polymer thin film thicker than a few radii of gyration (Rg) is placed in contact with a substrate and annealed above the glass transition temperature (Tg) or melting point of the polymer, resulting in an irreversibly adsorbed ultrathin layer that remains at the substrate interface after the excess polymer has been rinsed out by successive leaching with a good solvent [16,17,22]. Guiselin layers have almost exclusively been investigated within the realm of fundamental polymer physics, e.g., to unveil the adsorption mechanism or the formalism of the adsorption kinetics [17-19]. So far, the effect of the adsorbed layers on a number of material properties has been investigated: the glass transition, [22,34,35] diffusion coefficients, [36] thermal expansivity, [37] viscosity, [38] and crystallization behavior [39,40]. In this study, we want to transfer the concept of Guiselin layers into the territory of surface modification by exploring film stability alongside the standard properties, such as film thickness, coverage, and contact angle. Indeed, a crucial yet not fully addressed issue of Guiselin layers is their stability, as ultrathin polymer films are susceptible to rupture during the various treatments, e.g., elevated temperatures and solvent exposure, that are inevitably present during layer formation. This is particularly important given the issues with the reproducibility of Guiselin layers recently pointed out by Thees et al. [32] In addition, Gin et al. [17] observed a density difference within the adsorbed polymer layer on a silicon substrate, revealing an inner region of higher density with a more flattened conformation, named a 'flattened layer', [19] and an outer bulklike region of lower density, named a 'loosely adsorbed layer'.
These flattened chains inhibit the penetration of free molecules, causing autohydrophobic dewetting of thin polymer films at the free polymer/adsorbed polymer interface, as shown in the comprehensive studies by Jiang et al. [41,42] Beena Unni et al., [43] in turn, showed that the adsorbed film stability significantly depends on the solvent used for washing out the excess polymer. As a model system in this study, we use Guiselin layers of polystyrene (PS) on standard Si/SiOx substrates. The idea is to show how Guiselin layers can be utilized as a means of stable surface hydrophobization of silicon wafers. Concerning the polymer properties, we focused on molecular weight (Mw) and its effect on the formation and stability of PS Guiselin layers on silica substrates. The study differs from the existing stability studies on Guiselin layers [41-43] in its pragmatic approach, as we pay particular attention to the relationship between film stability and the newly introduced hydrophobic properties.
Preparation of substrate surface
The silicon wafers were cut to approximately 10 mm × 10 mm prior to use. The wafers were cleansed by successive ultrasonic cleaning in Milli-Q water, acetone, isopropanol, and Milli-Q water. Subsequently, the wafers were dried under a mild N2 purge and further cleansed in a UV/ozone chamber (Bioforce Nanosciences Inc., California, USA) for 15 min. UV/ozone cleaning has been shown to be an effective method to rapidly remove a variety of contaminants from surfaces and to effectively decompose hydrocarbons. [44]
Deposition of polystyrene ultrathin films
The cleaned wafers were again purged with an N2 stream for dust removal and spin-coated with fresh toluene for final cleaning prior to film deposition. The atactic PS solutions were prepared in toluene (20 g/L). Thin films (thickness > a few Rg, Table 2) were deposited by spin-coating (WS-650SX-6NPP/LITE, Laurell Technologies) the solutions at 4000 rpm for 90 s. The spin-coated films were annealed at 150 °C for 24 h under vacuum to promote adsorption and ensure equilibrium. [22,38] Subsequently, the films were transferred to a desiccator under air for cooling to ambient temperature. The toluene leaching of the spin-coated polymer films was then carried out systematically by immersing the films in 20 mL of fresh toluene every 10 min. The leaching procedure was carried out for one hour in total, i.e., for 6 consecutive 10 min spans. Afterwards, the films were dried in a vacuum oven at room temperature to remove solvent residue before the analyses.
X-ray photoelectron spectroscopy (XPS)
XPS was performed on an AXIS Ultra instrument (Kratos Analytical, UK). The samples were mounted on a linear sample holder with UHV-compatible carbon tape and pre-evacuated overnight. A fresh piece of pure cellulosic filter paper (Whatman 1) was mounted and analyzed with each sample batch as an in situ reference. [45] Measurements were performed using monochromated Al Kα irradiation at 100 W and under neutralization. Wide energy-range scans using 80 eV CAE and a 1 eV step, as well as high-resolution scans of C 1s using 20 eV CAE and a 0.1 eV step, were recorded at 3-4 locations for each sample, with a nominal analysis area of 400 × 800 µm². Data analysis was performed using CasaXPS software. Charge-corrected wide scans were used for elemental analysis. Conditions in UHV remained satisfactory throughout the analysis. The low and stable contamination levels observed in the in situ reference sample, which was measured before and after each experiment, justified the analytical use of the C–C component in the high-resolution C 1s spectra. [46]
Ellipsometry
Ellipsometry was performed on a J. A. Woollam M2000UI (Lincoln, United States) spectroscopic ellipsometer with an auto-retarder and rotating-analyzer setup at incident angles of 60° and 70°. The measurements were performed in the spectral range from 245 to 1690 nm. The data evaluation was carried out using CompleteEASE (ver. 6.51) software. For the fitting of the real part of the refractive index n(λ) in all determined layers, a Cauchy model was assumed: [47]

n(λ) = A + B/λ² + C/λ⁴

where λ is the wavelength of radiation in micrometers, and A, B, and C are the Cauchy coefficients, all fitted as positive values. The imaginary part of the complex refractive index was assumed negligible. For the complex refractive index of the silicon substrate and the silicon oxide layer, the values listed in the software database were used. To reduce the number of free parameters, the thickness of the oxide layer was determined before the deposition of the organic layer. The thickness evaluations of the deposited film after spin-coating and of the adsorbed layer after solvent leaching were performed with a fixed oxide-layer value for each sample. All measurements were carried out in mapping mode, measuring 9 discrete points per sample. At least triplicates were done for each batch of samples. The reported average values and relative standard deviations are computed from the abovementioned 9 points.
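The Cauchy dispersion relation used in the fit can be sketched numerically. The coefficients below are illustrative, polystyrene-like values assumed for demonstration, not the fitted values from this study:

```python
# Cauchy dispersion model n(lambda) = A + B/lambda^2 + C/lambda^4,
# with the wavelength given in micrometers, as in the ellipsometry fit.
def cauchy_n(wavelength_um, A, B, C):
    """Real part of the refractive index at a given wavelength (um)."""
    return A + B / wavelength_um**2 + C / wavelength_um**4

# Illustrative, polystyrene-like coefficients (assumed, not fitted here)
A, B, C = 1.58, 0.0034, 0.0002
for lam in (0.4, 0.6, 1.0):  # wavelengths in micrometers
    print(f"n({lam} um) = {cauchy_n(lam, A, B, C):.4f}")
```

With all three coefficients positive, the model reproduces normal dispersion: the index decreases monotonically toward longer wavelengths.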
Atomic force microscopy (AFM)
The surface topography of the spin-coated and residual films was recorded with a Multimode 8 AFM from Bruker AXS Inc. (Madison, WI, USA) in air. Images were taken with a J scanner in tapping mode using NSC15/AIBS silicon cantilevers from MikroMasch (radius of 8 nm, resonance frequency 325 kHz, Tallinn, Estonia). A minimum of three images were taken per sample, and scans were performed over several portions of the films. Other than a simple first-order flattening, no image processing was carried out. For image analysis, the extracted images were subjected to ImageJ thresholding by Otsu's method, [48] producing binary images in which the adsorbed layers (crests) and void spaces (troughs) correspond to white and black areas, respectively.
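The coverage analysis described above (Otsu thresholding of an AFM height map into crests and troughs) can be sketched as follows. The height map here is synthetic, generated only to exercise the method; it is not measured data from this study:

```python
import numpy as np

def otsu_threshold(image, nbins=256):
    """Otsu's method: the threshold maximizing the between-class variance."""
    counts, edges = np.histogram(image, bins=nbins)
    p = counts / counts.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)            # cumulative weight of the lower class
    m = np.cumsum(p * centers)   # cumulative mean
    mT = m[-1]                   # total mean
    # between-class variance for each candidate split point
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mT * w0 - m) ** 2 / (w0 * (1 - w0))
    sigma_b[~np.isfinite(sigma_b)] = 0
    return centers[np.argmax(sigma_b)]

def surface_coverage(height_map):
    """Fraction of pixels above the Otsu threshold (adsorbed crests)."""
    t = otsu_threshold(height_map)
    return float((height_map > t).mean())

# Synthetic bimodal "AFM" height map: ~80% crests near 5 nm, ~20% troughs near 0 nm
rng = np.random.default_rng(0)
heights = np.where(rng.random((256, 256)) < 0.8,
                   rng.normal(5.0, 0.3, (256, 256)),
                   rng.normal(0.0, 0.3, (256, 256)))
print(f"coverage = {surface_coverage(heights):.2f}")
```

On such a well-separated bimodal map, the recovered coverage matches the generating fraction; on real, partially dewetted layers the troughs may retain polymer, which is exactly the ambiguity discussed later in the text.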
Nanoscale infrared imaging and spectroscopy
The experiments were performed using a scattering-type scanning near-field optical microscope (s-SNOM) from Neaspec GmbH (Germany). In the setup, the atomic force microscope (AFM) works in tapping mode, providing topography images of the sample surfaces. The AFM tips (Arrow NC-Pt from Nanoworld, Germany) were made of a silicon core coated with a Pt:Ir alloy (radius ~20 nm, resonance frequency 285 kHz). The tapping amplitude was maintained at 30 nm when the tip was in contact with the sample surface. The metal alloy coating of the AFM tip makes it suitable for nano-FTIR spectroscopy and infrared pseudo-heterodyne imaging. A description of these methods can be found in references [49-51]. Importantly, the pseudo-heterodyne detection technique allows a background-free accumulation of near-field optical images and provides images of both the 'amplitude', relating to the reflectivity, and the 'phase', relating to the infrared absorption. These optical images are acquired simultaneously with the AFM topography image.
With s-SNOM, the complex scattering coefficient can be obtained by a Fourier transform of the interferogram of the signal for each harmonic n of the vibration of the AFM tip; the second harmonic was used in this study. To obtain s-SNOM images, a scanning speed of 12.3 ms per pixel was employed, and the AFM tip was illuminated by a quantum cascade laser (QCL) at 1500 cm⁻¹ set to 0.3 mW output power.
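The harmonic demodulation described above can be illustrated with a minimal numerical sketch. The detector signal below is synthetic and its harmonic amplitudes arbitrary; only the tip frequency (285 kHz) is taken from the text:

```python
import numpy as np

fs = 1.0e7                 # sampling rate (Hz), assumed for illustration
f_tip = 285e3              # tip resonance frequency from the text (Hz)
t = np.arange(20000) / fs  # 2 ms of data, an integer number of tip periods

# Synthetic detector signal containing tip harmonics; the 2nd harmonic
# carries (arbitrary) amplitude 0.30 and phase 0.50 rad.
signal = (1.00 * np.cos(2*np.pi*f_tip*t)
          + 0.30 * np.cos(2*np.pi*2*f_tip*t + 0.50)
          + 0.05 * np.cos(2*np.pi*3*f_tip*t))

def demodulate(sig, t, f0, n):
    """Complex n-th harmonic coefficient via a lock-in-style projection."""
    ref = np.exp(-1j * 2*np.pi * n * f0 * t)
    return 2 * np.mean(sig * ref)   # factor 2: one-sided amplitude

s2 = demodulate(signal, t, f_tip, 2)
print(f"2nd harmonic: amplitude {abs(s2):.3f}, phase {np.angle(s2):.3f} rad")
```

The recovered complex coefficient at n = 2 corresponds to the 'amplitude' and 'phase' channels of the optical images; higher harmonics suppress the far-field background, which is why the second (rather than the first) harmonic is imaged.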
The nano-FTIR spectra were acquired with a tuneable femtosecond broadband laser (repetition frequency 80 MHz, output power 0.14 mW, spectral range 950-1950 cm⁻¹), generating the broadband IR light by employing DFG crystals. The nano-FTIR spectra have a spectral resolution of 12 cm⁻¹; the number of scans was 10, with 9.8 ms integration time per scan. To compensate for the wavenumber-dependent laser energy, water vapor, carbon dioxide, absorption in optical components of the system, etc., the nano-FTIR spectra were normalized to a background spectrum acquired on a silicon wafer using a reference calibration grating TGQ1 sample. All AFM and s-SNOM images (including all modes) were levelled and plane-fitted using the Gwyddion software (v. 2.55 for Windows, Czech Metrology Institute, Czech Republic). [52]

Table 2. Surface roughness and surface coverage after adsorption, and thickness of the films after spin-coating and after adsorption, by ellipsometry.
Contact angle goniometry
Contact angles of the modified surfaces with water (static, advancing, and receding) were measured using a Theta Flex optical tensiometer (Biolin Scientific, Sweden). The static water contact angle was recorded 10 s after placing a sessile water drop on the sample surface. Quasi-static contact angles, i.e., advancing and receding contact angles, were measured using the needle-in-the-sessile-drop method. [53] Contact angles were measured at two points per substrate. At least triplicates were done for each batch of samples. The reported average values and relative standard deviations were determined based on the abovementioned six points.

Fig. 1a represents the XPS wide-range spectra, with a conspicuous carbon emission indicating the irreversibly adsorbed PS layer on the Si/SiOx substrate after annealing at 150 °C and subsequent solvent leaching. The high-resolution XPS data in Fig. 1b depict the adsorbed PS on the silica surface. It is noteworthy that the band integration for PS560k was clearly higher than for the other samples. In our analysis, all the spectra were referenced to the most intense PS peak at 284.8 eV, where the prominent contribution of saturated C 1s emission is typically identifiable in hydrocarbons. [16,54,55] The shake-up peak, corresponding to the π-π* transition of the aromatic ring in PS with a binding energy between 291.0 and 293.0 eV, is also clearly discernible, especially for the PS560k layer. The quantitative analysis performed on the C 1s band suggests that the extent of adsorption onto the silicon surface increases with PS molecular weight. However, the PS192k intensity is slightly lower than that of PS35k, which is likely associated with the specific surface coverage in the XPS measuring area.
Nevertheless, the silicon emission (Si 2p at 100 eV) is distinctive for all samples after adsorption in the wide-scan spectra, which indicates that the adsorbed Guiselin layers are no thicker than a few nanometers, because the escape depth of photoelectrons in XPS is at most 3-10 nm. The minute thickness was demonstrated more precisely by ellipsometry (see Table 2). It is noteworthy that for the spin-coated and annealed PS on silica, that is, before the subsequent rinsing that exposes the Guiselin layers, the silicon emission is not observed in the XPS spectra (Figure S2). The reason for the lack of the silicon contribution is that the spin-coated PS layers are thicker than the probing depth of XPS, i.e., >10 nm (see Table 2).
Surface chemistry and morphology
It is also notable that the background of the Si 2p and Si 2s emission in the region between 100 and 200 eV changes with the Mw of the PS after adsorption (Fig. 1a inset). According to the formalism relating peak shape to surface morphology described by Tougaard et al., [56] the shape of the band background reflects the extent of PS coverage on the silica surface. In Fig. 1a, the adsorbed layers of PS30k and PS35k show similar results with a raised silicon band background, suggesting partial surface coverage by the adsorbed PS. Meanwhile, the silicon contribution is also visible for PS560k, but the signal is dampened by the layer of PS covering the silicon surface. Moreover, the loss of the O KLL Auger signal at around 1000 eV suggests a fully covered substrate, as the oxygen Auger electron escape depth is between 1 and 2 nm. [57] The emitted electrons are prevented from escaping by a continuous layer of PS560k, in contrast to the PS30k and PS35k samples, where the Auger signal does appear. In the same vein, the presence of the Auger signal for the adsorbed PS192k is indicative of partial coverage, as the Auger electrons can still escape.
AFM topographies of the samples with adsorbed PS of varying Mw, after annealing and subsequent solvent rinsing, consistently support the XPS observations of either partial or full coverage. Morphologically, a typical spinodal dewetting scenario, [58-60] i.e., the breakup of a thin film through the growth of uniformly distributed surface undulations, was observed when Mw values below 192 kDa were used (Fig. 2(a-c)). Generally, the film stability increases as the polymer Mw increases; that is, crests and troughs become less dominant. In contrast to the low-Mw PS, the adsorbed layer of PS560k (Fig. 2(d)) is a smooth film with a topographic roughness of around 3 Å (Table 2). Our findings for low-Mw PS are at odds with several other reports on homogeneous layers after solid-state adsorption, where full coverage with a certain thickness, as measured by ellipsometry, is at least implicitly described [16,17,61,62].
As reported previously, the formation of Guiselin layers is governed by a monomer pinning mechanism (molecular motion), the interfacial potential, the available adsorption sites, and the annealing time [32,33,61]. As the extent of adsorption increases, the space available for monomer pinning is reduced. The chains adsorbed at low surface coverage create a potential opposing the growth of the interfacial layer: new chains need to stretch before diffusing through the layer formed by the molecules already at the surface. The correlated reduction in the number of allowed configurations yields a severe entropy loss [63]. From the point of view of free energy, the formation of the adsorbed layers is determined by the competition between a gain in adsorption energy (monomer pinning) and the loss of conformational entropy of the chain [61,63]. Collectively, a homogeneous and flat polymer thin layer with a designated thickness can be achieved within the Guiselin layer construction, as confirmed by PS560k (Fig. 2(d)). However, the conformational entropy of a low-molecular-weight polymer in the melt is greater than that of a high-molecular-weight polymer, resulting in the higher instability of the adsorbed films of PS30k, PS35k, and PS192k (Fig. 2(a-c)). The higher instability is also evident from the ellipsometry data, which show apparently thinner layers for the lower-Mw PS grades (see Table 2), in line with the AFM images of Fig. 2. A bicontinuous, partially dewetted surface of adsorbed layers on Si/SiOx substrates was also observed by Napolitano et al. [18,64] In their studies, a short annealing time (no exact annealing time was mentioned) was associated with the ruptured surface. However, the annealing time of 24 h in our study satisfied the equilibrium criterion postulated for polymer chain conformations on a solid substrate using polymers of diverse Mw (123-2000 kDa) [17,22]. Periodically distributed crests and troughs were found in the adsorbed layer of monodisperse PS30k (Fig. 2(a)) after annealing and toluene leaching. This is supported by Jiang et al., [41] who observed spinodal dewetting of the adsorbed layer when the polymer Mw is below a critical Mw of 123 kDa. Jiang et al. postulated that loosely adsorbed short chains (Mw < 50 kDa) can easily be removed during solvent leaching owing to their low desorption energy, arising from the small number of segment-surface contacts. It is also apparent that the individual crest areas were bigger for the polydisperse PS35k (Fig. 2(b)). This could be because the adsorption of long molecules (larger Mw) acting as 'connectors' suppressed dewetting in the case of PS35k, as reported by Raphaël and de Gennes, [65] Reiter et al., [66,67] as well as Koga and coworkers [41,42]. However, both modes of the bimodal Mw distribution of PS35k lie below 123 kDa, which could not help form full surface coverage after solvent rinsing.
Hence, it is surprising to observe a dimple structure when polydisperse PS with an Mw of 192 kDa is used, although spinodal dewetting behavior was also observed by Beena Unni et al. [43] for a monodisperse PS with an Mw of 136 kDa (higher than the 123 kDa in Jiang's study). Nevertheless, it is apparent that the continuous phase of the adsorbed layer increased, as revealed by an increased total surface coverage of around 80%. We may conjecture that Mw and polydispersity both play a significant role in surface dewetting after polymer adsorption. Given the trough depth of around 5 nm in Fig. 2(c), we scrutinize the possible existence of a highly dense adsorption layer in the trough areas in the following section.
Nanoscale infrared imaging and spectroscopy
Nano-FTIR, a recently developed technique with nanoscale-level spatial resolution that combines IR spectroscopy and s-SNOM, was employed to study the surface coverage at the 20 nm length scale [68]. To obtain a reference nano-FTIR spectrum of PS, the adsorbed layer of PS560k was studied, since it formed a continuous and relatively thick layer on the substrate in comparison to the other samples. The obtained nano-FTIR spectra (Fig. 3) displayed two characteristic IR absorbance bands of PS at around 1450 and 1500 cm⁻¹, demonstrating good agreement between the band positions in nano-FTIR and conventional IR spectroscopies [69-71]. Despite the extremely low thickness of the PS film (ca. 5 nm), clear bands were observed, highlighting the capability of the instrument to provide spectra of ultrathin films with a spatial resolution on the nanoscale. Nevertheless, owing to the minute thickness of the PS film, the signal-to-noise ratio in the acquired nano-FTIR spectra is too low to unambiguously observe other PS bands in this spectral region.
Since s-SNOM imaging offers an excellent optical spatial resolution of around 20 nm, it was possible to study the distribution of PS192k over the Si wafer. In the nano-FTIR spectrum of the PS560k sample above (Fig. 3), the band at around 1450 cm⁻¹ is more intense than the band at 1500 cm⁻¹, whereas the laser energy at 1450 cm⁻¹ is significantly lower. Therefore, s-SNOM imaging was performed at 1500 cm⁻¹ to investigate the possible optical (and thus chemical) contrast representing the surface coverage.
The acquired s-SNOM optical images reveal a clear heterogeneity between the Si wafer and polystyrene in the amplitude image (Fig. 4(c)) as well as in the phase image (Fig. 4(d)). The Si wafer is a highly reflective surface, and hence the polystyrene domains in PS192k (represented by higher areas in the AFM topography image) exhibit a lower reflectivity in the amplitude image, as revealed by lower amplitude values. In contrast, these polystyrene domains have a higher phase, which is indicative of enhanced IR absorption at 1500 cm⁻¹. The acquired images in Fig. 4 (AFM and s-SNOM) correspond very well to each other, thus confirming that polystyrene in PS192k is concentrated in domains of ca. 3-5 nm height (based on the AFM data) separated by areas depleted of polystyrene. However, because the s-SNOM images show only relative changes, there is a possibility that trace amounts of polystyrene remain at the bottom of the troughs. In principle, nano-FTIR spectra could reveal the presence of polystyrene in the troughs, but for the PS192k film, which is thinner than the PS560k film in Fig. 3, no signal above the noise level was observed either at the domains or in the troughs. In general, the chemical contrast seen in the obtained s-SNOM images agrees with the above XPS data and indicates that low-Mw polystyrene, contrary to PS560k, has a tendency to disrupt the continuous layer following the spin-coating on Si wafers and the subsequent heat and solvent treatments, as discussed previously.
Layer thickness
The thickness of the layers before and after adsorption was measured with spectroscopic ellipsometry using a simple multilayer model, air/PS/SiO2/Si (substrate). More detailed parameters and fittings regarding the model are given in Figure S3. The accuracy of the thickness evaluation by ellipsometry was verified by an AFM scratching approach in the case of the smooth, continuous films from PS560k. According to AFM, the thickness of the adsorbed PS560k was ca. 5 nm (Figure S4), which is comparable to the value from ellipsometry, i.e., 5.4 nm. However, the high roughness (Table 2) of the ruptured surfaces of the lower-Mw layers may reduce the accuracy of the fitted thickness data [18]. For the ruptured surfaces, the fitted thickness is closer to the average than to the maximum thickness, owing to the troughs or voids on the surface. Moreover, the sensitivity of ellipsometry is generally considered to decrease for films below 10 nm. Nevertheless, ellipsometry was able to distinguish thickness values ranging from 1 nm to 5 nm (Table 2).
Film stability
Film stability becomes increasingly important when considering possible applications of the adsorbed layer. Heterogeneous dewetting may cause film instability at all stages of Guiselin layer formation, i.e., spin-coating, annealing, [34] and solvent leaching [43,72]. It is triggered by omnipresent heterogeneities, e.g., dust particles or defects at the substrate surface [73]. Thus, in this study, the silicon wafers were carefully cleansed to remove hydrocarbon contaminants and dust particles from the surface, minimizing the possibility of heterogeneous dewetting. Without major heterogeneities, the films may dewet in a spinodal scenario, [58-60] i.e., by the amplification of capillary wave fluctuations of a specific wavelength, often enhanced by the native oxide (SiOx) layer at the substrate-polymer interface. As mentioned earlier, the annealing time at a temperature above Tg is agreed to be one of the crucial factors determining film stability on the substrate. Apart from that, solvent washing, including the washing time and solvent choice, is debated as another factor affecting the morphology of the adsorbed layer. For example, Beena Unni et al. [43] found that the adsorbed layers underwent spinodal dewetting at rinsing time scales that depended on solvent polarity. The use of different solvents results in varied transition layer thicknesses below which the flat film turns to spinodal dewetting. When toluene was used as a good solvent for PS, the experimental transition thickness was determined to be ca. 2.8 nm. Conversely, Davis et al. [72] have recently demonstrated that the thickness of the adsorbed PS layer is insensitive to the solvent type used.
All in all, the film stability originates from, and can be mathematically described by, the Lifshitz-van der Waals interaction potential. The interfacial potential is reconstructed from references [59,73] in Fig. 5 for the SiOx/PS/air system with a 1.7 nm SiOx layer, i.e., similar to the thickness of the native oxide layer of the silicon wafers used in this study. For the SiOx/PS/air system, the interfacial potential can be written as: [59,74]

U(h) = −A_SiOx/PS/Air / (12πh²) − (A_Si/PS/Air − A_SiOx/PS/Air) / (12π(h + d_SiOx)²)    (2)

where h is the film thickness, d_SiOx is the thickness of the silicon oxide layer, and A_Si/PS/Air and A_SiOx/PS/Air stand for the effective Hamaker constants of the Si/PS/Air and the SiOx/PS/Air systems, respectively. According to the literature, A_SiOx/PS/Air is not well defined, ranging from −0.22 to 1.6 × 10⁻²⁰ J [62]. In this case, the curvature (second derivative) of the interfacial potential is normally considered: films with U''(h) > 0 are metastable and do not dewet, whereas films with U''(h) < 0 are unstable [75]. In this study, the SiOx layer thickness of ca. 1.7 nm does not cause dewetting when the PS film thickness is over 3 nm (film thickness after spin-coating in Table 2), because the sign of U''(h) is then positive, corresponding to a metastable film (Fig. 5) [62]. As another non-negligible consideration, the residual stress generated by spin-coating is also a possible driving force for dewetting. Reiter et al. [76] found that residual stresses are estimated to be of the same order of magnitude as the capillary forces acting over the course of annealing above Tg. However, the heterogeneities, the presence of the native silica layer, and the residual stress from spin-coating would cause dewetting throughout the entire spin-coated film, instead of merely in the underlying layer in the vicinity of the silica surface (i.e., in the Guiselin layer). Here, the spin-coated films after annealing were intact according to AFM topography (Figure S5).

Fig. 4. A representative AFM topography map of PS192k: a) 5 × 5 µm, b) 1 × 1 µm; c) s-SNOM 2nd harmonic optical amplitude image (a.u.); d) s-SNOM 2nd harmonic optical phase image (rad). The s-SNOM images were acquired using the QCL laser tuned to emission at 1500 cm⁻¹.
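The stability criterion above can be evaluated numerically. This sketch assumes literature-typical Hamaker constants (A_SiOx/PS/Air taken within the range quoted above, and a negative A_Si/PS/Air as commonly reported for the Si/PS/air system), not values fitted in this study:

```python
import math

# Effective interfacial potential U(h) for Si / SiOx(1.7 nm) / PS(h) / air.
# Hamaker constants are literature-typical values, assumed for illustration.
A_SiOx = 2.2e-20      # J, SiOx/PS/air (within the range quoted in the text)
A_Si   = -1.3e-19     # J, Si/PS/air (assumed negative, as often reported)
d_ox   = 1.7e-9       # m, native oxide thickness

def U(h):
    """Effective interfacial potential (J/m^2) at PS thickness h (m)."""
    return (-A_SiOx / (12 * math.pi * h**2)
            - (A_Si - A_SiOx) / (12 * math.pi * (h + d_ox)**2))

def U_pp(h):
    """Curvature U''(h): > 0 metastable, < 0 unstable (spinodal dewetting)."""
    return (-A_SiOx / (2 * math.pi * h**4)
            - (A_Si - A_SiOx) / (2 * math.pi * (h + d_ox)**4))

for h_nm in (2.0, 3.0, 5.0):
    h = h_nm * 1e-9
    state = "metastable" if U_pp(h) > 0 else "unstable (dewets)"
    print(f"h = {h_nm} nm: U'' = {U_pp(h):+.2e} J/m^4 -> {state}")
```

With these assumed constants, the sign of U'' flips between 2 and 3 nm, consistent with the observation above that films thicker than about 3 nm do not dewet on a 1.7 nm oxide.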
In contrast to the whole spin-coated film, the thinner Guiselin layer may be more labile, particularly during its formation at elevated temperature. The adsorbed layer may play a distinct role as a mobile phase apart from the bulk material, causing density variations [17,19] and surface undulations during annealing [21,37]. As a result, the properties of the adsorbing/adsorbed layer differ from those of the outer bulklike part. In other words, we must consider a SiOx/PS (Guiselin)/PS (bulk)/air system instead of the conventional SiOx/PS/air system (eq. 2) during solid-state adsorption. Recently, Li et al. [77,78] showed that the chain evolution of the irreversibly adsorbed layer during the annealing process caused entropic inequivalence, resulting in macroscopic dewetting of the entire thin film (thickness of 250 nm) upon a prolonged annealing of 120 h. The relatively short annealing time of 24 h in this study may promote only the dewetting of the adsorbed Guiselin layer, not that of the whole spin-coated film. According to the literature, [16,18] PS30k and PS35k should show an equilibrium thickness (namely, the asymptotic plateau value of the residue thickness upon long annealing times) of 2.6-2.8 nm, which is comparable to the ellipsometry results in our case. Consistent with the interfacial energy, the adsorbed PS30k and PS35k with thicknesses below 3 nm (Table 2) presented a dewetting pattern in the presence of the ca. 1.7 nm SiOx layer, as revealed in Fig. 2(a) and 2(b), respectively. It has also been postulated that the adsorbed Guiselin layer can partially 'migrate' to the bulk phase during the cooling stage after annealing, owing to the greater entropy gain for lower-molecular-weight polymers [79]. Judging by the adsorbed layer thickness in light of the interfacial energy theory, the adsorbed PS192k (Fig. 2(c)) should be as stable as PS560k (Fig. 2(d)).
However, the low-molecular-weight fraction and the polydispersity of PS192k (Figure S1) may interfere with the adsorption behavior. A more systematic study of the film stability of polydisperse polymers should be considered. Furthermore, the stability of the PS560k Guiselin layer after deposition was verified to check the effect of annealing on the stability of the adsorbed layers. The PS560k Guiselin layer was post-annealed for another 24 h and found to be unstable, with heterogeneous dewetting (Figure S6) instead of wavelength-dependent spinodal dewetting. As the heterogeneous dewetting resulted in a relatively low abundance of troughs, however, its effect on the contact angle with water (Table 3), i.e., on the actual extent of surface modification, was minimal. All this suggests that the adsorbed Guiselin layer is not a 'dead' layer on the substrate, which is consistent with the reported dewetting or instability of formed Guiselin layers after post-annealing [62]. The adsorbed Guiselin layer is thus a 'living layer' under the annealing process, which may promote dewetting in the adsorbed polymer layer, especially at low molecular weight and with polydispersity.
Water contact angles
The hydrophobizing effect of the adsorbed polymer layer is prominent, as shown by the water contact angles of the treated silicon wafers in Table 3 and Figure S7. The static contact angles of the adsorbed polymers with different PS Mw values are around 90°. This contrasts markedly with the cleansed hydrophilic silicon wafer, which, when untreated, is effectively wetted by water (Figure S7a) and shows near-zero contact angles (below 5°), a fact well established [80,81]. It should be mentioned that the water contact angle of the pristine, bulky PS directly after spin-coating (ca. 100 nm) is around 92° in this study (Figure S7b, Table 3), serving as a comparative reference for the adsorbed thin layers. It is clear that the Guiselin layers of PS192k and PS560k present values close to that of the thick pristine PS film (ca. 100 nm), indicating a successful alteration of the surface hydrophobicity, indeed a reversal from hydrophilic to hydrophobic. The slight variation of the measured static contact angle of the PS films in air can be associated with a polymer-thickness-dependent effect, governed by the long-range van der Waals forces from the underlying substrate and the topographic roughness [82]. This unambiguously illustrates the efficiency of Guiselin layers, particularly with the high-Mw PS560k, to completely alter the surface properties of silicon wafers from highly hydrophilic to hydrophobic. One could think of utilizing Guiselin layers as an alternative to self-assembled monolayers (SAMs), which generally require a reactive interface to form [83]. A comparison among different modification approaches for a solid surface is listed in Table 4.

Fig. 5. Effective interfacial potential U(h) and the curvature of the interfacial potential U''(h) for PS films on a silicon wafer with a native, 1.7 nm SiOx layer between PS and silicon, reconstructed according to references [59,73].
Since the formation of Guiselin layers is thermodynamically driven, [18] the chemical match between the substrate and the coating is irrelevant, i.e., there is no demanding requirement of a reactive substrate. As a result, virtually any substrate/polymer combination is feasible as long as the substrate can withstand the annealing and solvent leaching. The advancing and receding contact angles were measured for the evaluation of the surface roughness and chemical heterogeneity, as revealed by the hysteresis. As Table 3 shows, the water contact angle hysteresis decreases as the surface coverage of PS increases with higher applied PS Mw. This is a result of the surface rupture of the low-Mw polymers, as observed by both AFM (Fig. 2) and XPS (Fig. 1); altogether, the hysteresis is more sensitive than the mere contact angles to the distinction among Guiselin layers of different Mw. However, as indicated by the water contact angles calculated (Table S1) under the hypothesis of a Cassie-Baxter state [89] using the analyzed surface coverage, the observed surface dewetting pattern should rather fit the Wenzel state. Therefore, we may doubt the accuracy of the surface coverage obtained from the AFM images, in particular for PS192k (Table S1). We may still conjecture that PS segments probably underlie the troughs and cover the ruptured areas shown in the AFM images, despite the intensive surface coverage studies of PS192k via surface-sensitive techniques, i.e., XPS and nano-FTIR, pointing otherwise.
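The Cassie versus Wenzel reasoning above can be checked with a back-of-envelope calculation. The 92° angle for pristine PS and the ~80% coverage for PS192k are taken from the text, while the ~5° angle for the exposed clean silica is an assumption:

```python
import math

def cassie_angle(f_ps, theta_ps_deg, theta_sub_deg):
    """Cassie equation for a flat, chemically heterogeneous surface:
    cos(theta) = f1*cos(theta1) + f2*cos(theta2)."""
    c = (f_ps * math.cos(math.radians(theta_ps_deg))
         + (1 - f_ps) * math.cos(math.radians(theta_sub_deg)))
    return math.degrees(math.acos(c))

def wenzel_angle(r, theta_deg):
    """Wenzel equation: roughness ratio r amplifies the intrinsic wettability."""
    c = max(-1.0, min(1.0, r * math.cos(math.radians(theta_deg))))
    return math.degrees(math.acos(c))

# Predicted apparent angle for 80% PS (92 deg) plus 20% exposed silica (~5 deg)
theta_pred = cassie_angle(0.80, 92.0, 5.0)
print(f"Cassie prediction for 80% PS coverage: {theta_pred:.1f} deg")
```

The prediction of roughly 80° falls clearly below the measured ~90°, which is one way to see why the text conjectures that PS may also line the troughs (pushing the surface toward Wenzel-like, nearly all-PS wetting behavior).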
Conclusion
In this study, solid-state adsorption was revisited according to the well-established Guiselin process [31], with emphasis on its application for surface modification. The existence of the adsorbed layers was established by monitoring the surface chemical composition with XPS. Contrary to several other accounts [16,61,62], however, the adsorbed layers presented typical surface spinodal dewetting with a molecular-weight dependence, as revealed by AFM. The average thickness of the adsorbed layers, measured by ellipsometry without considering the surface roughness, was in the range of 1.5 to 5.5 nm. On the basis of previous investigations of film stability as a function of M w in monodisperse polymer systems, determined by the interfacial potential difference and density variations within the adsorbed layer [21,41,42,62], the spinodal dewetting during Guiselin layer formation did not appear to relate to polydispersity when M w was lower than 192 kDa in this study. However, an exhaustive study of film stability with a wide spectrum of M w of bimodal polymers would be interesting to perform in the future. The surface coverage of polydisperse PS192k was exceptionally found to be incomplete, as corroborated by surface-sensitive imaging and spectroscopic techniques, i.e., AFM, XPS, and Nano-FTIR. Yet the puzzlingly similar water contact angle of the PS192k layer with a ruptured surface compared to that of the PS560k layer with homogeneous coverage raises questions for future studies, where, e.g., density profile analysis by neutron reflectivity could reveal whether minute amounts of polymer exist in the troughs of the ruptured layers [90]. Nevertheless, the prominent hydrophobizing effect of the adsorbed PS indicates that ultrathin Guiselin layers are premium candidates for surface modification.
The results here indicate that Guiselin layer deposition is a viable route to modify planar surfaces, and it can readily compete with, e.g., deploying specific solution-based chemical reactions or utilizing chemical vapor deposition, as shown here by changing initially hydrophilic Si/SiO x into a clearly hydrophobic surface. Generically, a smooth surface coating or a confined surface pattern can be produced through solid-state adsorption by employing an appropriate polymer molecular weight, with consideration of polydispersity. However, adsorption stability, governed by the Lifshitz-van der Waals interaction potential, should be assessed when performing surface adsorption on different substrate/polymer systems. All in all, Guiselin layers represent a new, competitive approach for surface modification of diverse solid substrates with a wide spectrum of possible polymers. It stands comparison with other established methods for surface modification, as elaborated in Table 4. Therefore, we foresee that the full potential of polymer adsorption and Guiselin layers, in particular as a means for surface modification and functionalization, is yet to be established.
Chemical vapor deposition (CVD) [86]: a wide variety of techniques for depositing gaseous substances on solid surfaces; the substrate must be reactive with the coating species; yes (for controlled deposition).
Self-assembled monolayers [83]: chemisorption of surfactant head groups, followed by tail organization; the substrate must be reactive with the head groups; no.
Langmuir-Blodgett technology [87]: monolayer-by-monolayer deposition of (usually) organic material from a liquid-gas interface to a solid substrate.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. | 2021-08-03T06:23:31.839Z | 2021-07-21T00:00:00.000 | {
"year": 2021,
"sha1": "1342f7835c26f4f383a60f65507a7354460a52c4",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.jcis.2021.07.062",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "b3fc9ab5deddbc4ded1c24a92c46892fe8a3da1c",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
53477247 | pes2o/s2orc | v3-fos-license | A Survey on Participatory Sensing Systems
Wireless sensor networks consist of a collection of a large number of sensor nodes. The emergence of various sensors has enabled participatory sensing systems. Sensor data, in its original form, contains sensitive information about individuals, so privacy protection is very important for participatory sensing systems. In wireless networks, the nodes communicate with each other through a wireless medium. Data aggregation helps reduce the number of bits transmitted and thereby reduces the total energy consumption. In this survey, we summarize different privacy-preserving techniques and data aggregation protocols for wireless sensor networks. We also provide a brief description of the Reed-Solomon erasure coding technique for detecting and correcting errors in transmission.
Introduction
Participatory sensing is an emerging paradigm in which groups of people contribute sensory information. With the growth of mobile devices such as smart phones, which carry multiple sensors, the demand for participatory sensing has increased. Participatory sensing systems consist of multiple mobile users gathering data jointly. Participatory sensing has been widely used in many applications such as health, traffic, noise and weather monitoring, community service, and many others. The two most important challenges in participatory sensing systems are:
- Privacy and quality preservation
- Variety of sensing data
A sensing record consists of data along with spatial and temporal information. If the service provider is not honest, he may infer the private information of a user participating in participatory sensing applications from this location and time information. Because of this, many users are unwilling to contribute data to participatory sensing systems, but quality of service can be guaranteed only if there are enough participants. Therefore, privacy preservation is very important in participatory sensing. The variety of sensing data is another major challenge: sensing data may include temperature, location, time, digital images, videos, etc. This paper describes various privacy-preserving techniques, collaborative path hiding techniques, and different data aggregation protocols for participatory sensing systems. Finally, it also describes Reed-Solomon coding for detecting and correcting errors.
Privacy Preserving Techniques
A number of privacy preserving techniques have been proposed to address the privacy and quality of sensing data for participatory sensing systems.These techniques can be classified as follows:
Randomization Technique
K. Mivule [1] proposed a noise addition technique for data privacy. This method is also called the Data Perturbation technique, in which the data are modified so that they no longer represent the real world. It is also known as a noise-based technique, where noise is added to the original data so that the values cannot be guessed from the distorted data. Figure 1 shows a general data privacy method which can be achieved in two steps:
- Data De-Identification
- Noise Addition
Data De-Identification is the process of removing sensitive information, such as personal identification information, from the original data. In order to ensure a higher level of confidentiality, noise addition is also introduced. It is the process of adding or multiplying a randomized number to confidential quantitative attributes so that the original data cannot be guessed from the deformed data. For example, if the age attribute is 30, randomly adding a value of 50 to it converts the value to 80. One of the major disadvantages of this method is that the original data cannot be reconstructed. Some large data collection organizations, such as the Census Bureau, omit sensitive information using this technique before releasing their statistics to the public. Converting a street-level location value to a city-level equivalent is an example of generalization. This technique is applied to participatory sensing systems to implement k-anonymity. By k-anonymity it is meant that it is difficult to distinguish each record from k-1 other records. Table 1 shows an example of a 3-anonymous report. Here, the "Time" values are anonymized to get the "Generalized time". For example, 10:30 is represented by the time interval 10:00-11:00. Before sending this report to any application, the real value of time is removed from the report. One of the major disadvantages of this method is the need for an honest third-party anonymizer to perform the anonymization, which is not always possible in the case of a semi-honest model.
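The time-generalization step and the k-anonymity property can be illustrated with a short sketch. The bucket format and the example timestamps below are hypothetical, but the mapping (e.g. 10:30 to the 10:00-11:00 interval) follows the 3-anonymous report example in the text.

```python
from collections import Counter

def generalize_time(hhmm):
    """Generalize an exact time 'HH:MM' to its one-hour interval,
    e.g. '10:30' -> '10:00-11:00'."""
    hour = int(hhmm.split(":")[0])
    return f"{hour:02d}:00-{(hour + 1) % 24:02d}:00"

def is_k_anonymous(timestamps, k):
    """A release is k-anonymous when every generalized value is shared by
    at least k records, so no record is distinguishable from k-1 others."""
    counts = Counter(generalize_time(t) for t in timestamps)
    return all(c >= k for c in counts.values())

# Six hypothetical reports: three per hour bucket -> 3-anonymous, not 4-anonymous.
times = ["10:05", "10:30", "10:50", "11:10", "11:20", "11:45"]
print(generalize_time("10:30"))   # 10:00-11:00
print(is_k_anonymous(times, 3))   # True
print(is_k_anonymous(times, 4))   # False
```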
Cloaking Technique
Xu, Ge [3] proposed a location cloaking method for protecting location privacy in the context of Location-Based Services (LBS). Due to the emergence of smart phones, LBS have become one of the most popular mobile applications.
When a user requests a service, the location details are also captured. For example, when a user takes a photo using his smart phone camera, the time, date, and location where the photo was taken are automatically embedded in the photo. The cloaking technique replaces the actual location value with a larger area. Users can also configure their mobile devices as to when and to whom the location information should be published. One of the major disadvantages of this method is that even though privacy is protected, the quality of the reported data is reduced.
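One simple way to realize "replace the actual location with a larger area" is to snap GPS fixes to a coarse grid and report the containing cell instead of the exact point. This is only an illustrative sketch (the survey does not prescribe a concrete algorithm); the cell size and the coordinates in the example are arbitrary.

```python
def cloak_location(lat, lon, cell_deg=0.1):
    """Replace an exact GPS fix with the south-west corner of the grid cell
    containing it. cell_deg ~ 0.1 deg is roughly district scale; a real
    system would report the whole cell region, not a point."""
    snap = lambda v: (v // cell_deg) * cell_deg  # floor to the cell boundary
    return round(snap(lat), 6), round(snap(lon), 6)

# Arbitrary example coordinates: every fix inside the same 0.1-degree cell
# maps to the same cloaked value, hiding the exact position.
print(cloak_location(31.963158, 35.930359))  # (31.9, 35.9)
```

The precision loss is exactly the quality-of-data trade-off noted above: the larger the cell, the stronger the privacy and the coarser the reported reading.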
Cryptographic Technique
Rastogi et al. [4] proposed a cryptographic method for privacy preservation. End-to-end encryption can provide high security for reported data: before sending, the report is encrypted at the sender's side, and at the receiver's side the report is decrypted. Cryptography protects the content of the report from being disclosed to any unauthorized entity and ensures data integrity, accuracy, and confidentiality. One of the major disadvantages of this method is that it protects data only from external attacks, such as eavesdropping, and not from internal attacks, such as attacks by the service provider or by other participants. Thus it fails to prevent the service provider and other participants from inferring users' sensitive data.
Collaborative Path Hiding Techniques
Christin et al. [5] proposed various exchanging strategies and reporting strategies for protecting the location privacy of the participants. Exchanging strategies deal with different ways of exchanging sensor readings between participants. Reporting strategies deal with different ways of reporting the sensor readings to the server.
Exchanging Strategies
In this technique, the participants collaborate to protect their privacy. Figure 2 shows the different exchanging strategies. It uses the concept of path jumbling, where location privacy is preserved in a decentralized way by exchanging the readings between the participants. This breaks the connection between the spatiotemporal information (the time and location at which the sensor readings were taken) and the identity of the user. The different strategies for exchanging sensor readings are as follows:
- Realistic Exchange Strategy: participants exchange their entire set of collected sensor readings at each meeting.
- Random-unfair Exchange Strategy: each participant randomly determines the number of reports he wants to exchange; participants may exchange different numbers of reports.
- Random-fair Exchange Strategy: the participants agree on a common number n of reports to exchange at each meeting, so the two participants exchange equal numbers of reports.
If jumbling does not happen within a reporting period, there is a chance that the sensor readings are not exchanged with other participants and thus reach the server directly from the participant itself.
- Exchange Based Strategy: the sensor reports are reported to the server after every meeting. It is also known as the 1-Exchange strategy. This strategy ensures that the reports are jumbled before they reach the server. One of the major drawbacks is that, if a meeting is delayed, it can result in long reporting latency.
- Metric Based Strategy: the reports are reported to the server after a particular threshold is reached, for example when the percentage of jumbled reports reaches a given threshold (jumbling based), or when the distance between each location and the jumbled path is above a threshold value (distance based).
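The random-fair exchange can be sketched as a swap of n randomly chosen reports between two participants at a meeting. The report identifiers below are hypothetical, and the fixed random seed is only there to make the sketch reproducible.

```python
import random

def random_fair_exchange(reports_a, reports_b, n, rng=random.Random(0)):
    """Random-fair strategy sketch: both participants hand over the same
    number n of randomly chosen reports, so each carries some of the
    other's readings and the path-to-identity mapping is jumbled."""
    give_a = [reports_a.pop(rng.randrange(len(reports_a))) for _ in range(n)]
    give_b = [reports_b.pop(rng.randrange(len(reports_b))) for _ in range(n)]
    reports_a.extend(give_b)
    reports_b.extend(give_a)
    return reports_a, reports_b

alice = ["a1", "a2", "a3"]  # hypothetical report IDs
bob = ["b1", "b2", "b3"]
random_fair_exchange(alice, bob, n=1)
print(alice, bob)  # each still holds 3 reports, one from the other user
```

After the swap, a report uploaded by Alice may describe a point on Bob's path, which is exactly the jumbling the strategy relies on.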
Data Aggregation Protocols
Data aggregation protocols can be divided into two categories. Patel et al. [6] proposed data aggregation techniques which deal with collecting and aggregating data; they have been widely used in wireless sensor networks. Security is one of the major concerns of data aggregation, and cryptographic techniques are used to achieve it. Some of the protocols that provide security along with data aggregation are listed below. These protocols are designed for static networks and are not suitable for participatory sensing, where the network changes dynamically.
Hop-to-Hop secure data aggregation protocols
In this method, data encryption and decryption are done between each pair of nodes in the network. It implements a key-based mechanism (pair-wise keying) which ensures data confidentiality. As the intermediate nodes have to decrypt the data, it offers more chances for attackers to get the sensor data.
End-to-End secure data aggregation protocols
This method is more flexible than hop-to-hop data aggregation. Once sensor data is encrypted at the sender side, it is decrypted only at the service provider. End-to-end data privacy is achieved through homomorphic encryption, which allows performing arithmetic operations on encrypted data without the need to decrypt it. As it does not require the intermediate nodes to decrypt the data, it is more secure than the hop-to-hop data aggregation method.
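The additive-homomorphism idea can be shown with a toy masking scheme modeled on the additively homomorphic stream ciphers proposed for WSN aggregation (it is not a scheme described in this survey, and it is not secure as written): each node masks its reading with a key shared with the sink, the aggregator sums ciphertexts without decrypting, and only the sink removes the combined mask.

```python
import random

M = 2**32  # modulus large enough to hold the sum of all readings

def encrypt(reading, key):
    return (reading + key) % M

def aggregate(ciphertexts):
    # The aggregator adds ciphertexts; it never sees a plaintext reading.
    return sum(ciphertexts) % M

def decrypt_sum(agg, keys):
    # Only the sink, which shares one key with each node, removes the masks.
    return (agg - sum(keys)) % M

readings = [21, 23, 22, 25]                    # hypothetical sensor values
keys = [random.randrange(M) for _ in readings]  # per-node keys shared with sink
agg = aggregate(encrypt(m, k) for m, k in zip(readings, keys))
print(decrypt_sum(agg, keys))  # 91: the true sum, with no reading exposed
```

The key property is Enc(a) + Enc(b) = Enc(a + b) modulo M, which is what lets intermediate nodes aggregate without holding any decryption capability.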
Erasure Coding
I. Reed and G. Solomon [7] proposed the erasure coding method used in participatory sensing systems. It breaks a sensing record into fragments and encodes them with redundant data pieces. A stream of data in the form of 0s and 1s is transmitted over a communication channel. Errors can occur in the channel, causing bits to change, i.e., converting 0s to 1s and vice versa. In order to check whether the original data have been changed, redundancy is introduced; it helps in recovering the original data in case of error. In a simple repetition scheme, each bit is sent n times in sequence, and the bit that occurs the majority of the time is selected. For example, if a bit is sent 3 times with values 0, 1, and 0, then the actual bit is taken to be 0, as it occurred twice out of 3 trials. Figure 3 shows the flow diagram for the encoding and decoding technique. The sender sends the source data, which is passed through the encoder. The encoder encodes the source message into a codeword, adding redundancy in order to detect and correct errors in transmission. When the data passes through the channel, several errors may be introduced. The decoder corrects the errors and reclaims the source message. The original data can be decoded from any k out of m encoded slices, where k is approximately equal to the size of the original record, m is the total number of slices including the redundant data pieces, and m > k. Finally, the receiver receives the original source message. One of the main features of Reed-Solomon codes is that redundancy occurs naturally.
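The repetition/majority-vote example in the text can be written out directly. Note this is the simple repetition code used for illustration, not Reed-Solomon itself, which achieves the any-k-of-m recovery property with far less redundancy.

```python
from collections import Counter

def encode_repetition(bits, n=3):
    """Send each bit n times in sequence."""
    return [b for bit in bits for b in [bit] * n]

def decode_majority(stream, n=3):
    """Recover each bit as the value occurring most often among its n
    copies; e.g. received copies (0, 1, 0) decode to 0."""
    out = []
    for i in range(0, len(stream), n):
        out.append(Counter(stream[i:i + n]).most_common(1)[0][0])
    return out

codeword = encode_repetition([1, 0, 1])  # [1,1,1, 0,0,0, 1,1,1]
codeword[1] ^= 1                         # the channel flips one bit
print(decode_majority(codeword))         # [1, 0, 1] — the error is corrected
```

With n = 3 repeats, any single flipped copy per bit is corrected, but the code triples the transmitted size; Reed-Solomon reaches the same resilience with only m - k extra slices.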
Conclusion
In this paper, various privacy-preserving techniques, collaborative path hiding techniques, and data aggregation protocols are presented. The pros and cons of randomization, generalization, cloaking, and cryptographic techniques are discussed. We have seen that the data perturbation technique with noise addition is used to provide privacy for data sets. In order to protect the location privacy of users who contribute to participatory sensing systems, a collaborative and decentralized approach is used. Depending on the nature of the application, different privacy-preserving techniques are adopted. Sensor readings are exchanged between participants in order to mask their paths. Various exchanging strategies and reporting strategies are also presented here. Among the reporting strategies, the threshold-based approach provides the strongest protection of the sensor readings. Depending on the privacy needs of the application and the degree of trust in other participants, the exchanging and reporting strategies are chosen.
Figure 1: Randomization Technique
2.2 Generalization Technique
L. Sweeney [2] proposed a k-anonymity model for protecting privacy. This method is also called the Anonymization technique. The generalization technique is the act of converting a specific attribute value into a more general, less precise one.
Figure 2: Exchanging Strategies
3.2 Reporting Strategies
Different ways of reporting the sensor readings to the server are as follows:
- Time Based Strategy: the sensor readings are periodically (hourly/daily) reported to the server. This ensures that the application receives readings in a timely manner. One of the major drawbacks is that, during the reporting period, readings that have not yet been jumbled may reach the server directly.
1. Tree based data aggregation protocols
2. Cluster based data aggregation protocols
Tree based data aggregation protocols consist of parent nodes and leaf nodes; here, data aggregation is performed by the intermediate nodes. Cluster based data aggregation protocols consist of different clusters, and data aggregation is performed locally at each cluster. | 2018-10-16T06:32:44.651Z | 2016-01-01T00:00:00.000 | {
"year": 2016,
"sha1": "9d834ec522c095bbc468ea401d88150d7b6b4d12",
"oa_license": null,
"oa_url": "https://doi.org/10.21275/v5i6.nov164370",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "9d834ec522c095bbc468ea401d88150d7b6b4d12",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
} |
118510084 | pes2o/s2orc | v3-fos-license | The light MSSM neutral Higgs boson production associated with an electron and a jet at the LHeC
We study the light CP-even neutral Higgs boson production in association with an electron and a jet at the possible CERN large hadron-electron collider within the minimal supersymmetric standard model. We investigate the possible supersymmetric effects on this process and compare our standard model numerical results with those in previous work. We present the leading-order and QCD next-to-leading-order corrected total cross sections and the distributions of the transverse momenta of the final electron, the light neutral Higgs boson, and jet in the minimal supersymmetric standard model. Our results show that the scale dependence of the leading-order cross section is obviously reduced by the QCD next-to-leading-order corrections. The K factor of the QCD correction to the total cross section at the large hadron-electron collider varies from 0.893 to 1.048 when the factorization/renormalization scale $\mu$ goes up from $0.2 m_Z$ to $3.8 m_Z$ in our chosen parameter space.
attracted physicists' attention. In Ref. [14] it is pointed out that the electron reconstruction in the NC process is superior to that of the missing neutrino in the charged-current process, e−p → ν_e h0 j + X, and that the NC process has the potential to increase the overall Higgs boson signal efficiency; there, the use of forward jet tagging was studied as a means to secure the observation of the Higgs boson in the H0 → bb̄ decay mode and to significantly improve the purity of the signal. The QCD next-to-leading-order (NLO) corrections to the SM Higgs production processes e−p → e−H0 j + X and e−p → ν_e H0 j + X at the LHeC were calculated by B. Jäger in Ref. [15]. Moreover, not only does this channel provide a spectacular signature (e−bb̄j), but the lightest MSSM Higgs h0 can also be produced via vector boson fusion with unusual visible decays [16]. The coupling strength of the lightest Higgs h0 to Z0Z0 differs from that of the SM Higgs by an additional factor sin(β − α), where tan β is the ratio of the two vacuum expectation values and α is the mixing angle of the two CP-even Higgs states. Therefore, we may disentangle the SM Higgs from the light MSSM CP-even Higgs by measuring the cross section for e−p → e−h0 j + X at the LHeC when |sin(β − α)| is smaller than 1. Besides, finding new physics requires sufficiently precise predictions for the new-physics signals and their backgrounds with multiple final-state particles, which cannot be entirely separated in experimental data. Therefore, higher-order QCD predictions for these reactions are necessary.
In this paper, we calculate the full QCD NLO corrections to the process e−p → e−h0 j + X at the LHeC and estimate the capability of the LHeC to access the light MSSM CP-even Higgs boson in e−h0 j production. The numerical results at leading order (LO) are compared with those in Ref. [14]. The paper is organized as follows: we describe the technical details of the related LO and QCD NLO calculations in both the SM and the MSSM in Secs. II and III, respectively. In Sec. IV we give some numerical results and discussion of the QCD NLO corrections in the MSSM. Finally, a short summary is given. In calculating the e−p → e−h0 j + X process in the MSSM, we neglect the u-, d-, c-, and s-quark masses (m_u = m_d = m_c = m_s = 0) and do not consider partonic processes with incoming (anti)bottom quarks, owing to the suppression of the heavy (anti)bottom quark in the parton distribution functions (PDFs) of the proton. This means we include the contributions of the following partonic processes in our LO calculations:
II. LO cross sections
where p_i (i = 1, ..., 5) represent the four-momenta of the incoming electron and parton and the outgoing electron, h0 boson, and jet, respectively. The LO Feynman diagram for the partonic processes (2.1) is depicted in Fig. 1.
The LO cross section for the partonic process e−q → e−h0 q can be written as a function of the partonic c.m. energy squared ŝ, denoted σ̂_LO(ŝ). The LO total cross section for the e−p → e−h0 j + X process at the LHeC can then be expressed as the convolution
σ_LO = Σ_q ∫_0^1 dx G_{q/p}(x, μ_f) σ̂_LO(ŝ = xs),
where μ_f is the factorization scale, s is the total c.m. energy squared of the electron-proton collision, x is the four-momentum fraction of parton q in the incoming proton, defined by p_2 = xP with P the four-momentum of the incoming proton, and G_{q/p} (q = u, ū, d, d̄, c, c̄, s, s̄) are the PDFs of parton q in the proton. The one-loop QCD wave-function renormalization constants of the massless quarks (q = u, d, c, s) in the SM are expressed in terms of ∆_UV = 1/ε_UV − γ_E + ln(4π) and ∆_IR = 1/ε_IR − γ_E + ln(4π). The explicit expressions for the one-loop QCD wave-function renormalization constants of the massless quarks (q = u, d, c, s) in the MSSM contain two-point integrals, whose definitions are adopted from Ref. [19], and the mixing angle θ_q̃ of the scalar quarks (q̃_L, q̃_R), with q̃_L = q̃_1 cos θ_q̃ − q̃_2 sin θ_q̃ and q̃_R = q̃_1 sin θ_q̃ + q̃_2 cos θ_q̃.
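The hadronic LO cross section is a PDF convolution of the partonic cross section. A toy one-flavour numerical sketch of that integral is shown below; the PDF shape, the step-function partonic cross section, and the units are all invented for illustration and bear no relation to the actual matrix elements of this process.

```python
def sigma_lo(parton_xsec, pdf, s, n=20000):
    """Evaluate sigma_LO = integral_0^1 dx G(x, mu_f) * sigma_hat(x*s)
    by midpoint quadrature, for a single quark flavour."""
    total, dx = 0.0, 1.0 / n
    for i in range(n):
        x = (i + 0.5) * dx
        total += pdf(x) * parton_xsec(x * s) * dx
    return total

# Hypothetical shapes: a valence-like toy PDF, and a partonic cross section
# (arbitrary units) that switches on above the h0 production threshold.
pdf = lambda x: 6.0 * x * (1.0 - x)
xsec = lambda s_hat: 1.0 if s_hat > 125.0**2 else 0.0

print(sigma_lo(xsec, pdf, s=1.3e6))  # close to 1: almost all x are above threshold
```

The structure mirrors the formula above: the only physics inputs are the PDF G(x, μ_f) and σ̂_LO(ŝ), with ŝ = xs fixed by the momentum fraction.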
III.2 Real gluon and light-(anti)quark emission corrections
The relevant real emission partonic processes can be grouped as follows. The IR singularities of the real parton emission subprocesses are isolated by adopting the two-cutoff phase-space slicing method [20]. In Figs. 4 and 5 we present the Feynman diagrams for the real gluon emission subprocesses. In adopting the two-cutoff phase-space slicing method, we introduce an arbitrary small soft cutoff δ_s to separate the 2 → 4 phase space into two regions, E_6 ≤ δ_s √ŝ/2 (soft gluon region) and E_6 > δ_s √ŝ/2 (hard gluon region), and another cutoff δ_c to decompose the hard region into a hard collinear (HC) region with p_2(p_5)·p_6 < δ_c ŝ/2 and a hard noncollinear region with p_2(p_5)·p_6 ≥ δ_c ŝ/2.
Then the cross sections for the real emission subprocesses e−(q, g) → e−h0 q(g, q̄) can be written as the sum of the soft, hard collinear, and hard noncollinear contributions.
IV. Numerical Results and Discussion
In our numerical calculations we take the one-loop and two-loop running α_s in the LO and NLO calculations, respectively [9]. The QCD parameters are taken as N_f = 5 with the corresponding Λ_LO. We compared our LO numerical results for the process e−p → e−H0 j + X in the SM at the LHeC with the corresponding results read from Fig. 2 of Ref. [14], and found that they coincide with each other within the statistical errors.
In the following LO and NLO numerical calculations, we adopt the massless four-flavor scheme and put the restriction p_T^j > p_T,j^cut on the jet transverse momentum for one-jet events. For two-jet events (originating from the real corrections), we apply a jet algorithm in the definition of the tagged hard jet with R = 1, i.e., if the two final-state partons satisfy √(∆η² + ∆φ²) < 1 (where ∆η and ∆φ are the differences in rapidity and azimuthal angle between the two jets), we merge them into a single jet. We use the so-called "inclusive" scheme and keep events with one or two jets. We require that there is one jet with p_T^j > p_T,j^cut, and set p_T,j^cut = 30 GeV by default in the following calculations. Furthermore, to reduce the background to the Higgs signals, we require the final electron to satisfy the cuts p_T^e > 30 GeV, |η_e| < 5. (4.1) We plot the dependence of the LO and QCD NLO corrected total cross sections for the e−p → e−h0 j + X process in the MSSM on the renormalization/factorization scale µ in Fig. 6(a).
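The two-parton jet treatment described above can be sketched directly: merge the pair when ∆R < R, then keep jets above the p_T cut. The sketch assumes massless partons given as 3-momenta (px, py, pz), so pseudorapidity is computed from |p| and pz; the example momenta are invented.

```python
import math

def pt(p):
    return math.hypot(p[0], p[1])

def pseudorapidity(p):
    px, py, pz = p
    mom = math.sqrt(px * px + py * py + pz * pz)
    return 0.5 * math.log((mom + pz) / (mom - pz))

def delta_r(p1, p2):
    d_eta = pseudorapidity(p1) - pseudorapidity(p2)
    d_phi = abs(math.atan2(p1[1], p1[0]) - math.atan2(p2[1], p2[0]))
    if d_phi > math.pi:                 # wrap azimuthal difference
        d_phi = 2 * math.pi - d_phi
    return math.hypot(d_eta, d_phi)

def jets_from_two_partons(p1, p2, R=1.0, pt_cut=30.0):
    """Merge the two partons into one jet (momentum addition) when
    Delta R < R; surviving jets must pass p_T > pt_cut (30 GeV here)."""
    if delta_r(p1, p2) < R:
        candidates = [tuple(a + b for a, b in zip(p1, p2))]
    else:
        candidates = [p1, p2]
    return [j for j in candidates if pt(j) > pt_cut]

# Nearly collinear pair -> merged into one hard jet.
print(len(jets_from_two_partons((40.0, 0.0, 10.0), (35.0, 1.0, 9.0))))  # 1
```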
The corresponding K factor, defined as K = σ_NLO/σ_LO, is presented versus the energy scale in Fig. 6(b). We plot the LO and QCD NLO corrected total cross sections for the e−p → e−h0 j + X process in the MSSM as functions of the incoming electron beam energy E_e, running from 50 GeV to 200 GeV, in Fig. 7(a); this corresponds to the c.m. colliding energy range √s ≈ 1.18-2.37 TeV. The corresponding K factors are depicted as functions of E_e in Fig. 7(b). In Fig. 7(a) the full line is the QCD NLO corrected total cross section for the e−p → e−h0 j + X process, and the dotted line the LO cross section. We can see from Figs. 7(a) and 7(b) that the QCD NLO corrections slightly reduce the LO total cross sections in the plotted electron beam energy range, and that the production rate increases with E_e. In Fig. 7(c) we depict the K factor versus the electron beam energy E_e for the energy scales µ = 0.5µ_0 and µ = 3µ_0 separately. We can see from Fig. 7(c) that the K-factor uncertainty, ∆K = K(µ = 3µ_0) − K(µ = 0.5µ_0), ranges from 12.79% to 7.13% when E_e goes up from 50 GeV to 200 GeV. The curves for the LO and QCD NLO corrected cross sections for the process e−p → e−h0 j + X as functions of tan β are drawn in Fig. 8(a), where the corresponding values of m_h0 are also shown on the x axis of Figs. 8(a) and 8(b). The values of m_A0 and of the other parameters are those given above. In Fig. 8(a) we can see that both curves go down rapidly in the region 2 < tan β < 6 (85.52 GeV < m_h0 < 113.14 GeV). Then the curves go up slowly after reaching their minimal values at around tan β ∼ 7.5. The relevant K factor (K = σ_NLO/σ_LO) versus tan β (and m_h0) is plotted in Fig. 8(b). The K factor generally has a constant value of about 0.99.
We further depict two curves for the K factors, with µ = 0.5µ_0 and µ = 3µ_0 separately, as functions of tan β (and m_h0) in Fig. 8(c).
For comparison of the results for the processes e−p → e−H0 j + X in the SM and e−p → e−h0 j + X in the MSSM at the LHeC, we read out the MSSM data from Fig. 8(a) at tan β = 3, 7, 18, 38 and list these results together with the corresponding SM ones in Table 1. In Fig. 9(a), for tan β = 3, 7, 18, 38 obtained from Fig. 8(a), the LO and QCD NLO corrected cross sections decrease gently as m_A0 increases up to 220 GeV. The corresponding K factor versus m_A0 (and m_h0) is displayed in Fig. 9(b); the K factor is stable, with a value of around 0.99, when the energy scale is fixed at µ = µ_0. The distributions of the transverse momenta of the final particles at LO and up to QCD NLO, and the corresponding K factors for the process e−p → e−h0 j + X, are depicted in Figs. 10(a)-10(c), where we define K = (dσ_NLO/dp_T)/(dσ_LO/dp_T). In Figs. 10(a), 10(b), and 10(c), the transverse momentum distributions and K factors are for the final electron, the light CP-even neutral Higgs boson, and the jet, respectively. We find no obvious distortion induced by the QCD NLO corrections in the p_T^e and p_T^h0 distributions, while the shape distortion of the p_T^jet distribution is not negligible, since its K factor varies in the range 0.865 < K < 1.049.
V. Summary
In this paper we calculate the full QCD NLO corrections to the light CP-even neutral Higgs boson production in association with an electron and a jet at the LHeC.
Figure 10: (a) The LO and QCD NLO corrected differential cross sections dσ/dp_T^e and the corresponding K factor K = (dσ_NLO/dp_T^e)/(dσ_LO/dp_T^e) for the process e−p → e−h0 j + X. (b) The LO and QCD NLO corrected differential cross sections dσ/dp_T^h and the corresponding K factor K = (dσ_NLO/dp_T^h)/(dσ_LO/dp_T^h) for the process e−p → e−h0 j + X. (c) The LO and QCD NLO corrected differential cross sections dσ/dp_T^jet and the corresponding K factor K = (dσ_NLO/dp_T^jet)/(dσ_LO/dp_T^jet) for the process e−p → e−h0 j + X. | 2011-03-07T03:29:57.000Z | 2011-01-26T00:00:00.000 | {
"year": 2011,
"sha1": "7356663d9cbe71d24f1abe5759b1c8317808ead0",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1101.4987",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "7356663d9cbe71d24f1abe5759b1c8317808ead0",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
246673904 | pes2o/s2orc | v3-fos-license | Pharmacokinetics and Bioequivalence of Two Empagliflozin, with Evaluation in Healthy Jordanian Subjects under Fasting and Fed Conditions
The current study is a randomized, open-label, two-period, two-sequence, two-way crossover pharmacokinetic study in healthy Jordanian subjects to evaluate the pharmacokinetic and bioequivalence profiles of two empagliflozin 10 mg products under fasting and fed conditions. The plasma concentrations of empagliflozin were determined using an HPLC-MS/MS method. Tolerability and safety were assessed throughout the study. The study included 26 subjects, with 26 in each of the fasting and fed groups. The pharmacokinetic parameters, which included the area under the concentration-time curve from time zero to infinity (AUC0-inf) and to the final quantifiable concentration (AUC0-last), the maximum serum concentration (Cmax), and the time to reach the maximum drug concentration (Tmax), were found to be within the equivalence margin of 80.00-125.00%. The pharmacokinetic profiles show that the empagliflozin test and reference products were bioequivalent in healthy subjects. The safety evaluations of the two treatments were also comparable.
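The 80.00-125.00% margin applies to the geometric mean ratio of log-transformed exposure parameters. A minimal sketch of the point-estimate part of that assessment is shown below (a full bioequivalence analysis uses the 90% confidence interval of this ratio, which is omitted here); the per-subject AUC values are hypothetical and for illustration only.

```python
import math

def geometric_mean_ratio(test_vals, ref_vals):
    """Point estimate of the test/reference ratio: exponentiate the mean
    of the per-subject log ratios (equivalent to a log-scale analysis)."""
    logs = [math.log(t / r) for t, r in zip(test_vals, ref_vals)]
    return math.exp(sum(logs) / len(logs))

def within_margin(gmr, lo=0.80, hi=1.25):
    """The 80.00-125.00% acceptance window used in the study."""
    return lo <= gmr <= hi

# Hypothetical per-subject AUC values (same subjects, crossover design).
test = [4100, 3950, 4300, 4020]
ref = [4000, 4100, 4200, 4100]
gmr = geometric_mean_ratio(test, ref)
print(round(gmr, 3), within_margin(gmr))
```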
Introduction
Empagliflozin (Figure 1) is an orally active, competitive inhibitor of sodium-glucose co-transporter 2 (SGLT2) with an antihyperglycemic effect [1,2]. SGLT2 is found in several anatomical areas of the body, including the proximal renal tubules in the kidneys, where it is responsible for a major portion of the reabsorption of filtered glucose from the tubular lumen [3]. SGLT2 inhibitors decrease glucose reabsorption and lower the renal threshold for glucose, thereby increasing urinary glucose excretion [4]. Empagliflozin can be used, as an adjunct to diet, to manage type 2 diabetes mellitus (T2DM) and to enhance glycemic control in individuals with type 2 diabetes [5]. Furthermore, its use decreases cardiovascular risk in patients with T2DM and cardiovascular complications. For these purposes, it can be given as 10 mg orally once daily, which may be increased to 25 mg/day if needed and tolerated [6,7].
No dose adjustment is needed in hepatic impairment. In the case of kidney impairment, the following is advised: if eGFR (estimated glomerular filtration rate) ≥ 45 mL/min/1.73 m2, no dosage modification is necessary; if eGFR is 30–45 mL/min/1.73 m2, therapy should not be initiated, and if therapy is already being administered, it should be terminated when eGFR is persistently <45 mL/min/1.73 m2; if eGFR < 30 mL/min/1.73 m2, treatment with this medication should be stopped [8,9]. Empagliflozin use has some contraindications: for example, it should be avoided in type 1 diabetes and diabetic ketoacidosis, kidney function must be evaluated before starting the medication and periodically afterward, and it should be avoided in patients with volume depletion [10,11]. Several clinical studies have demonstrated empagliflozin's efficacy and safety [12]. Empagliflozin was rapidly absorbed after single and multiple oral doses (0.5–800 mg), reaching peak plasma concentrations after approximately 1.33–3.0 h before exhibiting a biphasic decline. In single rising-dose studies, the mean terminal half-life ranged from 5.6 to 13.1 h, and in multiple-dose studies, it ranged from 10.3 to 18.8 h. Increases in exposure were dose-proportional after multiple oral doses, and trough concentrations remained constant after day 6, indicating that steady state had been reached. Oral clearance at steady state was comparable to single-dose values, indicating time-independent linear pharmacokinetics. There were no clinically significant changes in pharmacokinetics in mild to severe hepatic impairment or in mild to severe renal impairment and end-stage renal disease. Clinical studies revealed no relevant drug-drug interactions with several drugs commonly prescribed to T2DM patients, including warfarin [11].
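The renal dosing rules above can be laid out as a simple decision function. This is a hypothetical helper for illustration only (not clinical guidance); the thresholds are the eGFR cut-offs quoted in the text:

```python
def empagliflozin_renal_advice(egfr: float) -> str:
    """Map an eGFR value (mL/min/1.73 m^2) to the dosing advice
    described in the text. Illustrative only, not clinical guidance."""
    if egfr >= 45:
        return "no dosage modification necessary"
    elif egfr >= 30:
        # 30-45 mL/min/1.73 m^2: do not initiate; if already on therapy,
        # discontinue when eGFR is persistently below 45
        return "do not initiate; discontinue if persistently below 45"
    else:
        return "stop treatment"

print(empagliflozin_renal_advice(60))  # no dosage modification necessary
```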
The peak plasma time is approximately ninety minutes, with peak plasma concentrations of 259 nmol/L (10 mg/day) and 687 nmol/L (25 mg/day) and AUC values of 1870 nmol·h/L (10 mg/day) and 4740 nmol·h/L (25 mg/day) [1]. Empagliflozin is administered only by the oral route, with or without food, and it is highly protein-bound (86.2%), limiting its distribution [13]. Its partitioning into red blood cells is about 36.8%. The majority of the drug is metabolized through glucuronidation by the uridine 5′-diphospho-glucuronosyltransferases UGT2B7, UGT1A3, UGT1A8, and UGT1A9 [14]. It has an elimination half-life (T1/2) of about 12.4 h and is cleared at a rate of 10.6 L/h. Empagliflozin is eliminated in urine and feces, at 54.4% and 41.2%, respectively. Patients are asked to increase fluid intake to reduce the possibility of hypotension [15]. The recommended storage temperature for the drug is 25 °C (77 °F), with excursions permitted between 15 and 30 °C (59–86 °F) [15].
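The quoted half-life and clearance are linked by the standard one-compartment relations ke = ln 2 / t1/2 and Vd/F = (CL/F) / ke. A minimal sketch reusing the values above (the derived Vd/F is an illustration, not a value reported in the text):

```python
import math

t_half = 12.4   # elimination half-life (h), from the text
cl_f = 10.6     # apparent oral clearance CL/F (L/h), from the text

ke = math.log(2) / t_half   # first-order elimination rate constant (1/h)
vd_f = cl_f / ke            # apparent volume of distribution Vd/F (L), derived

print(f"ke = {ke:.4f} 1/h, Vd/F = {vd_f:.1f} L")
```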
Bioequivalence studies demonstrate that two comparable dosage forms deliver the drug to the bloodstream at the same relative rate and to the same relative extent [1]. Bioequivalence is usually assessed against a predefined acceptance range; such studies use statistical hypothesis tests that evaluate the geometric mean ratios and require similar times to reach peak blood concentrations [16]. Empagliflozin has no major active metabolite; its most abundant metabolites are the glucuronide conjugates (2-O-, 3-O-, and 6-O-glucuronide) [17]. There has been no comparative bioequivalence study of 10 mg empagliflozin test and parent reference tablets in Jordanian subjects under fasting and fed conditions. In addition, the study aims to compare blood glucose reduction after taking empagliflozin in two forms: the original (Jardiance®, Boehringer Ingelheim, Germany) and a generic form. Furthermore, this study is of value for introducing new generic drugs in Jordan.
Empagliflozin Analysis
Empagliflozin plasma concentrations were measured using a validated HPLC-MS/MS technique. As shown in Figure 2, the retention times for the test and parent reference were 2.28 and 2.26 min, respectively.
Tolerability and Safety
Of the twenty-six subjects who participated in the fasting condition study, fifteen experienced AEs. Side effects included hypoglycemia, palpitations, fatigue, diarrhea, nausea, elevated bilirubin, hypotension, dizziness, hypertension, and sinus bradycardia. Three subjects developed a decrease in blood glucose levels, one of whom was determined to be hypoglycemic; after administration of 20% glucose solution, the hypoglycemia resolved. In the fed condition, 22 adverse events occurred in 16 subjects, and no adverse events related to hypoglycemia were observed. Most treatment-emergent adverse events (TEAEs) were grade 1, and all resolved spontaneously before the end of the study. The researchers considered all AEs to be mild or moderate because they were transient. No severe adverse events, such as death, were reported. The AEs were associated with the blood-glucose-lowering effect of the two products. No clinically substantial deviations from the routine physical examination were noticed, including vital sign measurements and ECG recordings. Figure 3 depicts the mean plasma concentration–time profiles of empagliflozin under fasting and fed conditions.
Pharmacokinetic Analysis
The pharmacokinetic study examined the mean empagliflozin plasma concentration curves after dispensing a single 10 mg tablet dose of each of the two products, the parent reference and the test, to 26 healthy Jordanian volunteers (Table 1). Upon ANOVA using ln-transformed data for empagliflozin, no sequence, period, or formulation effects were observed for any pharmacokinetic parameter (p > 0.05). Table 2 displays the 90% CIs of the ratios (test vs. parent reference) for the ln-transformed Cmax, AUC0–t, and AUC0–inf. For empagliflozin, the 90% CIs for the Cmax, AUC0–t, and AUC0–inf ratios were 95.69% to 116.99%, 92.77% to 110.01%, and 93.19% to 106.09%, respectively. These results meet the predefined bioequivalence requirements. The relative bioavailability of the test/parent reference preparation was 108.12% (mean Cmax), 98.40% (mean AUC0–t), and 96.9% (mean AUC0–inf) (Table 2).
Discussion
The pharmacokinetics and bioequivalence of two empagliflozin tablet products were studied in healthy Jordanian volunteers of both sexes under fasting and fed conditions. The 90% CIs for empagliflozin were contained within the previously determined bioequivalence standards of 80% to 125% for AUC and Cmax [2]. The pharmacokinetic parameters and bioequivalence examination of the tablets indicated that single-dose exposure following the two treatments was equivalent with respect to Cmax, AUC0–t, and AUC0–inf under fasting and fed conditions. Moreover, ANOVA of the log-transformed Tmax, Cmax, and AUC0–t and of the untransformed Cmax, AUC0–t, AUC0–inf, t1/2, and Tmax indicated that sequence, formulation, and period effects did not affect the outcome of the study for any of these parameters. No statistically significant differences were detected between the two products (p > 0.05).
According to previous studies [18,19], giving empagliflozin with food resulted in mean Cmax and AUC being reduced by 8% and 9%, respectively. Studies with single oral doses of empagliflozin in healthy individuals and multiple oral doses in patients with type 2 diabetes showed that the maximum drug concentration (Cmax) was reached 2 to 3 h after the dose. However, the present study showed that mean Cmax and AUC increased by approximately 50% to 74% when empagliflozin was given with a high-fat, high-calorie meal. Various factors may underlie these differences, including inter-individual variability, ethnicity, environment, and food intake pattern. At clinically relevant intestinal and systemic concentrations, empagliflozin does not inhibit CYPs 1A2, 2B6, 2C8, 2C19, 2D6, or 3A4. In vivo, empagliflozin is not expected to be a CYP2C9 inhibitor. Furthermore, empagliflozin and its glucuronides are not thought to be irreversible CYP inhibitors. As a result, drug-drug interactions involving the investigated CYPs are regarded as unlikely. Empagliflozin does not inhibit UGT1A1 at maximum organ concentrations [20]. However, it remains challenging to deduce the specific contributing factors accurately from the current study.
For glucose administration in the fasting condition study, the researchers adapted drug delivery strategies from FDA guidance [21] and previous studies [21,22]. According to FDA rules, subjects were to be given the test or parent reference formula with 240 mL of 20% glucose solution, followed by 60 mL of 20% glucose solution every 15 min for 4 h, to reduce the risk of hypoglycemia during the fasting empagliflozin bioequivalence analysis. However, clinical research feedback indicates that Jordanian subjects are intolerant of a 20% glucose solution because of nausea and vomiting, which can be confounded with the drug's side effects. In this study, we therefore administered empagliflozin with 240 mL of 20% glucose solution and then 60 mL of 10% glucose solution every 15 min for the next four hours, while closely monitoring for hypoglycemia with blood glucose readings. When a subject developed hypoglycemia symptoms, 60 mL of 20% glucose solution was given.
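As a rough check of the glucose load this modified schedule delivers, the arithmetic can be laid out explicitly. Assuming one 60 mL dose every 15 min over 4 h (16 doses; the exact dose count is an assumption, since the text does not state it) and weight/volume percentages:

```python
# Initial bolus: 240 mL of 20% (w/v) glucose solution
initial_g = 240 * 0.20            # grams of glucose in the bolus

# Follow-up: 60 mL of 10% glucose every 15 min for 4 h (16 doses assumed)
n_doses = 4 * 60 // 15            # 16 doses
followup_g = n_doses * 60 * 0.10  # grams of glucose over the 4 h

total_g = initial_g + followup_g
print(initial_g, followup_g, total_g)
```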
Comparing the blood glucose concentration monitoring results of the parent reference and test drugs under fasting or fed conditions may correlate with drug absorption. In the fasting condition study, the blood glucose level reached its lowest point near Tmax. In healthy subjects, the average blood glucose levels after the parent reference and test drugs were similar, reflecting the pharmacodynamic similarity of the two drugs to a certain extent.
In a single-dose, open-label, randomized, crossover study of empagliflozin 10 mg in healthy male subjects, a drug-related incidence of hypoglycemia of 4.35% (1/23) was reported [23]. In the fasting group of this study, the drug-related hypoglycemia incidence was 2.78% (1/36). In empagliflozin studies in which 20% glucose solution was not given early, the tolerability results differed from those of the present study [24,25]. A single-dose, open-label, randomized, two-sequence, two-period crossover pharmacokinetic study with 10 mg empagliflozin reported a drug-related incidence of hypoglycemia of 25.0% (6/24) [23].
In another randomized, double-blind, placebo-controlled fourteen-week monotherapy study, patients already on sulfonylurea therapy underwent a three-week washout interval and were randomized to the parent reference at 10 mg, 25 mg, or placebo. Patients randomized to 10 or 25 mg underwent forced titration from an initial dose of 10 mg to their final dose, as tolerated. The overall occurrence of any hypoglycemia was 4% for empagliflozin 25 mg, 17% for empagliflozin 10 mg, and 0% for placebo [26]. According to the drug's prescribing information [18], empagliflozin may cause hypoglycemia when administered under fasting conditions. Therefore, it is necessary to monitor blood glucose concentrations in clinical bioequivalence studies of drugs related to diabetes treatment.
The current study had a sufficient number of subjects to ensure adequate statistical power to demonstrate the equivalence of the test product to the parent reference product. However, the study has some limitations. Due to limited recruitment capacity, we could only include male volunteers. Empagliflozin is not recommended in women trying to conceive because it is rated safety category C in pregnancy [1], and no women who could not conceive volunteered to participate in the study.
Participants
Twenty-six individuals participated in the fasting condition study. One subject was excluded due to unqualified vital signs in the second period. Likewise, 26 subjects were enrolled under fed conditions, and one subject was excluded from the study for taking a nonexperimental drug for toothache in the second period. Dropped subjects did not take the second-period trial drug in either condition. Table 3 shows the demographics of the participants in the research.
Study Design
Bioequivalence testing was performed using an open-label, single-dose, random-sequence, two × two crossover approach, with a one-week washout interval between doses. The study assessed the bioequivalence of two empagliflozin brands manufactured as 10 mg tablets: the reference, produced by Boehringer Ingelheim (Jardiance®, Ingelheim, Germany; batch number 1705039), and the test brand (Amman, Jordan; batch number B615).
Ethical approval (Approval Number EC-UOP/101-12020) was obtained, and the study adhered to the ethical standards for human research of the Declaration of Helsinki, the International Conference on Harmonisation's Good Clinical Practice Guideline, and the NMPA's Guideline for Good Clinical Practice [27].
Subjects
The study participants were healthy Jordanian volunteers at the Phase I Clinical Unit of a private hospital (Amman, Jordan). Before starting the study, complete medical and laboratory tests were performed to confirm health status. Smokers, heavy drinkers, those who had used CYP enzyme inhibitors within the previous 60 days, those who had taken any medicine within the previous four weeks, those with a history of medication allergies, those who had participated in other clinical studies within the previous six months, and those with any significant clinical abnormality were all excluded.
For the fasting condition study, volunteer age ranged between 23 and 49 years, mean body mass was 78.45 kg, and body mass index ranged from 18.15 to 27.60 kg/m2. For the fed condition study, volunteer age ranged between 19 and 43 years, mean body mass was 63.26 kg, and body mass index ranged from 18.56 to 26.71 kg/m2. All subjects were informed of the study details, including the risks and benefits of the current study, and provided written informed consent before starting the study. Subjects had the option to withdraw from the study at any time.
Empagliflozin Dosing
This two-sequence crossover study compared the pharmacokinetics of two oral empagliflozin products at a dose of 10 mg in healthy subjects. A random number table was generated using SAS statistical software (version 9.130, SAS, Cary, NC, USA), and subjects were divided into T/R or R/T groups. A test or reference drug tablet was administered with 240 mL of 20% glucose solution in a standing posture under fasting conditions. Under fed conditions, a high-fat (about 50%), high-calorie (800–1000 kcal) meal was consumed 30 min before the drug was administered, and the drug was administered with 240 mL of warm water.
Assay Method
The plasma concentrations of empagliflozin were determined using a revised version of a proven HPLC-MS/MS method [29]. The main instruments used in this study were an API-1400 mass spectrometer with a built-in waste/detector switching valve (AB-Sciex, CA, USA) and an HPLC system (Agilent Technologies, model LC-1200, Englewood, Colorado, USA) with an auto-sampler, controlled by Analyst 1.6.1 software. A Crest model-175T bath sonicator (UltraSonics CORP, Trenton, NJ, USA), a Sartorius BP 2215 balance, an Eppendorf centrifuge, and Windows XP SP3 Data Management Software 1.5.2-a were also used. Multiple reaction monitoring transitions were observed at mass-to-charge ratios (m/z) of 461.3 → 449.2 and 465.6 → 440.9 for empagliflozin and 467.4 → 432.7 for d5-empagliflozin. Data acquisition and processing were powered by the Analyst 1.6.3 software package and Watson LIMS 7.5spl. Using a mobile phase of 70% methanol and 30% of a mixture of 20 mM ammonium acetate and 0.2 mM formic acid, the chromatographic conditions were optimized for the best analytical peak quality and shortest run time. Samples were isocratically eluted at 1 mL/min through an ACE™ C18 (50 × 2.1 mm, 5 µm) column. The injection volume was 2 µL. For empagliflozin analysis in ESI negative mode, the following MS parameters were optimized: nitrogen gas one flow = 60 units, gas two flow = 75 units, curtain gas = 35 units, ion spray voltage = 5000 V, drying temperature = 650 °C, and collision energy = 20 V [30].
Quality control samples were prepared at concentrations greater than five times the upper limit of quantification (5 × ULOQ) and then diluted five times with blank plasma. Six samples were prepared in parallel, with diluted sample concentrations within the linear range of the standard curve. The mean ± SD of the recovery rate of the six parallel samples of the same concentration describes the dilution accuracy. The mean accuracy deviation at each concentration level was required to fall between 85.00% and 115.00%, with a CV% within 15.00%. The stability test items included the stability of drug-containing plasma after 24 h at room temperature and after 24 h at 2–8 °C. Furthermore, four freeze-thaw cycles of drug-containing plasma at −30 to −10 °C and −80 to −60 °C, and 66 days of long-term storage of drug-containing plasma at −30 to −10 °C and −80 to −60 °C, were performed. Under the conditions mentioned above, the stability of empagliflozin was acceptable. Empagliflozin had a linear range of 0.5000 to 150.0 ng/mL. The accuracy deviation of the standard curve at the LLOQ ranged from −7.98% to 10.46%, the accuracy deviation of the other concentration levels ranged from −12.00% to 12.88%, and R2 ranged from 0.9932 to 0.9995.
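The back-calculated accuracy criterion described above (each calibration level within 85.00–115.00% of nominal) can be illustrated with a weighted least-squares calibration line. The concentrations and responses below are invented, and 1/x² weighting is a common bioanalytical choice rather than something the paper states:

```python
# Hypothetical calibration data (ng/mL vs. instrument response); values invented
conc = [0.5, 5.0, 25.0, 75.0, 150.0]
resp = [0.0102, 0.099, 0.502, 1.51, 2.97]

# Weighted least squares with 1/x^2 weights (assumed weighting scheme)
w = [1.0 / (x * x) for x in conc]
sw = sum(w)
swx = sum(wi * x for wi, x in zip(w, conc))
swy = sum(wi * y for wi, y in zip(w, resp))
swxx = sum(wi * x * x for wi, x in zip(w, conc))
swxy = sum(wi * x * y for wi, x, y in zip(w, conc, resp))

slope = (sw * swxy - swx * swy) / (sw * swxx - swx * swx)
intercept = (swy - slope * swx) / sw

# Back-calculate each level; acceptance: 85.00-115.00% of nominal
accuracies = [100.0 * ((y - intercept) / slope) / x for x, y in zip(conc, resp)]
print([round(a, 1) for a in accuracies])
```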
Pharmacokinetics and Statistical Analysis
The pharmacokinetic analysis used a non-compartmental method in Phoenix WinNonlin version 7.0 (Pharsight®, Princeton, NJ, USA). Blood concentration–time data were collected after fasting or fed administration. The pharmacokinetic parameters (AUC, Cmax, Tmax) were statistically analyzed, and bioequivalence was evaluated. The AUC0–t and AUC0–inf for the empagliflozin drugs were calculated by the trapezoidal method. The Tmax values of the test (T) and reference (R) drugs were compared by the nonparametric Wilcoxon method. The point estimates of Cmax and AUC for the test and reference drugs were calculated after logarithmic transformation, and significance testing was carried out by one-way analysis of variance (ANOVA), followed by the two one-sided t-tests procedure. If the 90% confidence intervals of the AUC0–t, AUC0–inf, and Cmax geometric mean ratios fell between 80.00% and 125.00%, the statistical interval proposed by the NMPA, the test drug was considered bioequivalent to the reference.
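The acceptance test just described can be sketched as follows: ln-transform the within-subject test/reference ratios, form the geometric mean ratio (GMR) and a 90% confidence interval, and check the interval against 80.00–125.00%. The Cmax pairs below are invented, and a large-sample normal quantile (1.645) stands in for the exact t-quantile used in practice:

```python
import math

# Hypothetical within-subject Cmax pairs (test, reference), ng/mL
pairs = [(155, 148), (132, 140), (170, 158), (128, 125),
         (149, 151), (160, 152), (138, 142), (145, 139)]

logs = [math.log(t / r) for t, r in pairs]
n = len(logs)
mean = sum(logs) / n
sd = math.sqrt(sum((x - mean) ** 2 for x in logs) / (n - 1))
se = sd / math.sqrt(n)

z90 = 1.645  # normal approximation; a t-quantile would be used in practice
gmr = math.exp(mean)
lo, hi = math.exp(mean - z90 * se), math.exp(mean + z90 * se)

bioequivalent = 0.80 <= lo and hi <= 1.25
print(f"GMR={100*gmr:.2f}%, 90% CI=({100*lo:.2f}%, {100*hi:.2f}%), BE={bioequivalent}")
```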
Tolerability and Safety
Throughout the study, safety was evaluated using adverse events (AEs) and laboratory tests (biochemistry, hematology, and urinalysis). Potential AEs and vital signs (systolic and diastolic blood pressure, body temperature, and pulse rate) were assessed. Meanwhile, researchers assessed and recorded AEs in terms of seriousness, intensity, time course, outcome, and relationship to the study drug throughout the study. AEs were coded with a preferred term and system organ class according to the Medical Dictionary for Regulatory Activities (version 20.0). Five causality categories (unrelated, unlikely, possible, probable, or related) were used to classify the relationship between AEs and the drugs.
Conclusions
The current pharmacokinetic study found that the two products were bioequivalent under both fasting and fed conditions. According to the guidelines, the principal pharmacokinetic parameters were within the bioequivalence range (80.0% to 125.0%). Both empagliflozin products were well tolerated. The bioequivalent 10 mg oral tablet will provide Jordanian patients with affordable, acceptable, and beneficial access to their medication. However, the pharmacokinetic changes under fasting and fed conditions differed from previous studies. This study may provide a better way to conduct clinical pharmacokinetic and bioequivalence research on drugs related to diabetes treatment under fasting conditions, namely administering a lower glucose concentration to reduce intolerance. As a take-home message, oral administration of empagliflozin under fasting and fed conditions resulted in equivalent pharmacokinetics, and thus both products can be considered bioequivalent. Given their predictable bioavailability and low toxicity, the two products can be considered interchangeable for patients requiring antihyperglycemic therapy. | 2022-02-09T16:27:54.063Z | 2022-02-01T00:00:00.000 | {
"year": 2022,
"sha1": "e161d54dd3da07d3cc1dca821cd05795b3a97108",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1424-8247/15/2/193/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b85d4563e98e1f73c5ef9f12f31e633e8d51b5c8",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
251630017 | pes2o/s2orc | v3-fos-license | Safety of OnabotulinumtoxinA in the [management of] chronic migraine in pregnancy
Background
OnabotulinumtoxinA is an acetylcholine release inhibitor and a neuromuscular blocking agent first approved for use by the FDA in 1989, with its label most recently updated in 2010 (1). It is currently approved for the management of chronic migraine, defined by the FDA insert as 15 or more headache days per month, each lasting 4 or more hours, and by the ICHD-3 as 15 or more headache days per month, with 8 or more of those days consistent with migraine (1,2).
To explore the possible implications of using OnabotulinumtoxinA in pregnant patients with chronic migraine, we must first review the known risks. The FDA insert carries a black box warning from postmarketing surveillance, indicating that OnabotulinumtoxinA may spread from the injection site to surrounding areas and produce symptoms consistent with botulinum toxin effect, including weakness of skeletal and smooth muscles; it describes reports of death, with the risk highest in children treated for spasticity. However, as noted later in the document, for the specific indication of chronic migraine, no definitive serious adverse events have been reported in either clinical studies or postmarketing surveillance. Specific adverse reactions listed for the indication of chronic migraine include neck pain, headache, worsening migraine, muscular weakness, and eyelid ptosis.
In terms of determining risk, the FDA currently assigns a pregnancy rating of C to OnabotulinumtoxinA, indicating a lack of adequate and well-controlled studies in pregnant women. In the original animal studies noted on the FDA insert, intramuscular administration of OnabotulinumtoxinA to pregnant rats during organogenesis produced decreased fetal weight and decreased fetal bone ossification only at high doses (4 units/kg), approximately 1½ times the high end of average human dosing for upper limb spasticity (360 units).
In notable human studies, a 24-year review of the Allergan safety database found 574 pregnancies with known OnabotulinumtoxinA exposure (3). No maternal or fetal cases of botulism were reported, and the fetal defect prevalence rate, at 2.7%, was consistent with that of the general population. Another prospective study of OnabotulinumtoxinA for the management of chronic migraine in 45 pregnant patients reported no impact on pregnancy outcomes (4). Additionally, clinical case reports of women affected by botulism illness did not show adverse effects on the pregnancy, including one case where the only notable movement in the patient was that of the fetus, while the mother was affected by paralysis (5-10). It is therefore reasonable to assume that, if the naturally occurring botulinum toxin,
weighing 150 kDa, does not cross the placental barrier, then the complexed OnabotulinumtoxinA molecule, which weighs 900 kDa, is even less likely to. Another notable consideration for the use of OnabotulinumtoxinA in the management of chronic migraine during pregnancy is the relative lack of safety of other commonly used preventive medications. Only memantine and cyproheptadine are listed as Category B for the preventive management of migraine, while the more efficacious beta blockers, SNRIs, and amitriptyline are listed as Category C, and topiramate, valproic acid, and nortriptyline are all listed as Category D (11). In addition to these reported risks, the oral absorption and systemic circulation of these compounds are indisputable, in contrast to OnabotulinumtoxinA, which may cross into the circulation but is administered only on a quarterly basis.
In all, OnabotulinumtoxinA is a molecule too large to cross the placental barrier (12,13), and human studies have not shown any worsening of pregnancy outcomes with its use across a variety of indications in pregnant patients. Likewise, when following the PREEMPT protocol for chronic migraine (14), it is not applied in areas that could compromise respiration or cause significant weakness in the mother. Additionally, although animal studies show some negative outcomes, these are seen only at doses that far surpass those used in chronic migraine. Furthermore, given the lack of safety of most other medications commonly used for migraine during pregnancy, appropriately used OnabotulinumtoxinA therapy may actually reduce the use of other, more potentially teratogenic compounds, as well as reduce migraine-associated disability and the potential fetal harm caused by uncontrolled pain. Based on the currently available data, OnabotulinumtoxinA remains a very strong option in the management of chronic migraine in pregnancy, with the potential to significantly reduce migraine-related disability and pain and improve the quality of life of our pregnant patients. Additionally, given the currently limited observational studies on the use of OnabotulinumtoxinA in pregnant patients, further larger studies looking at long-term outcomes are needed to ascertain both the safety and the efficacy of this medication that has been seen clinically.
Author contributions
The author confirms being the sole contributor of this work and has approved it for publication. | 2022-08-18T13:50:51.983Z | 2022-08-18T00:00:00.000 | {
"year": 2022,
"sha1": "86b2c4b83cabf0b1d5d1a3464459bbe5c80cb3c9",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "86b2c4b83cabf0b1d5d1a3464459bbe5c80cb3c9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
268385258 | pes2o/s2orc | v3-fos-license | Polygenic risk score-based phenome-wide association study of head and neck cancer across two large biobanks
Background Numerous observational studies have highlighted associations of genetic predisposition of head and neck squamous cell carcinoma (HNSCC) with diverse risk factors, but these findings are constrained by design limitations of observational studies. In this study, we utilized a phenome-wide association study (PheWAS) approach, incorporating a polygenic risk score (PRS) derived from a wide array of genomic variants, to systematically investigate phenotypes associated with genetic predisposition to HNSCC. Furthermore, we validated our findings across heterogeneous cohorts, enhancing the robustness and generalizability of our results. Methods We derived PRSs for HNSCC and its subgroups, oropharyngeal cancer and oral cancer, using large-scale genome-wide association study summary statistics from the Genetic Associations and Mechanisms in Oncology Network. We conducted a comprehensive investigation, leveraging genotyping data and electronic health records from 308,492 individuals in the UK Biobank and 38,401 individuals in the Penn Medicine Biobank (PMBB), and subsequently performed PheWAS to elucidate the associations between PRS and a wide spectrum of phenotypes. Results We revealed the HNSCC PRS showed significant association with phenotypes related to tobacco use disorder (OR, 1.06; 95% CI, 1.05–1.08; P = 3.50 × 10−15), alcoholism (OR, 1.06; 95% CI, 1.04–1.09; P = 6.14 × 10−9), alcohol-related disorders (OR, 1.08; 95% CI, 1.05–1.11; P = 1.09 × 10−8), emphysema (OR, 1.11; 95% CI, 1.06–1.16; P = 5.48 × 10−6), chronic airway obstruction (OR, 1.05; 95% CI, 1.03–1.07; P = 2.64 × 10−5), and cancer of bronchus (OR, 1.08; 95% CI, 1.04–1.13; P = 4.68 × 10−5). These findings were replicated in the PMBB cohort, and sensitivity analyses, including the exclusion of HNSCC cases and the major histocompatibility complex locus, confirmed the robustness of these associations.
Additionally, we identified significant associations between HNSCC PRS and lifestyle factors related to smoking and alcohol consumption. Conclusions The study demonstrated the potential of PRS-based PheWAS in revealing associations between genetic risk factors for HNSCC and various phenotypic traits. The findings emphasized the importance of considering genetic susceptibility in understanding HNSCC and highlighted shared genetic bases between HNSCC and other health conditions and lifestyles. Supplementary Information The online version contains supplementary material available at 10.1186/s12916-024-03305-2.
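The odds ratios and confidence intervals reported above typically come from regression models, but the basic construction can be illustrated with the classic 2×2 table version (Woolf's method): OR = ad/bc, SE(ln OR) = √(1/a + 1/b + 1/c + 1/d). The counts below are invented for the sketch:

```python
import math

# Hypothetical 2x2 table: exposure = high-PRS group, outcome = phenotype case
a, b = 820, 9180    # exposed: cases, non-cases
c, d = 760, 9240    # unexposed: cases, non-cases

or_ = (a * d) / (b * c)                       # odds ratio
se = math.sqrt(1/a + 1/b + 1/c + 1/d)         # SE of ln(OR), Woolf's method
lo = math.exp(math.log(or_) - 1.96 * se)      # 95% CI lower bound
hi = math.exp(math.log(or_) + 1.96 * se)      # 95% CI upper bound
print(f"OR={or_:.3f}, 95% CI=({lo:.3f}, {hi:.3f})")
```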
Table S4. Odds ratio for HNSCC and its subtypes associated with genetic risk across subgroups by age, sex, and smoking status in the UK Biobank.
Table S5. Odds ratio for HNSCC and its subtypes associated with genetic risk in the Penn Medicine Biobank.
Table S6. Odds ratio for HNSCC associated with genetic risk across different case-control ratios in the UK Biobank and Penn Medicine Biobank.
Table S7. The ancestry-specific odds ratio for HNSCC associated with genetic risk in the Penn Medicine Biobank.
Table S8. Full results of HNSCC PRS-PheWAS in UK Biobank and Penn Medicine Biobank.
Table S9. Full results of OPC PRS-PheWAS in UK Biobank and Penn Medicine Biobank.
ICD-9 codes:
Union of Oropharynx and Oral cavity.
ICD-10 codes:
Union of Oropharynx and Oral cavity.
ICD-10 codes:
Oral cavity (C02.0–C02.9, C03.0–C03.9, C04.0–C04.9, and C05.0–C06.9).

HapMap3 [20]. Then, we performed a kernel density estimator (KDE) algorithm on all samples to determine their genetically informed ancestry. We trained a KDE using the HapMap3 PCs and used the KDEs to calculate the likelihood of a given sample belonging to each of the five continental ancestry groups. Samples were excluded from analysis if no ancestry likelihood was greater than 0.3, or if more than three ancestry likelihoods were greater than 0.3. After exclusion, a total of 27,933 individuals of European (non-Hispanic White) ancestry and 10,468 individuals of African American (non-Hispanic Black) ancestry were determined eligible for the replication analyses.
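The KDE-based ancestry assignment just described can be sketched as follows: fit one Gaussian kernel density per reference ancestry on the PC coordinates, score each sample under every density, and apply the stated likelihood rules. The PC values, bandwidth, and the normalization of densities into per-ancestry likelihoods are illustrative assumptions:

```python
import math

def gaussian_kde_density(point, train, bandwidth=1.0):
    """Mean of isotropic Gaussian kernels centred on the training points."""
    dim = len(point)
    norm = (2 * math.pi * bandwidth ** 2) ** (dim / 2)
    total = 0.0
    for t in train:
        sq = sum((p - q) ** 2 for p, q in zip(point, t))
        total += math.exp(-sq / (2 * bandwidth ** 2)) / norm
    return total / len(train)

# Invented 2-PC reference clusters for two ancestry groups (illustrative only)
reference = {
    "EUR": [(0.0, 0.0), (0.2, -0.1), (-0.1, 0.1)],
    "AFR": [(5.0, 5.0), (5.2, 4.9), (4.8, 5.1)],
}

def ancestry_likelihoods(sample):
    dens = {k: gaussian_kde_density(sample, pts) for k, pts in reference.items()}
    s = sum(dens.values())
    return {k: v / s for k, v in dens.items()}  # normalize to pseudo-likelihoods

lik = ancestry_likelihoods((0.1, 0.0))
n_above = sum(v > 0.3 for v in lik.values())
eligible = 1 <= n_above <= 3  # exclude if none, or more than three, exceed 0.3
print(lik, eligible)
```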
Method S4.Generation of polygenic risk scores.
We constructed PRSs for HNSCC, OPC, and OC by using a Bayesian polygenic prediction method, PRS-CS [22], which infers the posterior mean effect size of each variant using the linkage ).The proportion of variance explained for PRS alone was computed as Nagelkerke's pseudo-R2.
Figure S2 .
Figure S2.Prevalence plot for significant phenotypes in PheWAS according to genetic risk groups.
disequilibrium (LD) reference panel and GWAS summary.The 1000G Project phase 3 EUR data was used to be the external LD reference panel.The posterior SNP effect sizes in PRS-CS were inferred from GAME-ON summary statistics, with default settings, and automatic estimation of the global shrinkage parameter (PRS-CS-auto).The individual PRSs were computed from beta coefficients as the weighted sum of the risk alleles by applying PLINK version 1.90 with thescore command [23].The detailed number of SNPs used in the analysis is depicted as follows (Table
Detailed information on the genotype data quality control and imputation procedures.
Abbreviations: GAME-ON, Genetic Associations and Mechanisms in Oncology; ICD, International Statistical Classification of Diseases and Related Health Problems; HNSCC, head and neck squamous cell carcinoma; OC, oral cavity cancer; OPC, oropharynx cancer.
Method S3. Sample-level QC was performed by excluding samples on the basis of (i) mismatched sex or (ii) having second-degree or closer relatives also in the Biobank. We inferred ancestry by projecting array genotype data onto PC axes defined by individuals from the (HNSCC [5,974 cases and 4,012 controls], OPC [2,617 cases and 4,012 controls], and OC [2,958 cases and 4,012 controls]). The GWASs were performed using PLINK 1.90 with sex, age, 10 PCs, and genotyping batch as covariates. The genotype data for the oral and pharyngeal OncoArray study can be downloaded from the database of Genotypes and Phenotypes (dbGaP) under accession phs001202.v1.p1. Of note, the GWASs did not include the additional external controls (2,476 shared controls [1,453 from the EPIC study and 1,023 from the Toronto study]) beyond the GAME-ON data used by
Table. Number of SNPs used in generating PRSs.
Number of missing data for each variable in the UK Biobank.
Table S1. Characteristics of participants in the UK Biobank.
* P-value indicates the significance of the difference between the control and HNSCC case groups.Abbreviations: HPV, Human papillomavirus; HNSCC, head and neck squamous cell carcinoma; OC, oral cavity cancer; OPC, oropharynx cancer; SD, standard deviation.
Table S2. Characteristics of participants in the Penn Medicine Biobank.
* P-value indicates the significance of the difference between the control and HNSCC case groups.Abbreviations: HNSCC, head and neck squamous cell carcinoma; OC, oral cavity cancer; OPC, oropharynx cancer; SD, standard deviation.
Table S3. Odds ratio for HNSCC and its subtypes associated with genetic risk in the UK Biobank.
All analyses were adjusted by age, sex, genotype array, and PC 1 to 10. *The proportion of variance explained for PRS alone was computed as Nagelkerke's pseudo-R2. Abbreviations: HNSCC, head and neck squamous cell carcinoma; OC, oral cavity cancer; OPC, oropharynx cancer; PRS, polygenic risk score; SD, standard deviation; OR, odds ratio; CI, confidence interval; PC, principal component.
Table S4. Odds ratio for HNSCC and its subtypes associated with genetic risk across subgroups by age, sex, and smoking status in the
Table S5. Odds ratio for HNSCC and its subtypes associated with genetic risk in the Penn Medicine Biobank.
*The proportion of variance explained for PRS alone was computed as Nagelkerke's pseudo-R2. Abbreviations: HNSCC, head and neck squamous cell carcinoma; OC, oral cavity cancer; OPC, oropharynx cancer; PRS, polygenic risk score; SD, standard deviation; OR, odds ratio; CI, confidence interval; PC, principal component.
Table S6. Odds ratio for HNSCC associated with genetic risk across different case-control ratios in the UK Biobank and Penn Medicine Biobank.
1 The UK Biobank analyses were adjusted by age, sex, genotype array, and PC 1 to 10. 2 The Penn Medicine Biobank analyses were adjusted by age, sex, ethnicity, and PC 1 to 10. * Controls were extracted from samples matched for age and sex with cases for each ratio using the "matchIt" R package. **
Table S7. The ancestry-specific odds ratio for HNSCC associated with genetic risk in the Penn Medicine Biobank.
*The proportion of variance explained for PRS alone was computed as Nagelkerke's pseudo-R2. Abbreviations: PMBB, Penn Medicine Biobank; HNSCC, head and neck squamous cell carcinoma; SD, standard deviation; OR, odds ratio; CI, confidence interval; PC, principal component.
Table S8. Full results of HNSCC PRS-PheWAS in UK Biobank and Penn Medicine Biobank.
Table S9. Full results of OPC PRS-PheWAS in UK Biobank and Penn Medicine Biobank.
Table S10. Full results of OC PRS-PheWAS in UK Biobank and Penn Medicine Biobank. *Tables S8-10 are provided in Additional file 2 (as an Excel file).
"year": 2024,
"sha1": "4622aa36f6bec4d0307eecdf2cf9751b53fb1563",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "ee8e98fc68d29a772bc7cd32df37425aa899a598",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Traditional invasive vs. minimally invasive esophagectomy: a multi-center, randomized trial (TIME-trial)
Background
There is a rise in the incidence of esophageal carcinoma due to the increasing incidence of adenocarcinoma. Probably the only curative option to date is the use of neoadjuvant therapy followed by surgical resection. Traditional open esophageal resection is associated with high morbidity and mortality rates. Furthermore, this approach involves a long intensive care unit stay, a long in-hospital stay and a long recovery period. Minimally invasive esophagectomy could reduce the morbidity and accelerate post-operative recovery.
Methods/Design
Comparison between traditional open and minimally invasive esophagectomy in a multi-center, randomized trial. Patients with a resectable intrathoracic esophageal carcinoma, including gastro-esophageal junction tumors (Siewert I), are eligible for inclusion. Prior thoracic surgery and cervical esophageal carcinoma are indications for exclusion. For the traditional group, the surgical technique involves a right thoracotomy with lung blockade and a laparotomy, with either a cervical or a thoracic anastomosis. The minimally invasive procedure involves a right thoracoscopy in prone position with a single-lumen tube and a laparoscopy, with either a cervical or a thoracic anastomosis. All patients in both groups will undergo an identical pre-operative and post-operative protocol. The primary endpoint of this study is post-operative respiratory complications within the first two post-operative weeks, confirmed by clinical, radiological and sputum culture data. Secondary endpoints are operative data, post-operative data and oncological data such as quality of the specimen and survival. Operative data include duration of the operation, blood loss and conversion to an open procedure. Post-operative data include morbidity (major and minor), quality of life tests and hospital stay.
Based on current literature and the experience of all participating centers, with an incidence of pulmonary complications of 57% in the traditional arm and 29% in the minimally invasive arm, it is estimated that 48 patients are needed per arm. This is based on a two-sided significance level (alpha) of 0.05 and a power of 0.80. Knowing that approximately 20% of the patients will be excluded, we will randomize 60 patients per arm.
Discussion
The TIME-trial is a prospective, multi-center, randomized study to define the role of minimally invasive esophageal resection in patients with resectable intrathoracic and junction esophageal cancer.
Trial registration: Netherlands Trial Register NTR2452
Background
The incidence of esophageal cancer is increasing in the Western world. In the Netherlands, in the year 1990 some 807 patients were diagnosed with esophageal cancer, whereas in 2005, this number reached a staggering 1546 [1]. It is expected that this rise in incidence will continue in the years to come. This substantial increase in incidence can be accounted for by an increase in the number of adenocarcinomas diagnosed.
Approximately one third of the patients are considered candidates for a curative approach. Surgical resection with radical lymphadenectomy, usually after neoadjuvant chemotherapy or chemo-radiotherapy, remains the only curative option for resectable esophageal cancer. Surgery is considered when the tumor is staged as cT1-3 N0-1 M0. Despite the curative intent, some 30% of all resections have microscopically residual disease (R1). Most patients present with stage III esophageal cancer, which has a 5-year survival of approximately 20-25% [2]. In addition, the possible value of neoadjuvant chemoradiotherapy or chemotherapy is currently being investigated. However, a meta-analysis by Gebski et al. has shown that surgery following chemoradiotherapy for both squamous cell carcinoma and adenocarcinoma has a survival benefit of 13% after 2 years. For neoadjuvant chemotherapy this survival benefit was 7% after 2 years for adenocarcinomas [3].
The three main surgical approaches utilized worldwide for intrathoracic esophageal cancer are the following: (1) the three-stage transthoracic resection (i.e. right postero-lateral thoracotomy, laparotomy and cervicotomy) with a cervical anastomosis; (2) the two-stage transthoracic resection (i.e. laparotomy and right postero-lateral thoracotomy, including the Ivor Lewis approach with an intrathoracic anastomosis); and (3) the two-stage transhiatal resection (i.e. laparotomy and cervicotomy with cervical anastomosis) [4]. Transhiatal esophagectomy according to Orringer is generally performed for gastroesophageal junction cancers [5]. Nevertheless, according to the Tumor-Node-Metastasis classification, cancer of the lower esophagus metastasizes to the lymph nodes of the mediastinum and carina in more than 45% of cases. Therefore, patients in reasonable general condition are increasingly approached transthoracically. In the randomized study by Hulscher et al. and its long-term follow-up, comparing transhiatal and transthoracic esophageal resection, an important trend toward better survival was observed in the transthoracically approached patients [6,7]. This transthoracic procedure is associated with high morbidity and mortality rates of approximately 50-70% and 5%, respectively [6]. Moreover, the extensive nature of this open approach has a significant negative impact on the quality of life of these patients and is associated with a long in-hospital recovery.
Minimally invasive esophageal (MIE) resection for cancer, avoiding the thoracotomy and laparotomy, can reduce the surgical trauma while preserving the same oncological value. This would imply a reduction in post-operative morbidity, a shortening of the recovery time and an increase in quality of life. Evidence of the short-term benefits of minimally invasive surgery over open procedures with similar oncological outcome is accumulating. Fewer perioperative complications, shorter hospital stay and faster postoperative recovery appear to be the main advantages. MIE involves a right thoracoscopy and laparoscopy, with either a cervical or an intrathoracic anastomosis. The thoracic phase of this procedure can be performed through a lateral right thoracic approach with a right lung block by selective intubation, or in prone position without selective lung block. The prone approach, with partial lung collapse, results in a lower percentage of pulmonary complications [8,9].
To date, no randomized trials have been performed comparing any modality of minimally invasive esophagectomy with an open traditional approach [10]. With postoperative morbidity, quality of life and quality of the specimen as outcome measures, the aim of this prospective randomized study is to compare MIE by right thoracoscopy in prone position and laparoscopy with open esophageal resection by right thoracotomy in left lateral decubitus and laparotomy, in patients with resectable intrathoracic esophageal cancer. This comparison will provide further evidence supporting the minimally invasive and cost-effective approach for esophageal cancer.
Study objectives
The TIME trial is a prospective, multi-center, randomized study comparing traditional transthoracic esophageal resection with minimally invasive resection for esophageal cancer. Patients with resectable intrathoracic esophageal cancer are randomized for either (a) minimally invasive transthoracic esophageal resection in prone position or (b) traditional open transthoracic esophageal resection. Our hypothesis is that patients undergoing a minimally invasive esophagectomy have less morbidity, a shorter intensive care unit (ICU) admission and a better quality of life than following the traditional approach.
Endpoints
The primary endpoint of this study concerns the respiratory complications (i.e. infections) within two weeks after the operation. These are categorized as: grade 1) initial respiratory distress after the operation with continuation of mechanical ventilation; grade 2) after successful detubation, clinical manifestation of a respiratory infection caused by (broncho)pneumonia, confirmed by a thorax X-ray or CT scan of the thorax and a positive sputum culture; and grade 3) other thoracic infections, such as post-operative empyema, whether or not caused by leakage from the gastric conduit, necessitating drainage or reoperation [modified from Hulzebos et al., [11]]. Consequences for patients range from extensive physiotherapy, involving oxygen and specific antibiotics, to intubation and mechanical ventilation. Furthermore, important respiratory deterioration after extubation, requiring reintubation and mechanical ventilation, will lead to a CT scan of the thorax and abdomen and to endoscopic examination of the gastric tube and anastomosis in order to rule out a leakage.
The secondary endpoints are operation-related events (e.g. duration of operation, blood loss and conversion to an open procedure in the MIE group) and re-operations. Moreover, general morbidity (major and minor) is recorded. Minor complications are defined as wound infections, venous thrombosis or other. Major complications consist of, apart from respiratory complications, postoperative bleeding, anastomotic leakage, mediastinitis, and re-operations within the in-hospital period. Furthermore, post-operative recovery data are length of ICU and hospital stay (days), type and number of analgesics needed after the operation, VAS pain score, return to fluid and normal diet, quality of life questionnaires (SF-36 and EORTC QLQ-OES18) [12], and quality of the resected specimen (length of specimen, number and location of lymph nodes resected, and circumferential resection margin). Hospital mortality and readmissions are also recorded, and survival will be analyzed.
Power of the study
According to the published literature and our own experience at the VU university medical center, a difference in respiratory infections of 28% is found between the traditional open procedure (57%) and the MIE procedure (29%) [6,7,9,13-15]. To demonstrate this difference of 28%, using a two-sided significance level (alpha) of 0.05 and a power (1 - beta) of 0.80, two groups of 48 patients are needed. Estimating that approximately 20% of the patients may be excluded, 60 patients will be randomized per arm.
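As a check of this calculation, the standard pooled-variance normal approximation for comparing two proportions can be evaluated directly; this sketch reproduces the 48 per arm, though exact numbers vary by a few patients depending on the formula and any continuity correction used.

```python
import math
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Sample size per arm for a two-sided comparison of two proportions
    (pooled-variance normal approximation, no continuity correction)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # approx. 1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)           # approx. 0.84 for power = 0.80
    p_bar = (p1 + p2) / 2
    num = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

print(n_per_group(0.57, 0.29))  # 48, matching the sample size reported above
```

With 20% anticipated exclusions, 48 / 0.8 = 60 patients randomized per arm, as stated.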
Inclusion criteria
Candidates to be included in this study are all patients with a histologically proven squamous cell carcinoma, adenocarcinoma or undifferentiated carcinoma of the intrathoracic esophagus or a Siewert I junction tumor that is surgically resectable (T1-3, N0-1, M0) and treated by neoadjuvant therapy. The age of the patients must be ≥ 18 and ≤ 75 years. Moreover, the included patients must have an Eastern Cooperative Oncology Group (ECOG) performance status of 0, 1 or 2, and their written informed consent is obligatory.
Exclusion criteria
Patients are excluded if they have a carcinoma of the cervical esophagus, have undergone prior thoracic surgery, or do not provide informed consent. An exclusion list is maintained by all participating centers in order to analyze the quality of the randomization rate.
Participating surgeons and clinics
To prevent surgeon bias, the open and laparoscopic operations have to be performed by surgeons experienced in conventional esophageal resection, with experience of at least 10 minimally invasive esophagectomies. Duration of operation, conversion to open surgery, and complication rate may be related to the experience, yearly volume and learning curve of the participating surgeon [16]. The surgeons in the three Dutch centers have been proctored by the two experienced minimally invasive surgeons of the VU university medical center. After the first five patients were operated on by a combined team, the videos of the last two of a series of 15 patients who had undergone a minimally invasive approach were examined by the VU university medical center surgeons. Only surgeons with sufficient experience and skill after the proctoring series are allowed to participate in the trial. The surgeons of the two other centers are already well experienced in minimally invasive esophagectomy. In order to prevent institution bias, only high-volume hospitals (>20 esophagectomies/year) will participate in this trial.
Six European academic and non-academic centers will participate in the study: Academic Medical Center, Amsterdam, the Netherlands; Atrium Medical Center, Heerlen, the Netherlands; Canisius Wilhelmina Ziekenhuis, Nijmegen, the Netherlands; Hospital Universitari de Girona, Dr Josep Trueta, Girona, Spain; I.R.C.C.S. Policlinico San Donato, Milan, Italy; and VU university medical center (Vumc), Amsterdam, the Netherlands.
Randomization
The patient will be informed about the trial at the outpatient clinic. When informed consent is obtained, the patient will be randomized at the outpatient clinic. Randomization is performed per center by an internet randomization module maintained by coordinators at the VUmc. As some heterogeneity is expected, e.g. differences in the type of neoadjuvant therapy protocol, randomization will be stratified for each center. A flowchart of the study protocol is seen in Figure 1.
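The protocol does not describe the algorithm inside the internet randomization module; one common way to implement center-stratified randomization is permuted blocks drawn independently per center, as in this illustrative sketch (the block size of 4 is an assumption, not part of the protocol).

```python
import random

def make_block(block_size=4, arms=("open", "MIE")):
    """One permuted block containing equal numbers of each treatment arm."""
    block = list(arms) * (block_size // len(arms))
    random.shuffle(block)
    return block

class StratifiedRandomizer:
    """Independent permuted-block randomization per center (stratum)."""
    def __init__(self, block_size=4):
        self.block_size = block_size
        self.queues = {}  # center -> remaining assignments in current block

    def assign(self, center):
        queue = self.queues.setdefault(center, [])
        if not queue:  # current block exhausted: draw a fresh permuted block
            queue.extend(make_block(self.block_size))
        return queue.pop()

rand = StratifiedRandomizer()
assignments = [rand.assign("VUmc") for _ in range(8)]
print(assignments.count("open"), assignments.count("MIE"))  # 4 4 over complete blocks
```

Because each center draws its own blocks, the open/MIE balance is maintained within every center, which is the point of stratifying the randomization.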
Data collection and statistics
Data are transcribed via paper datasheets and sent to the VUmc by surface mail. Data are collected daily until the day of discharge. The quality of life questionnaires (SF-36 and EORTC-OES18) are completed by the patient preoperatively and post-operatively at 6 weeks, 3 months, 6 months, and 1 year. There will be regular contact between the study coordinators and the participating centers. One research fellow will monitor the data of all included patients. Using an SPSS database containing all required parameters, data analysis will be performed in accordance with the intention-to-treat principle; an additional per-protocol analysis will also be performed. Groups are compared, where appropriate, using an independent-samples t-test, and otherwise a Wilcoxon test or chi-square test. Pain scores will be analyzed using repeated-measures analysis.
Ethics
This study is conducted in accordance with the principles of the Declaration of Helsinki and 'good clinical practice' guidelines. The independent medical ethics committees of the participating centers have approved the study protocol. Prior to randomization, written informed consent will need to be obtained from all patients.
Pre-operative regimen
Pre-operative preparation has the aim of keeping patients in optimal physical and mental condition throughout the pre-operative period. Complete information about the diagnostic phase, randomization, neoadjuvant therapy and surgical intervention is very important here. Coordination of this preparation period is undertaken by a research nurse, the study coordinator and the participating surgeons. All patients will have regular consultations with a dietician during the whole pre-operative path. Supplemental nutritional feeding can be initiated and, if necessary, a thin nasogastric tube can be placed for feeding purposes. Also, all patients will be seen by a physiotherapist for exercises with emphasis on respiratory improvement. If necessary, a psychological consultation will be arranged.
Neoadjuvant therapy
All included patients will pre-operatively receive neoadjuvant therapy: chemoradiotherapy or chemotherapy alone, according to local protocol. Each center will treat patients in the open operation and MIE categories in the same way.
Esophageal resection
The open operation as well as the MIE operation consists of a two-field esophageal resection with gastric tube formation, followed by a cervical or thoracic anastomosis.
For patients undergoing a thoracotomy, a high epidural catheter and a double-lumen tube for selective intubation are placed.
Traditional transthoracic esophagectomy
In the open group, a three-stage procedure is followed. After selective intubation to block the right lung, the patient is placed in a left lateral decubitus position. The first stage starts with a right posterolateral thoracotomy. The esophagus and its overlying mediastinal pleura are mobilized with mediastinal and carinal lymphadenectomy. For the second stage, the patient is turned to a supine position. Through a supra-umbilical laparotomy, the stomach is mobilized with special care for the gastroepiploic vessels, and a lymphadenectomy of the celiac trunk is performed. The dissection is finalized at the hiatus with anterior extension and careful dissection of the gastro-esophageal junction along the planes. For the last stage, a cervical incision is made and the esophagus is dissected free. The specimen is retrieved through the laparotomy wound and a gastric conduit is created. No pyloroplasty is usually performed. A jejunostomy catheter is placed for feeding purposes. A gastric tube-esophageal anastomosis is then established in an end-to-side fashion.
If a thoracic anastomosis is made, the first phase commences with an abdominal approach with the patient in supine position. The second phase (thoracic) is performed with the patient in left decubitus position. Retrieval of the specimen will be achieved through the thoracotomy wound. The anastomosis will be made high in the thorax, proximally, at the level of the divided azygos vein.
Minimally invasive transthoracic esophagectomy
The MIE is also a three-stage procedure. The difference from the open operation is that there is no need for selective intubation, with the exception of patients in whom a thoracic anastomosis is planned (a Fogarty balloon catheter is placed under bronchoscopic view in the right main bronchus and inflated only during the anastomosis phase). After anesthesia, the patient is turned to a prone position. Four trocars are placed along the medial edge of the scapula. Modest insufflation with CO2 will raise the intrathoracic pressure to between 6 and 8 mmHg. Radical esophagectomy is performed along the pericardial sac, pulmonary veins, right bronchus and aorta, resecting the esophagus with the mediastinal pleura, with lymphadenectomy (peri-esophageal, lower posterior mediastinal, carinal and right paratracheal). After completion of this phase, the thorax is drained and the trocar sites are closed.
The patient is then placed in a supine position. After introduction of four trocars, mobilization of the stomach is performed similarly to the traditional procedure (with left and right paracardial, lesser and greater curvature and celiac trunk lymphadenectomy). After dissection of the esophagus at the cervical level, the specimen is retrieved through a well-protected trans-umbilical minilaparotomy (6-8 cm). The esophageal-proximal stomach resection is performed extra-corporeally and a small gastric conduit (3-4 cm) is created, then conducted to the cervical wound and anastomosed there. A jejunostomy catheter is placed.
If a thoracic anastomosis is made (two-stage procedure), the first phase commences with a laparoscopy with the patient in supine position. The only differences from the three-stage MIE described above are the laparoscopic creation of the gastric tube and the hiatal dissection. A jejunostomy catheter is placed. The second phase (thoracic) is performed with the patient in prone position. The esophagus is dissected free up to the distal trachea, the azygos vein is divided and lymphadenectomy is performed. After division of the esophagus, a purse-string suture is placed in the proximal esophagus. A posterior mini-thoracotomy (6 cm) is performed, the lung is blocked, and a 25 mm circular stapler anvil is placed in the proximal esophagus. The specimen is retrieved and resected, the circular stapler is placed through it, and an end-to-side anastomosis is performed. The rest of the loop is resected with an endoscopic stapler and the thoracic cavity is drained.
Post-operative management
Patients in both groups will receive similar post-operative treatment. All patients will be admitted to the intensive care unit (ICU) intubated after surgery. After stabilization and detubation, the patient will, if indicated, be admitted to the general surgical ward or to the medium care unit (MCU). In the first days after surgery, analgesics are administered by the epidural route. In the event of epidural failure, post-operative pain will be treated intravenously by 'patient controlled analgesia' (PCA), when necessary through a pump with morphine. Patients will be instructed about the PCA pump by an anesthesiologist and morphine doses will be noted. Patients will have a nasogastric tube in situ for at least 5 days, as some gastric conduit distension is expected. All patients will receive postoperative physiotherapy for breathing exercises starting the day after surgery. To achieve early mobilization, patients are encouraged from day 1 to sit out of bed in the general surgical ward. Enteral feeding is commenced on day 1 after the operation through the jejunostomy and increased to optimal feeding by day 3. On day 5, after a gastrografin swallow X-ray, the nasogastric tube is removed and liquid intake is started. A normal diet can be progressively resumed while jejunostomy feeding is decreased. Patients will be discharged when they are able to eat normal food, can walk and are comfortable with oral analgesia. Delays due to "social" reasons will be noted. Feeding over the jejunostomy may be continued after discharge. Patient follow-up is carried out at the outpatient clinic at 6 weeks, 3 months, 6 months and 1 year after discharge. During these visits, the quality of life questionnaires (SF-36 and EORTC QLQ-OES18) will be completed. Regular follow-up will continue up to 5 years after surgery.
Discussion
Surgery for cancer of the esophagus is considered one of the most extensive and traumatic oncological surgical procedures. Open resection not only involves a long operation time and large incisions but also necessitates post-operative care in the intensive care unit and a long in-hospital recovery with decreased quality of life, and it carries a significant risk of morbidity and death.
MIE can reduce post-operative morbidity, in particular the respiratory complications, which are the most frequently encountered. Different landmark studies have reported significantly low pulmonary complication rates using the minimally invasive transthoracic approach. Palanivelu et al. report 2.3% pulmonary complications in their minimally invasive series of 130 patients in prone position [9], whereas Luketich et al. report 18% pulmonary complications in their series of 222 patients undergoing MIE in left lateral decubitus [13]. Other authors report a similar incidence of pulmonary complications, with probably a slight advantage for thoracoscopic resection in prone position, being around 25% in the experience of the VUmc. In contrast, Hulscher et al. observed 57% pulmonary complications in patients undergoing the traditional three-stage transthoracic esophagectomy [6]. Furthermore, the median length of ICU stay was 1 day in the series of Palanivelu and Luketich, whereas in the traditional series of Hulscher the ICU stay was 6 days. Oncologically, the type of resected specimen and the lymph nodes are comparable with the open series, and the disease-free and overall survival reported for MIE and traditional resection are quite comparable. These landmark studies favor minimally invasive esophagectomy in terms of pulmonary complications and recovery.
Despite the advantages of the procedure, still only a small percentage of all esophageal resections for cancer are performed minimally invasively. Although important MIE series have demonstrated feasibility and important short-term advantages, to date the beneficial effects of minimally invasive esophagectomy have not been proven by a randomized trial [10]. Therefore, a randomized comparison between traditional esophagectomy and minimally invasive esophagectomy is necessary. This randomized trial can provide further evidence supporting the minimally invasive and cost-effective approach for esophageal cancer.
Abbreviations
VAS pain score: Visual Analogue Scale pain score
"year": 2011,
"sha1": "1f350f88a7dce2351224920272fae226aecfe39d",
"oa_license": "CCBY",
"oa_url": "https://bmcsurg.biomedcentral.com/track/pdf/10.1186/1471-2482-11-2",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9c1e51c697ae7f7830d5cfed19e577643cbcf4b8",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Performance Evaluation of Phishing Classification Techniques on Various Data Sources and Schemes
Phishing attacks have become a perilous threat in recent years, which has led to numerous studies to determine the classification technique that best detects these attacks. Several studies have made comparisons using only specific datasets and techniques without including the most crucial aspect, which is the performance evaluation of data changes. Hence, classification techniques cannot be generalized if they only use specific datasets and techniques. Therefore, this research determined the performance of classification techniques on changing data through a subset of schemes in a dataset. It was conducted using unbalanced and balanced phishing datasets, as well as subset schemes in ratios of 90:10, 80:20, 70:30, and 60:40. The thirteen most recent classification techniques used in preliminary phishing studies were compared and evaluated against ten performance measures. The results showed that the proposed schemes successfully uncover the maximum and minimum performance obtained by a classification technique. These comparisons can provide deeper insights into phishing classification techniques than related research.
I. INTRODUCTION
Phishing is a perilous threat to cybersecurity; according to The National Institute of Standards and Technology, it is an attempt to obtain sensitive data, such as bank account numbers, or access to larger computerized systems by sending fraudulent requests through emails or websites. On average, the chance of being exposed to this attack across various sectors is 11% [1]. Phishing is also a socially engineered attack that tends to inflict physical or psychological harm on individuals and organizations [2]. The corporate sectors affected include technology, energy or utilities, retail, and financial services. These organizations are highly vulnerable to phishing. Therefore, cyber security-based measures are needed to prevent these attacks [3].
Several studies have been carried out on phishing prevention, one based on its identification and classification.
The associate editor coordinating the review of this manuscript and approving it for publication was Vlad Diaconita.
However, only a few studies have compared phishing classification techniques, such as [8], [18], [22], [23], and [24]. This comparative research is generally divided into four main parts: phishing, the type of dataset, performance evaluation, and the techniques used. The data sources used by [8], [18], [22], [23], and [24] were obtained from phishing websites and URLs, while [24] used raw emails sourced from Apache SpamAssassin and Nazario. The dominant performance evaluations are accuracy, precision, and F-measure. Random forest, SVM, and Naïve Bayes are the most widely used techniques. This comparative research has a gap, namely how the existing techniques perform across various public datasets, including balanced and unbalanced ones.
Interestingly, this research is based on the performance evaluation of classification techniques when using a specific unbalanced dataset for certain phishing types. This is similar to the processes adopted by studies that did not compare these classification techniques. Vaitkevicius and Marcinkevicius [18] used two balanced and one unbalanced dataset and reported better results than previous comparisons. Gana and Abdulhamid [23] only used unbalanced public datasets, and it was proven that classification performance changes in accordance with the subset scheme. This research is motivated by several studies that failed to prove how performance evaluation is influenced when techniques are used to classify various subsets of dataset schemes; some only described the limited impact of this performance on commonly used schemes, such as 90:10. This research adopted three public datasets, namely MDP-2018, UCI Phishing website, and Spambase. MDP-2018 is a balanced dataset, whereas the UCI Phishing website and Spambase datasets are unbalanced. The distribution of features in each dataset is as follows: MDP-2018, UCI Spambase, and the UCI Phishing website have 48, 58, and 31 features, respectively. In addition, thirteen of the most frequently used classification techniques were adopted, namely random forest, SVM, logistic regression, MLP, C4.5, Bayesian network, REP-Tree, Naïve Bayes, P.A.R.T, ABET (AdaBoost.M1 and Extra trees), ROFET (Rotation Forest and Extra trees), BET (Bagging and Extra trees), and LBET (LogitBoost and Extra trees). A subset scheme was established to ensure quality classification techniques were employed.
The subset schemes were derived from the proportions of the phishing and legitimate data classes. This research utilized the 90:10, 80:20, 70:30, and 60:40 subset schemes, which were applied to both the legitimate and the phishing data. For example, the UCI Phishing website dataset comprises 6,157 phishing and 4,898 legitimate websites; a 90:10 subset simply implies 90% of the phishing and 10% of the legitimate websites. The subset schemes were designed to mirror real-world class distributions, as reflected in the experiments carried out later. To ensure that the resulting classification model is excellent and reliable, a 10-fold cross-validation approach was adopted.
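One plausible reading of the sizing rule above can be sketched in a few lines of Python; this is an illustrative helper (the function name and the flooring convention are assumptions, not the authors' code), in which a 90:10 phishing:legitimate scheme keeps 90% of the phishing records and 10% of the legitimate records:

```python
import math

def subset_counts(n_phish, n_legit, phish_pct, legit_pct):
    """Hypothetical helper: records kept per class for a
    phishing:legitimate subset scheme such as 90:10."""
    assert phish_pct + legit_pct == 100
    return (math.floor(n_phish * phish_pct / 100),
            math.floor(n_legit * legit_pct / 100))

# UCI Phishing website: 6,157 phishing and 4,898 legitimate records
print(subset_counts(6157, 4898, 90, 10))  # → (5541, 489)
```

The same helper applied with the percentages swapped gives the legitimate-dominant variants (90% legitimate:10% phishing) tested later in the paper.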
Relying only on accuracy as a performance evaluation measure is not advisable [18], [24]. This led to the use of ten performance evaluation measures, namely accuracy, F-measure, precision, TPR, ROC, FPR, PRC, BDR, MCC, and G-Mean. The remainder of this paper is organized as follows: Section II is a literature review on comparative phishing classification techniques, and Section III describes the experimental methodology used. The results are analysed in Section IV, and conclusions are drawn in Section V.
II. RELATED WORKS
Comparative research on phishing classification techniques is indispensable to determine the most appropriate procedure. Although several preliminary studies have been carried out, there are still gaps. One such issue is the impact of balanced and unbalanced datasets and subset schemes on classification techniques. Therefore, there is a need for comparative research that can resolve this gap. Generally, the most recent analysis comprises four main parts, namely phishing, dataset type, performance evaluation, and the adopted techniques. This research creates opportunities to gain deeper insights into phishing detection.
The studies carried out by [18], [22], and [23] compared phishing website classification techniques, while [24] analysed the impact on phishing emails and [8] on URLs. These were tested on unbalanced datasets; only Vaitkevicius and Marcinkevicius [18] added a balanced dataset to their experiment. The numbers of classification techniques used in comparative research are stated as follows: 17 [23], 13 [22], eight [18], [24], and seven [8]. The comparison by Vaitkevicius and Marcinkevicius [18] shows that the MLP, random forest, gradient tree boosting, and AdaBoost techniques were effective. Gana and Abdulhamid [23] obtained similar results, with random forest being exceptional. On the contrary, Karabatak and Mustafa [22] stated that MLP, JRip, P.A.R.T., J48, random forest, and tree were ineffective because the selected dataset features affected the performance of the classification technique. According to Karabatak and Mustafa [22], BayesNet, SGD, lazy.KStar, R.F.Classifier, LMT, and ID3 have the best performances, which contradicts the results obtained by Vaitkevicius and Marcinkevicius [18]. The Naïve Bayes classification technique was ineffective in the experiments by Vaitkevicius and Marcinkevicius [18]. It is presumed that differences in schemes can affect the performance of a classification technique: sometimes its performance is good, whereas in other circumstances it tends to decline [22]. Karabatak and Mustafa [22] also showed that when feature selection is not applied to the dataset, random forest turns out to be the most effective technique among all those used in the experiment. Special investigations are required to explore this gap further.
The use of features when evaluating classification performance on multiple datasets is another point to consider. Gangavarapu et al. [24] used a feature extraction technique on raw emails, which generated 40 features without compromising the information contained in the raw email. A feature extraction technique was also employed by Sahingoz et al. [8] when constructing new datasets from URLs acquired from PhishTank; the websites were crawled using a search engine with specified keywords. Sahingoz et al. [8] obtained a large number of features, 102 in total, and employed further techniques to select the optimal ones. References [18], [22], and [23] adopted an entirely different procedure from Sahingoz et al. [8] and Gangavarapu et al. [24]: they used the dataset as a medium to test their proposed method rather than feature extraction techniques. Gana and Abdulhamid [23] and Karabatak and Mustafa [22] used a similar dataset with 31 features; however, Karabatak and Mustafa [22] evaluated the effect of feature reduction on classification performance. The dissimilarity among the studies carried out by [8], [18], and [22] is the use of varying datasets and features. Vaitkevicius and Marcinkevicius [18] utilized the UCI 2015, UCI 2016, and MDP 2018 datasets with 30, nine, and 48 features, respectively. The varying datasets and features provide in-depth insights into the performance of the proposed classification techniques.
Comparative studies on classification techniques employed varying numbers of performance evaluation measures: four [8], ten [23], seven [24], and one [18], [22]. The more performance measures used, the more insight is gained into these classification techniques. Gana and Abdulhamid [23] reported that random forest excels in all performance measures, namely accuracy, precision, recall, F-measure, Area Under the ROC Curve (AUC/AUROC), kappa statistics, root-mean-squared error, True Positive Rate (TPR), False Positive Rate (FPR), and root-relative-squared error. It is also effective in all performance evaluations (precision, sensitivity, F-measure, and accuracy), especially when natural language processing features are used [8]. Random forest evaluated with accuracy, precision, recall, F1-measure, Matthews correlation coefficient (MCC), AUROC, and area under the precision-recall curve (AUPRC) was likewise effective in the experiment of [24]. The experiments conducted by Vaitkevicius and Marcinkevicius [18] and Karabatak and Mustafa [22] also exhibited good accuracy performance. However, not all classification techniques reported are effective across the diverse evaluation measures, with the notable exception of random forest: Gana and Abdulhamid [23] and Gangavarapu et al. [24] reported that it excels in all performance evaluations of the defined schemes.
Several recent studies are similar to this research, such as those by Priya et al. [25], Indrasiri et al. [26], Ozcan et al. [27], Bu and Kim [28], Zeng et al. [29], and El Aassal et al. [17], which evaluated the performance of classification techniques and their impact on various datasets. However, these were limited to phishing websites, in contrast to this research, which involved both email and website phishing. Various datasets were evaluated to ensure that the performance of the proposed technique or method is known. These studies employed different performance evaluations: Priya et al. [25] used TPR, MCC, recall, precision, and F-measure; Indrasiri et al. [26] adopted precision, accuracy, F1-score, and recall; Ozcan et al. [27] utilized TPR, FPR, precision, accuracy, and F1-score; and Bu and Kim [28] only used accuracy and recall. El Aassal et al. [17] and Zeng et al. [29] utilized accuracy, precision, recall, F1-score, Geometric Mean, Balanced Detection Rate, Area Under the Curve, and Matthews Correlation Coefficient. Meanwhile, this research used accuracy, F-measure, precision, TPR, ROC, FPR, PRC, BDR, MCC, and G-Mean to obtain a detailed performance evaluation of the classification techniques.
Certain studies have successfully described the proposed technique's performance, while others, such as Indrasiri et al. [26] and Bu and Kim [28], evaluated the impact of feature selection on various datasets. Feature selection reduces the dimensions of the dataset through a process of selecting the relevant key features in each category [30]. Indrasiri et al. [26] and Bu and Kim [28] used cross-validation to ascertain the authenticity of the model generated from the dataset that had undergone the feature selection process. Bu and Kim [28] applied the same procedure to features obtained from feature extraction. Ozcan et al. [27] used cross-validation directly on the model formulated from the proposed technique and the dataset.
Ozcan et al. [27] evaluated the parameters to obtain maximum performance from the proposed technique. The studies carried out by Priya et al. [25], Indrasiri et al. [26], and Bu and Kim [28] were centred on improving performance. However, Ozcan et al. [27] failed to state the performance of the proposed technique before and after the parameters were evaluated; therefore, when parameter evaluation is employed, whether performance increases or decreases remains unknown. More detailed information is needed, such as the performance before and after the proposed method is applied.
Generally, the experiments carried out in this research are similar to those of Indrasiri et al. [26] namely, comparing the performance before and after using various parameters as well as analyzing the proposed technique. Indrasiri et al. [26], evaluated the performance of accuracy and ROC_AUC on models with various cross-validation values, such as 10, 20, 30, 40 and 50. Hyper-parameter tuning and feature selection were carried out to boost the performance of the proposed technique.
However, the difference between this research and that conducted by Indrasiri et al. [26] lies in the datasets used, the proposed techniques, the performance evaluation, data retrieval, and the subset schemes. This research adopted three datasets, namely MDP-2018, UCI Phishing website, and UCI Spambase. Indrasiri et al. [26] performed feature selection and hyper-parameter tuning to obtain maximum performance, while this research used subset schemes, namely 90% phishing:10% legitimate, 80% phishing:20% legitimate, 70% phishing:30% legitimate, 60% phishing:40% legitimate, and 50% phishing:50% legitimate (balanced), as well as 90% legitimate:10% phishing, 80% legitimate:20% phishing, 70% legitimate:30% phishing, 60% legitimate:40% phishing, and 50% legitimate:50% phishing (balanced), to find maximum performance. This shows that if phishing data is distributed more than legitimate data, or vice versa, attention must be paid to the impact on the resulting performance. This research contributed to performance evaluation by altering the data distribution, which significantly affects its increase or decrease. It helps future studies better understand data distribution, enabling them to perform hyper-parameter tuning to obtain maximum performance in detecting phishing attacks [26].
Several studies, such as El Aassal et al. [17] and Zeng et al. [29], adopted a similar concept. They used PhishBench, which reduces datasets sourced from both websites and emails to 75%, 50%, and 25% of their original sizes, whereas this research used Weka to test the classification techniques against the proposed scheme. This research is based on the comparison between the 90:10, 80:20, 70:30, and 60:40 subset schemes. For example, the comparison at 90:10 simply implies that 90% and 10% of the data are phishing and legitimate, respectively, drawn from the MDP-2018 dataset, UCI Phishing website, and UCI Spambase. The reverse orders, such as 90% legitimate and 10% phishing, were also compared to ascertain the performance under various conditions. El Aassal et al. [17] and Zeng et al. [29] reduced data from the dataset regardless of the quality of the discarded records. On the contrary, this research evaluated the performance of the unused data, on the belief that any of them can potentially affect the detection of a phishing attack. El Aassal et al. [17] and Zeng et al. [29] adopted various legitimate and phishing data sources, such as Enron, Wikileaks, Nazario, Bluefin, SpamAssassin, PhishTank, OpenPhish, Alexa, DMOZ, and Yahoo Directory. They also generated several new datasets due to the complexity of their sources. The use of the datasets developed by Zeng et al. [29] as a data source does not allow comparisons with related studies. Similarly, El Aassal et al. [17] encountered certain problems building models from a combination of their datasets and was only able to report comparisons with the results of related studies.
The apparent difference between the research carried out by El Aassal et al. [17] and Zeng et al. [29] and the present one is the adoption of standardized public datasets and commonly used performance measures. This research employed datasets from the UCI Phishing website and Spambase, including MDP-2018, which several studies have widely used to test the performance of proposed techniques, making standardized comparison possible. The techniques proposed by Alsariera et al. [31], namely ABET (AdaBoost.M1 and Extra trees), ROFET (Rotation Forest and Extra trees), BET (Bagging and Extra trees), and LBET (LogitBoost and Extra trees), were tested on the subset schemes of the datasets. Therefore, the present research explains the performance when the proposed subset scheme is used.
Some studies reported that any unbalanced dataset needs to be balanced because the imbalance is bound to affect performance [32]. Therefore, this analysis examines what happens when unbalanced data is converted into balanced data and vice versa. The aim is to determine the extent to which the performance of phishing attack detection techniques increases or decreases.
Therefore, it is crucial to uncover gaps that have not been resolved by previous studies [8], [18], [22], [23], [24]. Vaitkevicius and Marcinkevicius [18] and Karabatak and Mustafa [22] used accuracy to evaluate the performance of classification techniques, which only verifies the ability to classify the acquired data. More performance evaluation measures are needed to gain better insights; previous work used four [8], ten [23], and seven measures [24]. Moreover, these studies are limited to unbalanced datasets, so the performance of the classification techniques on balanced datasets remains unknown. Finally, how do the various subset schemes employed by Gana and Abdulhamid [23] affect phishing classification techniques?
III. METHODOLOGY
This section describes the experimental research methodology, selection of datasets, subset schemes, classification techniques, and performance evaluation.
Several studies used public datasets as benchmarks for their proposed techniques, with a variety of metrics used to measure performance. However, the performance evaluation is limited to the use of the entire dataset. Such studies state that the proposed technique's results are better, yet those results remain dependent on the dataset.
Some studies also employed additional techniques, such as feature selection, to improve the performance of the proposed technique. This focuses only on improving technical performance, while the features in the dataset provide a solid relationship whose role significantly affects performance, especially in detecting phishing attacks. These features are adjustable, especially the ones generated from extraction techniques, and their importance tends to differ across studies. Therefore, public and standardized datasets serve as a bridge to measure the performance of a proposed technique. The use of both standard and public datasets makes it easier to compare proposed techniques.
Therefore, this research evaluates the dataset's quality, openness, difference, and evaluation matrix. Quality is evaluated by dividing each of the acquired data classes into a subset scheme, namely 90:10, 80:20, 70:30, and 60:40. This also includes converting the unbalanced datasets into balanced ones and generating the resulting performance. The openness of the dataset makes it easier to obtain comparable results: if the dataset used were private, obstacles would be encountered when comparing with other proposed techniques. The UCI Spambase and Phishing website datasets, including MDP-2018, were selected because several studies tend to use them to test the performance of proposed techniques. Numerous metrics, such as accuracy, TPR, precision, F-measure, FPR, PRC, ROC, BDR, MCC, and G-Mean, were used for performance evaluation.
A. SELECTION OF DATASET
Three public datasets, namely MDP-2018, UCI Phishing website, and Spambase, were used to test the classification techniques. The UCI Phishing website and Spambase datasets have an imbalanced class distribution, whereas that of MDP-2018 is balanced. The MDP-2018 dataset [33] comprises 5,000 phishing and 5,000 legitimate websites and has 48 features. The UCI Spambase comprises 58 features, with 2,788 legitimate and 1,813 phishing emails. The UCI Phishing website comprises 31 features, with 6,157 phishing and 4,898 legitimate websites.
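For reference, the class distributions above can be summarized programmatically. The dictionary layout and the imbalance-ratio helper below are illustrative assumptions; the counts are taken directly from the text:

```python
# Class distributions and feature counts as stated in the text
datasets = {
    "MDP-2018":             {"features": 48, "phishing": 5000, "legitimate": 5000},
    "UCI Spambase":         {"features": 58, "phishing": 1813, "legitimate": 2788},
    "UCI Phishing website": {"features": 31, "phishing": 6157, "legitimate": 4898},
}

def imbalance_ratio(d):
    """Majority-to-minority class ratio; 1.0 means perfectly balanced."""
    a, b = d["phishing"], d["legitimate"]
    return max(a, b) / min(a, b)

for name, d in datasets.items():
    print(f"{name}: imbalance ratio {imbalance_ratio(d):.2f}")
```

MDP-2018 yields a ratio of exactly 1.00, confirming it is the only balanced dataset of the three.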
B. PROPOSED SUBSET SCHEMES
The proposed subset schemes were established by dividing each dataset's classes into the following proportions: 90:10, 80:20, 70:30, and 60:40. Furthermore, the under-sampling technique was used to generate the schema subsets, including balancing the unbalanced datasets, namely UCI Spambase and the Phishing website. This procedure reduces the sample to a specific size [34], here a subset scheme. For example, in the balanced dataset (MDP-2018) with 5,000 phishing and 5,000 legitimate records, the 90:10 subset scheme comprises 90% phishing and 10% legitimate records. The under-sampling technique is used because it is free from the overfitting problems experienced with oversampling, which duplicates data in minority data classes [35]. The present research also tested subset schemes constituted of 90% legitimate and 10% phishing records, applicable to both balanced and unbalanced datasets. A cross-validation approach was employed to ensure that the resulting model is of high quality and to avoid overfitting on the subset schemes [13]; it is also used to ensure that the performance of the classification technique is reliable [36]. The cross-validation was run with an iteration value of 100, following the recommendations of [17] and [19], to obtain accurate maximum performance results. The experimental setting of the subset scheme is shown in Table 1.
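The random under-sampling step described above can be sketched with the standard library alone (this is a minimal illustration, not the Weka workflow used in the experiments): the majority class is shrunk to the minority size by sampling without replacement, which avoids the duplication-driven overfitting risk of oversampling noted in the text.

```python
import random

def undersample(majority, minority, seed=42):
    """Randomly reduce the majority class to the minority class size.
    Sampling is without replacement, so no record is duplicated."""
    rng = random.Random(seed)
    return rng.sample(majority, len(minority)), minority

# UCI Spambase: 2,788 legitimate vs. 1,813 phishing records
legit = list(range(2788))
phish = list(range(1813))
legit_balanced, phish_balanced = undersample(legit, phish)
print(len(legit_balanced), len(phish_balanced))  # → 1813 1813
```

The fixed seed makes the draw reproducible, mirroring the need for the reported performance figures to be repeatable across runs.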
The experiments conducted by Gana and Abdulhamid [23] showed a change in the performance of the classification technique when the 70% data-taking test scheme was utilized. This led to the proposition of different subset schemes and datasets used to prove that schema changes affect the classification techniques' performance.
C. CLASSIFICATION TECHNIQUES
This research further employed some classification techniques that are rarely used, including the Bayesian network [37], decision tree, and P.A.R.T. The essence is to prove whether these less frequently used techniques are effective. Furthermore, the techniques were implemented using the Waikato Environment for Knowledge Analysis (Weka) version 3.8.4 with default parameters [38].
D. EVALUATION METHOD
The five most frequently employed performance evaluation measures (accuracy, F-measure, precision, TPR, and ROC) were used in the experiments, and FPR and PRC were added to obtain further information about classification performance. PRC displays precision and recall at different probability thresholds; it is better suited than ROC to measuring the performance of classification techniques on a dataset with an imbalanced class distribution [39]. Based on these evaluations, the best and worst techniques were selected in each experiment, and the techniques that performed best across all evaluations were ranked. In total, the seven most widely used measures in phishing classification performance evaluation (accuracy, TPR, precision, F-measure, FPR, PRC, and ROC) were adopted. PRC is the precision value for the corresponding sensitivity (recall) [39], while ROC plots TPR against FPR at various threshold settings [40].
Furthermore, several additional performance measures, namely the Geometric Mean (G-Mean), Balanced Detection Rate (BDR), and Matthews Correlation Coefficient (MCC), were used for comparison with recent research. These add insight to the conducted experiments. G-Mean is the geometric mean of the True Negative Rate (TNR) and recall [17]. BDR is a metric used to measure the number of correctly classified minority-class instances and to appropriately penalize misclassifications [41]. MCC considers both positive and negative (true and false) values and is generally regarded as suitable for unbalanced data, even when the classes have diverse sizes [25].
The following is the performance formula for BDR, MCC and G-Mean:
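As a sketch of these computations, G-Mean and MCC can be derived from confusion-matrix counts using their standard definitions, which match the descriptions given in the text; BDR's exact formulation follows [41] and is not reproduced here:

```python
import math

def g_mean(tp, fp, tn, fn):
    """Geometric mean of recall (TPR) and the true negative rate (TNR)."""
    recall = tp / (tp + fn)
    tnr = tn / (tn + fp)
    return math.sqrt(recall * tnr)

def mcc(tp, fp, tn, fn):
    """Matthews correlation coefficient over all four confusion cells."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

# Example: recall 0.8 and TNR 0.9 give a G-Mean of sqrt(0.72) ≈ 0.849
print(g_mean(40, 5, 45, 10))
print(mcc(40, 5, 45, 10))
```

A perfect classifier (no false positives or false negatives) scores 1.0 on both measures, which is why they are convenient upper-bound sanity checks.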
IV. EXPERIMENTAL RESULT AND DISCUSSION
The results of the experiment carried out based on the methodology are presented in this section. First, the datasets were selected and classified as described in the previous section. Then, the 90:10, 80:20, 70:30, and 60:40 schemes were generated. Furthermore, the previously balanced dataset was made unbalanced and vice versa, realized by adjusting the number of records in the smallest class. The essence is to show how the dataset schema affects the performance of the classification technique. Afterward, each scheme and dataset was tested using Weka. A 10-fold cross-validation procedure was adopted to ensure that the model generated by each classification technique remains reliable.
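The 10-fold cross-validation step can be sketched in plain Python as stratified fold assignment; the function below is a hypothetical illustration (not the Weka implementation used in the experiments) in which every fold preserves the phishing:legitimate ratio of the subset scheme:

```python
import random

def stratified_kfold_indices(labels, k=10, seed=42):
    """Assign record indices to k folds while preserving the class
    ratio in every fold (a sketch of stratified k-fold CV)."""
    by_class = {}
    for i, y in enumerate(labels):
        by_class.setdefault(y, []).append(i)
    rng = random.Random(seed)
    folds = [[] for _ in range(k)]
    for idxs in by_class.values():
        rng.shuffle(idxs)
        for j, i in enumerate(idxs):
            folds[j % k].append(i)
    return folds

# A 90:10 phishing:legitimate scheme with 100 records
labels = [1] * 90 + [0] * 10
folds = stratified_kfold_indices(labels)
print([sum(labels[i] for i in f) for f in folds])  # 9 phishing records per fold
```

Each fold then serves once as the test set while the remaining nine folds train the model, so the reported performance averages over all ten splits.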
Next, a training and testing session was performed on all these datasets, which had been schematically assigned. The results were evaluated using seven performance measures, namely accuracy, F-measure, precision, TPR, ROC, FPR, and PRC. Table 2 shows that random forest and P.A.R.T. are the most favoured approaches on the MDP-2018 dataset, UCI Phishing website, and Spambase. Random forest performance is better on the MDP-2018 dataset than on the UCI Phishing website and Spambase: it has an accuracy, F-measure, precision, TPR, and ROC of 98.37%, 0.984, 0.984, 0.984, and 0.999, respectively, on the MDP-2018 dataset. Meanwhile, Naïve Bayes did not achieve the best performance in any evaluation except ROC. The lowest ROC lies with the SVM: 0.939, 0.936, and 0.891 for the MDP-2018, UCI Phishing website, and Spambase datasets, respectively.
Afterwards, experiments were conducted on the UCI Phishing website and Spambase datasets. As noted, these two datasets have an imbalanced number of data classes. Therefore, the data in the largest class was reduced to match the smallest one. After balancing, the UCI Phishing website has 4,898 records in each of the legitimate and phishing website classes, and the UCI Spambase has 1,813 records in each of the legitimate and phishing email classes.
However, when Tables 3 and 2 were compared, it was discovered that the random forest's performance had changed. On the imbalanced UCI Phishing website, its initial accuracy, F-measure, precision, TPR, and ROC of 97.259%, 0.973, 0.973, 0.973, and 0.996 changed to 97.396%, 0.974, 0.974, 0.974, and 0.996, respectively; interestingly, only the ROC remained the same after balancing the data. On the balanced UCI Spambase, the random forest's performance increased on every measure. The random forest classification technique is thus presumed to handle both balanced and imbalanced class distributions in a dataset.
Naïve Bayes remains the classification technique with the lowest performance on both balanced and imbalanced datasets. Its accuracy, F-measure, precision, and TPR decreased on the two balanced datasets. On the contrary, the ROC of Naïve Bayes increased only on the balanced UCI Spambase dataset, whereas on the Phishing website dataset it remained unchanged. The ROC of Naïve Bayes on the imbalanced and balanced UCI Spambase dataset is 0.937 and 0.951, respectively. Both balanced and imbalanced datasets therefore influence the performance of the classification technique.
A portion of the data, 90% legitimate:10% phishing (90:10), 80% legitimate:20% phishing (80:20), 70% legitimate:30% phishing (70:30), and 60% legitimate:40% phishing (60:40), in the MDP-2018 dataset was selected. The essence is to show how performance was affected when legitimate data dominated rather than phishing data. Table 4 shows that random forest had the best accuracy of 98.84% at the 60:40 subset. The resulting value is similar to the subset scheme of 90% phishing and 10% legitimate in the MDP-2018 dataset. Naïve Bayes produced an accuracy of 93.12% at the 90:10 subset scheme, which is greater than the value obtained with the balanced MDP-2018 dataset (85.15%). However, Naïve Bayes experienced a decrease in the 90:10 subset scheme compared to the value shown in Table 4 (90.44%).
The random forest has the highest subset-scheme accuracy on the UCI Phishing website, as shown in Table 5, outperforming the results generated from both the balanced and imbalanced datasets. It has the highest accuracy value of 98.31% on the subset scheme of 70% phishing:30% legitimate, compared to 97.396% and 97.259% on the balanced and imbalanced UCI Phishing website dataset, respectively. Simultaneously, the Bayesian network has the lowest accuracy of 92.15% (60:40) among all subset schemes. Compared to this, the measure produces better accuracy on the imbalanced (92.989%) and balanced (92.62%) UCI Phishing website datasets. The Bayesian network and Naïve Bayes are thus less effective when used on the UCI Phishing website dataset subset schemes.
Meanwhile, when the legitimate and phishing data portions on the UCI Phishing website were changed, the random forest was unable to stay on top under the 90% legitimate and 10% phishing selection, as shown in Table 5. MLP, with 98.59% accuracy, outperformed random forest (98.39%) and has the highest accuracy on the UCI Phishing website dataset with the legitimate:phishing scheme. Compared to the balanced and imbalanced datasets, the MLP accuracy of the legitimate:phishing scheme is much better; it increased from the imbalanced (96.9%) and balanced (96.927%) datasets, as well as the phishing:legitimate scheme (97.72%), on the UCI Phishing website. Table 6 shows that the random forest has the highest accuracy of 96.96% for the 90% phishing and 10% legitimate scheme on the UCI Spambase dataset. This is better than the balanced and imbalanced UCI Spambase datasets, on which random forest's accuracy is 96.0287% and 95.5%, respectively. This proves there is an increase in accuracy when the 90:10 scheme is applied to the UCI Spambase dataset. Naïve Bayes had a similar experience: it encountered a significant increase in accuracy, to 86.97%, on the UCI Spambase with the 60:40 scheme, compared to 79.2871% and 86.1% on the imbalanced and balanced UCI Spambase datasets, respectively. Table 8 shows that random forest had the best performance when the UCI Spambase dataset with the 80% legitimate and 20% phishing scheme was used. Its accuracy is 97.14%, higher than the balanced (96.03%) and imbalanced (95.50%) UCI Spambase datasets and the 90:10 UCI Spambase scheme (96.96%). However, the maximum and minimum accuracies of Naïve Bayes are 83.23% (60:40) and 76.58% (90:10), respectively. This implies that its best performance was only detected on UCI Spambase with the phishing:legitimate scheme.
There, it had maximum and minimum accuracies of 94.19% (90:10) and 86.97% (60:40), respectively. Table 7 shows that a total of eight classification techniques improved their precision performance in all the subset schemes for the legitimate:phishing class sequence. However, only Naïve Bayes had a partially increased precision performance in the subset schemes. All classification techniques experienced partial performance improvements in ROC and FPR. Decision tree and P.A.R.T experienced partial performance improvements in virtually all the subset schemes; they experienced an overall improvement only in precision. This differs from the SVM, logistic regression, and Bayesian network, which increased virtually all performances except precision. Table 8 shows that all classification techniques experienced significant performance improvements in the subset scheme using accuracy, F-measure, precision, and TPR for the legitimate:phishing data class sequence. SVM and C4.5 achieved superior performance for all schemes, except in ROC and FPR. Meanwhile, the decision tree is the classification technique that gained the most performance improvements in some subset schemes. Overall, all the others experienced a decline in FPR performance for all subset schemes. Similar to Table 7, all classification techniques in Table 9 increased only partially when the FPR performance evaluation was used. However, the decision tree and C4.5 experienced the most partial increases in the subset schemes' performance. The Bayesian network and MLP experienced an increase in overall performance across all subset schemes, except for the FPR, which received a partial increase. Generally, all classification techniques tend to increase overall performance in accuracy, F-measure, precision, and TPR. Table 10 shows that the evaluations that do not experience a decrease in performance are accuracy, F-measure, TPR, and precision.
This involved the use of the UCI Phishing website dataset with the class order legitimate:phishing. The PRC's minimum performance experienced a decline, as much as 0.1% using MLP with the 90:10 subset scheme (legitimate:phishing) on the UCI Phishing website dataset. Meanwhile, the highest decrease of 56.4% was experienced in the FPR using SVM with the 90:10 subset scheme (legitimate:phishing) in the UCI Spambase dataset. The subset scheme produces performance improvements, especially accuracy. The minimum accuracy performance was 0.02% using C4.5, whereas the maximum was 14.9% using Naïve Bayes. The UCI Spambase dataset with phishing:legitimate class order produced better performance when using the 90:10 subset than the 60:40 subset.
The classification techniques were tested on different datasets and schemes to determine whether their performance increased or decreased. The random forest classification technique is superior on both balanced and imbalanced datasets. Naïve Bayes performed poorly, consistent with Sahingoz et al.'s findings.
The UCI Phishing website and Spambase datasets are imbalanced. Therefore, the classification techniques' performance after converting an initially imbalanced dataset into a balanced one was tested. The two UCI datasets were adjusted by reducing the larger class to the size of the smaller one so that the classes were balanced. Under this scheme, random forest tends to improve classification performance.
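The balancing step can be illustrated with a minimal random under-sampling sketch. The paper performed its balancing in Weka, so the helper below (its name and the seed are ours) only demonstrates the idea of shrinking the larger class to the size of the smaller one:

```python
import random

def undersample(records, labels, seed=42):
    """Randomly under-sample the majority class so that every class has
    as many records as the smallest class.  Hypothetical helper: the
    paper itself performed the balancing in Weka."""
    rng = random.Random(seed)
    by_class = {}
    for rec, lab in zip(records, labels):
        by_class.setdefault(lab, []).append(rec)
    minority_size = min(len(v) for v in by_class.values())
    balanced = []
    for lab, recs in by_class.items():
        # sample without replacement down to the minority-class size
        for rec in rng.sample(recs, minority_size):
            balanced.append((rec, lab))
    rng.shuffle(balanced)
    return balanced

# Illustrative imbalanced data: 90 legitimate vs 10 phishing records.
data = [(i, "legitimate") for i in range(90)] + [(i, "phishing") for i in range(10)]
balanced = undersample([d[0] for d in data], [d[1] for d in data])
counts = {}
for _, lab in balanced:
    counts[lab] = counts.get(lab, 0) + 1
```

After the call, both classes contain the same number of records (here, 10 each), mirroring the balanced form of the two UCI datasets.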
Although the previous dataset was balanced, a new test scheme was created by generating imbalanced subsets from it. The 90:10, 80:20, 70:30, and 60:40 schemes were applied to the balanced MDP-2018 dataset to reproduce the conditions under which the experiments would occur in practice. Acquiring actual phishing data is difficult because it requires collaborating with phishing victims. Data imbalance is therefore very likely to occur in practice, for example, 90% legitimate and 10% phishing data, or 30% phishing and 70% legitimate data.
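The subset schemes can be sketched as follows. This is a stdlib illustration, not the paper's actual resampling code; the `make_subset` helper and the stand-in record lists are our own:

```python
import random

RATIOS = [(90, 10), (80, 20), (70, 30), (60, 40)]

def make_subset(legitimate, phishing, ratio, total=100, seed=0):
    """Draw a subset with the given legitimate:phishing class ratio,
    e.g. (90, 10) -> 90% legitimate and 10% phishing records.
    A sketch of the paper's subset schemes; names are ours."""
    rng = random.Random(seed)
    n_legit = total * ratio[0] // 100
    n_phish = total * ratio[1] // 100
    subset = [(r, "legitimate") for r in rng.sample(legitimate, n_legit)]
    subset += [(r, "phishing") for r in rng.sample(phishing, n_phish)]
    rng.shuffle(subset)
    return subset

legit = list(range(1000))   # stand-in legitimate records
phish = list(range(1000))   # stand-in phishing records
schemes = {f"{a}:{b}": make_subset(legit, phish, (a, b)) for a, b in RATIOS}
```

Swapping the argument order (`phishing, legitimate`) would give the phishing:legitimate class sequences also used in the experiments.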
Random forest produces the best performance on the schemes that were used. The lowest accuracy is 90.78%, on the UCI Spambase dataset with the 60% legitimate and 40% phishing scheme. Conversely, the highest accuracy, 98.84%, was realized on the MDP-2018 dataset with the 60:40 (legitimate:phishing) and 90:10 (phishing:legitimate) schemes.
Accuracy is not the final measure of a classification technique's performance because it only counts correctly identified instances [24]. This has led to several additional performance measures, such as TPR, FPR, precision, F-measure, ROC, and PRC. According to Saito and Rehmsmeier [39], these measures have their respective advantages and disadvantages on balanced and imbalanced datasets, so the distribution of data classes needs to be analysed and complementary measures used to cover each other's deficiencies. Therefore, it was ensured that the best classification technique was superior across all performance measures. Based on the experiments carried out, random forest has the best performance on every measure used.
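The measures named above derive from the binary confusion matrix. The sketch below (our own helper, not the paper's Weka output) also illustrates why accuracy alone misleads: the same classifier behaviour yields the same accuracy and FPR on a balanced and on a 90:10 imbalanced test set, yet its precision and F-measure collapse:

```python
def metrics(tp, fp, tn, fn):
    """Basic evaluation measures computed from a binary confusion
    matrix; an illustrative stdlib sketch (the paper used Weka)."""
    accuracy  = (tp + tn) / (tp + fp + tn + fn)
    tpr       = tp / (tp + fn)            # recall / true positive rate
    fpr       = fp / (fp + tn)            # false positive rate
    precision = tp / (tp + fp)
    f_measure = 2 * precision * tpr / (precision + tpr)
    return {"accuracy": accuracy, "tpr": tpr, "fpr": fpr,
            "precision": precision, "f_measure": f_measure}

# Same classifier behaviour (TPR 0.9, FPR 0.1) on two test sets:
balanced   = metrics(tp=90, fp=10, tn=90,  fn=10)   # 100 pos, 100 neg
imbalanced = metrics(tp=90, fp=90, tn=810, fn=10)   # 100 pos, 900 neg
# accuracy and FPR are identical, but precision drops from 0.9 to 0.5
```

This is exactly the trap Saito and Rehmsmeier warn about: on imbalanced data, measures anchored to the negative class (accuracy, FPR, ROC) stay flat while precision-based measures reveal the degradation.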
However, random forest underperformed only on the UCI Phishing website dataset with the 90% legitimate and 10% phishing scheme, as shown in Table 7. It only excelled in PRC (0.995) compared with MLP (0.994). Random forest obtained an accuracy of 98.39%, compared to 98.59% for MLP. This is because it failed to identify 89 of the 490 legitimate records, whereas MLP failed on only 71 of them. Random forest identified 5533 phishing records, compared with 5527 for MLP. Its performance is therefore presumed to still be the best at identifying phishing under imbalanced dataset conditions such as 90% legitimate and 10% phishing.
On the basis of the experiments carried out, it was concluded that differences in the subset schemes tend to affect a classification technique's performance. In this subset scheme, the UCI Phishing website dataset contributed to both increases and decreases in performance. However, the UCI Spambase dataset with the legitimate:phishing data class sequence significantly increased and decreased performance.
Some performance evaluations either increased or decreased when this subset scheme was used. Accuracy, TPR, F-measure, and precision improved significantly, within the range of 0.01% to 14.9%. Meanwhile, FPR and ROC decreased, by 0.1% to 56.4%. For FPR, all classification techniques except the Bayesian network experienced a significant increase in several subset schemes. The majority of these occurred for the legitimate:phishing class order with the 90:10 and 80:20 subset schemes, because these schemes generate new data subsets whose proportions of phishing and legitimate records differ most. Classification performance decreases when the FPR value is high.
A. COMPARISON OF THE RESULTS WITH OTHER WORKS
This research compared its results with preliminary research on the basis of datasets, schemes, and classification techniques. The comparison was limited because the proposed schemes are new mechanisms for these datasets, and not all studies share the attributes needed for comparison. Only [22], [18], and [23] have slight similarities, namely the use of the UCI Phishing website dataset and MDP-2018. Of these, [23] is the most similar to the present research regarding testing techniques and data acquisition.
The following are the results of comparisons with [22], [18], and [23]: • Karabatak and Mustafa [22] presented a performance evaluation of website phishing classification techniques on an unbalanced dataset. Five-fold cross-validation was used to ensure that the model built by each classification technique was better. However, the performance measurement relied only on accuracy, and a classification technique is not necessarily optimal when evaluated with different performance measures.
• Gana and Abdulhamid [23] proposed a view different from that of Karabatak and Mustafa [22], involving 10 performance evaluation measures on an unbalanced phishing website dataset. The results obtained by Gana and Abdulhamid [23] are better than those of Karabatak and Mustafa [22]. One factor in the model performance generated by the classification technique is the use of a 10-fold cross-validation procedure. Coincidentally, both Gana and Abdulhamid [23] and Karabatak and Mustafa [22] adopted cross-validation. Although a larger number of folds helps produce better performance, Gana and Abdulhamid [23] used it only on an unbalanced dataset, making it difficult to verify how it behaves on a balanced one.
• Vaitkevicius and Marcinkevicius [18] identified the best classification technique for the balanced MDP-2018 dataset using 30-fold cross-validation. However, as with the results of Karabatak and Mustafa [22] and Gana and Abdulhamid [23], the performance of the classification techniques is difficult to verify when different datasets are used.
• The present research aims to determine the performance of classification techniques on balanced and unbalanced datasets across multiple subset schemes. It shows the impact on the balanced MDP-2018 dataset, as reported by Vaitkevicius and Marcinkevicius [18], and on the unbalanced UCI Phishing website dataset, as used by Karabatak and Mustafa [22] and Gana and Abdulhamid [23]. Better insights than those of Gana and Abdulhamid [23] were extracted through the 90:10, 80:20, 70:30, and 60:40 subset schemes, enabling the results of this research to resolve crucial gaps and provide direction for further studies on phishing classification techniques. Based on the comparative results, the classification techniques proved better than those applied in related research. Some studies analysed the impact of these approaches on unbalanced and balanced datasets for certain phishing types; the present research provides more in-depth insight into the impact of classification techniques on balanced and unbalanced datasets using various subset schemes. Finally, [22], [18], and [23] reported the performance of classification techniques only in some instances and schemes, whereas the present research resolves certain limitations, such as the classification techniques' performance across various datasets and subset schemes.
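The k-fold cross-validation procedure used in the compared studies (5-, 10-, and 30-fold) can be sketched as a simple index generator; this is an illustrative stdlib version, not the studies' own code:

```python
def kfold_indices(n, k):
    """Yield (train, test) index lists for k-fold cross-validation:
    each record appears in exactly one test fold, and the remaining
    records form the training set.  A minimal stdlib sketch."""
    # distribute the remainder so fold sizes differ by at most one
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        yield train, test
        start += size

# e.g. 10-fold splits over 100 records, as in Gana and Abdulhamid [23]
folds = list(kfold_indices(100, 10))
```

A larger `k` (such as the 30 folds of [18]) trains on more data per fold, which is one reason fold count influences the reported performance.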
The concept of the proposed subset scheme was tested against recent research by Alsariera et al. [31], who combined meta- and base-learners to obtain maximum performance; the two kinds of techniques are combined according to their weaknesses and strengths [42]. The research by Alsariera et al. [31] was selected because it employed Weka techniques: ABET (AdaBoost.M1 and Extra Trees), ROFET (Rotation Forest and Extra Trees), BET (Bagging and Extra Trees), and LBET (LogitBoost and Extra Trees). Tables 11 and 12 show that ROFET reached a maximum accuracy of 98.660% on MDP-2018 in the phishing:legitimate order and 98.9% in the legitimate:phishing order. By comparison, Alsariera et al. [31] obtained maximum accuracies of 97.5758%, 97.4491%, and 97.4853% with the LBET, ROFET, and ABET techniques, respectively. The maximum LBET performance here was 97.8%, higher than the 97.5758% realized by Alsariera et al. [31]. In terms of accuracy (Table 12), on UCI Spambase, ABET, ROFET, BET, and LBET obtained maximum performance in the 90:10 subset scheme with the legitimate:phishing order. In contrast, on MDP-2018 with the legitimate:phishing sequence, the maxima occurred in the 90:10 and 60:40 subset schemes. Based on Table 12, ROFET has the best performance in every subset scheme (90:10, 80:20, 70:30, and 60:40) on the MDP-2018 dataset in the legitimate:phishing order. In Table 11, ROFET excels in all subset schemes on UCI Spambase in the phishing:legitimate order.
In contrast, MCC performance changed significantly in several subset schemes. On UCI Spambase, the ROFET and BET techniques improved in the 70:30 and 60:40 subset schemes in the phishing:legitimate order (Table 13). On the MDP-2018 dataset, only ROFET improved in the 70:30 and 60:40 schemes for the legitimate:phishing sequence (Table 14). Overall, 94% of the subset schemes reduced the performance of the ROFET, BET, LBET, and ABET techniques on the UCI Spambase dataset in the legitimate:phishing order, and on MDP-2018, 100% of the subset schemes reduced their performance in the phishing:legitimate order. Several other performance measures, such as BDR, G-Mean, and MCC, were included following Alsariera et al.'s research to gain more insight. MDP-2018 and UCI Spambase were selected because of their significant performance with both the adopted techniques and those proposed by Alsariera et al. [31]. Based on Table 15, random forest produces the maximum BDR, G-Mean, and MCC on MDP-2018 in the phishing:legitimate order. ROFET excelled only on MDP-2018 in the legitimate:phishing order for G-Mean and MCC, while LBET obtained the highest BDR value on MDP-2018 in the legitimate:phishing order.
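The additional measures can be computed from the same confusion counts. MCC and G-Mean have standard definitions; the paper does not spell out its BDR formula, so the Bayes'-theorem reading below (which reduces to precision under the observed base rate) is our assumption:

```python
import math

def mcc(tp, fp, tn, fn):
    """Matthews correlation coefficient."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

def g_mean(tp, fp, tn, fn):
    """Geometric mean of sensitivity (TPR) and specificity (TNR)."""
    tpr = tp / (tp + fn)
    tnr = tn / (tn + fp)
    return math.sqrt(tpr * tnr)

def bdr(tp, fp, tn, fn):
    """Bayesian detection rate: P(phishing | flagged as phishing),
    via Bayes' theorem with the observed base rate.  This exact
    formulation is our assumption, not the paper's stated formula."""
    p_phish = (tp + fn) / (tp + fp + tn + fn)
    tpr = tp / (tp + fn)
    fpr = fp / (fp + tn)
    return tpr * p_phish / (tpr * p_phish + fpr * (1 - p_phish))

# Illustrative confusion counts (not taken from the paper's tables)
scores = (mcc(90, 10, 90, 10), g_mean(90, 10, 90, 10), bdr(90, 10, 90, 10))
```

Because G-Mean balances the two classes' recall and MCC uses all four cells, both are less forgiving than accuracy on skewed subsets, which is why they change so visibly across the 70:30 and 60:40 schemes.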
Random forest also produces the best G-Mean, MCC, and BDR performance on UCI Spambase in the phishing:legitimate order (Table 16). Meanwhile, LBET generates the best G-Mean, MCC, and BDR performance on UCI Spambase in the legitimate:phishing order: in Table 15 it generated the highest BDR, while in Table 16 it produced the maximum G-Mean, MCC, and BDR.
Based on the various schemes carried out, whether the comparison with the latest research (Alsariera et al. [31]), the inclusion of the most recent measures (G-Mean, MCC, and BDR), or the significant performance of each technique, further investigation was conducted to ascertain why performance tends to either decrease or increase. Samples were collected from the highest-performing cases, namely the UCI Spambase dataset with a 90:10 subset scheme in the legitimate:phishing order and the MDP-2018 dataset with a 90:10 subset scheme in the phishing:legitimate order.
The unused data, comprising 90% of the phishing records on UCI Spambase and 90% of the legitimate records on MDP-2018, were then examined. The UCI Spambase data in the legitimate:phishing order were labelled 90:10a, 90:10b, and 90:10c, corresponding to 10% blocks of the first, second, and third data records; the next 10% of records was not used. Based on Table 17, the 90:10b subset scheme has maximum performance on F-measure, accuracy, BDR, G-Mean, and MCC. This was also detected for the random forest and BET techniques: BET excelled in F-measure, accuracy, G-Mean, and MCC for the 90:10b subset scheme, while random forest excelled in F-measure, accuracy, G-Mean, BDR, and MCC for the 90:10c subset scheme.
For MDP-2018 with the phishing:legitimate order, the same process carried out on UCI Spambase was performed: 10% blocks of the valid records were labelled 90:10a, 90:10b, and 90:10c, and the subsequent 10% of records was not used, as in the previous experiments. Based on Table 18, ROFET reached its maximum performance with the 90:10c subset scheme on F-measure, accuracy, G-Mean, and MCC, with an accuracy of 99.08% and an F-measure of 0.991, while random forest excelled in the 90:10b and 90:10c subset schemes in terms of F-measure, accuracy, G-Mean, BDR, and MCC.
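The 90:10a/b/c labelling of the unused records can be sketched as consecutive 10% slices; the helper below is our reading of the paper's description, not its actual procedure:

```python
def ten_percent_blocks(unused_records, n_blocks=3):
    """Split the unused records into consecutive 10% blocks labelled
    90:10a, 90:10b, 90:10c, mirroring the follow-up experiment.
    The labels follow the paper; the slicing logic is our sketch."""
    block = len(unused_records) // 10          # one block = 10% of records
    labels = "abc"[:n_blocks]
    return {f"90:10{labels[i]}": unused_records[i * block:(i + 1) * block]
            for i in range(n_blocks)}

# e.g. 900 unused records -> three 90-record blocks; the rest stays unused
blocks = ten_percent_blocks(list(range(900)))
```

Evaluating a trained model against each block separately is what reveals, as in Tables 17 and 18, that the choice of unused slice shifts the ranking of the techniques.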
Based on this investigation of the data not used during the performance evaluation, it was concluded that such data significantly influence the performance of the classification technique. Each subset scheme therefore has a superior technique for detecting phishing attacks, and the proposed technical development needs to adapt to changes in the available data so that the phishing attack detection technique remains optimal.
B. INSIGHTS AND FUTURE RESEARCH DIRECTIONS
Based on the experiments performed, the following are some contributions of this research on classification techniques for phishing attacks: • No classification technique achieves robust accuracy across all publicly available datasets. The evaluation results prove that no classification technique is superior in performance tests on every dataset. Therefore, the choice of technique has to be adjusted to the conditions or data at hand. According to Japkowicz and Shah [43], an experiment involving a specific dataset does not necessarily generalize to different data. This is in line with the experiment carried out by Rao et al. [4], in which the performance obtained on public datasets differed from that on private (owned) ones.
• Disclosure of detailed information on parameter settings in classification techniques. This research was unable to find detailed information on the parameters used by several other studies; therefore, it was difficult to make performance comparisons. Weka, with its default parameters, was used to test the classification techniques on the three datasets, namely MDP-2018, UCI Phishing website, and Spambase. This research shows that default parameters can still realize good performance.
• There is no standard value or cutoff range for performance evaluation. No established categories for assessing classification technique performance were found. Generally, preliminary studies treat values approaching 1 as indicating the best performance [44]. Several studies used alternative measurements besides accuracy, for instance a higher TPR or a lower FPR value.
Each classification technique performed effectively in some of the tests. Based on the SLR applied, no classification technique excelled in all the performance tests. Measurement is therefore usually carried out through accuracy, although more insight is needed into each classification technique's performance under the experimental model formulated here.
• The selection of a subset scheme tends to affect the classification performance.
Various classification techniques produce different performance under the subset schemes. The 90:10, 80:20, 70:30, and 60:40 subset schemes were applied in the legitimate:phishing order, and the balanced dataset was also converted to an imbalanced one using these schemes. The formulated schemes reflect real-world conditions: in practice, legitimate and phishing data are unlikely to occur in a balanced state, so a classification technique is needed that can deal with such data.
The implemented scheme has been proven to affect the performance of classification techniques. For example, although Naïve Bayes was ranked lowest, its performance values tend to increase under the schemes.
V. CONCLUSION
This research explores diverse classification techniques to determine their maximum performance under a subset scheme. The objective follows from the fact that the use of a subset scheme can affect the performance of classification techniques on various datasets. The challenge addressed was therefore the classification techniques' performance when using a subset scheme on balanced and imbalanced datasets.
The classification techniques were tested against the 90:10, 80:20, 70:30, and 60:40 subset schemes using ten performance measures: accuracy, F-measure, precision, TPR, ROC, FPR, PRC, MCC, BDR, and G-Mean. The schemes were applied to the data in both the phishing:legitimate and legitimate:phishing sequences. Their use produces significant increases and decreases in performance. Moreover, each classification technique excels at specific performance measures; few excel on all of them.
The unused data were also investigated during testing, for example the 90% of legitimate records left over when the phishing:legitimate sequence was sampled at 90% phishing and 10% legitimate. The findings of this research prove that unused data significantly affect performance during the classification process; therefore, further investigation of such data is required.
In addition to the under-sampling technique, researchers often use an over-sampling technique. We are therefore interested in trying the over-sampling approach to form new datasets sourced from balanced and unbalanced datasets, and we will evaluate the performance of the classification techniques on the newly formed datasets.
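Random over-sampling, the technique proposed here for future work, can be sketched in a few lines: duplicate minority-class records (with replacement) until the classes match. The helper name and seed below are illustrative:

```python
import random

def oversample(records, labels, seed=7):
    """Random over-sampling: duplicate minority-class records (drawn
    with replacement) until every class matches the largest class.
    A stdlib sketch of the technique proposed for future work."""
    rng = random.Random(seed)
    by_class = {}
    for rec, lab in zip(records, labels):
        by_class.setdefault(lab, []).append(rec)
    target = max(len(v) for v in by_class.values())
    out = []
    for lab, recs in by_class.items():
        # draw duplicates until this class reaches the target size
        extra = [rng.choice(recs) for _ in range(target - len(recs))]
        out += [(r, lab) for r in recs + extra]
    rng.shuffle(out)
    return out

# Illustrative imbalanced data: 90 legitimate vs 10 phishing records.
data = [(i, "legitimate") for i in range(90)] + [(i, "phishing") for i in range(10)]
grown = oversample([d[0] for d in data], [d[1] for d in data])
```

Unlike under-sampling, no majority-class information is discarded, at the cost of repeated minority records that can encourage overfitting.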
Recent research that adopted a mix of meta- and base-learners to detect phishing attacks was also compared. This was intended to prove that recent detection techniques also encounter performance problems when faced with the proposed scheme; based on the results obtained, the scheme causes these recent detection techniques to experience significant performance changes.
There were significant performance increases and decreases under the subset schemes, with decreases ranging from 0.01% to 56% and increases from 0.04% to 14.9%, respectively. The random forest classification technique excelled in some of the proposed schemes. Meanwhile, the highest performance under the subset schemes was achieved by the ROFET technique with an accuracy of 99.08%, while the lowest was observed for SVM with an FPR of 0.686.
The selection of the dataset also has a significant impact on classification performance. When the subset scheme was applied to the UCI Phishing website dataset, it contributed the least, whereas the UCI Spambase dataset with the legitimate:phishing data class sequence significantly increased or decreased performance.
Many researchers use hyper-parameter tuning to find the parameters that give the best performance. We are therefore interested in applying hyper-parameter methods in future studies to determine the performance of the classification techniques under the proposed subset scheme.
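Hyper-parameter tuning of the kind proposed here can be sketched as an exhaustive grid search. The parameter names (`n_trees`, `max_depth`) and the toy scoring function below are hypothetical stand-ins for cross-validated classifier evaluation:

```python
from itertools import product

def grid_search(param_grid, score_fn):
    """Exhaustive hyper-parameter search: evaluate every combination
    and keep the best-scoring one.  In practice score_fn would wrap
    training plus cross-validated evaluation of a classifier; here it
    is any callable mapping a parameter dict to a score (a sketch of
    the tuning proposed for future work)."""
    names = sorted(param_grid)
    best_params, best_score = None, float("-inf")
    for values in product(*(param_grid[n] for n in names)):
        params = dict(zip(names, values))
        score = score_fn(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Toy stand-in: pretend the score peaks at 200 trees and depth 10.
grid = {"n_trees": [50, 100, 200], "max_depth": [5, 10]}
toy_score = lambda p: -abs(p["n_trees"] - 200) - abs(p["max_depth"] - 10)
best, score = grid_search(grid, toy_score)
```

For larger grids, the same interface admits random or Bayesian search, which explore fewer combinations at similar quality.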
The proposed schemes are better than those of related research. The present research revealed the weaknesses of classification techniques by using various datasets and subset schemes, and identified which techniques are superior under the established schemes. In future, this subset scheme needs to be applied to confirmed cases to compare its performance. Moreover, certain recommendations were proposed for developing research on improving phishing classification techniques, for example: evaluating phishing or legitimate data labelled as unused or bad, specifying a standard performance value for a phishing detection technique (such as accepting an accuracy greater than 90%), and creating a phishing attack detection concept that is adaptive to the data provided.
Follicular psoriasis - dermoscopic features at a glance
Sir, A 34-year-old female presented with multiple asymptomatic-to-mildly itchy, skin-colored-to-reddish elevated lesions involving both her lower limbs for the last 2 months. There was no history suggestive of upper respiratory, gastrointestinal, or urinary tract infection. She denied any history of prior drug intake, scalp scaling, joint pain or swelling, or palmo-plantar thickening. Cutaneous examination revealed multiple discrete, erythematous, follicular scaly papules over both her thighs and lower legs [Figure 1]. Mucosa, nails, scalp, palms, and soles were spared. All systemic examinations were within normal limits. Follicular psoriasis, Malassezia folliculitis, and follicular lichen planus were considered as the differentials. Dermoscopic examination under nonpolarized contact dermoscopy (Heine Delta20® Dermatoscope, 10× magnification) revealed a white-brown background/homogenous area, normal-looking terminal hair at the centre, perifollicular scaling, multiple red dots/dotted vessels, red globules, twisted red loops, and glomerular vessels/bushy capillaries [Figure 2]. Histopathological examination of a papule revealed a dilated follicular opening, parakeratotic follicular plugging, follicular hyperkeratosis, perifollicular confluent parakeratosis, hypogranulosis, Munro microabscess, suprapapillary thinning, upper dermal dilated and tortuous blood vessels, and mild perivascular lympho-histiocytic and neutrophilic infiltration [Figure 3]. Based on these findings, a diagnosis of follicular psoriasis was made and the patient was advised treatment with topical application of a combination of calcipotriol (0.005% w/w) and clobetasol (0.05% w/w) ointment.
Follicular psoriasis is an under-recognized entity that affects adults more commonly than children, without any sexual predilection. Of the two clinical subtypes, the adult form commonly affects females and presents as multiple, discrete, follicle-based, hyperkeratotic papules predominantly over the thigh, as in our case. The second type commonly affects children and presents as asymmetric, grouped, follicular, keratotic papules predominantly affecting the trunk, axilla, and extensor aspect of the limbs. [1] The role of dermoscopy as a diagnostic tool is gaining importance with time, as more diseases are being reported where dermoscopy can play a role not only in diagnosis but also in monitoring the disease course. To the best of our knowledge, the dermoscopic features of follicular psoriasis have not yet been reported in the literature. The dermoscopic features described for plaque psoriasis include white scale and symmetrically and regularly distributed dotted vessels. The dermoscopic features described for various follicular dermatoses that may mimic follicular psoriasis (especially the second type) are keratosis pilaris (irregular twisted or coiled vellus hair embedded in the horny layer, perifollicular erythema, scaling, and pigmentation), follicular lichen planus (follicular plug without broken or twisted hairs), pityriasis rubra pilaris (white keratotic plug, yellow peripheral keratotic ring, perifollicular erythema, and linear vessels), scurvy (whitish hair follicles with "corkscrew" hair surrounded by a hemorrhagic violaceous halo), and perforating folliculitis (central white clod surrounded by a structureless gray area and brown reticular lines under polarized dermoscopy). [4][5][6][7] The presence of a central keratotic plug along with altered hair morphology (twisted, coiled, or broken hair) has been described for disorders of abnormal keratinization such as pityriasis rubra pilaris, keratosis pilaris, and scurvy.
The presence of a vascular pattern, such as diffuse dotted and glomerular vessels, may help in differentiating follicular psoriasis from these disorders. The perifollicular white homogenous area [asterisk, Figure 2] histologically corresponds to the follicular and perifollicular hyperkeratosis and acanthosis [Figure 3], the perifollicular white scale to the perifollicular parakeratosis [Figure 3], and the dotted and non-dotted vessels to the dilated and tortuous dermal blood vessels oriented at different angles to the surface of the skin.
To conclude, the presence of a central normal-looking terminal hair, perifollicular white scale and homogenous area, and vascular structures such as diffuse dotted, twisted, or glomerular vessels may help in differentiating follicular psoriasis from its clinical mimics.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest.
Development of dysplastic nevus during radotinib therapy in patients with chronic myeloid leukemia
Sir, Radotinib is a selective BCR-ABL tyrosine kinase inhibitor (TKI) developed for the treatment of chronic myeloid leukemia (CML). Recently, a phase III clinical trial of radotinib has shown good clinical efficacy and safety in the treatment of this condition. 1 We were unable to find any report describing dysplastic nevus associated with radotinib therapy. Herein, we report three cases of eruptive melanocytic nevi (EMN) with dysplastic change in patients with chronic myeloid leukemia, highlighting the possibility of dysplastic nevus after treatment with radotinib. Patient 1 was a 35-year-old male with chronic myeloid leukemia who began treatment with 800 mg of radotinib in 2013. The patient noticed an increase in the number of pigmented lesions over 1 year. Among them, an atypical pigmented macule on his face exhibited a 5-mm-sized flat surface and asymmetric irregular borders with variable pigmentation [Figure 1]. Patient 2 was a 59-year-old man diagnosed with chronic myeloid leukemia in 2013 who began treatment with 600 mg radotinib in 2015. One month after treatment with radotinib, he noticed new pigmented macules over his entire body. On physical examination, numerous nevi were observed. Among them, a pigmented macule with asymmetric, irregular, and fuzzy borders was noticed on his face [Figure 2]. Patient 3 was a 25-year-old man with chronic myeloid leukemia who had been prescribed radotinib since 2013. Six months after treatment with radotinib, multiple eruptive nevi with atypical irregular macules were observed on his face and arms [Figure 3]. All three patients did not receive any immunosuppressive therapy other than radotinib. The dermoscopic findings of patients 1, 2, and 3 [Figures 4 and 5] are summarized in Table 1. In addition, prominent c-kit immunostaining was also observed [Figure 9].
Based on the clinical, dermoscopic, and histopathological evidence [Table 1], the three patients were given a final diagnosis of dysplastic nevus with eruptive melanocytic nevi that developed during radotinib therapy.
Eruptive melanocytic nevi can be associated with various conditions, including drugs, immunosuppression, and local trauma. 2 The proposed mechanisms for these nevi are not well known; however, some suggest that multiple foci of stimulation or immunosuppression potentially result in disarray in the regulation of melanocyte growth and affect the growth of pigmented lesions. Moreover, the conditions associated with the development of eruptive nevi can also be related to the development of dysplastic nevus. 2 Tyrosine kinase inhibitors inhibit not only BCR-ABL but also the c-kit signalling pathway. 1 Accordingly, a well-recognized adverse pigmentary change associated with imatinib is diffuse hypopigmentation. Because the c-kit signalling pathway, along with stem cell factor, plays an important role in the development of melanocytes, treatment with tyrosine kinase inhibitors results in a significantly decreased number of melanocytes with high tyrosine kinase activity.
However, paradoxical hyperpigmentation has been reported in imatinib- and nilotinib-treated patients. 3 In addition, there have been two case reports describing radotinib-induced pigmentary changes, which reported the development of eruptive melanocytic nevi 4 and lentigines. 5 Although the precise mechanism of action for this paradoxical hyperpigmentation has not yet been experimentally demonstrated, we suspect that pigmentary changes after radotinib might be associated with aberrant activation of c-kit in a specific mutant type. Moreover, because
Annales Geophysicae: Cluster and TC-1 observation of magnetic holes in the plasma sheet
Magnetic holes with relatively small scale sizes, detected by Cluster and TC-1 in the magnetotail plasma sheet, are studied in this paper. It is found that these magnetic holes are spatial structures and are not magnetic depressions generated by the flapping movement of the magnetotail current sheet. Most of the magnetic holes (93%) were observed during intervals with Bz larger than Bx, i.e. they are more likely to occur in a dipolarized magnetic field topology. Our results also suggest that the occurrence of these magnetic holes might have a close relationship with the dipolarization process. The magnetic holes typically have a scale size comparable to the local proton Larmor radius and are accompanied by an electron energy flux enhancement at a 90° pitch angle, which is quite different from the previously observed isotropic electron distributions inside magnetic holes in the plasma sheet. It is also shown that most of the magnetic holes occur in marginally mirror-stable environments. Whether or not the plasma sheet magnetic holes are generated by the mirror instability related to ions, however, is unknown. Comparison of the ratios, scale sizes, and propagation directions of magnetic holes detected by Cluster and TC-1 suggests that magnetic holes observed in the vicinity of the TC-1 orbit (∼7–12 RE) are likely to be further developed than those observed by Cluster (∼7–18 RE).
Introduction
Magnetic holes (MHs) were first detected in the solar wind (Turner et al., 1977). They are observed as depressions in the magnetic field magnitude with durations of seconds to tens of seconds (Turner et al., 1977; Xiao et al., 2010; Zhang et al., 2009). Their scale sizes are often tens to hundreds of proton Larmor radii (Winterhalter et al., 1994; Zhang et al., 2008; Tsurutani et al., 2011). The criteria used by Winterhalter et al. (1994) and Zhang et al. (2008) to find magnetic holes were that the amplitude depression (Bmin/B) is smaller than 0.5 and 0.75, respectively. Turner et al. (1977) also defined those magnetic holes with little or no change in the magnetic field direction as linear magnetic holes (LMHs), and Winterhalter et al. (1994) found that the ambient plasma parameters of observed LMH trains (defined as at least two comparable linear magnetic holes in a 300 s interval) and LMHs were marginally mirror stable. They therefore suggested that LMHs are probably the remnants of mirror mode structures in the solar wind (see also Winterhalter et al., 1995; Russell et al., 2008; Zhang et al., 2008, 2009). Many other mechanisms have been proposed to explain the magnetic holes in the solar wind, such as the soliton approach (Baumgartel, 1999; Baumgartel et al., 2003) and theories associated with Alfvén waves (see, for example, Buti et al., 2001; Tsurutani et al., 2005). The stable solitons correspond to an isotropic plasma with high β, and their propagation direction is close to perpendicular to the ambient magnetic field vector.
Published by Copernicus Publications on behalf of the European Geosciences Union.
It is well known that the mirror instability (Hasegawa, 1969; Southwood and Kivelson, 1993) can occur in a high-β anisotropic plasma with the perpendicular temperature higher than the parallel temperature, while the ion cyclotron mode can also occur under these conditions (Davidson and Ogden, 1975). The difference between them is that the ion cyclotron mode propagates parallel to the ambient magnetic field with a frequency smaller than the ion cyclotron frequency, while the mirror mode propagates obliquely to the background magnetic field vector with zero phase velocity relative to the plasma flow (Gary et al., 1976; Southwood and Kivelson, 1993). Génot et al. (2001) pointed out that the growth rate of the ion cyclotron mode can overcome that of the mirror mode in a plasma with high temperature anisotropy (T⊥/T∥). Therefore, the occurrence of the mirror mode is most likely in a high-β, low-anisotropy plasma. Most of the theoretical work referred to above assumed cold background electrons. However, ions and electrons are both expected to contribute to the mirror instability (Gary and Karimabadi, 2006). When the electron temperature is comparable to the ion temperature, the effects of electrons can no longer be neglected (Pokhotelov et al., 2000; Istomin et al., 2009). The electrons affect the threshold of the mirror instability and also the scale size of magnetic holes, but not very significantly. Recent observations have also shown that trapped ions can be heated at intermediate pitch angles and cooled at small parallel velocities in the trough of nonlinear mirror mode structures (Soucek and Escoubet, 2011).
As ions can be heated in the perpendicular direction by the quasi-perpendicular bow shock (Liu et al., 2005), mirror mode structures are observed in the corresponding terrestrial magnetosheath (Kaufmann et al., 1970; Crooker et al., 1979; Tsurutani et al., 1982) and in the terrestrial cusp (Shi et al., 2009a). Observations and numerical simulations have shown that magnetic peaks are rarely seen in mirror-stable plasma because of their rapid decay, but magnetic holes can survive in a mirror-stable plasma. Therefore, both peaks and holes can be observed in the magnetosheath, and most holes are observed near the magnetopause, where the plasma is mirror stable (Baumgartel et al., 2003; Travnicek et al., 2007; Soucek et al., 2008; Génot et al., 2009).
Mirror mode structures have been reported in other regions as well, e.g. close to Io (Russell et al., 1999), in the vicinity of comets (Russell et al., 1987), in the magnetosheaths of other planets (Bavassano Cattaneo et al., 1998; Volwerk et al., 2008a), and even in the induced magnetosphere of Venus (Volwerk et al., 2008b). Mirror mode structures were first detected in the terrestrial magnetosphere during storm times (Hasegawa, 1969). Rae et al. (2007) reported a series of drift mirror mode structures in the dawnside magnetosphere and interpreted the observations as an example of ULF waves excited by the mirror instability. Most of the above observations in the magnetosphere were interpreted as drift mirror mode structures, which can often drive pulsations in the magnetic field (Cheng and Qian, 1994). The drift mirror instability adheres to the same threshold conditions as the mirror instability. The convected mirror mode is a standing mode, while the drift mirror instability becomes an oscillation with a frequency equal to the particle drift wave frequency (Hasegawa, 1969; Pokhotelov et al., 2001; Rae et al., 2007). However, Hellinger (2008) pointed out that the threshold of the mirror instability, in the case of one cold species, is not applicable to the drift mirror instability.
A series of magnetic holes, deemed mirror mode structures, were observed by THEMIS-D (P3) between two dipolarization events in the plasma sheet (Ge et al., 2011). The THEMIS-D spacecraft was located at about 11 RE, close to the equatorial plane and the local midnight tail region, at that time. Their results implied that dipolarization might promote the generation of mirror mode structures. The mirror mode structures reported in that article were likely already in the nonlinear phase, because the minimum magnetic field magnitude inside the holes was lower than 50 % of the background field magnitude. Moreover, the width of these mirror mode structures was less than one proton Larmor radius, but tens of electron gyroscales (Ge et al., 2011).
In this paper, we use data from Cluster during its 2003 tail season and Double Star TC-1 during its 2004 tail season to investigate magnetic holes in the plasma sheet. The apogee of Cluster is about 19 RE and the apogee of TC-1 is about 12 RE. The X location of the THEMIS-D event used by Ge et al. (2011) is about 11 RE. We investigated plasma sheet magnetic holes with the four Cluster spacecraft and also with TC-1. We examined their electron properties and discuss their possible formation mechanism. We also compared our results with those of Ge et al. (2011) and discuss the possible relationship between the magnetic holes detected by Cluster, TC-1, and THEMIS.
Below, we first introduce the procedure we used to find magnetic holes. We then show their spatial features and confirm them by an estimate of the direction of boundary motion. The characteristics of the background magnetic field of these magnetic holes and their scale sizes are shown as well.
Cluster and TC-1 observations
In this section, we study the magnetic holes detected by Cluster and TC-1 in the plasma sheet. We used data obtained from the Flux Gate Magnetometer (FGM) (Balogh et al., 2001), Plasma Electron and Current Experiment (PEACE) (Johnstone et al., 1997), and Cluster Ion Spectrometry Experiment (CIS) (Rème et al., 2001) instruments onboard Cluster. We also used data from the Flux Gate Magnetometer (FGM) (Carr et al., 2006) and Hot Ion Analyzer (HIA) (Rème et al., 2005) instruments onboard TC-1. Proton moments were calculated from the CIS-CODIF sensor onboard C1, while electron moments were obtained from the PEACE instrument onboard C2. Both proton and electron moments were calculated on the ground from the observed 3-D particle distributions. In our analysis, the velocity data were taken from the Cluster HIA instrument, and the proton temperature and density were taken from the CODIF sensor. We also used electron pitch angle distributions measured using a combination of the LEEA and HEEA sensors. Ion moments from TC-1 HIA were calculated onboard.
Observation overview
Similar to Xiao et al. (2010), Winterhalter et al. (1994), and Zhang et al. (2008, 2009), the ratio Bmin/B and the directional change angle are used to identify magnetic holes. Bmin is the minimum field magnitude inside a magnetic hole, and B is the average field magnitude in a given time interval centered on the magnetic hole (300 s in the references above). The directional change angle is the angle between the magnetic field vectors at the two boundaries of a magnetic hole. Zhang et al. (2008, 2009) and Xiao et al. (2010) chose a ratio Bmin/B of no more than 0.75 and a directional change angle of no more than 15° to identify linear magnetic holes, while Winterhalter et al. (1994) used a ratio Bmin/B of no more than 0.5 and a directional change angle of no more than 5° as their criteria. Here, in this paper, we use Bmin/B < 0.75 and a directional change angle < 15° to search for magnetic holes in the plasma sheet. Because the magnetic field in the plasma sheet is much more inhomogeneous than that in the solar wind, we use a time interval of 90 s to calculate B rather than 300 s. In this paper, we use Cluster magnetic field data sampled at 5 vectors per second and TC-1 magnetic field data at spin (4 s) resolution.
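The selection criteria above (depth Bmin/B < 0.75 over a 90 s background window and a boundary-to-boundary rotation < 15°) can be sketched in code. This is an illustrative implementation, not the authors' actual pipeline; the function name and interface are assumptions:

```python
import numpy as np

def find_magnetic_holes(t, B_vec, window_s=90.0, ratio_max=0.75, angle_max_deg=15.0):
    """Sketch of the magnetic-hole selection criteria described in the text.

    t      : 1-D array of times in seconds (uniform cadence assumed)
    B_vec  : (N, 3) array of magnetic field vectors, nT
    Returns indices of samples flagged as candidate hole interiors.
    """
    B_mag = np.linalg.norm(B_vec, axis=1)
    dt = t[1] - t[0]
    half = int(round(window_s / 2 / dt))          # half-window in samples
    holes = []
    for i in range(half, len(B_mag) - half):
        B_avg = B_mag[i - half:i + half + 1].mean()    # 90 s background average
        if B_mag[i] / B_avg >= ratio_max:              # depth criterion B_min/B < 0.75
            continue
        # directional change between the field vectors at the two window boundaries
        b1 = B_vec[i - half] / np.linalg.norm(B_vec[i - half])
        b2 = B_vec[i + half] / np.linalg.norm(B_vec[i + half])
        angle = np.degrees(np.arccos(np.clip(np.dot(b1, b2), -1.0, 1.0)))
        if angle < angle_max_deg:                      # rotation criterion < 15 deg
            holes.append(i)
    return holes
```

The rotation test is what separates linear magnetic holes from current sheet crossings, where the field direction reverses.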
The plasma sheet encompasses the magnetotail current sheet, and flapping of the current sheet is frequently observed (Sergeev et al., 2004; Sun et al., 2010). This flapping often results in a spacecraft repeatedly crossing the tail current sheet, during which a reduction in magnetic field magnitude can be observed. It is easy to distinguish current sheet crossings from magnetic holes, because the X component of the magnetic field reverses direction in the tail current sheet. However, there can also be magnetic depressions caused by current sheet flapping where the tail current sheet is not fully crossed. From Fig. 1a we can see that the maximum directional angle change inside the structure is smaller than 4°, which reveals that it is not caused by the current sheet flapping motion.
After automatic selection according to these criteria, we ruled out the magnetic depressions that may be generated by the flapping movement of the tail current sheet. We found 72 magnetic holes in the Cluster 2003 tail season and 42 magnetic holes in the TC-1 2004 tail season.
Features of magnetic holes detected by Cluster and TC-1
Spatial structures
Figure 2 shows a magnetic hole detected by all four Cluster spacecraft. From the "interlaced" magnetic field profiles, we can tell that the magnetic hole is a spatial structure rather than a temporal effect (e.g. Shi et al., 2009a). This can be confirmed by considering the angle between the normals of its two boundaries. Table 1 lists the velocities of the two boundaries of this magnetic hole calculated by the timing method (see Russell et al., 1983; Paschmann and Daly, 1998, their chapters 10, 11, 12, and 14). The angle between the two boundary normals is about 8.74°, which means that the propagation directions of the two boundaries are nearly parallel to each other, indicating that the structure is spatial. The boundary velocities of other events observed by all four spacecraft were calculated by the timing method or by other methods (Shi et al., 2005, 2006, 2009a, b), confirming that they are spatial structures as well.
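The four-spacecraft timing method cited above assumes a planar boundary moving at constant velocity; the crossing-time delays between spacecraft then determine the boundary normal and speed. A minimal sketch under that planarity assumption (interface and units are illustrative):

```python
import numpy as np

def timing_velocity(positions, times):
    """Four-spacecraft timing method for a planar boundary.

    positions : (4, 3) array of spacecraft positions, km
    times     : (4,) array of boundary crossing times, s
    Solves (r_i - r_0) . m = t_i - t_0 for the slowness vector m = n / V,
    then returns (speed in km/s, unit normal n).
    """
    dr = positions[1:] - positions[0]     # (3, 3) separation vectors
    dt = times[1:] - times[0]             # (3,) time delays
    m = np.linalg.solve(dr, dt)           # slowness vector
    speed = 1.0 / np.linalg.norm(m)
    normal = m * speed                    # unit normal
    return speed, normal
```

Applying this to the two boundaries of one hole and comparing the resulting normals gives the boundary-angle test (∼8.74° in the event above) used to argue that the structure is spatial.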
The background magnetic field of the magnetic holes
We calculated the background magnetic field elevation angle (θ) for each magnetic hole event detected by Cluster and TC-1 and display the results in Fig. 3a and b; Bz and Bx are given in the GSM coordinate system. From Fig. 3a (Cluster observations) we can see that about 93 % (67/72) of the magnetic holes have θ larger than 50°, which means Bz is larger than Bx, while this percentage is about 86 % (36/42) in Fig. 3b (TC-1 observations). Thus, most of the magnetic holes were observed when the background Bz was much larger than Bx. We also randomly selected 197 plasma sheet passes and calculated θ. Figure 3c is a histogram of their magnetic field elevation angle distribution. From this figure we can see that 31.5 % (62/197) of the plasma sheet passes had θ larger than 50°, smaller than the percentage for magnetic hole intervals. This result indicates that the plasma sheet magnetic holes seem to occur more often in a more dipolar magnetic field topology, which implies that these magnetic holes might have a close relationship with the dipolarization process.
Scale sizes of the magnetic holes
The distances between the four Cluster spacecraft were relatively small (∼100 km) in 2003, so many magnetic holes were observed by all four Cluster spacecraft and could be analyzed using the four-spacecraft timing method. We estimated the scale size of the magnetic holes by simply multiplying the timing velocity by the observed duration. The ratios of their scale sizes to the proton Larmor radii were also calculated, in which the proton Larmor radii were computed from the background plasma parameters. Figure 4a is the histogram of these ratios, from which we can see that most magnetic holes have scale sizes smaller than the proton Larmor radius. We also estimated the scale sizes of these magnetic holes from their background plasma flows, as carried out by Ge et al. (2011). Both the Cluster and TC-1 HIA velocities were used. We only selected the magnetic holes that correspond to relatively smooth background plasma flows. The ratios of their scale sizes to proton Larmor radii calculated from flow data are shown in Fig. 4b (Cluster) and 4c (TC-1).
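The normalization above (scale size = speed × duration, divided by the thermal proton Larmor radius ρ = m v⊥ / (qB) with v⊥ = √(2T⊥/m)) is easy to make concrete. A sketch with illustrative plasma-sheet-like numbers (the helper names are assumptions, not the paper's code):

```python
import numpy as np

M_P = 1.6726e-27   # proton mass, kg
Q_E = 1.6022e-19   # elementary charge, C (also joules per eV)

def proton_larmor_radius_km(T_perp_eV, B_nT):
    """Thermal proton Larmor radius rho = m v_perp / (q B),
    with v_perp = sqrt(2 T_perp / m) and T_perp given in eV."""
    v_perp = np.sqrt(2.0 * T_perp_eV * Q_E / M_P)    # m/s
    rho_m = M_P * v_perp / (Q_E * B_nT * 1e-9)
    return rho_m / 1e3

def scale_to_larmor_ratio(speed_km_s, duration_s, T_perp_eV, B_nT):
    """Hole scale size = (timing or flow) speed x observed duration,
    normalized to the background proton Larmor radius."""
    return speed_km_s * duration_s / proton_larmor_radius_km(T_perp_eV, B_nT)
```

For a 5 keV, 20 nT plasma-sheet background the proton Larmor radius is a few hundred kilometres, so a hole crossed in a few seconds at ∼100 km/s has a scale below one Larmor radius, consistent with Fig. 4a.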
The result plotted in Fig. 4b is similar to that in Fig. 4a, i.e. the average scale sizes calculated in the different ways are almost equal. From the data we have shown here, it appears that these magnetic holes propagate together with the plasma flow. As mentioned above, the Cluster magnetic field data we used had a time resolution of 0.2 s, while TC-1 had a resolution of 4 s. The comparison between the magnetic hole scale sizes observed by the two missions might be affected by their different time resolutions. We therefore also selected magnetic holes using Cluster spin (4 s) resolution magnetic field data, the same resolution as TC-1, and calculated the ratio of their scale sizes to the proton Larmor radius (Fig. 4d). Even though the difference in time resolution does have a small effect on the distribution of ratios, the magnetic holes detected by TC-1 still seem to have relatively larger scale sizes than those found by Cluster.
The electron properties
Figure 5 shows three examples of magnetic holes in the plasma sheet and the corresponding electron energy-pitch angle diagrams. We can see that the electrons inside each of the three magnetic holes show an enhancement of differential energy flux at a 90° pitch angle relative to background conditions. Most magnetic holes detected by Cluster have this feature, which is quite different from the isotropic electron distribution inside the magnetic holes observed by Ge et al. (2011). Despite the electron energy flux enhancement at a 90° pitch angle inside the magnetic hole structures, we can see from the three panels (Fig. 5) that the background electron distributions are variable. The survey of Walsh et al. (2011) showed that the isotropy of the plasma sheet electron distribution varies with proton β and electron energy, which might explain the variability of the background electron distributions of our magnetic holes.
We also considered whether the enhancement of electron pressure is sufficient to compensate for the decrease of magnetic field pressure inside the magnetic holes. The perpendicular temperature of electrons inside most magnetic holes is increased, which is consistent with the enhancement of electron energy flux at a 90° pitch angle. The electron parallel temperature is not obviously changed in most magnetic holes, although there are some holes in which the electron parallel temperature decreased slightly (4 of 56 cases), and the electron parallel temperature seldom increased inside magnetic holes (only 1 case). Overall, in most magnetic holes the total electron temperature increased (only 2 cases decreased). The electron density in most magnetic holes was only slightly above the background, and in some cases even decreased (only 2 cases). Statistical results show that, on average, the thermal pressure increased due to the enhancement of electron temperature and density, although this increase in electron pressure does not completely compensate for the reduced magnetic pressure in the magnetic holes. From the statistical results of Walsh et al. (2011), we found that the average proton energy flux is several times the average electron energy flux, and our results also show that the proton temperature is typically many times the electron temperature (see the next section). Furthermore, our observations show that the proton pressure does not change in the magnetic holes. The total pressure, therefore, would only be affected slightly by the incomplete compensation between electron thermal and magnetic pressures and remains approximately constant.
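The pressure bookkeeping described above (magnetic pressure B²/2μ₀ against thermal pressure nkT) can be sketched numerically. The helper names and the plasma-sheet numbers in the test are illustrative assumptions, not values from the paper:

```python
import numpy as np

MU_0 = 4e-7 * np.pi    # vacuum permeability, H/m
Q_E = 1.6022e-19       # joules per eV

def magnetic_pressure_nPa(B_nT):
    """Magnetic pressure P_B = B^2 / (2 mu0), returned in nPa."""
    return (B_nT * 1e-9) ** 2 / (2.0 * MU_0) * 1e9

def thermal_pressure_nPa(n_cm3, T_eV):
    """Thermal pressure P_th = n k T, with T given directly in eV."""
    return n_cm3 * 1e6 * T_eV * Q_E * 1e9
```

Because P_B scales as B², a hole in which B drops to half the background loses three quarters of its magnetic pressure; with proton pressure several times the electron pressure and unchanged across the hole, a modest electron heating can only partly offset this, as the statistics above report.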
Relationship with mirror mode instability
The criterion (for cold background electrons) for the mirror instability can be stated as R = (T⊥/T∥)/(1 + 1/β⊥) > 1 (Hasegawa, 1969; Southwood and Kivelson, 1993). Figure 6 shows the distributions of β⊥ and temperature anisotropy (T⊥/T∥) for the magnetic holes with available CODIF data (56 of the 72 magnetic holes). These parameters are the average values calculated over the 90 s time interval. The horizontal coordinate in Fig. 6 represents β⊥, and the vertical coordinate represents the temperature anisotropy T⊥/T∥. Every solid circle in Fig. 6 denotes a magnetic hole. The red, green, and blue lines in Fig. 6 represent R = 1, R = 0.9, and R = 0.8, respectively. The region above the red line is mirror unstable, and the region below it is mirror stable. From Fig. 6 we can see that 7 % (4/56) of the Cluster magnetic holes occur in a mirror-unstable environment, and about 55 % (31/56) of all magnetic holes occur in an R > 0.9 environment.
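The cold-electron mirror threshold is a one-line computation; a minimal sketch (function name is illustrative):

```python
def mirror_R(T_perp, T_par, beta_perp):
    """Mirror-instability parameter R = (T_perp / T_par) / (1 + 1 / beta_perp);
    R > 1 is mirror unstable in the cold-background-electron limit
    (Hasegawa, 1969; Southwood and Kivelson, 1993)."""
    return (T_perp / T_par) / (1.0 + 1.0 / beta_perp)
```

Any consistent temperature units work, since only the ratio T⊥/T∥ enters; β⊥ carries the field-strength dependence.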
The ratios of electron to ion temperature for these magnetic holes range from 0.1 to 0.5.
We calculated the value of T⊥/T∥ − 1 − 1/β⊥ − ((T⊥/T∥ − 1)² Te)/(2T⊥(1 + Te/T∥)), which is greater than zero under mirror-unstable conditions when the effects of electrons are taken into account (Istomin et al., 2009). There were again only 4 events occurring in a mirror-unstable environment. The effects of the electrons on the mirror instability threshold can therefore be neglected.
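The electron-corrected threshold above differs from the cold-electron one only by the last term. A sketch of the check applied to each event (names illustrative; T⊥, T∥ are the ion temperatures and Te the electron temperature, in the same units):

```python
def mirror_unstable_with_electrons(T_perp, T_par, beta_perp, T_e):
    """Electron-corrected mirror criterion (after Istomin et al., 2009):
    unstable when
      T_perp/T_par - 1 - 1/beta_perp
        - (T_perp/T_par - 1)**2 * T_e / (2 * T_perp * (1 + T_e/T_par)) > 0.
    """
    A = T_perp / T_par - 1.0
    correction = A ** 2 * T_e / (2.0 * T_perp * (1.0 + T_e / T_par))
    return A - 1.0 / beta_perp - correction > 0.0
```

With the observed Te/Ti ratios of 0.1 to 0.5 the correction term is small, which is why the count of unstable events (4) is unchanged.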
We found that 44 of the 56 events occurred in environments with β∥ values larger than 7, where the proton mirror mode has a faster growth rate than the proton cyclotron mode (Gary and Karimabadi, 2006), suggesting that the proton cyclotron mode is less likely to be responsible for the formation of the magnetic holes.
Discussion and summary
In this paper we have analyzed the magnetic holes detected by Cluster and TC-1 in the plasma sheet. The magnetic holes detected by Cluster and TC-1 are located from X ∼ 7 RE to X ∼ 18 RE. During the dipolarization process, the geomagnetic field lines in the near-Earth tail region change from a tail-like to a dipolar-like geometry (McPherron et al., 1973). Dipolarization not only affects the geometry of the field lines, but is also accompanied by many other phenomena. Many previous works show that ions can be accelerated (e.g. Mauk, 1986; Zhou et al., 2010), and both the ion density and β can increase several times after the dipolarization process (e.g. Zhang et al., 2007; Tang et al., 2009), which indicates that the R value of the plasma should increase. There are also many observations showing that the dipolarization process, coupled with electron acceleration and wave-particle interactions, can account for electron heating (see, for example, Ashour-Abdalla et al., 2009; Fu et al., 2011). Ge et al. (2011) pointed out that the dipolarization process might provide a more anisotropic plasma environment for the growth of the mirror instability, and their observations indicate that the magnetic holes might be linked to the dipolarization process itself. Most of the plasma sheet magnetic holes in our observations are not directly related to the dipolarization process as described in Ge et al. (2011), but in comparison with the randomly selected plasma sheet passes, the plasma sheet magnetic holes do seem to occur more often in a more dipolar magnetic field topology. This suggests that these magnetic holes might have a close relationship with the dipolarization process.
The structures detected by Ge et al. (2011) occurred in a mirror-stable environment. Our results also show that most magnetic holes detected by Cluster exist in a mirror-stable environment. Inside the structures detected by Ge et al. (2011), the electron distribution was isotropic, while we found an enhancement of electron energy flux at a 90° pitch angle in our cases. Since our observations were taken farther down the tail than those of Ge et al. (2011), this difference might result from different stages of evolution of the magnetic holes. To answer these questions, it is clear that the electron properties of magnetic holes require further investigation.
The scale sizes of plasma sheet magnetic holes, based on multi-spacecraft timing calculations of velocity and on propagation at the ion flow speed, indicate that most structures are smaller than the proton Larmor radius. Previous observations suggest that the size of magnetic holes in the solar wind is typically about 10 times the background proton Larmor radius. As mentioned in Sect. 2.2.5, the mirror instability might be excited in the plasma sheet. The finite electron temperature effect does not seem to alter the size of mirror mode structures significantly (Istomin et al., 2009); however, most of the plasma sheet magnetic holes occur in a marginally mirror-stable environment. Therefore, whether these plasma sheet magnetic holes are generated by the mirror instability requires further investigation. From the increase in the perpendicular electron energy flux in the magnetic holes, it appears that magnetic holes in the plasma sheet have a close relationship with electrons. This raises the possibility that electron instabilities may play a part in the formation of these magnetic holes.
From Fig. 4a, b, and c, we find that most magnetic holes detected by TC-1 seem to have a relatively larger scale size than those found by Cluster. From Fig. 7a and b, more of the magnetic holes detected by TC-1 than by Cluster have small Bmin/B ratios: about 8.8 % (6/68) of the magnetic holes detected by Cluster have ratios smaller than 0.5, compared with about 19.5 % (8/41) for TC-1. The distribution of the ratios for magnetic holes detected by Cluster with spin resolution magnetic field data is consistent with this result. These results imply that the magnetic holes detected by TC-1 are more fully developed than those detected by Cluster. Figure 8a shows the distribution of timing velocities in the X-Y plane of the magnetic holes detected by Cluster. Figure 8b and c show the distributions of their background plasma flow velocities measured by C1 and C3. From these three figures, we can see that most magnetic holes propagate towards the Earth, as do the magnetic holes detected by TC-1 (Fig. 8d). This observation suggests that the magnetic holes detected by Cluster might have propagated to the vicinity of the TC-1 orbit and evolved further during this process. The generation and evolution of these magnetic holes certainly need more detailed investigation.
The results of this research can be summarized as follows. Statistical analysis has shown that for most plasma sheet magnetic holes, the background magnetic field Bz is larger than the Bx component, and, in comparison with the randomly selected plasma sheet passes, they occur preferentially in a dipolar magnetic field topology. Thus, these magnetic holes might have a close relationship with the dipolarization process, though most plasma sheet magnetic holes are not associated with the dipolarization process as in Ge et al. (2011). It is suggested that the magnetic holes detected by Cluster might have propagated to the vicinity of the TC-1 orbit and further developed during this process. These magnetic holes correspond to an enhancement of electron energy flux at a 90° pitch angle. Most plasma sheet magnetic holes occur in a marginally mirror-stable environment, but whether the plasma sheet magnetic holes are generated by the mirror instability is unknown. The scale sizes of the magnetic holes are generally smaller than the background proton Larmor radius. The scale size of the holes and the electron energy flux enhancement within them suggest that the formation of plasma sheet magnetic holes is linked to the behavior of electrons rather than protons. Theories linking electron behavior to the generation of this kind of small-scale magnetic hole are not present in the literature and need to be developed in the future.
Fig. 2. From top to bottom panels: the GSM X component, Y component, and Z component of the magnetic field; the magnetic field magnitude.
Fig. 3. (a), (b) Histograms of the magnetic field elevation angle (atan(Bz/Bx)) of the magnetic holes detected by Cluster and TC-1. Bz and Bx are the background magnetic field components in the GSM coordinate system. (c) Histogram of the magnetic field elevation angle of randomly selected Cluster plasma sheet passes.
Fig. 4. Histograms of the ratios of magnetic hole scale sizes to proton Larmor radii. The proton Larmor radius is calculated from the background parameters. (a) Scale sizes of magnetic holes detected by Cluster, calculated from the timing velocity. (b) Scale sizes of magnetic holes detected by Cluster, calculated from their background plasma flow velocity. (c) Scale sizes of magnetic holes detected by TC-1, calculated from their background plasma flows. (d) The same as (b), except that these magnetic holes were selected from Cluster spin resolution magnetic field data.
Fig. 5. Electron energy flux (in log keV cm−2 s−1 sr−1 keV−1) obtained by the Plasma Electron and Current Experiment (PEACE) as a function of pitch angle and energy, and the related magnetic holes. The pitch angle distributions inside the red frames are the distributions inside the magnetic holes.
Fig. 6. The distributions of the temperature anisotropy (T⊥/T∥) and β⊥ for magnetic holes observed by Cluster. The red, green, and blue lines represent R = 1, R = 0.9, and R = 0.8, respectively. The background temperatures and magnetic fields are used to calculate T⊥/T∥ and β⊥.
Fig. 7. The distribution of Bmin/B ratios. (a) For magnetic holes detected by Cluster with 0.2 s resolution magnetic field data. (b) For TC-1 magnetic holes selected with spin resolution field data.
Fig. 8. (a) Distribution of timing velocities in the X-Y plane of magnetic holes observed by Cluster. (b) Distribution of background plasma flow velocities observed by C1 in the X-Y plane. (c) Distribution of background plasma flow velocities observed by C4 in the X-Y plane. (d) Background plasma flow velocity distribution of magnetic holes observed by TC-1 in the X-Y plane. Red lines represent magnetic holes propagating towards the Earth; blue lines represent magnetic holes propagating tailward. | 2018-10-09T23:16:14.670Z | 2012-03-26T00:00:00.000 | {
"year": 2012,
"sha1": "b3ad0b75f69558b01215da7658beb4d545346d87",
"oa_license": "CCBY",
"oa_url": "https://angeo.copernicus.org/articles/30/583/2012/angeo-30-583-2012.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "b3ad0b75f69558b01215da7658beb4d545346d87",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
236927690 | pes2o/s2orc | v3-fos-license | Treatment and Outcomes of Metastatic Pancreatic Cancer in Elderly Patients
Background and Aims: Although pancreatic cancers are common in older age-groups, the prognosis remains poor, and studies on treatment approaches and outcomes in this population are limited. We aimed to examine treatment patterns and their outcomes in older patients with metastatic pancreatic cancer in a real-world context. Materials and Methods: We conducted a retrospective study including 167 patients with metastatic pancreatic cancer (aged ≥70 years; male/female: 78/89) between January 2010 and July 2015. Patients' data retrieved from medical records were analyzed according to treatment type, followed by a review of clinicopathologic variables and treatment outcomes. Results: Of the 167 patients eligible for the study, only 21.6% (n = 36) received palliative chemotherapy. The median age of the chemotherapy group was 74.0 years, and that of the supportive care group was 78.6 years. The median survival of the chemotherapy group was 9.2 months (range: 1.0–24.9 months), compared with 2.3 months (range: 0.1–31.8 months) for the supportive care group. Among the patients in the chemotherapy group, 50% (n = 18) received gemcitabine-based doublet therapy, and 30% of patients (n = 9) received second-line chemotherapy. Conclusions: Our results showed that older patients with metastatic pancreatic cancer were less likely to receive chemotherapy. However, the survival benefit from chemotherapy was comparable to that of younger patients. Thus, further study to identify older patients who would benefit from cytotoxic chemotherapy is needed.
Introduction
The incidence of cancer in the older population is rising, owing to increasing life expectancy. Additionally, the number of older patients with cancer has increased and is expected to continue increasing with advances in cancer treatment [1]. Cancer typically develops in people aged ≥55 years; in the USA, for example, 80% of all cancers are diagnosed in people aged ≥55 years, according to the American Cancer Society. Pancreatic cancer is considered one of the most aggressive forms of cancer: >75% of patients are diagnosed with locally advanced or metastatic disease, and it mainly affects the older population, with a median age of 71 years at diagnosis. Meanwhile, the incidence of pancreatic cancer has increased in recent years, and 53% of patients present with metastatic disease at initial diagnosis. Despite this large patient population, older individuals have often been excluded from clinical trials and active cancer treatment. One report indicated that only 5% of older patients with metastatic pancreatic cancer were enrolled in clinical trials [2]. Thus, the benefit of systemic chemotherapy for elderly patients in the palliative setting is not well known. Moreover, choosing a chemotherapy regimen for an individual patient is a common clinical challenge, requiring consideration of factors such as the extent of disease, prognosis, activity of the chemotherapeutic agents, and potential toxicity, especially for patients who are >70 years old or who have decreased performance status. Unfortunately, because older patients are excluded from clinical trials, clinicians are challenged in choosing a chemotherapy regimen for this group after a new drug's approval.
Gemcitabine was approved as a first-line agent based on a pivotal phase III clinical trial for advanced pancreatic cancer in 1996 [3] and remains the mainstay of first-line treatment for pancreatic cancer in our setting. In a study of pancreatic cancer patients aged >65 years, only 54% received chemotherapy [4]. The introduction of combination chemotherapy regimens, such as gemcitabine plus nab-paclitaxel [5] and 5-fluorouracil/leucovorin with irinotecan and oxaliplatin (FOLFIRINOX), has significantly improved outcomes compared with gemcitabine monotherapy. Furthermore, in a phase III randomized study that enrolled 861 metastatic pancreatic cancer patients to evaluate the efficacy and safety of the nab-paclitaxel plus gemcitabine combination versus gemcitabine monotherapy, the median overall survival (OS) was 8.5 months in the former group and 6.7 months in the latter (hazard ratio: 0.72; p < 0.001). The median progression-free survival and the objective response rate were also improved in the combination arm, and the toxicity related to the addition of nab-paclitaxel was manageable. Combination chemotherapy, especially FOLFIRINOX, is considered first-line therapy for pancreatic cancer patients with good performance status [6], and administration of the combination regimens (nab-paclitaxel plus gemcitabine and FOLFIRINOX) for pancreatic cancer has generally been manageable [7-9]; nevertheless, it would be of interest to know the effectiveness of these regimens in older patients. To date, there is no consensus on chemotherapy regimens for elderly patients, and often they either do not receive therapy or settle for suboptimal doses of chemotherapy, due to the general assumption that older individuals cannot tolerate chemotherapy.
Research identifying key prognostic factors for pancreatic cancer in older patients is warranted to permit optimal selection and stratification of those who can best tolerate and benefit from systemic therapy. Although pancreatic cancers are common in older age-groups, treatment approaches and outcomes are less well investigated in this population. We therefore retrospectively analyzed and compared treatment outcomes in older patients with metastatic pancreatic cancer.
Patients and Data Collection
We conducted a retrospective study, enrolling 167 patients (aged ≥70 years) with histologically confirmed metastatic or recurrent pancreatic adenocarcinoma from January 2010 to July 2015. All demographics and relevant clinicopathologic data were retrieved from the patients' medical records. Subsequently, the clinicopathologic variables and treatment outcomes were reviewed. This study was approved by the Institutional Review Board of Hallym Medical Center with a waiver for informed consent due to the retrospective study design. The ethical principles established by the Helsinki Declaration were followed.
Statistical Analyses
We performed survival analyses by estimating the OS (from the day of diagnosis to the day of death or last follow-up). The OS was assessed using the Kaplan-Meier method, and the log-rank test was used to test differences in OS associated with the clinical variables. The statistical significance of factors associated with OS was investigated using univariate and multivariate Cox proportional hazards regression models. Hazard ratios and their 95% confidence intervals were computed. A p value < 0.05 was considered statistically significant. All statistical analyses were performed using IBM SPSS Statistics 24 (Armonk, NY, USA).
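For illustration, the Kaplan-Meier estimate used above can be sketched in a few lines of dependency-free Python; the toy cohort below is hypothetical and not patient data from this study.

```python
def kaplan_meier(times, events):
    """Return (time, survival probability) pairs for observed death times.

    times  : follow-up in months
    events : 1 = death observed, 0 = censored
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    survival = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        ties = [e for tt, e in data if tt == t]
        deaths = sum(ties)
        if deaths:
            # multiply by the conditional probability of surviving time t
            survival *= 1 - deaths / n_at_risk
            curve.append((t, survival))
        n_at_risk -= len(ties)  # deaths and censored patients leave the risk set
        i += len(ties)
    return curve

# toy cohort of five patients: follow-up months, event flags (1 = death)
curve = kaplan_meier([2, 3, 3, 5, 8], [1, 1, 0, 1, 0])
```

Censored patients only shrink the risk set; the curve steps down at each observed death.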
The severity of comorbid diseases was recorded and scored according to the Charlson Comorbidity Index (CCI) [10]. Patients were divided into 3 groups as follows: Group 1: mild, with a CCI score of 0; Group 2: moderate, with a CCI score of 1; and Group 3: severe, with CCI scores ≥2.
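The grouping rule above can be written as a small helper; this is a sketch of the stated cutoffs only, and the function name is our own.

```python
def cci_group(score):
    """Map a Charlson Comorbidity Index score to the study's severity group."""
    if score < 0:
        raise ValueError("CCI score cannot be negative")
    if score == 0:
        return "Group 1: mild"
    if score == 1:
        return "Group 2: moderate"
    return "Group 3: severe"  # CCI >= 2
```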
Patient Characteristics
We recruited 167 patients aged ≥70 years with metastatic pancreatic cancer who were eligible for the study cohort. Of these, 21.6% received palliative chemotherapy. The median age was 74.0 years (range: 70-84 years) in the chemotherapy group and 78.6 years (range: 70-94.6 years) in the supportive care group. Fifty-three (31.7%) patients were >80 years old, and 78 (46.7%) were male. The metastatic locations were typical of pancreatic cancer, including the lung, liver, peritoneum, and lymph nodes. Thirty-five (21%) patients had disease relapse after prior curative resection. Between the supportive care and active chemotherapy groups, the Eastern Cooperative Oncology Group (ECOG) performance status scores differed significantly, although the CCI was similar (Table 2). Among patients with good performance status (ECOG 0-1) and a low CCI (0-1), 36 of 58 received chemotherapy; 37% of patients with good performance status and a low CCI declined chemotherapy because of concerns about its side effects.
Survival
The median survival was 9.2 months (range: 1.0-24.9 months) in the chemotherapy group, compared with 2.3 months (range: 0.1-31.8 months) in the supportive care group (Fig. 1a). In univariate analysis, age ≥80 years, poor ECOG performance status (ECOG = 2), high CCI (≥2), and supportive care were associated with poor OS (Table 3 and Fig. 1b-d). In multivariate analysis, poor performance status (ECOG = 2), high CCI (≥2), and supportive care remained associated with poor OS. In the chemotherapy group, the regimens included gemcitabine/Abraxane (n = 12), FOLFIRINOX (n = 3), gemcitabine/erlotinib (n = 3), and single-agent gemcitabine (n = 18). Among these patients, 8 of 36 had their chemotherapy dosage reduced (3 of 3 receiving FOLFIRINOX and 8 of 12 receiving gemcitabine/Abraxane had a 25% reduction of the original dose). Nine patients (30%) received second-line chemotherapy. In the supportive care group, 55.8% (n = 74) of patients survived <3 months; excluding these early deaths (<3 months), the median survival in the supportive group was 6.5 months. Table 4 shows the adverse events of chemotherapy. In patients who received chemotherapy, most adverse effects were mild and manageable. Adverse effects of grade 3 or higher included neutropenia, anemia, peripheral neuropathy, and pneumonitis (16.7%).
Discussion
This study focused on treatment patterns and outcomes in older patients with metastatic pancreatic cancer. Only 10.7% of the elderly patients with metastatic pancreatic cancer received gemcitabine-based doublet chemotherapy; however, their OS was comparable to that reported for younger patients with metastatic pancreatic cancer.
Our results show that, in the real world, only a few older patients (36/167, 21.6%) with metastatic pancreatic cancer receive chemotherapy. Nonetheless, in these patients, chemotherapy improved survival, with combination chemotherapy showing a survival benefit similar to that seen in younger patients. One reason for the low treatment rate is that there is no adequate guide or specific indication for which of these older patients will benefit from chemotherapy. Assessment of medical comorbidities and functional status plays a key role in determining fitness for intensive chemotherapeutic regimens in this important subset of patients. This study clarifies indications for chemotherapy that have been determined vaguely by individual physicians treating older patients with pancreatic cancer. As shown here, good performance status and a low CCI were useful indices for selecting appropriate candidates for active chemotherapy in older patients with metastatic pancreatic cancer. Most patients in the chemotherapy group tolerated therapy well and showed a good response to gemcitabine-containing doublet chemotherapy. Possible strategies to improve tolerability without decreasing the efficacy of chemotherapy include dose reduction, schedule modification, and growth factor support.
Although clinical trial enrollment of older patients is typically low and data are insufficient, most of the pancreatic cancer patients seen in clinical practice are older. Pancreatic cancer is a very aggressive tumor, and older patients with this condition have therefore been a difficult subsegment of the population to manage with respect to planning aggressive chemotherapy. In recent years, however, many new drugs have been developed, including immune checkpoint inhibitors and therapeutic vaccines, and these medications are evolving in terms of both effects and side effects; hence, older patients should not be overlooked for such treatment opportunities. Even when new drug studies do not explicitly restrict patient age, the age distribution of enrolled patients often differs from that seen in routine practice, making it difficult for clinicians to apply trial results directly in the clinic.
In our cohort, 37% of patients with good performance status and a low CCI declined chemotherapy because of concerns about its side effects. Despite our finding that older patients with pancreatic cancer who might do well with chemotherapy are often denied such treatment, this study is not without limitations. First, because of the retrospective design, we had limited control over data collection and had to rely on accurate record-keeping by other staff for our analyses. Second, the small study population may entail higher data variability. We hope that future drug trials will include older patients in the treatment category.
Conclusions
Our findings support the view that active chemotherapy for older pancreatic cancer patients with good performance status and a low CCI may have tolerable side effects and a favorable outcome, improving their survival.
Statement of Ethics
This study was approved by the Institutional Review Board with a waiver for informed consent, owing to its retrospective design.
Comparative Transcriptome Analysis Unveils the Molecular Mechanism Underlying Sepal Colour Changes under Acidic pH Substratum in Hydrangea macrophylla
The hydrangea (Hydrangea macrophylla (Thunb.) Ser.), an ornamental plant, has good marketing potential and is known for its capacity to change the colour of its inflorescence depending on the pH of the cultivation media. The molecular mechanisms causing these changes are still uncertain. In the present study, transcriptome and targeted metabolic profiling were used to identify molecular changes in the RNAome of hydrangea plants cultured at two different pH levels. De novo assembly yielded 186,477 unigenes. The transcriptomic datasets provide a comprehensive and systemic overview of the dynamic gene expression networks underlying flower colour formation in hydrangeas. Weighted gene co-expression network analysis identified candidate genes and hub genes from the modules linked closely to the hyperaccumulation of Al3+ during different stages of flower development. F3′5′H, ANS, FLS, CHS, UA3GT, CHI, DFR, and F3H were enhanced significantly in the modules. In addition, MYB, bHLH, PAL6, PAL9, and WD40 were identified as hub genes. Thus, a hypothesis elucidating the colour change in the flowers of Al3+-treated plants was established. This study identified many potential key regulators of flower pigmentation, providing novel insights into the molecular networks in hydrangea flowers.
Introduction
Flower colour, one of nature's most magnificent displays, plays an important role in attracting animal pollinators and is therefore crucial for plant ecology and evolution [1]. Among the various plant pigments, carotenoids and flavonoids are the most common and diverse types [2]. However, apart from attracting pollinators, anthocyanins and other flavonoid compounds that share a common biosynthetic pathway may also act as defensive agents or compounds that protect against various biotic and abiotic stresses [3]. A study of Senecio cruentus revealed that mutations in the coding regions of ScCHI1/2 and ScbHLH17 prevented the formation of anthocyanin in yellow and white cultivars; differences in the branched metabolic flux of the pelargonidin (Pg)-, cyanidin (Cy)-, and delphinidin (Dp)-type pathways are determined by competition for naringenin between ScF3′5′H, ScDFR1/2, and ScF3′H1 [28]. In a transcriptome study, Chen et al. [29] observed that Al exposure upregulated 730 genes in the leaves and 4,287 genes in the roots of hydrangea, while it downregulated 719 genes in the leaves and 236 genes in the roots. Metabolomic and transcriptomic analyses of S. miltiorrhiza flowers recognized a total of 100 unigenes coding for 10 enzymes as candidate genes linked with anthocyanin production; decreased ANS gene expression lowered the anthocyanin content but led to an increased buildup of flavonoids in S. miltiorrhiza flowers [30]. Other studies have reported a connection between colour expression and DNA methylation in other species. A recent study using SSR and MSAP molecular markers suggests that DNA methylation may be part of the molecular mechanism causing the change in the colour of hydrangea sepals in response to acidic pH [11]. Hypermethylation of the MdMYB10 promoter initiates striped colouration due to an increased anthocyanin concentration in Malus domestica fruit.
Alternatively, varying degrees of promoter methylation of the anthocyanidin synthase gene resulted in varied red or white flower colouration in the ornamental plant Nelumbo nucifera [31,32].
Since transcriptome and targeted metabolomic technologies have proven to be powerful tools for elucidating the mechanisms of colouration in various ornamental plants, we used these technologies to investigate differentially expressed genes (DEGs) during different developmental stages in the infertile flowers of Al 3+ -treated hydrangea and thus to elucidate the molecular pathways driving colour alteration. Global gene expression profiles were examined, with an emphasis on genes involved in anthocyanin biosynthesis and flavonoid metabolism, and the regulatory networks were established. This is the first comprehensive transcriptome and metabolome study of hydrangea flower colour variation at acidic pH. These discoveries will aid in the breeding of multi-coloured hydrangea and several other hydrangea species, as well as in the functional characterisation of genes and proteins of interest.
Change of Sepal Colour under Different Soil pH
The sepals of plants grown in acidic soil (enriched with aluminum sulfate) changed from pale yellow at stage I, or early flowering (EB, S1), to blue-violet at stage III (S3), or full flowering (FB), as shown in Figure 1A. In the plants grown in untreated soil (control group, C), sepals were initially pale yellow at stage I, or early flowering (EB). At stage II (S2), or mid-flowering (MB), the margins of the sepals turned light pink, and at full flowering, pink (Figure 1B). In addition, several common anthocyanidins involved in colour development, comprising cyanidin, delphinidin, malvidin, pelargonidin, and petunidin, were quantified. High levels of delphinidin, petunidin, and malvidin were found in TS3, at 4.8 µg/g fresh weight (FW), 1.9 µg/g FW, and 1.5 µg/g FW, respectively. In contrast, high levels of cyanidin and pelargonidin were detected in the pink flowers of CS3 (Figure 1C). This suggests that the treatment causes differential accumulation of anthocyanidins.
Transcriptome Sequencing, Annotation, and Analysis of DEGs
RNA-Seq produced a total of 72 million reads, with a total nucleotide count of 328,854,166 bp (38.24 Gb). A total of 34.79 Gb of clean reads was acquired after cleaning and quality verification, with each library producing at least 5.78 Gb of clean reads. Q30 percentages were 91.76%, 92.89%, 90.94%, 91.87%, 93.91%, and 90.98%, respectively. These findings demonstrated that the quality of the RNA-Seq data was suitable for further investigation (Table S2). The de novo assembly subsequently produced 342,068 contigs and 186,477 unigenes, with N50 values of 903 nt and 794 nt, respectively. There were 109,316 unigenes between 200 and 500 nt (58.61%), 48,027 unigenes between 500 and 1000 nt (25.75%), and 7,459 unigenes longer than 2000 nt (4%) (Table 1).
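The N50 statistic reported above summarizes assembly contiguity: it is the length at which contigs of that size or longer contain at least half of all assembled bases. A minimal sketch, using toy contig lengths rather than the study's assembly:

```python
def n50(lengths):
    """Smallest length L such that contigs of length >= L cover
    at least half of the total assembled bases."""
    total = sum(lengths)
    running = 0
    for length in sorted(lengths, reverse=True):
        running += length
        if running * 2 >= total:
            return length
    return 0

# toy lengths: total 20 bases; 6 + 5 = 11 >= 10, so N50 is 5
value = n50([2, 3, 4, 5, 6])
```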
Functional Annotation
The assembled unigenes were examined using eight public databases (NR, eggNOG, KOG, KEGG, COG, GO, Swiss-Prot, and Pfam), with an E-value cutoff of 1 × 10−5, to functionally annotate the transcriptome. Using this method, annotations were obtained for the assembled unigenes (Figure 2A). According to the statistical examination of the E-value distribution in the Nr annotation, 88% of the assigned sequences exhibited strong homology (E-value < 10−100) and 12% had exceptionally strong homology (E-value of 10−100 to 10−150) to the identified plant sequences (Figure 2B). Figure 2C shows the distribution of the top 24 species for the best match from each sequence. Blast2GO software was used to categorize 128,376 unigenes into 43 functional categories based on the Nr annotation, with 18 GO terms relating to biological processes, 12 to cellular components, and 13 to molecular functions (Figure S1). KOG analysis was employed to analyse orthologous categorisation and the evolutionary rates of genes. The results showed that 78,993 unigenes (55.41% of all annotated unigenes) aligned with 25 KOG classifications (E-value cutoff 1 × 10−3). Among the different categories, a considerable number of unigenes were involved in clusters for the biosynthesis, transport, and catabolism of secondary metabolites (48.25%), followed by transcription (16.45%); protein turnover, posttranslational modification, and chaperones (10.22%); and chromatin structure and dynamic regulation (8.10%). Only small proportions (less than 1%) of unigenes were assigned to extracellular structure, uncertain function, and cell modification. There were also higher proportions of genes associated with translation, ribosomal structure, and biogenesis (7.05%), signal transduction mechanisms (5.92%), DNA methylation (5.94%), and the mitochondrial DNA metabolic process (4.22%) (Figure 3). Transcripts with normalized read counts of less than 0.5 FPKM were excluded from the study.
CS1, CS2, and CS3 expressed 28,365, 28,242, and 28,088 unigenes, respectively. Likewise, 27,810, 27,726, and 27,711 unigenes were found in the treated samples from the different sepal maturation phases. Figure 4A shows the number of expressed transcripts distributed across the 0.5-1 FPKM, 1-10 FPKM, and >10 FPKM ranges. The correlation coefficient of gene expression levels between the three biological replicates was greater than 0.73. Principal component analysis (PCA) indicated that the 18 samples could be readily classified into six groups: CS1, CS2, CS3, TS1, TS2, and TS3 (Figure 4B). The control and treated samples of the same developmental stage clustered apart, indicating a clear distinction between the whole-transcriptome profiles of control and treatment at each developmental stage (Figure 4B).
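The FPKM normalization underlying the 0.5 FPKM filter above can be sketched directly from its definition (fragments per kilobase of transcript per million mapped reads); the counts and lengths below are hypothetical, not from the study.

```python
def fpkm(read_count, transcript_len_bp, total_mapped_reads):
    """Fragments Per Kilobase of transcript per Million mapped reads."""
    return read_count * 1e9 / (transcript_len_bp * total_mapped_reads)

# hypothetical transcripts: (mapped read count, transcript length in bp)
counts = {"uni1": (500, 2000), "uni2": (3, 1500)}
total = 20_000_000  # total mapped reads in the library

# keep transcripts with FPKM >= 0.5, as in the filtering step above
kept = {u for u, (c, length) in counts.items() if fpkm(c, length, total) >= 0.5}
```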
DEG Identification and Functional Enrichment Analysis
Variations in gene expression were examined by comparing the treated and control samples at the three sepal maturation stages (TS1 vs. CS1, TS2 vs. CS2, and TS3 vs. CS3), using thresholds of |log2(fold change)| ≥ 2 and an adjusted p-value less than 0.05 [33]. This yielded a total of 896 DEGs (Figure 5A). TS2 vs. CS2 (814 DEGs) had the most DEGs among the three comparisons, with 380 and 434 unigenes up- and down-regulated, respectively. In contrast, TS3 vs. CS3 (41) had the fewest DEGs, with 18 and 23 unigenes up- and down-regulated, respectively (Figure 5B). Of all identified DEGs, 621 were assigned to one or more GO terms, and these DEGs revealed information about the molecular events that occur during sepal development, particularly colour formation. All DEGs were mapped to the GO database using the TopGO software (v2.12.0) to find terms highly enriched relative to the genomic background, using a corrected p-value of 0.01 (Fisher's exact test) as the cutoff. Across the three Gene Ontology categories, there were a total of ten enriched GO terms (Table 2). In the biological process category, the most notable enriched GO terms were pigment biosynthetic process, metabolic process, L-phenylalanine catabolic process, anthocyanin-containing compound and flavonoid biosynthetic processes, response to abiotic stimulus, and pattern specification process. In the molecular function category, catalytic activity, oxidoreductase activity, transporter activity, binding, peroxidase activity, electron carrier activity, and transcription factor activity were the most enriched, whereas in the cellular component (CC) category, intracellular and organelle terms were the most enriched. Pathway analysis can aid understanding of the relationship between biological processes and genes. In TS1 vs. CS1, TS2 vs. CS2, and TS3 vs.
CS3, the numbers of DEGs enriched among KEGG pathways were 26, 127, and 34, respectively, assigned to 15, 57, and 12 metabolic pathways, respectively. The top 20 enriched metabolic pathways were explored (Figure 6). In the comparison between TS1 and CS1, the biosynthesis of flavones, flavonols, isoflavonoids, and glucosinolates was enriched in TS1. In the comparison between TS2 and CS2, the biosynthesis of flavonoids, anthocyanins, stilbenoids, diarylheptanoids, and gingerol, as well as the biosynthesis of secondary metabolites, was increased in TS2. The biosynthesis of flavones, flavonols, and phenylalanine differed significantly at all developmental stages. Flavonoid biosynthesis was predominant in developmental stage S1 (CS1 and TS1), whereas anthocyanin biosynthesis was predominant in developmental stage S2 (CS2 and TS2). Flavonoid biosynthesis was significantly enriched in TS1 compared with CS1, whereas anthocyanin biosynthesis was more enriched in TS2 compared with CS2. Al3+ treatment may thus induce anthocyanin biosynthesis rather than flavonoid biosynthesis to produce the blue colouration in hydrangea flowers. The carotenoid and isoflavone biosynthesis pathways, two other metabolic processes involved in floral colour formation, were also detected in the KEGG enrichment analysis. These metabolic pathways provide information about pigment metabolism at the three colouring stages of Hydrangea macrophylla, and the blue hue of Al3+-treated flowers may be closely associated with them.
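The DEG call described above (|log2 fold change| ≥ 2 with adjusted p < 0.05) can be sketched as follows. The Benjamini-Hochberg step-up shown here is one common multiple-testing adjustment; the study does not state which adjustment was used, and the p-values are hypothetical.

```python
def bh_adjust(pvals):
    """Benjamini-Hochberg adjusted p-values (monotone step-up)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adj = [0.0] * m
    prev = 1.0
    # walk from the largest p-value down, enforcing monotonicity
    for pos, i in enumerate(reversed(order)):
        rank = m - pos
        prev = min(prev, pvals[i] * m / rank)
        adj[i] = prev
    return adj

def is_deg(log2fc, adj_p):
    """Threshold used in the text: |log2 FC| >= 2 and adjusted p < 0.05."""
    return abs(log2fc) >= 2 and adj_p < 0.05

pvals = [0.001, 0.04, 0.03, 0.6]  # hypothetical raw p-values
adj = bh_adjust(pvals)
```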
Identification of TFs and Weighted Gene Co-Expression Network Analysis (WGCNA)
Using BLASTX (E-value cutoff 1 × 10−5), the assembled unigenes were aligned with the Plant Transcription Factor Database (PlantTFDB), and a total of 88 TFs from eight TF families were found. The MYB TF family had the most members (25), followed by the WD40 (20 TFs), HD-ZIP (14 TFs), bHLH (11 TFs), C2H2 (10 TFs), AP2-ERF (5 TFs), and NAC (3 TFs) families (Table S3). WGCNA was used to construct co-expression gene network modules to further investigate potential unigenes associated with the pigmentation transition during the successive developmental stages under the two experimental conditions (Figure 7A; note that in the dendrogram, each module is represented by a branch and each gene by a leaf). The co-expression network, constructed from the 621 DEGs that remained after eliminating low-expression unigenes from the total of 896 DEGs, was integrated into 10 modules. The largest was the light blue module, with 345 unigenes, and the smallest (dark green) contained only 26 unigenes. Figure 7B shows the unigene distribution in each module (indicated with different colours) and the module-trait connections. Nine of the 11 DEGs associated with anthocyanin biosynthesis and 2 of the 12 DEGs related to flavonoid metabolism are included in the brown module, indicating that the brown module unigenes play a key role in anthocyanin and flavonoid metabolism. We were particularly interested in the modules enriched in the control or treatment groups, especially blue and pink in S2, which aid in distinguishing the flower colour phenotype caused by an environmental pollutant. The modules of interest were therefore selected based on |r| > 0.5 and p ≤ 0.05 criteria and then annotated using KEGG and GO analysis. The light green module was closely associated with TS2, and many colour formation pathways were enriched in it (p ≤ 0.01).
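Enrichment p-values like those reported for the modules are commonly obtained from the hypergeometric tail (equivalent to a one-sided Fisher's exact test): the probability of seeing at least k pathway genes among the module's DEGs by chance. A sketch with hypothetical counts, not the study's values:

```python
from math import comb

def hypergeom_enrich_p(k, n_deg, K, N):
    """P(X >= k): probability of drawing k or more pathway genes when
    n_deg DEGs are sampled without replacement from N annotated genes,
    K of which belong to the pathway."""
    return sum(
        comb(K, x) * comb(N - K, n_deg - x)
        for x in range(k, min(K, n_deg) + 1)
    ) / comb(N, n_deg)
```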
The three major metabolic pathways were phenylpropanoid biosynthesis (ko00940, 30 DEGs), anthocyanin biosynthesis (ko00942, 19 DEGs), and flavonoid biosynthesis (ko00941, 10 DEGs) (Table S4). Pearson correlation coefficients between structural genes and transcription factors were determined with SPSS version 17.0 based on their FPKM values. The FPKM values of the transcription factor HymMYB2 unigene and of two WDR40 unigenes were positively (p ≤ 0.01) and negatively (p ≤ 0.01) correlated, respectively, with the FPKM values of CYP73A, F3′5′H, C3H, C2H2, DFR, and ANS. In addition, the FPKM level of a WDR68 unigene was negatively correlated with the levels of DFR and F3H (p ≤ 0.01) (Table S5). The transcription factor HymMYB2, together with other transcription factors such as WER-like and WDR40, might play an important role in the formation of the blue colour of infertile hydrangea flowers. (Figure 7B legend: factors of interest (x-axis) were correlated with each module (y-axis). The first value in each square is the correlation and the second value, in parentheses, is the p-value of the association. The more positively correlated the module and factor, the redder the square; the more negatively correlated, the bluer. The factors were coded CS1, CS2, CS3, TS1, TS2, TS3, C, and T. * Significant at p < 0.05; ** significant at p < 0.01; *** significant at p < 0.001; (-) means no content.) Table 3 shows the expression patterns of 15 potential genes from the closely linked modules. In summary, in the treated plants, all five PALs were down-regulated during sepal maturation, whereas in the untreated plants they initially remained constant or decreased and later increased. Moreover, their relative expression levels were significantly higher at S1-S3 in the treated plants than in the untreated groups. PAL9 and PAL6 were found to be putative hub genes for the dark green module.
4CL12 and 4CL14 were enriched in the dark green module, and 4CL12 was recognized as a possible hub gene for this module. The significantly higher expression of the 4CL12 gene at S1-S3 in the treated plants compared with the untreated groups indicated that 4CL12 has a crucial role in the signalling pathway. The orange module contained three enriched CHSs, with CHS2 and CHS4 recognized as possible hub genes showing similar expression changes across phases in the treated and untreated groups. The relative expression levels of CHS1 and CHS3 in CS3 were 2.3- and 1.5-fold higher than in TS3, respectively. We also searched for two CHRs enriched in these critical modules and discovered that the changes in the expression patterns of CHR1 and CHR3 were compatible with those of the enriched CHSs. In addition, these modules were enriched in F3′H4, F3′H3, F3′H2, F3′5′H, FLS1, FLS2, PIP2, TIP1, UA3GT, PAP2, DFR1, DFR2, CYP75A, and CYP75B1. The expression levels of F3′5′H were 2.1 and 3.2 times higher in treated plants than in untreated plants at S1 and S2, respectively. DFR1 was up-regulated and peaked at S3 during flower development in Al3+-treated plants, whereas it was virtually absent in untreated plants. DFR2 was also up-regulated in treated plants, peaking at S3, whereas in untreated plants its expression was low and stable. The expression levels of DFR1 and DFR2 were significantly higher at all stages in the treated plants than in the untreated plants, as were the expression levels of CYP75A and CYP75B1 (Table 3).
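The module-trait selection (|r| > 0.5) and the gene-TF correlations above both rest on Pearson's r computed over FPKM profiles. A dependency-free sketch on hypothetical expression vectors:

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length profiles."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# hypothetical FPKM profiles across samples for a TF and a structural gene
tf = [1.0, 2.0, 3.0, 4.0]
gene = [1.0, 3.0, 2.0, 4.0]
r = pearson_r(tf, gene)
```

Squaring r gives the R² used elsewhere in the text to compare qRT-PCR and RNA-Seq expression levels.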
qRT-PCR Validation of the Transcriptomic Data
To validate the transcriptome sequencing data, 15 nuclear unigenes related to blue colour formation, which showed varying expression levels in the two experimental groups, were subjected to quantitative real-time PCR using the designed primers. The expression levels from the transcriptome and qRT-PCR analyses were significantly correlated (R² = 0.92), indicating that the genes studied are involved in the signalling and/or metabolic pathways associated with colour formation (Figure 8).
Discussion
Flower colour has long fascinated scientists and breeders, and it has been demonstrated that flower colour develops as a consequence of interactions between genes and external environmental conditions [34]. Consequently, the development of blue colour in H. macrophylla cultivars can be attained by changing the growing conditions; for instance, altering the pH of the soil or adding exogenous Al3+ can cause colour variation in some infertile flowers of H. macrophylla. However, knowledge about the underlying processes of colour change and tolerance to acidic pH is still limited. Breeding plants that can tolerate acidic soils is a pressing issue in agricultural and plant physiology research. In this context, Chen et al. performed a genome-wide transcriptome analysis of Al-responsive genes in hydrangea roots and leaves using an RNA-sequencing approach. Numerous transporters, including members of the MATE and ABC families, were involved in transporting the Al-citrate complex from hydrangea roots. The aluminum transporter Nramp, a plasma membrane transporter for Al uptake, was upregulated in roots and leaves under Al stress, suggesting that it may play an important role in Al tolerance by lowering toxic Al levels. However, the signalling pathways and potential genes involved in the colour change remain to be elucidated [29]. In the current study, next-generation sequencing technology was utilized to compare the transcriptomes of hydrangea sepals grown under Al3+-treated and control conditions at three sepal maturation stages, to discover the genes and signalling pathways responsible for the colour change in response to Al hyperaccumulation.
Comparison of Genes Involved in Flavonoid Biosynthesis in Hydrangeas Grown under Different pH Conditions
Flavonoids are vital pigments found in many plant sepals [35]. Anthocyanins are the end products of the flavonoid biosynthetic pathway and produce a wide variety of colours, from pale yellow to blue-violet [36]. The accumulation of the floral anthocyanins malvidin and petunidin causes the colour difference between Al3+-treated (blue-violet) and untreated (pink) hydrangea sepals (Figure 1). The conversion from pink to blue requires a change in the anthocyanin biosynthesis pathway, which most likely occurs several steps before the production of petunidin and malvidin. The abundance of potential genes involved in flavonoid biosynthesis was therefore evaluated to find vital genes involved in blue colour metabolism. Several isoforms of flavonoid synthesis genes, including 4CLs, PALs, CHIs, CYP73A, DFRs, ANSs, F3H, F3′5′H, and UA3GT, showed distinct expression patterns in the Al3+-treated plants with blue-purple flowers compared with untreated plants with pink flowers, suggesting that the exogenously induced alteration in the expression of these genes may occur much earlier than the phenotypic changes. CHS, CYP73A, and CHI are upstream genes of the flavonoid biosynthetic pathway, whereas ANS, F3H, F3′5′H, C3′5′H, DFR, and CYP75B1 are downstream genes. They encode important enzymes in the flavonoid biosynthetic pathway and thus contribute to the formation of flower colour [37]. The effects of CHI and CHS on flavonoid accumulation are considerable. The initial reaction step in the flavonoid biosynthetic pathway is catalysed by CHS, which contributes to the formation of the intermediate product chalcone, required for all flavonoid classes [38]. In tobacco plants overexpressing CHI, flavonoids were increased but anthocyanins were not detectable [39]. The overexpression of the peony-derived CHI gene in tobacco also increased flavonoid accumulation [40].
Almost all unigenes expressing CHI and CHS were up-regulated at S1 during the development of infertile flowers in both experimental groups, and flavonoid concentration was maximal at this time. Overexpression of the early genes in the flavonoid biosynthesis pathway resulted in flavonoid accumulation and provided precursors for anthocyanin biosynthesis [41]. The flavonoid biosynthesis pathway depends on F3H, F3′H, and F3′5′H, which catalyse the hydroxylation of flavonoids required for anthocyanin production, such as dihydrokaempferol, dihydromyricetin, and dihydroquercetin [42,43]. F3H catalyses the conversion of naringenin to dihydroflavonols, and in the presence of NADPH and DFR, the three forms of dihydroflavonols are reduced. It has been observed that flower colour can vary greatly depending on the type and extent of DFR expression and that DFR is most strongly expressed in sepals with a high concentration of anthocyanins [44]. We discovered two different DFR copies. DFR-2 is predominantly expressed in pink flowers (it is almost undetectable in blue samples) and has an N residue at the third position of the substrate specificity region, which has been shown to confer substrate affinity for the pelargonidin-like precursor dihydrokaempferol in a variety of angiosperms [45]. DFR-1, on the other hand, is mainly produced in blue-flowering plants and has a D residue at this position, which confers low or no substrate affinity for dihydrokaempferol in other species [46]. Moreover, NADPH cytochrome P450 reductase expression was significantly higher in the treated plants than in the control group during the first and second phases of sepal maturation. NADPH cytochrome P450 reductase was shown to catalyse electron transfer from NADPH to F3′5′H in petunia, resulting in the blue-purple colour of the plant [47]. The hydroxylation of dihydrokaempferol catalysed by F3′5′H leads to the formation of a delphinidin precursor [48].
The loss of F3′5′H function in Antirrhinum spp. [49] or reduced F3′5′H expression in Phlox drummondii [50] results in a transition from blue to red colour. C3′5′H is an enzyme important for the production of delphinidin and promotes blue flower formation [51]. ANS, a vital enzyme that catalyses the last step of the flavonoid biosynthetic pathway, can also catalyse the conversion of proanthocyanidins into coloured anthocyanins. The flower colour of forsythia (Forsythia x intermedia cv. 'Spring Glory') was altered through the induction of anthocyanin synthesis in sepals by the Antirrhinum majus dihydroflavonol 4-reductase (AmDFR) and anthocyanidin synthase (MiANS) genes, transformed through a sequential Agrobacterium-mediated transformation [52]. In the current study, the downstream genes F3′H, F3′5′H, C3′5′H, CYP75B1, DFR1, and ANS of the flavonoid biosynthesis pathway were all increased at the first and second developmental stages of Al³⁺-treated plants. We also analysed the expression levels and patterns of all glycosyltransferases of the anthocyanin biosynthetic pathway and observed four UA3GT homologous unigenes. However, their FPKMs were extremely low, and there were no significant differences in the expression levels across the three sepal maturation stages of the treated plants, but there was a significant difference in their expression at the second stage between the treated and control groups, indicating that UA3GTs may be one of the key enzymes for the accumulation of delphinidin 3-glycosides, the main anthocyanins in blue sepals. Kogawa et al. discovered that UA3′5′GT is critical for the accumulation of polyacylated anthocyanins, called ternatins, in Clitoria ternatea. UA3′5′GT glycosylates delphinidin at the 3′ and 5′ positions [53].
The blue-violet flower colour of treated individuals suggests that the regulation of transcription of ANS (anthocyanidin synthase), F3′5′H (flavonoid 3′,5′-hydroxylase), and DFR (dihydroflavonol 4-reductase) may play a vital role in the buildup of flavonoid intermediates and the transition of flower colour.
Identification of Hub Genes Related to Flower Formation by WGCNA
Understanding the changes in blue flower phenotype caused by external influences on the wild type (with pink colour) could shed light on the mechanisms of flower colouration in hydrangea. Any functional changes in critical enzymes of the flavonoid biosynthetic pathway, including changes in the frequency of gene transcription and branching changes of flavone products, could lead to a repeated transition from blue to red/pink [54]. The most important finding of this work was that, by using WGCNA, we were able to identify Al³⁺ treatment-specific gene modules (Figure 7B). This showed that 2 DFRs, 2 4CLs, 3 CHRs, 9 PALs, 4 CHSs, F3′H4, 2 UFGTs, and MYB were strongly associated with modules relevant to the TS2 or treatment group. They all showed significant variation in transcript levels between treated and untreated individuals, demonstrating that they play a crucial role in floral variation. It is important to note that the above genes were not the ones with the highest expression, suggesting that the most highly expressed genes are not essential for flower colour differentiation [55]. Thus, the use of WGCNA in this work delivered a good method for identifying key genes associated with different developmental conditions. Wang et al. used WGCNA to identify nodule genes involved in heavy metal transport, which were found to be particularly abundant in nodules [56]. In camellia, a similar WGCNA approach was used to reveal unigenes related to flower colour, and it was shown that CHS, F3H, ANS, and FLS have a crucial role in controlling the synthesis of flavonols and anthocyanidins [57]. Tan et al. [58] further used WGCNA to extract the Cd-coupled co-expression gene modules from the 22,080 transcripts in 17 RNA-Seq datasets and recognized 271 transcripts as universal Cd-regulated DEGs, which are key components of the Cd-coupled co-expression module.
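WGCNA identifies modules from a weighted co-expression network in which each pairwise gene correlation is raised to a soft-threshold power. The study used the WGCNA R package; the sketch below only illustrates that adjacency step in pure Python, with invented gene names and FPKM profiles:

```python
import math

def pearson(x, y):
    # Pearson correlation between two equal-length expression profiles
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def adjacency(profiles, beta=6):
    # Unsigned WGCNA-style adjacency: a_ij = |cor(i, j)|^beta.
    # The soft-threshold power beta downweights weak correlations.
    genes = list(profiles)
    return {
        (g1, g2): abs(pearson(profiles[g1], profiles[g2])) ** beta
        for g1 in genes for g2 in genes if g1 < g2
    }

# Hypothetical FPKM profiles across six samples (three stages x two groups)
profiles = {
    "DFR1": [1.0, 2.0, 3.0, 4.0, 5.0, 6.0],
    "ANS":  [1.1, 2.1, 2.9, 4.2, 5.1, 5.8],
    "PAL9": [6.0, 5.0, 4.0, 3.0, 2.0, 1.0],
}
adj = adjacency(profiles)
```

In WGCNA proper, the adjacency matrix is further converted to a topological overlap measure before hierarchical clustering assigns genes to modules.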
The four hub genes were found upstream of the flavonoid biosynthetic pathway, suggesting that blue flower colouration was mainly stimulated upstream in the treated plant. The reduced expression of 4CL8, PAL9, and PAL6 in both treated and untreated plants is consistent with the results of Wang et al. [59]. The decreased expression of 4CLs and PALs altered the level of cinnamic acid in the ripe fruit peel, according to these researchers [59]. We hypothesized that the reduced expression of 4CL8, PAL6, and PAL9 would affect cinnamic acid concentration in sepals from both treated and untreated plants. The increased expression of CHIs and CHSs in TS2 may play a vital part in the biosynthesis of other flavones, such as isoflavones, which contribute to the colour of many hydrangea flowers.
Identification of Transcription Factors Related to Flower Colour Transition
MYB, WDR, and bHLH transcription factors control the flavonoid biosynthetic pathway in several higher plant species [60,61]. Transcription factors can regulate structural genes either alone or in cooperation, and this regulation can be positive or negative. Generally, transcription factors affect flower colour in different ways. The MYB-bHLH-WD40 (MBW) ternary transcription complex, which comprises three regulatory protein classes (bHLHs, R2R3-MYBs, and TRANSPARENT TESTA GLABROUS1 (TTG1; also known as WD40)), triggers numerous late flavonoid biosynthesis genes (LBGs) [62,63]. MYB transcription factors, bHLHs, and WD40 regulate the expression of ANS and other downstream genes in Arabidopsis and affect anthocyanin biosynthesis [59]. LrMYB15, a transcription factor regulating DFR, CHSa, ANS, and CHSb, was observed to be involved in anthocyanin biosynthesis in lilies [64]. With 23 genes, MYBs were the largest TF family in our analysis. Analysing DEGs and transcription factors based on their co-expression pattern revealed that, among these MYBs, the change of flower colour was consistent with the expression profiles of 17 genes, with only 5 genes showing an opposite expression trend. The gene HymMYB2 was expressed at all developmental stages of Al³⁺-treated plant sepals, with the highest expression level at TS2. Despite previous findings of a connection between the expression of HymMYB2 and the total amount of anthocyanins in sterile flowers of numerous hydrangea varieties [51], no such association was found in this study, and the expression pattern in our investigation was not the same between treated and untreated plants. The expression of HymMYB2 was significantly elevated in the treated groups, but it was nearly undetectable in the control plants. This shows that HymMYB2 regulates anthocyanin intermediates in hydrangea with some specificity.
The level of HymMYB2 expression was highly associated with the expression of the important genes DFR, F3H, ANS, C3′5′H, and WD40, according to co-expression analysis. The transcription factor HymMYB2 may act on the structural genes C3′5′H, ANS, and DFR in the anthocyanin biosynthetic pathway of hydrangea, similar to the function of the PeMYB11, PeMYB12, and PeMYB2 genes in Phalaenopsis [65]. Based on the data presented above, different flavonoid biosynthetic pathways were determined in treated and untreated hydrangeas (Figure 9). In summary, flavonoid production in Al³⁺-treated plants is advanced by PAL and 4CL compared with untreated specimens, whereupon a branch of isoflavone biosynthesis, regulated by CHS and CHR, competes with the anthocyanin synthesis pathway. In addition, the elevated expression of F3′5′H, F3′H, and F3H/FLS leads to an increase in other flavonoid molecules, such as kaempferol and myricetin, which further decreases anthocyanin production. Lastly, high DFR expression combined with the availability of UFGT may stimulate the synthesis of anthocyanin, leading to blue colour formation.
Plant Material
Hydrangea macrophylla ssp. serrata plants were provided by the Koetterheinrich breeding company (Lengerich, Germany) at week 48 and placed in a greenhouse at IPK Gatersleben under an 18-h photoperiod with 200 µmol m⁻² s⁻¹ light intensity, a temperature of 21/19 °C (day/night), and 60% relative humidity. Plants were divided into two groups, either as the control without additional treatment or with external Al³⁺ application. All plants were supplied with 1 g Universal Weiß fertilizer (with the analysis of 15 + 0 + 19) per liter of irrigation water at weekly intervals. For the treated group, 1 g of aluminum sulfate (Al₂(SO₄)₃) was added to this solution. Aluminum-treated groups and control groups were arranged in a block with complete randomisation and three replicates per group. For each of the developmental periods, three experimental samples were harvested from cuttings obtained from the same plant: early blooming stage, pale yellow; middle blooming stage, light blue (Al-treated group) or light pink (control group); and full blooming stage, dark blue (Al-treated group) or dark pink (control group). The pH of the substrate of all plants was recorded once during each cycle using the 1:2 dilution method reported by David et al. [66]. Healthy, sterile flowers with appropriate development and no visible diseases or pests were collected, rinsed thrice with deionized water to prevent contamination during sampling, then immediately immersed in liquid nitrogen and stored at −80 °C [67].
RNA Extraction, cDNA Library Creation, and Sequencing
The RNeasy Plant Mini Kit (Qiagen, CA, USA) was used for total RNA extraction following the manufacturer's instructions. The TURBO DNA-free™ kit (Invitrogen, CA, USA) was used to purify the RNA. The Agilent 2100 Bioanalyzer (Agilent Technologies, CA, USA) was employed for quality assessment and to measure the concentration of the extracted RNA. For all samples, the RNA integrity number (RIN) was greater than eight. A total of 18 RNA-Seq libraries (three biological replicates at each of three stages for treated (TS1, TS2, and TS3) and control samples (CS1, CS2, and CS3)) were constructed from about 2 µg of RNA from hydrangea sepals as per the manufacturer's protocol (Lexogen GmbH, Vienna, Austria). The libraries were pooled in an equimolar way, and the Agilent 4200 TapeStation System (Agilent Technologies, Inc., Santa Clara, CA, USA) was used for electrophoretic analysis. Libraries were quantified and sequenced on an Illumina HiSeq 2500 instrument (paired-end sequencing, rapid run, 2 × 101 cycles, onboard clustering; Illumina, San Diego, CA, USA).
Expression Annotation
The Bowtie (v4.4.7) alignment package was used to map reads to the unigenes. Based on the alignment results, RSEM (RNA-Seq by Expectation Maximisation) was used to estimate the expression levels [70]. The Fragments Per Kilobase of transcript per Million mapped reads (FPKM) metric was applied to represent the differences in unigene expression among the samples [70]. Differential expression analysis between experimental conditions was performed using the DESeq package (v1.18.0) [71]. The differentially expressed genes (DEGs) were identified using log2FC ≥ 2 and p-value ≤ 0.05 [24]. The iTAK software was used for the prediction of plant transcription factors [72]. To identify transcription factors (TFs), all annotated unigenes were compared against the plant transcription factor database (PlantTFDB v. 4.0), and the best hits in Arabidopsis thaliana were considered as TFs. Using the gene co-expression networks generated from the discovered DEGs and TFs, the transcriptional architecture of anthocyanin, carotenoid, and flavonoid biosynthesis was established with the WGCNA (weighted gene co-expression network analysis) program [73] and then visualized using Cytoscape v. 3.5.1 (San Diego, CA, USA) [26].
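The DEG criterion amounts to a simple filter over per-unigene statistics. A minimal sketch with hypothetical unigene IDs and values (abs() is used here so that down-regulated genes are also captured, a common reading of the fold-change cutoff; the field names are illustrative, not DESeq output columns):

```python
LOG2FC_CUTOFF = 2.0
PVALUE_CUTOFF = 0.05

def select_degs(results):
    # Keep unigenes passing both the fold-change and significance cutoffs.
    return [r["id"] for r in results
            if abs(r["log2fc"]) >= LOG2FC_CUTOFF and r["pvalue"] <= PVALUE_CUTOFF]

# Hypothetical test results for four unigenes (TS2 vs CS2)
results = [
    {"id": "CHS1",  "log2fc":  3.1, "pvalue": 0.001},  # up-regulated DEG
    {"id": "DFR2",  "log2fc": -2.6, "pvalue": 0.010},  # down-regulated DEG
    {"id": "UFGT1", "log2fc":  0.4, "pvalue": 0.300},  # unchanged
    {"id": "PAL3",  "log2fc":  2.5, "pvalue": 0.200},  # large FC, not significant
]
degs = select_degs(results)  # → ["CHS1", "DFR2"]
```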
GO and KEGG Pathway Enrichment Analysis for Differentially Expressed Unigenes
GO and KEGG pathway enrichment analyses were performed for the differentially expressed unigenes. The topGO package (v2.12.0) was applied for enrichment and refinement of the collected GO annotation, using the "elim" approach and the Kolmogorov-Smirnov test. In-house scripts were used for KEGG pathway enrichment based on Fisher's exact test. Bonferroni correction was applied to the enrichment p-values. A corrected p-value of 0.05 was used as the criterion to determine whether gene sets were significantly enriched.
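Over-representation testing with Fisher's exact test reduces to a one-sided hypergeometric tail probability, followed here by Bonferroni correction. This is an illustrative pure-Python reimplementation, not the in-house scripts used in the study, and the example counts are invented:

```python
from math import comb

def hypergeom_pmf(k, K, n, N):
    # P(X = k) when drawing n genes from N, with K annotated to the pathway
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

def fisher_enrichment_p(k, K, n, N):
    # One-sided Fisher's exact test: probability of observing k or more
    # pathway genes among the n DEGs (over-representation).
    upper = min(K, n)
    return sum(hypergeom_pmf(i, K, n, N) for i in range(k, upper + 1))

def bonferroni(pvals):
    # Multiply each raw p-value by the number of tests, capped at 1
    m = len(pvals)
    return [min(1.0, p * m) for p in pvals]

# Hypothetical example: 5 of 50 DEGs fall in a pathway that has
# 100 of the 10,000 annotated genes (expected hits ≈ 0.5)
p = fisher_enrichment_p(k=5, K=100, n=50, N=10_000)
```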
Quantitative Reverse Transcription-Polymerase Chain Reaction-Based Validation
Five unigenes related to the biosynthesis of anthocyanin and 10 unigenes associated with the biosynthesis of carotenoids were chosen for quantitative reverse transcription-polymerase chain reaction (qRT-PCR) analysis. For qRT-PCR, the TransStart Top Green qPCR SuperMix (TransGen Biotech, Beijing, China) and a Bio-Rad CFX96 RT-PCR system (Bio-Rad, Hercules, CA, USA) were used with the following reaction conditions: denaturation at 94 °C for 60 s and 45 cycles of amplification (94 °C for 5 s, 60 °C for 15 s, and 72 °C for 10 s). The 2^−ΔΔCt method was used for calculating the relative expression levels of target genes against the internal control [74]. To normalize the relative expression levels of target genes, the H. macrophylla actin gene was employed as a control [75]. Supplementary Table S1 lists the gene-specific primers. For each experiment, three biological and three technical replicates were used.
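The 2^−ΔΔCt calculation normalises the target gene's Ct to the internal reference (here, actin) and then to a calibrator sample. A minimal sketch with hypothetical Ct values:

```python
def ddct_relative_expression(ct_target_sample, ct_ref_sample,
                             ct_target_calib, ct_ref_calib):
    # Livak 2^-ΔΔCt method: ΔCt = Ct(target) - Ct(reference) per sample,
    # ΔΔCt = ΔCt(sample) - ΔCt(calibrator), fold change = 2^-ΔΔCt.
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_calib = ct_target_calib - ct_ref_calib
    dd_ct = d_ct_sample - d_ct_calib
    return 2 ** (-dd_ct)

# Hypothetical Ct values: target gene in treated vs. control sepals
fold_change = ddct_relative_expression(
    ct_target_sample=22.0, ct_ref_sample=18.0,  # treated: ΔCt = 4
    ct_target_calib=25.0, ct_ref_calib=18.0,    # control: ΔCt = 7
)
# ΔΔCt = 4 - 7 = -3, so relative expression = 2^3 = 8.0
```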
Estimation of Relative Pigment Content
Anthocyanins were extracted from fresh sepal tissue collected at the three sepal maturation phases in Al³⁺-treated and untreated samples. For each sample, 0.5 g of tissue was powdered in 1 mL of 98% methanol containing 1.6% formic acid at 4 °C. Following ultrasonic extraction for 30 min, the samples were centrifuged at 12,000× g for 10 min, the supernatant was transferred to new tubes, and the residues were re-extracted. The supernatants were then pooled and filtered through 0.45 µm nylon filters (Millipore). Cyanidin 3-O-glucoside, delphinidin 3-O-glucoside, peonidin 3-O-glucoside, pelargonidin 3-O-glucoside, petunidin 3-O-glucoside, and malvidin 3-O-glucoside were used as standard compounds (ZZBIO Co., Ltd., Shanghai, China). An amount of 10 µL of the extract was analysed by HPLC (Rigol L-3000, Beijing, China) according to the method of Zheng et al. [76]. Means and standard errors (SE) were calculated from three biological replicates. For flavonoid extraction, 200 mg of sepal tissue was pulverized with liquid nitrogen, extracted in 10 mL of methanol solution, incubated in dark conditions for 24 h at 4 °C, and then suspended by sonication for 1 h. After centrifugation at 10,000 rpm for 10 min, the supernatant was filtered through a 0.22 µm membrane filter. To 2 mL of the supernatant, 2 mL of a 1.5% AlCl₃ solution and 3 mL of 1 M sodium acetate (pH 5.0) were added, and ten minutes later a UV-Vis spectrophotometer was used to measure the absorbance at 415 nm [77]. The trend in relative flavonoid content across the three periods was assessed on the basis of the absorbance values. For carotenoid analysis, 200 mg of sepal tissue was crushed with liquid nitrogen, extracted in 10 mL petroleum ether under dark conditions at 4 °C for 24 h, and suspended by sonication for 1 h. After centrifugation at 10,000 rpm for 10 min, the supernatant was collected and filtered through a 0.22 µm membrane filter.
A UV-Vis spectrophotometer was used to measure the absorbance at 440 nm [78]. The absorbance readings were used to observe the trend in the relative carotenoid content across the three periods.
Statistical Analyses
All measurements were performed in triplicate, and outcomes are expressed as mean and standard deviation (SD). Statistical analyses were performed with the Statistical Package for the Social Sciences (SPSS) v. 19.0 software, wherein the mean values of each developmental stage were compared using a one-way analysis of variance (ANOVA) followed by Duncan's multiple range test at p < 0.05.
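The study ran its ANOVAs in SPSS; for illustration, the one-way ANOVA F statistic is simply the ratio of between-group to within-group mean squares. A pure-Python sketch with invented triplicate readings (the p-value lookup against the F distribution and Duncan's post hoc test are omitted; in practice a statistics package handles both):

```python
def one_way_anova_f(groups):
    # F = (between-group mean square) / (within-group mean square)
    k = len(groups)                       # number of groups
    n = sum(len(g) for g in groups)       # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    df_between, df_within = k - 1, n - k
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical triplicate absorbance readings at three maturation stages
s1, s2, s3 = [0.41, 0.43, 0.42], [0.61, 0.60, 0.62], [0.52, 0.50, 0.51]
f_stat = one_way_anova_f([s1, s2, s3])   # large F: stage means differ
```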
Summary
Transcriptome analysis was used to investigate whether and how the anthocyanin and flavonoid pathways contribute to colour formation in Al³⁺-treated (blue-violet) and untreated (pink) hydrangea plants. Quantitative analysis showed that the essential anthocyanins in the flowers of Al³⁺-treated hydrangea were delphinidin, petunidin, and malvidin derivatives, which were absent in the untreated plants. Transcriptome analysis of sepals from two different growth conditions and three different stages of sepal maturation revealed 186,477 unigenes. Several genes that alter or inhibit flavonoid biosynthetic pathways, competing with the production of other flavonoids or altering the synthesis of anthocyanins, may be partially responsible for the blue colour phenotype in hydrangea flowers. DFR and UFGT are among the key genes involved in the blue colouration. Al³⁺-treated plants produce more delphinidin derivatives and have a higher F3′5′H/F3′H transcription ratio than the untreated pink plants. In addition, we also identified several TF families, such as WD40, bHLH17, and MYB11, which are likely important regulators in anthocyanin biosynthesis, chlorophyll metabolism, and carotenoid biosynthesis. This study contributes to a better understanding of the molecular mechanisms of colour formation in hydrangea, which has scientific value and helps breeders design and adapt desired flower colours.
Institutional Review Board Statement:
This study does not involve any experiments with human participants or animals.
Informed Consent Statement: Not applicable.
Data Availability Statement: All data supporting the findings of this study are available from the corresponding author upon reasonable request.
"year": 2022,
"sha1": "064ddbecdc8d18345d7902ffcdf9e37b0e21438d",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/23/23/15428/pdf?version=1670331107",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e0ae5d2c70ca88985c1015161e18f698c2204e85",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |
GoFish: A low-cost, open-source platform for closed-loop behavioural experiments on fish
Fish are the most species-rich vertebrate group, displaying vast ecological, anatomical and behavioural diversity, and therefore are of major interest for the study of behaviour and its evolution. However, with respect to other vertebrates, fish are relatively underrepresented in psychological and cognitive research. A greater availability of easily accessible, flexible, open-source experimental platforms that facilitate the automation of task control and data acquisition may help to reduce this bias and improve the scalability and refinement of behavioural experiments in a range of different fish species. Here we present GoFish, a fully automated platform for behavioural experiments in aquatic species. GoFish includes real-time video tracking of subjects, presentation of stimuli in a computer screen, an automatic feeder device, and closed-loop control of task contingencies and data acquisition. The design and software components of the platform are freely available, while the hardware is open-source and relatively inexpensive. The control software, Bonsai, is designed to facilitate rapid development of task workflows and is supported by a growing community of users. As an illustration and test of its use, we present the results of two experiments on discrimination learning, reversal, and choice in goldfish (Carassius auratus). GoFish facilitates the automation of high-throughput protocols and the acquisition of rich behavioural data. Our platform has the potential to become a widely used tool that facilitates complex behavioural experiments in aquatic species.

Supplementary Information: The online version contains supplementary material available at 10.3758/s13428-022-02049-2.
Introduction
A common framework for the study of animal behaviour and cognition involves presenting stimuli and manipulanda, measuring animals' movements and responses, and programming outcomes according to selected contingencies, while recording all of this information, including fine temporal details. These efforts have led to the development and use of conventional experimental platforms that satisfy these needs while ensuring replicability across laboratories. Perhaps the archetype of such a system is the Skinner box, originally designed for pigeons and rodents, which uses manipulanda suitable for those taxa, detecting behaviour by the closing of circuits through key-pecking, lever-pressing, or interruption of light beams. Such systems promote and enhance reproducibility, a critical need in contemporary behavioural research. However, studying the behaviour of organisms living under water, such as fish, cephalopods, and crustaceans, poses different technical challenges from those in terrestrial species.
Alex Kacelnik and Tiago Monteiro are co-senior authors.

Unlike most behavioural laboratory-based experiments involving mammals and birds, the display of stimuli and delivery of food reinforcers for fish is frequently manually executed by an experimenter, increasing temporal variability and vulnerability to observer effects, while restricting scalability (e.g., Potrich et al., 2022; Schluessel et al., 2022). Similarly, data are often recorded by video but annotated visually or digitised at a later time instead of being processed in real-time, which allows behaviour to control reward through pre-programmed contingencies.
Recent attempts (i.e., < 10 years) to improve the automation of behavioural experiments in other fish species using closed-loop systems have shown promising results. For example, Wallace et al. (2020) investigated sex differences in numerical discrimination abilities in mosquitofish using an automated setup that facilitated a range of cognitive tests. Furthermore, automated systems that were originally developed for conditioning experiments in zebrafish (Gatto, Lucon-Xiccato, et al., 2020a; Kuroda et al., 2017; Manabe et al., 2013) have been co-opted for use in guppies (Gatto et al., 2021; Gatto, Testolin, et al., 2020b; Lucon-Xiccato et al., 2018).
However, most of these systems are either commercial solutions and therefore not easily adaptable or accessible (owing to higher costs), or are open-source but require a considerable degree of expertise to operate and adapt, thus lacking the flexibility to be easily applied to multiple experimental situations and/or other subject species.
To address this, we developed GoFish, an open-source and expandable platform for dynamic, fully automated behavioural experiments on fish or other aquatic organisms.Our aim with GoFish is to provide a platform facilitating high-throughput and highly reproducible research that is (i) open-source, (ii) relatively inexpensive and simple to assemble, (iii) readily modifiable, (iv) supported by a growing community of users, and (v) capable of providing a range of behavioural metrics.
Our platform is inspired by present-day behavioural, cognitive and neuroscience experiments that rely on open-source, community based, DIY-type solutions for running and developing new experimental paradigms, as well as for processing and analysing the resulting data streams (Akam et al., 2022; Aoki et al., 2015; Bishop et al., 2022; Buscher et al., 2020; Devarakonda et al., 2016; Geissmann et al., 2017; Guilbeault et al., 2021; Gurley, 2019; Kane et al., 2020; Kapanaiah et al., 2021; Lopes et al., 2021; Mathis et al., 2018; Oh et al., 2017; O'Leary et al., 2018; Pineño, 2014; Siegle et al., 2017; Štih et al., 2019; Swanson et al., 2021; Walter & Couzin, 2021). Briefly, our system allows for the display of stimuli on a computer screen placed outside (but adjacent to) a tank, the tracking and detection of the subject's location in real time through an overhanging camera, the programming of contingencies between fish movements and the delivery of food rewards, and the automatic recording of data in analysable format. Here, we describe the system and present two closed-loop experiments aimed at demonstrating its performance as a research tool. Although we describe an implementation for goldfish, GoFish can, in principle, be used with other aquatic species with minimal modifications.
As a proof-of-concept, and inspired by classical experiments (Bitterman, 1975; Engelhardt et al., 1973), we ran two closed-loop discrimination experiments using real-time video tracking. We show that individual goldfish can be trained to (i) associate a signalled location with food reward and reverse preference appropriately when the contingencies are reversed (Experiment 1), and (ii) discriminate coloured visual stimuli that switch location between trials (Experiment 2).
The GoFish platform
The setup as presently implemented (Fig. 1) comprises a rectangular prismatic experimental tank (60 x 30 x 36 cm (length x width x height), Table 1) with a 17" LCD computer screen (1920 x 1080; 60 Hz) for stimulus presentation (Table 1), placed directly adjacent to the side of the tank where reward pellets are delivered (Fig. 1a).
Two custom-made, automated pellet dispensers (i.e., feeders; Fig. 1b,c, Table 1) are clamped onto the upper edge of the tank such that pellets fall on the water surface approximately 2 cm from the closest side of the tank, adjacent to the screen.
Each feeder is placed on either side of an opaque, white acrylic divider (fixed with silicone sealant), running perpendicular to the LCD computer screen, 25 cm into the tank. This partition defines the two choice zones of a Y-maze configuration. An overhanging USB camera (1280 x 720 resolution, Table 1) held above the tank records each session (Fig. 1a). A laptop (Table 1) controls task contingencies (stimulus presentation and reward delivery) and video acquisition with a Bonsai (Lopes et al., 2015, 2021; Lopes & Monteiro, 2021) custom workflow (see Fig. 1d for an example workflow). A light source (Table 1) is placed outside the tank, opposite to the LCD computer screen (Fig. 1a). The tank is surrounded by opaque Styrofoam panels to visually isolate the fish during experiments. The water level is maintained at approximately 15 cm. In the experiments described below, two identical experimental tanks were run concurrently, with each fish being tested always in the same tank.
Bonsai
The implementation of behavioural tasks and resulting data acquisition is controlled with Bonsai. Bonsai is a high-performance, open-source visual programming software, for which there is an active community of thousands of users (https://github.com/bonsai-rx/bonsai/discussions) and several papers describing its inner workings (e.g., Lopes et al., 2015; Lopes & Monteiro, 2021). Bonsai allows users to rapidly develop workflows that can simultaneously manipulate data from various asynchronous input streams (e.g., video, or Arduino controlled pressure sensors), while controlling numerous output devices (e.g., pellet dispensers).
Users can find documentation, video tutorials, online support, and other materials on its accompanying website (BonsaiRX, http://bonsai-rx.org/). Briefly, Bonsai workflows are constructed by connecting functions, or 'operators', that come in the form of nodes (Fig. 1d). These functions are categorised hierarchically within the Bonsai Toolbox that appears in the Bonsai workflow editor. For example, 'Source' functions allow users to easily generate data streams from files or external devices, while 'Sink' functions allow users to save data or trigger external outputs (https://bonsai-rx.org/docs/articles/editor.html). These functions can be searched for directly using the Search textbox that appears on top of the Bonsai Toolbox, saving users the need to search through all of the functions within the Toolbox manually. A full list of Bonsai functions and their accompanying descriptions can be found at https://bonsai-rx.org/docs/api/Bonsai.html. To quickly acquaint themselves with the basics of Bonsai, users can access common example workflows online (https://bonsai-rx.org/docs/tutorials/acquisition.html), or import them directly into their workflow editor through the Bonsai Gallery, which can be accessed via Tools in the menu bar of the workflow editor (https://bonsai-rx.org/docs/articles/gallery.html). Example video tutorials on how to quickly implement common workflows for data processing and storage can also be found here: https://bonsai-rx.org/learn/.
Pellet dispensers
Design and assembly instructions for laser-cut acrylic and 3D-printed parts for the pellet dispensers are available from the public repository (https://bitbucket.org/fchampalimaud/device.pump.fishfeeder/).
The instructions include PCB manufacturing plans and specifications, as well as downloadable firmware.The dispensers are controlled through Bonsai (see example workflow in Fig. 1d).
Stimuli
The potential visual stimuli and their positions are only limited by the monitor employed and its chromatic properties and dimensions. For the experiments described here, the main stimuli were coloured circles (red, green, blue and white, 3.5 cm in diameter, Fig. 2b) on a grey background, presented with centres positioned 5 cm from the bottom of the tank and 7 cm from each side wall (Fig. 1a). All stimuli were programmed using custom Bonsai (Lopes et al., 2015; Lopes & Monteiro, 2021) and BonVision code (Lopes et al., 2021), allowing easy generation and manipulation of visual stimuli. Each fish had a randomly assigned unique pair of colour-reward contingencies (Fig. 2b). We chose colours that have been physiologically (Neumeyer, 1984) and behaviourally (Zerbolio & Royalty, 1983) proven to be discernible by our experimental species (goldfish, Carassius auratus). In a pre-experimental, pre-training phase (see details below), we used a white noise rectangle (13.5 x 12 cm, Gaussian: mean = 0, variance = 10) presented on either the left or right arms of the tank, or in both simultaneously, to signal the imminent delivery of reward in early pre-training, or to signal that reward delivery was contingent on fish swimming to a specific location in later pre-training stages.
Behavioural task control
Task control was fully automated and implemented using a custom Bonsai workflow (https://github.com/PTMonteiro/GoFish_Ajuwon_etal_2022). Progress through trials was controlled using real-time video analysis of fish movement. After a variable inter-trial interval (ITI), fish could advance a trial by swimming into the 'start zone'. In the main experiment this was a 10 x 10 cm area opposite the rewarded side of the tank that was equidistant to both choice arms (Fig. 2a). Presence in the start zone after the ITI would trigger stimulus presentation, and a subsequent crossing into either the 'left choice zone' or 'right choice zone' (15 x 15 cm; Fig. 2a) would trigger the appropriate contingencies. Outside of these epochs (and locations) the fish position had no influence on the unfolding of the task. Note that the 'start zone', 'left choice zone' and 'right choice zone' were not delineated by physical boundary markings but were defined as specified regions of interest (ROIs) on the video feed corresponding to fixed areas within the experimental tank. Users may wish to make the ROIs visually identifiable to the subjects, as this may influence speed of acquisition. Frames from these ROIs were converted to HSV colour space and an HSV range was applied so as to successfully detect fish. The pixels of the resulting binarized frames from each ROI were summed continuously. Fish entry into the zones was recorded when summed ROI pixels exceeded a set threshold (the value of which was adjusted to each subject prior to the onset of the experiment). After a session was completed, a timestamped event list was generated as a *.CSV file. The first two columns indicate the event name as a string and a timestamp, while the third column contains a number which encodes the particular outcome of events that require extra disambiguating information.
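The closed-loop logic above (ITI, then start-zone entry triggering the stimulus, then a choice-zone entry triggering the outcome) can be sketched as a small state machine over binarised ROI frames. This is an illustrative Python analogue of what the Bonsai workflow implements, not the actual code; the threshold, frame sizes, and state names are invented, and the ITI timer is omitted:

```python
def roi_pixel_sum(binary_roi):
    # Sum of fish-coloured pixels in a binarised region of interest
    return sum(sum(row) for row in binary_roi)

class TrialController:
    # Minimal sketch of the closed-loop trial logic: after the ITI,
    # entering the start zone triggers the stimulus, and a subsequent
    # entry into a choice zone ends the trial with a recorded choice.
    def __init__(self, threshold):
        self.threshold = threshold
        self.state = "ITI"

    def update(self, start_roi, left_roi, right_roi):
        if self.state == "ITI":
            self.state = "WAIT_START"        # ITI elapsed (timing omitted)
        elif self.state == "WAIT_START":
            if roi_pixel_sum(start_roi) > self.threshold:
                self.state = "STIMULUS_ON"   # present stimuli on the screen
        elif self.state == "STIMULUS_ON":
            if roi_pixel_sum(left_roi) > self.threshold:
                self.state = "CHOSE_LEFT"    # trigger left-side contingency
            elif roi_pixel_sum(right_roi) > self.threshold:
                self.state = "CHOSE_RIGHT"   # trigger right-side contingency
        return self.state

# Toy 4 x 4 binarised frames: all-zero (empty) vs. all-one (fish present)
empty = [[0] * 4 for _ in range(4)]
fish = [[1] * 4 for _ in range(4)]

ctrl = TrialController(threshold=5)
ctrl.update(empty, empty, empty)         # ITI -> WAIT_START
ctrl.update(fish, empty, empty)          # fish in start zone -> STIMULUS_ON
state = ctrl.update(empty, fish, empty)  # fish in left zone -> "CHOSE_LEFT"
```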
Fish tracking
To track the fish centroid in real time, a colour thresholding method (Monteiro et al., 2021) was implemented using a custom Bonsai workflow. Video was recorded at approximately 33 fps. Frames were cropped down to include only the inside of the tank and converted to HSV colour space. An HSV threshold was applied to isolate the fish body from the overall white background given by the tank's bottom. Prior to the onset of the experiment, HSV value ranges were manually set for each fish so as to provide robust tracking in spite of individual differences in fish coloration. The resulting binarized region (pixels are either fish or no-fish) was smoothed and the coordinates of the animal's centroid were extracted.
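The final centroid-extraction step reduces to averaging the coordinates of the binarized 'fish' pixels. A minimal numpy sketch of that step (the smoothing and Bonsai plumbing are omitted; the function name is ours):

```python
import numpy as np

def centroid_from_mask(mask):
    """Centroid (x, y) of the binarized fish region, or None if the fish
    was not detected in this frame (no pixels survived the HSV threshold)."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())
```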
For each session, a CSV data tracking file is generated which contains the x and y coordinates of the fish centroid throughout the session in two respective columns. A third column records the luminance of a specified ROI: a central point at the top of the LCD screen on which stimuli are presented to subjects. Recording the luminance of this region provides information about the epoch of the task (Fig. 2c) and therefore allows users to associate fish position with particular epochs of the task, enabling behavioural analysis during trial epochs of interest. For example, a low luminance value indicates an ITI period, an intermediate luminance value indicates the epoch during which a new trial is available, and a high luminance value represents the post-choice epoch within a trial. Example occupancy data from a representative experimental session can be found in Fig. 2d.
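Reading the tracking file back and grouping positions by task epoch could look like the sketch below. The two luminance cut-offs are illustrative placeholders; in practice they would be read off the recorded luminance values for each setup:

```python
def classify_epoch(luminance, iti_cut=50.0, trial_cut=150.0):
    """Map recorded screen luminance to a task epoch:
    low = ITI, intermediate = trial available, high = post-choice."""
    if luminance < iti_cut:
        return "iti"
    if luminance < trial_cut:
        return "trial_available"
    return "post_choice"

def positions_by_epoch(rows):
    """Group (x, y) fish positions by task epoch.
    rows: iterable of (x, y, luminance) triples, matching the three
    columns of the tracking CSV."""
    out = {"iti": [], "trial_available": [], "post_choice": []}
    for x, y, lum in rows:
        out[classify_epoch(lum)].append((float(x), float(y)))
    return out
```

Grouping positions this way is what enables, e.g., the occupancy maps of Fig. 2d restricted to a single epoch of interest.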
At the session end, a raw video file of the entire session is also generated, allowing users to perform further tracking analysis offline.
Subjects
Five goldfish ranging in size between 7 and 10 cm (age and sex unknown) participated in the current study. Animals were obtained from a local, commercial supplier (Goldfish Bowl, Oxford, UK).
Fish were housed in groups of two or three in holding aquaria (60 x 35 x 31 cm; length x width x height) where they had access to a rock shelter, pebbles and artificial plants. They participated in experiments five times a week on weekdays and were fed a total of 24 sinking pellets a day (Fancy Goldfish Sinking Pellets, Fig. 1c). This diet was supplemented with spinach following experiments on the last day of the week and bloodworms the day after. Fish were kept under a 12:12 h light:dark cycle using fluorescent lights. Water was maintained at a minimum of 21°C using an internal heater and independent thermometer (pH: 8.2; ammonia: 0 ppm; nitrite: 0 ppm; nitrate: max. 30 ppm). Partial water changes were conducted at the end of each week and internal filters were cleaned every month. Each holding tank was aerated using an air pump.
For each daily session, each fish was transported in a plastic jug to its experimental tank and then back to its holding tank at the end of the session. At the start of each day, ~20 L of water from all holding tanks were transferred to the experimental tanks in order to keep the environmental conditions as constant as possible. The experimental tanks were cleaned at the end of each week. All animals had experimental experience with unrelated contingencies.
Pre-training
Pre-training consisted of three phases lasting a minimum of 18 days in total. Advancing through the phases depended on the individual subject's performance.
(i) Experimental tank acclimatisation
During a 10-min period, fish were allowed to explore and acclimate to the tank, which had previously been baited with 12 food pellets scattered throughout. This phase lasted for 1 day.
(ii) Choice zone training
The aim of this phase was for subjects to learn that swimming into either the left or right choice zone (outside of the ITI) was reinforced. After the ITI (drawn from a uniform distribution: min = 5 s; max = 10 s), during which the screen was black, a white noise rectangle would signal potential food availability in either the left or right choice zone of the tank. Reward was then contingent on fish entering the choice zone signalled by the white noise stimulus. For the first 5 days of this phase, there was one session of 12 trials per day, and in the following 5 days, one session of 16 trials per day. Following this, for 3 days fish completed two sessions of 12 trials each per day. Rewards were evenly split across both choice zones and allocated randomly. A session ended either when all trials were completed or after 30 min.
(iii) Start position training
The aim of this phase was for subjects to learn that a start position had to be entered before subsequent behaviour could be reinforced. After the ITI (drawn from a uniform distribution: min = 20 s; max = 40 s), trial availability was signalled by a grey screen.
During this period, fish were required to swim first to the back half of the experimental tank into a 'start zone' (i.e., > 30 cm away from the monitor and feeders) to trigger the onset of the white noise stimulus signalling food availability in either the left or right arm. As in the previous phase, reward was then contingent on fish entering the choice zone signalled by the white noise stimulus. This lasted for a minimum of 3 days. Following this, the start zone length was reduced by half (minimum 3 days), and finally to a 10 x 10 cm centred square (minimum 5 days) that was used in the main experiments (Fig. 2a). There were two sessions of 12 trials each per day. To advance through this phase, animals had to successfully consume the 12 food pellets within a 1-h limit in each daily session. Failure to do so would terminate the training session, with the fish returned to their holding tanks. The remaining food pellets would be made available by the end of the day in the holding tanks.
Acquisition phase
Each fish was presented with one daily session of 24 trials. A trial started with an ITI (drawn from a uniform distribution: min = 20 s; max = 40 s) during which the screen was black and behaviour had no consequences. The ITI offset was signalled by a grey screen (Fig. 2c), and from this moment on, entering the start position (Fig. 2a, d) would trigger the presentation of both visual stimuli (i.e., S+ and S−, see Stimuli above) at fixed left/right locations (Fig. 2b, c; counterbalanced across subjects). Fish made choices by entering one of the two choice zones (Fig. 2a). Choosing the S+ side resulted in the delivery of a food pellet after a 5-s delay, followed by the onset of an ITI. Conversely, choosing the S− side would start a new ITI after a 5-s delay (Fig. 2c). This experimental phase lasted for 5 days.
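The trial structure just described can be summarised as a small event-sequence sketch. Only the ITI distribution (uniform, 20-40 s) and the fixed 5-s outcome delay come from the protocol; the initiation and response latencies below are illustrative placeholders for the fish's own behaviour:

```python
import random

def simulate_trial(choice_side, s_plus_side, rng=None):
    """One acquisition trial: ITI (black screen) -> trial available
    (grey screen) -> fish enters start zone -> stimuli on -> fish enters
    a choice zone -> outcome after a fixed 5-s delay."""
    rng = rng or random.Random(0)
    t = rng.uniform(20.0, 40.0)          # ITI drawn from U(20 s, 40 s)
    events = [(t, "trial_available")]
    t += rng.uniform(1.0, 10.0)          # initiation time (fish-dependent)
    events.append((t, "stimuli_on"))
    t += rng.uniform(1.0, 10.0)          # response time (fish-dependent)
    events.append((t, "choice_" + choice_side))
    t += 5.0                             # fixed 5-s delay before the outcome
    rewarded = choice_side == s_plus_side
    events.append((t, "pellet" if rewarded else "no_reward"))
    return rewarded, events
```

The timestamped event list this produces mirrors the structure of the session CSV described in Behavioural task control.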
Reversal phase
This phase followed the same contingencies as acquisition, except that the rewarded side for each animal (and accompanying stimuli location) was swapped, remaining the same after that. This phase lasted for 7 days.
Experiment 2: Colour discrimination
In this experiment the rewarded side (and the S+/S− stimuli) was randomised on a trial-by-trial basis. To make correct choices, the fish had to follow the S+ and S− signals, rather than acquiring a side preference and reversing it. This experiment lasted for 25 days.
Data analysis
Real-time video tracking (see Behavioural task control) was used to control task contingencies and also generated a timestamped event list for each session. Preference and movement time (i.e., initiation and response time) data (Data file 1) were derived from these event lists and analysed using custom Matlab code (R2020a, MathWorks) available at https://github.com/PTMonteiro/GoFish_Ajuwon_etal_2022. Statistical analyses were conducted in RStudio (v1.2.5033; The R Project for Statistical Computing, 2018). For statistical analyses, choice proportion data were arcsine square-root transformed to normalise the residuals. One-sample, one-sided t tests against 50% were used to assess performance at the group level.
In both experiments, repeated measures ANOVAs were conducted to assess the effect of session (to detect learning effects). In Experiment 2, repeated measures ANOVAs were also conducted to assess the effect of session terciles on trial initiation times and choices (to detect within-session satiation or warming-up effects). A type-1 error rate of 0.05 was adopted for all statistical comparisons.
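The core of the preference analysis (arcsine square-root transform of choice proportions, then a one-sample t test against chance) is easy to reproduce. This numpy sketch mirrors the logic of the published Matlab/R scripts without reusing them:

```python
import numpy as np

def arcsine_sqrt(p):
    """Variance-stabilising transform for proportion data."""
    return np.arcsin(np.sqrt(np.asarray(p, dtype=float)))

def one_sample_t(x, mu):
    """One-sample t statistic for H0: mean(x) == mu (df = n - 1)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    return (x.mean() - mu) / (x.std(ddof=1) / np.sqrt(n))

def t_vs_chance(proportions):
    """t statistic of transformed choice proportions against chance (50%)."""
    return one_sample_t(arcsine_sqrt(proportions), arcsine_sqrt(0.5))
```

For the one-sided test used in the paper, the p-value would be the upper tail of the t distribution with n - 1 degrees of freedom (e.g., via `scipy.stats.t.sf`).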
Ethics statement
All experiments were conducted at the John Krebs Field Station and approved by the Department of Zoology Ethical Committee, University of Oxford (Ref. No. APA/1/5/ZOO/NASPA/Ajuwon/Goldfish), and were carried out in accordance with the current laws of the United Kingdom. Animals were cared for in accordance with the University of Oxford's "gold standard" animal care guidelines. All experimental methods were noninvasive. No food restriction was necessary as fish were fed highly palatable pellets during daily experimental sessions, supplemented by the end of the day in case fish did not eat the minimum daily requirements, and with raw spinach at the end of the last weekly experimental session. Their diet also included bloodworms on weekends. Maintenance and experimental protocols adhered to the Guidelines for the Use of Animals in Research from the Association for the Study of Animal Behaviour/Animal Behavior Society ("Guidelines for the Treatment of Animals in Behavioural Research and Teaching," 2006). On completion, the fish were reintroduced into holding tanks and eventually returned to the supplier.
Results and discussion
We illustrate the potential of GoFish for use in automated, closed-loop behavioural experiments with two discrimination learning experiments with goldfish.
In Experiment 1, fish (i) controlled the flow of trials by swimming to a start location, which triggered the onset of visual stimuli in two target sites, and (ii) expressed a choice by swimming to either target ROI, which triggered (or not, depending on choice) a food reward, followed by an inter-trial interval, at the end of which the 'start' ROI became receptive and a new trial could be started. Multiple-trial sessions took place without intervention of the experimenter. This protocol was used in an acquisition and a reversal phase. The experiment and its results (smooth, significant acquisition and reversal, Fig. 3a) are similar to those carried out by Kuroda et al. in zebrafish (Danio rerio; Kuroda et al., 2017). Repeated measures ANOVAs with session as the independent variable confirmed a significant increase in preference for the rewarded side in both the acquisition (F(4,16) = 3.02, P < 0.05) and reversal (F(6,24) = 13.84, P < 0.0001) phases. In the last session of each phase, subjects' preference for the rewarded side was significantly above 50% (acquisition phase: 88% ± 0.04 (mean ± s.e.m.), one-sample t(4) = 5.41, P < 0.01; reversal phase: 79% ± 0.04, one-sample t(4) = 5.97, P < 0.01).
In Experiment 2, reward location was randomised on a trial-by-trial basis so that the coloured visual stimuli and the spatial cues were no longer redundant; instead, only the former were reliable signals for reward. At the group level, fish readily learned to track the location of reward (Fig. 3b). A repeated measures ANOVA with session as the independent variable confirmed a significant increase in preference for the side displaying the S+ stimulus (F(24,96) = 2.01, P < 0.01). Data from the terminal session show that the average proportion of rewarded choices was 69% ± 0.09. This result was significantly above 50% (one-sample t(4) = 2.18, P < 0.05) even though one of the five fish failed to learn, as shown in Fig. 3b. Since the same subjects were used in both experiments, carry-over effects from Experiment 1 (where the reward site was constant across trials) may have influenced acquisition of the random alternation protocol in Experiment 2.
In addition to choice data, we used a real-time tracking pipeline for automated detection and recording of fish entry into the regions of interest (start zone, left choice zone, right choice zone).The tracking data gives direct access to relevant behavioural metrics, such as trial initiation time (i.e., the time animals took to be detected in the start zone following ITI offset; Fig. 2c -ii) and choice response times (i.e., the time from starting a trial to entering one of the choice zones; Fig. 2c -iii).
As a metric for learning and motivational changes, we compared initiation times between the first and last sessions of Experiment 2 (Fig. 3c), but found no significant differences (paired t(4) = -0.86, P = 0.44).
In addition to choice proportion, we measured choice response times. This variable can be extremely informative: in previous studies and protocols it has been found that response times on both single-option and choice trials can be at least as informative regarding preferences and choice mechanisms as choice proportions (e.g., Monteiro et al., 2020). Overall, we found no significant differences in response time between trials in which fish chose correctly or incorrectly (Fig. 3d; first session: paired t(4) = 1.61, P = 0.18; last session: paired t(4) = 0.29, P = 0.44).
Finally, we explored whether the proportion of correct choices varied within sessions by checking for trends across session terciles. Such effects can occur if there are 'warming-up' or satiation effects. Once again, we found no significant effects either early or late in training, as revealed by repeated measures ANOVAs with session tercile as the independent variable (Fig. 3e; first session: F(2,9) = 2.29, P = 0.16; last session: F(2,9) = 0.096, P = 0.91).
In summary, as a proof-of-concept demonstration for GoFish, a fully automated, closed-loop, and open-source experimental platform, we show that goldfish can reliably learn to (i) self-initiate trials, (ii) associate a fixed location with reward, (iii) reverse their preference when the rewarded location changes, and (iv) associate colours with reward contingencies. We also present temporal data because, although no significant effects were found in this sample study, they illustrate what can be measured and suggest strategies for analysis.
General discussion
GoFish is a new platform for dynamic, fully automated behavioural experiments that facilitates high-throughput, highly reproducible research in fish or other aquatic organisms.GoFish is open-source, inexpensive, highly adaptable, and should be supported by a growing community of Bonsai users.
Critical to GoFish's functionality is a novel reward pellet dispenser for which we provide design and assembly instructions, and Bonsai, the open-source programming language that is used to automate task contingencies and record data.
Using Bonsai in GoFish improves the user-friendliness of the system compared to proprietary experimental platforms for a number of reasons. Bonsai is free and compatible with a vast range of hardware devices, meaning that users can easily source components cheaply or use already existing ones. Critically, Bonsai is a visual programming language, meaning that users with little or no previous coding experience can quickly develop effective workflows for task control and data analysis. In order to adapt our workflow for different protocols, users will need to learn the basics of Bonsai, which can be done through the extensive documentation that exists on the Bonsai website, including example workflows and video tutorials.
As a generic experimental platform, GoFish provides advances and improvements over more common experimenter-controlled setups currently used in research on fish behaviour and cognition.These improvements have benefits in four domains: (i) methodology, (ii) animal welfare, (iii) reproducibility, and (iv) education.
Methodologically, the platform reduces the potential for unintended bias in experimenter-run tests, which are harder to run blindly. Also, having fully automated tasks reduces the chance of human error. Moreover, in combination with the automation, the low cost of GoFish (Table 1) opens the possibility of testing multiple animals in parallel. Such standardisation across setups and subjects increases efficiency and helps to reduce inter-individual variability, ultimately contributing to a general refinement of procedures. Methodological refinements will likely result in a reduction of the number of experimental animals used. Moreover, eliminating the experimenters' presence during data collection reduces noise, shadows and other uncontrolled environmental changes, thereby reducing subjects' stress levels and improving their welfare.
GoFish improves reproducibility, due to standardisation, and highlights the importance of low-cost, open-source tools for the advancement of scientific research. The fact that all components (including software) are open-source should afford further community-based system refinements over the long term, enabling easier automated extraction of a wider range of behavioural metrics, which should enrich the description of behaviour. It is worth noting that studies have raised concerns regarding the applicability of automated operant training methods for fish: studies with guppies have shown that automating procedures can lead to slower, unreliable, and task-dependent outcomes compared to manually implemented tasks (Gatto et al., 2021). We hope our system, due to its flexibility, will enable us and others to explore this matter further.
The experimental configuration that we present here - a Y-maze setup for two-alternative forced choice reversal learning and colour discrimination tasks - is used as a proof-of-concept; GoFish is highly adaptable and could be used without configural changes in multiple other experimental paradigms, e.g., quantity discrimination experiments (Potrich et al., 2022; Schluessel et al., 2022), behavioural timing (Talton et al., 1999), foraging (Aw et al., 2009; Newport et al., 2021), object recognition (Newport et al., 2016), and navigation (Burt de Perera & Holbrook, 2012). GoFish could also be used to implement experiments using a range of set-ups differing from that reported here (e.g., open field and maze configurations that could employ a greater number of screens and/or feeders than we have). It is also worth noting that GoFish could be used to present stimuli in other sensory modalities: instead of using computer screens for visual stimulus presentation, Bonsai affords a large pool of interaction possibilities (e.g., adding a range of sensors and/or actuators, and sound libraries for auditory stimulus generation (Lopes et al., 2021)). Moreover, within Bonsai's framework, our tracking routine, based on colour thresholding, could be extended to implement markerless (Kane et al., 2020 - https://github.com/bonsai-rx/deeplabcut) and multi-animal tracking (Guilbeault et al., 2021; Pereira et al., 2022 - https://github.com/bonsai-rx/sleap). Furthermore, our automatic pellet dispenser could easily be modified to use other regular-shaped rewards by laser cutting a different reward disk (Fig. 1c; see also Arce & Stevens, 2022; Oh et al., 2017). With the present dimensions, the maximum number of rewards between re-fills is 40, which may be limiting for some applications. However, this number depends on the size of individual rewards, which may vary depending on the particular application of the feeder.
Finally, we note that the low price and scalability of the system make it suitable for hands-on practical experiments and projects in educational contexts (e.g., undergraduate projects, summer courses). It could be used for teaching basic animal learning, experimental methods for behavioural research, and data processing (i.e., video tracking) and visualisation.
GoFish is a fully integrated, adaptable platform designed to facilitate the implementation of complex behavioural protocols in aquatic species.We hope that our platform accelerates the pace of refined behavioural research in a range of species that otherwise have been relatively underutilised in comparative and cognitive research programmes.
Fig. 1
Fig. 1 GoFish apparatus, pellet dispenser control and specifications, and video tracking pipeline. a. 3D view of the closed-loop operant chamber. The setup includes two custom-made pellet dispensers, a computer screen, a USB camera, and a light source. b. 3D depiction of the pellet dispenser. c.
Fig. 2
Fig. 2 Goldfish were trained to associate colours with food rewards. a. Top view of the experimental tank, highlighting the start position and the left and right choice areas. b. Stimulus colour allocation across subjects. c. Trial structure: every trial started with an ITI drawn from a uniform distribution (ITI duration, i: min = 20 s, max = 40 s), which was signalled by a black screen. After this, a trial became available: the screen turned grey, signalling that fish could move to the start position. The initiation time (ii) was the time between a trial becoming available and a fish entering the start location. As soon as the fish entered the start location, two stimuli would appear, one on each side
Fig. 3
Fig. 3 Goldfish learned a colour discrimination task with changing reward/cue/location requirements. a. Mean proportion of correct responses for Experiment 1 during acquisition and reversal of a spatial discrimination. b. Mean proportion of correct responses during the visual discrimination task in Experiment 2. c. Initiation times for the first and last sessions of Experiment 2, split into session terciles. d. Response times towards S+ (left) and S− (right) stimuli for the first
Table 1
Parts list. Details of the components used to build one closed-loop behavioural chamber for goldfish learning experiments. Many of the components can be swapped for items of similar functionality to suit particular needs. The pellet dispenser parts list and assembly instructions can be found in a dedicated repository (see Pellet dispenser section for details). Prices are from early 2021, rounded to the nearest pound. *UK educational price (VAT exempt)
Einstein-Gauss-Bonnet metrics: black holes, black strings and a staticity theorem
We find the general solution of the 6-dimensional Einstein-Gauss-Bonnet equations in a large class of space and time-dependent warped geometries. Several distinct families of solutions are found, some of which include black string metrics, space and time-dependent solutions and black holes with exotic horizons. Among these, some are shown to verify a Birkhoff type staticity theorem, although here, the usual assumption of maximal symmetry on the horizon is relaxed, allowing exotic horizon geometries. We provide explicit examples of such static exotic black holes, including ones whose horizon geometry is that of a Bergman space. We find that the situation is very different from higher-dimensional general relativity, where Einstein spaces are admissible black hole horizons and the associated black hole potential is not even affected. In Einstein-Gauss-Bonnet theory, on the contrary, the non-trivial Weyl tensor of such exotic horizons is exposed to the bulk dynamics through the higher order Gauss-Bonnet term, severely constraining the allowed horizon geometries and adding a novel charge-like parameter to the black hole potential. The latter is related to the Euler characteristic of the four-dimensional horizon and provides, in some cases, additional black hole horizons.
Introduction
Gravitational theories in more than four spacetime dimensions have gained a lot of attention over the past three decades. Although these ideas go back to the early days of General Relativity, with the introduction of Kaluza-Klein theories [1,2], it was the advent of String Theory that revived the notion of higher-dimensional spacetimes as not just an interesting theoretical possibility, but as a necessary ingredient of a unified picture of elementary interactions. Not surprisingly, the mere extension of General Relativity by considering extra spacelike dimensions can immediately lead to very non-trivial alterations of the theory. The inclusion of additional structure in the gravitational action, such as Gauss-Bonnet and Lovelock [3] terms, or brane-like components [4,5,6,7,8,9,10], increases even further the diversity of the models available and gives rise to a rich phenomenology, one which is actively investigated these days. The long-standing problems in gravity, such as gravitational collapse and the initial singularity, a number of open cosmological problems such as dark matter and the accelerated expansion of the universe, as well as the elusive quantum theory, have accumulated over the years into a general consensus which casts considerable doubt on General Relativity as the final word on gravity in a number of different regimes. This acts as a further motivation to give extra-dimensional theories serious consideration as possible routes to a more complete description of this fundamental interaction.
Gauss-Bonnet extensions of General Relativity (GR) have been motivated from a stringtheoretical point of view as a version of higher-dimensional gravity, since this sort of modification also appears in low energy effective actions in this context [11] (see also the points raised in [12]). The same gravitational term is also present in the case of Lovelock theory (for recent reviews see [13], [14]), which provides a unique and unambiguous classical extension of GR in arbitrary dimensions. The theory of such extended gravity theories has been extensively studied (see for example [15,16,17,18,19,20,21,22,23,24]), especially in conjunction with braneworlds (see for example [25,26,27,28,29,30]). Studies of the cosmology of these setups have also provided insight into the possible relevance of the Gauss-Bonnet gravitational term to 4-dimensional inflation and the accelerated cosmic expansion (see for example [31,32,33,34,35]).
It is well-known that Birkhoff's theorem, when considered in the context of higherdimensional GR (n > 4), remains valid and is in fact amplified in terms of its generality [36,37]. The original Birkhoff theorem states that, in four dimensions, any spherically symmetric solution to Einstein's equations in the vacuum is necessarily locally static, a very important result with many applications when considering the gravitational field of ordinary stars. It is worth mentioning that, in four dimensions, there also exists a form of reciprocal to Birkhoff's theorem. First, the horizon of an asymptotically flat stationary black hole must have the topology of a 2-sphere [38]. Moreover, under quite general assumptions, Israel's theorem states that every static black hole whose horizon has the topology of a 2-sphere is isometric to the Schwarzschild solution [39,40]. In other words, not only is its horizon topologically a 2-sphere but it also has the metric of the round 2-sphere. In higher dimensions, these well established four-dimensional uniqueness results just fail : on one hand, because the topology of the horizon is less restricted [41,42,43]; on the other hand, because, even if one insists on having a particular horizon topology, the actual geometry on this horizon is much less constrained. This leaves room for Birkhoff's theorem to remain valid not only for a constant curvature horizon, but also for horizons which belong to the more general class of Einstein spaces. Substituting the usual (n − 2)-sphere of the horizon geometry (in the case of an n-dimensional spacetime) with an (n − 2)-dimensional Einstein manifold will not alter the black hole potential and the previous solution remains valid and static. Spherical symmetry is no longer a prerequisite for staticity. The structure of the space transverse to the horizon is in this way not affected by the details of the internal geometry, as long as the latter continues to be an Einstein space. 
Such exotic black holes are accompanied by classical instabilities [36,37] similar to those of the black string [44]. In fact, black string metrics can be Wick rotated to a subclass of metrics with exotic horizons; the exotic horizon is nothing but the Euclidean version of 4-dimensional Schwarzschild. Therefore one could entertain the possibility that the additional unphysical exotic black holes are just an artifact of not considering the full classical gravity theory in higher dimensions. In fact, it was shown by Lovelock in the early 70's [3] that in higher than 4 dimensions specific higher order gravity terms have to be added to the usual Einstein-Hilbert action in order to preserve the unique properties of general relativity in 4 dimensions (for a discussion and the geometric properties see [30]). These higher order gravity terms, which include the Ricci and Gauss-Bonnet scalars, are dimensionally extended Euler-Poincaré densities of 2-dimensional, 4-dimensional, and higher manifolds.
In fact the situation is very different when higher order curvature terms such as the Gauss-Bonnet term, are introduced. As was recently shown in [45], the presence of the Gauss-Bonnet term can be quite restrictive for the geometry of the horizon of a black hole, compared to ordinary GR results (see also [46] ). Intuitively, this can be understood as follows: in GR, Einstein's equations only involve the Ricci tensor, whereas the Einstein-Gauss-Bonnet field equations expose the entire Riemann curvature tensor to the dynamics. In [45], the authors considered a static spacetime with generic Einstein space as an n − 2 dimensional subspace and then analysed the field equations. They found that the rank two tensor C acde C bcde , where C abcd is the Weyl tensor, is representative of the new solutions and only horizons satisfying the appropriate conditions on C acde C bcde are allowed.
In this paper, we investigate an extension of Birkhoff's theorem to the six-dimensional Einstein-Gauss-Bonnet theory, allowing arbitrary 4-dimensional horizon geometries and, of course, time dependence in the metric. In particular, we show that Birkhoff's theorem holds quite generically, though the theory is far more complex. Although the allowed horizon geometries are far more restricted than in dimensionally extended GR, in agreement with [45], we shall see that they need not be maximally symmetric. Namely, it will suffice that they be Einstein spaces and that the invariant built by squaring their Weyl tensor be a constant.
We would like to stress that the 6-dimensional case is very special: in 5 dimensions, the Weyl tensor is identically zero, whereas in more than 6 dimensions, Lovelock theory dictates the presence of a higher order gravity term in the action. Furthermore, in 6 dimensions the 4-dimensional horizon geometry allows for a non trivial 4-dimensional Gauss-Bonnet term which when integrated over the horizon surface gives a topological charge, the 4-dimensional Euler-Poincaré characteristic.
The paper is organized as follows. We first derive the general Einstein-Gauss-Bonnet field equations for the class of metrics considered throughout the paper. We then systematically solve these equations. Just as in the Lovelock extension of Birkhoff's theorem [49], we encounter two distinct classes of solutions, plus a third particular one (see also [50] for the classification of the static metrics). The first of them comes along with a fine-tuning of the parameters of the theory, which corresponds in our case to the Born-Infeld limit, and leads to an underdetermined system of equations. The solutions of this branch are not necessarily static. From the second branch we obtain a set of static solutions including black hole solutions, where the horizon is an Einstein space of constant Ricci scalar and constant C acde C bcde , and generalizations of the Nariai solution. We also encounter a branch of solutions obeying the staticity theorem but with non-Einstein space horizons. The third class of solutions is unwarped, and contains both fine-tuned and non-fine-tuned solutions, some of them static, with or without Einstein horizon. We then present a number of explicit examples of such horizon manifolds, for instance products of 2-spheres and the Bergman metric, as well as horizons with a possible relevance for codimension two braneworlds.
Action and Conventions
We begin by considering the Einstein-Gauss-Bonnet action with a cosmological constant in six dimensions, where M_(6) is the fundamental mass scale in six-dimensional spacetime, Ĝ is the Gauss-Bonnet density and Λ the cosmological constant. Using these conventions, we can vary the action with respect to the metric to derive the field equations (2.3), where G_AB stands for the Einstein tensor. Uppercase indices refer to six-dimensional coordinates. We have also introduced the Lanczos or Gauss-Bonnet tensor H_AB. Interestingly, the latter can also be written using the rank-four tensor P_ABCD defined in (2.6). The tensor P_ABCD has several interesting properties: it is divergence free, since the Bianchi identities of the curvature tensor give simply ∇^D P_ABCD = 0. It also has the same index symmetries as the Riemann curvature tensor. Tracing two of its indices yields P^B_ACB = G_AC, which in turn yields the divergence-free property of the Einstein tensor. In rather loose terms, one can say that P is the curvature tensor associated with the Einstein tensor, just as the Riemann tensor is associated with the Ricci tensor. In four dimensions, this statement is far more precise, since P_ABCD then coincides with the double dual (i.e. dualized on each pair of indices) of the Riemann tensor, ⋆R⋆, built with the rank-4 Levi-Civita tensor ε_ABCD. In 4 dimensions we have H_AB = 0, thus picking up a Lovelock identity (for extensions see [51]) which will be useful to us later on.
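For reference, the Gauss-Bonnet density and the Lanczos tensor take the standard Lovelock forms; the overall normalization is fixed by the action conventions above, which we do not attempt to reconstruct here:

```latex
\hat{G} = R^2 - 4\,R_{AB}R^{AB} + R_{ABCD}R^{ABCD},
\qquad
H_{AB} = 2\!\left( R\,R_{AB} - 2\,R_{AC}R^{C}{}_{B}
       - 2\,R^{CD}R_{ACBD} + R_{A}{}^{CDE}R_{BCDE} \right)
       - \tfrac{1}{2}\, g_{AB}\,\hat{G}.
```

In four dimensions H_AB vanishes identically (the Gauss-Bonnet term is topological), which is precisely the Lovelock identity invoked in the text.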
In order to proceed with the solution of the equations, we now choose an appropriate symmetry for the metric. We distinguish between the transverse 2-space, which also carries the timelike coordinate t, and the internal 4-space, which will represent the possible horizon line element of the six-dimensional black hole. The metric of the internal space h_µν is an arbitrary metric of the internal coordinates x^µ, µ = 0, 1, 2, 3, but we impose that the internal and transverse spaces be orthogonal to each other. This is an additional hypothesis we have to make, since h_µν is not a homogeneous metric and our six-dimensional space is not an Einstein space (in GR such an orthogonal foliation is possible for an Einstein metric). For lack of a better name, we will call this a warped metric ansatz. Guided by the analogous procedure used in analyzing Birkhoff's theorem, we write the metric as (2.8). Lowercase Greek indices correspond to internal coordinates of the 4-space. We then switch the coordinates of the transverse space to light-cone coordinates, in terms of which the metric reads (2.10). Using the above prescription, we can now write down the equations of motion: the uu and vv equations yield (2.11) and (2.12), the off-diagonal (uv) equation reads (2.13), and the µν equations can be brought into the form (2.14). In this way, we have decomposed the gravitational equations into expressions depending on either transverse-space quantities or internal coordinates. The integrability conditions [52] are unchanged compared to the original version of the theorem [48], and this will permit us to obtain the staticity conditions. Furthermore, the internal geometry of the horizon only enters these equations through expressions involving the four-dimensional Gauss-Bonnet scalar density and the Ricci tensor and scalar of the internal metric h_µν. Note the absence of H^(4)_µν terms, due to the fact that the internal space is 4-dimensional (the Lanczos tensor vanishes identically in four dimensions).
Note also that the terms proportional to the Gauss-Bonnet coupling constant are the ones responsible for the appearance of R^(4) and R^(4)_µν; in this way, the Gauss-Bonnet term exposes the internal geometry to the transverse-space dynamics in a non-trivial way, something which would obviously not occur in ordinary General Relativity. As we will see, this decomposition imposes severe constraints on the allowed form of the horizon geometry in order to obtain a spacetime solution.
Exact Solutions and Staticity
The uu and vv equations (2.11), (2.12) can lead to three different classes of solutions, depending on whether the first or the second factor is zero (an additional class will emerge for constant B). The corresponding solutions have distinct characteristics and are thus treated separately in what follows. Classes I and II are both warped solutions, whereas for Class III we have B = const.
Class-I
This class corresponds to solutions which can have, in general, time dependence and hence for which a Birkhoff-type theorem does not hold. As we shall soon see, all of them imply 5 + 12αΛ = 0. The latter corresponds to the so-called Born-Infeld limit, an even-dimensional counterpart of the well-known odd-dimensional Chern-Simons limit in which the Lovelock action can be written as a Chern-Simons action for some (a)dS connection, see e.g. [53]. In the Born-Infeld limit, the Lovelock action can be written as a Born-Infeld action for some curvature 2-form, hence its name. For the class of spacetime metrics under consideration here, it typically leads to an underdetermined set of equations, and the unconstrained components of the metric subsequently allow for a possible time dependence. This is reminiscent of Class-I Lovelock solutions with spherical, hyperbolic or planar symmetry [48,49] and is presumably related to perturbative strong coupling problems, as in the case of Chern-Simons gravity [54].
Setting the second factor of the uu and vv equations (2.11), (2.12) equal to zero leads to the common equation (3.1), from which we can solve for the function ν(u, v) in terms of B(u, v), according to (3.2).
Note that this equation immediately constrains the Ricci scalar R^(4) of the internal space to be a constant. We are thus required to consider only horizon geometries of constant scalar curvature as candidate solutions. Substituting the above expression for ν(u, v) into (2.13) yields the two additional constraints (3.3). The second of these tells us that the Gauss-Bonnet scalar Ĝ^(4) is also constant. Taking the trace of (2.14) with h^µν and performing the same substitution, we end up with a further scalar equation. Finally, we can rewrite the complete equation (2.14) in terms of the trace as (3.5). Given the above-mentioned constraints, the first term vanishes because it is proportional to E. The second term can vanish in one of two ways, giving us two distinct cases of Class-I solutions, both verifying (3.2) and (3.3). We can either have R^(4)_µν = (R^(4)/4) h_µν, which is the definition of a four-dimensional Einstein space 2 . Coupled with the condition Ĝ^(4) = R^(4)2/6, this leads to C_µνρσ C^µνρσ = 0, i.e. the square of the Weyl tensor of the internal space must be zero. We then have a constant curvature space 3 . Since (2.14) is in this way automatically satisfied, there is no dynamical equation defining the function B(u, v), and the system of field equations becomes underdetermined. This is a typical feature of the Class-I solutions discussed in [48]. If, on the contrary, we demand the second factor in the second term of equation (3.5) to be zero, the requirement of a four-dimensional Einstein space on the horizon of the black hole can be relaxed. Instead, we get a third-order partial differential equation for B(u, v), (3.8). This equation can in principle be solved for B(u, v), again for an internal space of constant Ricci scalar and given the constraints (3.3). Note that the horizon is then not necessarily an Einstein space; instead we have a purely geometrical 4-dimensional constraint. We now summarize the results for the Class-I solutions.
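The step from the Einstein condition to a vanishing Weyl square can be checked symbolically. The sketch below (our own consistency check, using only the standard 4D Weyl decomposition and the Gauss-Bonnet density) shows that for any Einstein 4-space Ĝ^(4) = C² + R^(4)2/6, so the Class-I condition Ĝ^(4) = R^(4)2/6 indeed forces C² = 0:

```python
import sympy as sp

# 4D curvature invariants (Riem2 = R_abcd R^abcd, Ric2 = R_ab R^ab), with the
# standard identities:
#   Weyl^2 = Riem2 - 2*Ric2 + R^2/3   (4D Weyl decomposition)
#   GB     = R^2  - 4*Ric2 + Riem2    (Gauss-Bonnet density)
R, Riem2 = sp.symbols('R Riem2')

# Einstein space: R_munu = (R/4) h_munu  =>  Ric2 = R^2/4
Ric2 = R**2 / 4
Weyl2 = Riem2 - 2*Ric2 + R**2/3
GB = R**2 - 4*Ric2 + Riem2

# For any Einstein 4-space: GB = Weyl^2 + R^2/6, so demanding GB = R^2/6
# forces Weyl^2 = 0, i.e. a constant-curvature horizon (Class-Ia).
assert sp.simplify(GB - Weyl2 - R**2/6) == 0
print(sp.simplify(GB - Weyl2))  # -> R**2/6
```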
We distinguish two subclasses, both requiring the fine-tuning condition 5 + 12αΛ = 0 (the six-dimensional version of the Born-Infeld gravity condition) and a constant Ricci scalar R^(4):
• Class-Ia: an underdetermined system for the transverse geometry (free function B and (3.2)), with an internal space which is an Einstein space of zero Weyl-squared curvature, that is, a constant curvature space;
• Class-Ib: a completely determined system for the transverse dimensions, (3.2) and (3.8), with an internal geometry obeying (3.3) (non-zero Weyl curvature).
The former subclass is certainly incompatible with Birkhoff's theorem, as demonstrated in [48], whereas for the latter we could not find the general solution of (3.8).
Class-II
Class-II solutions are obtained by demanding that, instead of (3.1), the first factor of the uu and vv equations vanishes. These integrability conditions are the same as in the case of ordinary GR. We will again assume that B is not constant. Equation (3.2) then implies a factorized form in terms of some functions f(u) and g(v), which, in turn, yields B = B(U + V), with U = U(u) and V = V(v). In this way, under the change of coordinates (3.11), the function B becomes independent of time and Birkhoff's theorem holds. Additionally, rewriting (3.10), ν(u, v) is now determined, where primes denote differentiation with respect to the single argument of each function. Under (3.11), we get e^2ν = ∂_z B. The uu and vv equations thus determine the staticity of the metric, as well as the relation between B and ν. We can then determine B(u, v), or equivalently the form of the black hole potential, from the uv equation. Taking advantage of the already deduced staticity, we can express this as an ordinary differential equation. Inspection of this expression leads to the conclusion that, a priori, only solutions with constant Ricci scalar and Gauss-Bonnet density for the internal space are permissible. This is not always the case, however; special cases require caution. Upon integration, this leads to a quadratic equation for B′. We can then solve for B′ and determine the black hole potential V using the change of variables r = B^1/4. The corresponding potential turns out to be (3.15), where M is an integration constant independent of x, related to the mass of the six-dimensional black hole 4 .
We now turn to the µν equations (2.14). Taking the trace with respect to the internal metric leads to an expression which can be rewritten as B′ E_uv = 0; this is identically satisfied as a Bianchi identity.
The µν equation then gives (3.16). Therefore, we have two distinct cases, depending on which of the two factors of (3.16) cancels. In the first case, the horizon has to be an Einstein space of constant scalar curvature, defined by R^(4)_µν = 3κ h_µν. This is similar to ordinary GR. However, given that Ĝ^(4) is also constant, we have in addition C_αβγµ C^αβγµ = 4Θ, where Θ is a positive constant. This is the solution obtained by [45]. Now, using the properties of the P_µναβ tensor and (2.7), we immediately get C_αβγµ C^αβγ_ν = Θ δ^µ_ν. This is a supplementary condition imposed on top of the usual Einstein-space condition for the horizon. The two are similar in that we ask for (part of) a curvature tensor to be proportional to the metric. The main difference is that the curvature tensor in question here is the Weyl tensor and, given its symmetries, it is actually its square which is proportional to the metric. Clearly, horizons with Θ ≠ 0 will not be homogeneous spaces, and not even asymptotically so in the non-compact cases. We will see in a forthcoming section that they can be related to squashed sphere geometries. Another interesting point is that the Gauss-Bonnet scalar, whose integral over the horizon gives the Euler characteristic of the horizon, has to be constant. In other words, the Euler-Poincaré characteristic of the horizon is in this case simply proportional to the horizon volume. In this sense, Θ can be thought of as a topological charge. The Gauss-Bonnet scalar of the internal space then reads Ĝ^(4) = 4Θ + 24κ², and the potential is [45] V(r) = κ + (r²/12α) [1 ± √(1 + 24α(Λ/10 + M/r⁵ − αΘ/r⁴))]. For Θ = 0, we obtain the well-known black holes first discussed by Boulware and Deser (see [15,55]).
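The relation Ĝ^(4) = 4Θ + 24κ² follows from the Einstein condition together with the 4D Weyl decomposition; a short symbolic check (our own, using only these standard identities) confirms it:

```python
import sympy as sp

kappa, Theta = sp.symbols('kappa Theta')

# Einstein horizon: R_munu = 3*kappa*h_munu  =>  R = 12*kappa, R_ab R^ab = 36*kappa^2
R = 12*kappa
Ric2 = 36*kappa**2
# Invert the 4D Weyl decomposition, with C_abcd C^abcd = 4*Theta:
Riem2 = 4*Theta + 2*Ric2 - R**2/3
# Gauss-Bonnet density of the horizon:
GB4 = R**2 - 4*Ric2 + Riem2
assert sp.expand(GB4 - (4*Theta + 24*kappa**2)) == 0
print(sp.expand(GB4))  # equals 4*Theta + 24*kappa**2
```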
Alternatively, (3.16) tells us that we can have a horizon which is potentially not Einstein, iff B satisfies (3.19). Note that in this case we have two equations for B, and the system is overdetermined. Integrating (3.19), we obtain a potential involving two integration constants µ and ρ. Comparing with (3.15), we can identify these constants; the potential (3.15) then reduces to (3.23). This corresponds to a massless solution resembling AdS or dS space, with a curvature radius depending on both the internal geometry and the Gauss-Bonnet coupling. The solution is defined only for R^(4)2 − 6Ĝ^(4) > 0. Equation (3.22) is now a geometric equation constraining the 4-dimensional horizon geometry: indeed, R^(4) and Ĝ^(4) no longer have to be constant individually. In section 5.3.1, by Wick rotating these solutions to Lorentzian internal sections, we shall construct Born-Infeld black string solutions. Thus, Class-II contains the following solutions:
• Class-IIa: the solution is locally static (3.14), and the horizon is an Einstein space with Θ ≥ 0.
• Class-IIb: the solution is again locally static, with potential given by (3.23), but the horizon is constrained by (3.22) and the BI condition is imposed.
Thus, both subclasses of Class-II obey a local staticity theorem.
Class-III
If R^(4) = −β²/α, we have the fine-tuning relation 1 + 4Λα = 0. Equation (3.24) then implies that Ĝ^(4) = β⁴/(2α²), and (3.25) can be rewritten in a factorized form, which implies that either h^(4)_µν is Einstein and ν is not determined (and thus possibly time-dependent), or h^(4)_µν is not necessarily Einstein and ν obeys the Liouville equation (3.28). The latter can be solved exactly and, after a change of coordinates of the form (3.11), brought to a static form. Wick rotating the solutions obtained in the former case allows us to construct axially symmetric black-string-type solutions, provided we impose a certain amount of symmetry on the internal manifold. Some static examples of this subclass of solutions have already been studied (see [22], [21] and references therein). We will briefly study an example in section 5.3.2. It is worth noting that, once we allow for less symmetry, the scalar equation (3.24) does not suffice to determine the full horizon metric.
• Class-IIIc : 1 + 4αΛ = 0, the transverse space is of constant curvature, and the horizon satisfies (3.24) and does not have to be Einstein.
Birkhoff's theorem holds for two of the subclasses, Class-IIIb and Class-IIIc.
A staticity theorem
For generic Class-II and certain Class-III solutions, we have the following local staticity theorem.
Theorem. Let (M, g) be a six-dimensional pseudo-Riemannian spacetime whose metric g satisfies the Gauss-Bonnet equations of motion (2.3) and whose manifold M admits a foliation into two-dimensional submanifolds Σ_(x¹,...,x⁴) and a foliation into four-dimensional submanifolds H^(4)_(t₁,t₂) such that:
• the tangent bundles of the leaves T Σ and T H^(4) are orthogonal to each other;
• for all (t₁, t₂), the four-dimensional induced metric h_(t₁,t₂) is conformal to a given four-dimensional metric h^(4), with conformal factor depending only on (t₁, t₂).
If, in addition, either
i) 1 + 4Λα ≠ 0 and 5 + 12αΛ ≠ 0, or
ii) 1 + 4Λα = 0 and h^(4) is not an Einstein space, or
iii) 5 + 12αΛ = 0, h^(4) is not an Einstein space and R^(4) is not constant,
then M admits a locally timelike Killing vector. Furthermore, in case i), h^(4) is an Einstein metric with Ĝ^(4) = constant, whereas in cases ii) and iii), h^(4) is not Einstein and solves respectively (3.24) and (3.22).
This is a restatement of the properties of the generic Class-II and of some Class-III solutions studied above, as these are the ones leading to necessarily static solutions. Note that the theorem does not restrict the horizon geometry to be spherically symmetric; we can thus have anisotropic horizons as admissible static solutions. It should also be stressed that this is qualitatively different from the corresponding theorem in five dimensions, since there the black hole horizon is three-dimensional and its Weyl tensor is automatically zero. D = 6 is the first case where the Weyl tensor C_αβγδ of the internal space plays a non-trivial role and can impose constraints. In dimensions D > 6 we expect a similar situation, although one would normally also have to consider the corresponding higher Lovelock densities in such a setup. The theorem of course makes no claims about the stability of such configurations. As we have seen, allowed horizons are four-dimensional Einstein spaces of Euclidean signature, with an added constraint on their Weyl tensor. Note that, when Θ is non-zero, in the non-compact cases these spaces are not asymptotically flat, for otherwise they would satisfy C_αβγδ → 0 at four-dimensional infinity.
Horizon Structure
We now focus on static Class-II solutions and elaborate on the form of the corresponding potential V(r), (3.18), which determines the occurrence of event horizons. In particular, we clarify the role of Θ in this case. There exist two branches of solutions, depending on the sign choice in (3.18): the Einstein branch (−), which tends to the Einstein solutions in the limit α → 0, and the Gauss-Bonnet branch (+), which has been argued to be unstable [54]. Because of the stability problems associated with the latter, we restrict ourselves in what follows to the Einstein branch, whose potential is given by V(r) = κ + (r²/12α) [1 − √(1 + 24α(Λ/10 + M/r⁵ − αΘ/r⁴))]. In the following, we take M to be positive, as required for a correct definition of mass in the usual Θ = 0 situation [18]. We should stress that once Θ ≠ 0 the proper definition of mass is no longer clear, as the constant Θ changes the spacetime asymptotics. By continuity we take M > 0, entrusting further study of the meaning of these charges to later work. In the BI limit, 5 + 12αΛ = 0, the only contributions under the square root come from the Θ and mass terms. At large r, the Θ term (with Θ > 0) becomes dominant, developing a branch-cut-type singularity. Solutions with 1 + 12αΛ/5 = 0 and Θ ≠ 0 are therefore singular. The BI case thus falls into the second family of solutions verifying (2.7), which has to be treated separately.
From the above observation in the BI limit, we already see that the Θ > 0 term increases the likelihood of a branch singularity near the BI limit. We assume for the rest of this section that 5 + 12αΛ > 0. A branch cut occurs at r = r_bc whenever Q(r_bc) = (1 + 12αΛ/5) r_bc⁵ − 24Θα² r_bc + 24αM = 0 has a positive root; this happens when Q(r₀) ≤ 0, where r₀ > 0 is the minimum of Q(r), which is the constraint (4.4). The constraint (4.4) is the generalization of the M = 0 result, the inequality on M being trivially satisfied in that case. Generically, the effect of the M term is to decrease r_bc, even if its exact expression cannot be computed analytically in the general case.
To proceed, let us turn to the horizon analysis, first considering the background solution with Θ and M switched off (or, equivalently, with r large enough to make the Θ and M terms negligible), which is defined iff κΛ > 0 and αΛ > −5/12 (4.6). The solution behaves exactly like 4-dimensional AdS or dS space in GR, with an effective cosmological constant (4.8). Now, as for the existence of event horizons, following [14] and [19], r = r_h is a horizon iff
• r = r_h is a positive root of P(r) = −(Λ/10) r⁵ + κ r³ + α (Θ + 6κ²) r − M, and
• r_h² > −12ακ.
Whenever Θ = 0, the black holes behave similarly (modulo the branch singularity, which puts some constraints on the smallness of the black hole mass) to their General Relativity counterparts. Typically, Λ < 0 permits planar and hyperbolic black holes, Λ > 0 gives an event and a cosmological horizon, and Λ = 0 a unique event horizon. The key question we want to answer here is: does Θ ≠ 0 introduce novel horizons to the above black holes, keeping in mind that Θ > 0? To answer this question, we momentarily switch off the "mass" parameter M, and we note that if α < 0, the resulting black hole potential can be identified with that (tilded quantities) of the five-dimensional Boulware-Deser solution [15] (see also [55]) upon suitable identifications of the parameters. Thus, we expect horizons to form even when M is set to zero. In that case, P(r) is a biquadratic polynomial (in r², after factoring out r) and its zeros P(r_h) = 0, r_h > 0, are easily found (4.10). The resulting reality condition is always satisfied if αΛ > 0, whereas when αΛ < 0 we need Θ < Θ_max. These horizons, when defined, are always greater than the corresponding branch cut position r_bc (4.3). When ακ < 0, the requirement r_h² > −12ακ yields Θ > Θ₀, with Θ₀ = 6κ² (1 + 12αΛ/5). (4.11) The occurrence of horizons due to the Θ-term is summarized in Table 1, for the various signs of the cosmological constant and vanishing mass term.
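The horizon polynomial P(r) can be cross-checked against the Einstein-branch potential. The explicit form of the radicand below is our own reading of (3.18), reconstructed to be consistent with the quoted Q(r) and P(r); the sketch verifies that V(r_h) = 0 reproduces P(r_h) = 0, and that the threshold Θ₀ of (4.11) follows from evaluating P at r² = −12ακ:

```python
import sympy as sp

r, kappa, alpha, Lam, M, Theta, x = sp.symbols('r kappa alpha Lambda M Theta x')

# Horizon polynomial quoted in the text:
P = -Lam/10*r**5 + kappa*r**3 + alpha*(Theta + 6*kappa**2)*r - M

# Assumed Einstein-branch potential: V(r) = kappa + r^2/(12 alpha)*(1 - sqrt(radicand))
radicand = 1 + 24*alpha*(Lam/10 + M/r**5 - alpha*Theta/r**4)

# V(r_h) = 0  <=>  (1 + 12 alpha kappa / r^2)^2 = radicand; clearing
# denominators must reproduce P(r) = 0:
lhs = ((1 + 12*alpha*kappa/r**2)**2 - radicand) * r**5 / (24*alpha)
assert sp.simplify(lhs - P) == 0

# With M = 0, P(r)/r is quadratic in x = r^2; evaluating it at x = -12 alpha kappa
# factors as alpha*(Theta - Theta_0), reproducing the threshold of (4.11):
Pred = -Lam/10*x**2 + kappa*x + alpha*(Theta + 6*kappa**2)
Theta0 = 6*kappa**2*(1 + sp.Rational(12, 5)*alpha*Lam)
assert sp.expand(Pred.subs(x, -12*alpha*kappa) - alpha*(Theta - Theta0)) == 0
print("potential/horizon-polynomial consistency checks passed")
```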
In short, Θ has no effect on the appearance of horizons if ακ > 0, whereas it generates a new event horizon if ακ < 0: for a semi-infinite range of values of Θ (bounded from below) when αΛ ≥ 0, or for a finite range when αΛ < 0. It is quite interesting that there is a natural separation between these two cases, specifying clearly the effect of Θ, depending on the sign of ακ.
Let us now examine the special case of planar horizons (κ = 0):
• Usually, if Λ = 0, no planar horizons are allowed. Here, there is one at r_h = M/(αΘ), provided αM > 0.
If M is not taken to be zero, it is difficult to evaluate quantitatively the impact of Θ, and, apparently, little interesting information can be gained without resorting to a numerical study.
Horizon Geometries in the Static Case
After providing the general discussion of the theorem and the allowed static solutions, we proceed to give some concrete examples. As already mentioned, the internal geometry on the horizon cannot be asymptotically flat, due to the non-vanishing Weyl tensor. Candidate solutions are consequently not going to approximate flat space at infinity, and we are led to consider geometries of this sort. Two simple examples of such configurations are the product geometry S² × S² and a variation of the Taub-NUT space known as the Bergman space. Finally, we consider solutions that may be of interest for codimension-two setups.
S 2 × S 2
This four-dimensional space is the product of two 2-spheres, with Euclidean signature and metric ds² = ρ₁² (dθ₁² + sin²θ₁ dφ₁²) + ρ₂² (dθ₂² + sin²θ₂ dφ₂²), (5.1) where the (dimensionless) radii ρ₁ and ρ₂ of the spheres are constant. The entire six-dimensional space has the warped form (5.2), with the potential (5.3). In order for (5.2) to be a solution of the Gauss-Bonnet equations of motion, we are led to the condition of equal sphere radii, ρ₁ = ρ₂ = ρ. In that case, we have κ = 1/(3ρ²) and Θ = 4/(3ρ⁴). Table 1 clearly shows that a new horizon is created by the Θ-term only when ακ < 0, that is, α < 0 in our case. If Λ = 0 or Λ < 0, the constraint Θ₀ < Θ is trivially satisfied for Λ = 0 and yields a minimum value for a negative cosmological constant, Λ_min = 5/(12α) < 0. On the other hand, if Λ > 0, the constraint Θ < Θ_max (necessary to have any horizon at all) gives this time a maximum value for Λ, Λ_max = −5/(36α) > 0, which is a more stringent constraint than the one imposed to have a properly defined background, 5 + 12αΛ > 0.
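The curvature invariants of the equal-radius S² × S² horizon follow from the product structure (invariants add over the factors); the evaluation below is our own consistency check, confirming that this horizon is Einstein with non-zero Θ (hence not conformally flat) and satisfies Ĝ^(4) = 4Θ + 24κ²:

```python
import sympy as sp

rho = sp.symbols('rho', positive=True)

# For a single round 2-sphere of radius rho:
#   R = 2/rho^2, R_ab R^ab = 2/rho^4, R_abcd R^abcd = 4/rho^4  (in 2D, Riem2 = R^2)
# Invariants of the product are the sums over the two factors:
R = 2 * (2/rho**2)
Ric2 = 2 * (2/rho**4)
Riem2 = 2 * (4/rho**4)

Weyl2 = Riem2 - 2*Ric2 + R**2/3    # 4D Weyl decomposition
Theta = Weyl2 / 4                  # C_abcd C^abcd = 4*Theta
kappa = R / 12                     # Einstein condition R_munu = 3*kappa*h_munu
GB4 = R**2 - 4*Ric2 + Riem2        # Gauss-Bonnet density

assert sp.simplify(kappa - 1/(3*rho**2)) == 0
assert sp.simplify(Theta - sp.Rational(4, 3)/rho**4) == 0
assert sp.simplify(GB4 - (4*Theta + 24*kappa**2)) == 0   # Class-IIa relation
print(kappa, Theta)
```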
Bergman Space
The Bergman space is a homogeneous but non-isotropic space which can be derived as a special case of the anti-de Sitter Taub-NUT vacuum [56,57]. The ordinary Taub-NUT metric 5 can be written as ds² = W(ρ) (dτ + 2n cos θ dφ)² + dρ²/W(ρ) + (ρ² − n²)(dθ² + sin²θ dφ²), (5.6) with the potential W(ρ) = (ρ − n)/(ρ + n). The Euclidean time coordinate has period 8πn. Here, n is what is usually called the "nut" parameter; it has dimensions of mass⁻¹. Mathematically, we define a nut as a zero-dimensional (point-like) set where the Killing vector generating the U(1) Euclidean time isometry 6 vanishes. The nut is thus a fixed point of the Euclidean time isometry. For Taub-NUT, the Killing vector generating the isometry is K = ∂/∂τ. A fixed point occurs where K = 0 or, equivalently, |K|² = g_µν K^µ K^ν = W(ρ) = 0. Zeros of the Taub-NUT potential are then identified as positions of nuts. For the given potential, this occurs at ρ = n. We see that, at this position, the factor ρ² − n² in front of the 2-sphere part of the metric also vanishes, so the fixed-point set is really zero-dimensional, as expected from the definition of a nut. This should be contrasted with the related concept of a "bolt", a two-dimensional fixed-point set. We encounter such sets if the potential vanishes at some position different from ρ = n, which then signifies the position of a two-dimensional sphere. In that sense, bolts are similar to black hole horizons, since the latter are also two-dimensional fixed-point sets of the Euclidean time isometry, although without a nut parameter. To have a regular solution of (5.6), we only consider the range ρ ≥ n.
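The nut condition described above is elementary to verify: both the Killing-vector norm and the 2-sphere radius of (5.6) vanish at the same point, so the fixed-point set is zero-dimensional. A minimal sketch:

```python
import sympy as sp

rho, n = sp.symbols('rho n', positive=True)

# Self-dual Taub-NUT potential and the 2-sphere coefficient of the metric (5.6)
W = (rho - n) / (rho + n)        # |K|^2 for the Euclidean-time Killing vector
sphere_factor = rho**2 - n**2    # coefficient of dtheta^2 + sin^2(theta) dphi^2

# At rho = n both vanish simultaneously: the fixed-point set of the U(1)
# isometry is zero-dimensional (a "nut"), not a two-dimensional "bolt".
assert W.subs(rho, n) == 0
assert sphere_factor.subs(rho, n) == 0
print("fixed point at rho = n is zero-dimensional")
```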
In order to make contact with the parametrization used to describe the Bergman metric, we introduce the SU(2) one-forms parametrizing the 3-sphere, σ₁ = ½ (cos ψ dθ + sin ψ sin θ dφ), σ₂ = ½ (− sin ψ dθ + cos ψ sin θ dφ), σ₃ = ½ (dψ + cos θ dφ). These satisfy the cyclic relations dσ₁ = −2σ₂ ∧ σ₃, etc. The angles θ, φ, ψ vary in the ranges 0 ≤ θ ≤ π, 0 ≤ φ ≤ 2π, 0 ≤ ψ ≤ 4π. This choice of parametrization reflects the asymptotic behavior of the metric at infinity: there, the three remaining coordinates (angular and time) combine to give a 3-sphere, which we parametrize using θ, φ and ψ. We say that the metric is asymptotically locally flat, or ALF. This should be contrasted with the usual asymptotically flat (AF) metrics, where the corresponding boundary geometry at infinity is the direct product S¹ × S², instead of S³. For the Taub-NUT space, the time coordinate induces a non-trivial fibration of S³. Using the SU(2) one-forms, and setting τ = 2nψ, we can eliminate the angular and time coordinates of the metric (5.6) in favor of the one-forms. For the radial coordinate, we make the successive redefinitions ρ → ρ + n (so that ρ starts at ρ = 0) and then ρ → ρ²/(2n). The Taub-NUT metric can thus be rewritten as (5.7), where µ² = 1/(4n²). The metric (5.7) can be considered a special case of the more general anti-de Sitter Taub-NUT metric (5.8). Note that the mass parameter µ is now defined in terms of k and the nut parameter by µ² = k² + 1/(4n²). This is a Taub-NUT space with cosmological constant −3k². We consider the range of radial coordinates where the metric is non-singular, i.e. 0 ≤ ρ ≤ 1/k, so that ρ_h = 1/k is the horizon of the AdS space. For vanishing cosmological constant (k = 0), this reduces to the ordinary Taub-NUT geometry of (5.7), while for µ = 0, AdS₄ is recovered.
AdS Taub-NUT has in general an SU(2) × U(1) isometry group, which can however be enhanced for special parameter values.
None of the above-mentioned spaces is by itself a good candidate horizon, since they do not possess a constant Θ. For AdS Taub-NUT, we obtain Θ = 6µ⁴ (1 − k²ρ²)⁶ / (1 − µ²ρ²)⁶, (5.9) which only becomes constant at radial infinity (past the AdS horizon), Θ → 6k¹²/µ⁸. Setting k = 0 in this relation, we obtain the corresponding value for ordinary Taub-NUT, Θ = 6µ⁴/(1 − µ²ρ²)⁶. That space is asymptotically (locally) flat, so Θ → 0 at infinity.
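Our reading of the garbled expression (5.9) is Θ = 6µ⁴(1 − k²ρ²)⁶/(1 − µ²ρ²)⁶; this is a reconstruction, not taken verbatim from the source, so it is worth checking that it reproduces the three limits quoted in the text (radial infinity, the Bergman choice µ = k, and k = 0):

```python
import sympy as sp

rho, mu, k = sp.symbols('rho mu k', positive=True)

# Assumed form of (5.9) for AdS Taub-NUT (our reconstruction):
Theta = 6*mu**4 * (1 - k**2*rho**2)**6 / (1 - mu**2*rho**2)**6

# Radial infinity: Theta -> 6 k^12 / mu^8 (constant)
assert sp.simplify(sp.limit(Theta, rho, sp.oo) - 6*k**12/mu**8) == 0
# Bergman limit mu = k: Theta = 6 k^4 everywhere, a suitable horizon charge
assert sp.simplify(Theta.subs(mu, k) - 6*k**4) == 0
# k = 0 (ordinary Taub-NUT): Theta = 6 mu^4/(1 - mu^2 rho^2)^6, vanishing at infinity
assert sp.simplify(Theta.subs(k, 0) - 6*mu**4/(1 - mu**2*rho**2)**6) == 0
print("limits consistent with the text")
```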
Let us now consider the case µ = k. We then recover the Bergman metric (5.10). It describes the coset space SU(2,1)/U(2), which is a Kähler-Einstein manifold with Kähler potential (5.11) and the topology of the open ball in C². Setting z₁ = kξ cos(θ/2) e^i(φ+ψ)/2 and z₂ = kξ sin(θ/2) e^i(φ−ψ)/2, the metric g_αβ̄ = −∂_α ∂_β̄ ln K^1/k² reproduces exactly (5.10) after the change of coordinate ξ² = 2ρ²/(1 + k²ρ²). The Bergman metric (5.10) has isometry group SU(2,1). In practice, the choice µ = k corresponds to infinite "squashing" of the 3-sphere at the boundary ρ → 1/k, such that only a one-dimensional circle remains intact at spatial infinity. Comparing the terms multiplying σ₁² + σ₂² (the 2-sphere) and σ₃², we see that, as we approach the boundary, the σ₃² part blows up faster and becomes dominant. The space has this circle as its conformal boundary. It is now possible to see from the expression (5.9) for Θ in AdS Taub-NUT that the Bergman space has Θ = 6k⁴ and is thus a suitable horizon solution. Substituting (5.10) as the metric of the internal space h^(4)_µν, we verify that it solves the equations of motion. To do so, we first rescale the radial coordinate as ρ → ρ/l, with l having dimensions of mass⁻¹, in order to make the metric dimensionless; as a result, we identify the dimensionless curvature scale k → kl. The bulk potential of the solution is then given by (5.12); the Bergman horizon exists in the case κ = −k² < 0, Θ = 6k⁴. According to Table 1, when M is set to zero, the only case where a horizon may originate from the Θ-term is α > 0 with a negative bulk cosmological constant Λ. Then, the condition Θ₀ < Θ < Θ_max must hold in order to have a new event horizon, on top of the pre-existing Killing horizon.
The left part of the inequality yields α > 0 and is thus trivially satisfied, while the right part gives a minimum value for Λ (5.13). This is a more stringent constraint than the one imposed to have a properly defined background, 5 + 12αΛ > 0, which yields a lower minimum value. If it is satisfied, the Bergman space with M = 0, Θ ≠ 0 admits an event horizon.
We should note at this point that previous studies have shown the Bergman geometry to be unstable, both perturbatively and non-perturbatively, in the context of ordinary General Relativity [58]. It is not known whether this property persists also in Gauss-Bonnet theory.
As mentioned above, apart from zero-dimensional fixed points of the Euclidean time isometry (nuts), one can also consider spaces exhibiting the two-dimensional variety (bolts). This is the appropriately named Taub-Bolt space, which is very similar to the Taub-NUT space already discussed. Indeed, the Taub-Bolt metric is the same as (5.6) and (5.7), with the only distinction that the potential is now given by (5.14). The position at which W(ρ) = 0 is no longer ρ = n, and consequently the term ρ² − n² multiplying the 2-sphere does not vanish there, providing the two-dimensional bolt. Imposing regularity of the potential at the position of the bolt, ρ = ρ_b, fixes the parameters as in (5.15). Is it possible to take the Bergman limit for the Taub-Bolt space, as we did for Taub-NUT?
To do so, we should retrace our steps and first recast the metric in the Pedersen form. Unfortunately, this is now non-trivial, due to the more involved potential and bolt radius. We can, however, consider the limit µ = k without deriving the full metric for arbitrary µ.
Inspecting the definition of µ for Taub-NUT, we see that µ = k corresponds to the limit n → ∞. To find the form of the metric in that limit, we first make the shift ρ → ρ + ρ_b. The potential can then be expanded around this limit, the parameters approaching their limiting values; in determining these limits we used the fact that ρ_b ∼ n as n → ∞. We then set ρ → ρ²/[2n(1 − k²ρ²)] and, keeping only the terms that remain finite in the metric, we recover the Bergman space (5.10).
Taub-Bolt thus has the same limit as Taub-NUT for infinite nut parameter. We conclude this section by noting that, taking k purely imaginary in (5.10), we obtain the Fubini-Study metric on CP², which also constitutes a possible horizon metric for a static Lovelock black hole.
Six-dimensional black strings
Let us now turn to some special solutions which resemble black string metrics. Here, we assume that the "horizon" surface is of Lorentzian signature. Both solutions presented in this section admit an additional, axially symmetric Killing vector (see also [30]).
Six-dimensional warped Born-Infeld black strings
Throughout this section, the BI limit is assumed, i.e. we set 5 + 12Λα = 0. In this case, we would like to discuss a particular subclass of Class-II solutions, which contains black string solutions as well as solutions that may be relevant to codimension-two braneworld cosmology. They correspond to the overdetermined solutions (3.21)-(3.23). After Wick rotation, these solutions involve a four-dimensional Lorentzian metric h_µν which need not be Einstein and is only subject to equation (3.22), which we reproduce here as (5.23). In order to solve (5.23), we assume, for example, that h^(4)_µν is of the form (5.24), where dΩ²_II,k denotes the two-dimensional metric of constant curvature on the sphere, the plane or the hyperbolic space, for k = 1, 0 or −1 respectively. h^(4)_µν therefore has spherical, planar or hyperbolic symmetry, although it is certainly not the most general ansatz with these symmetries. It then follows from (5.23) that the metric is determined up to two integration constants c₁ and c₂. The corresponding four-dimensional metric h^(4)_µν is not Einstein, and distributional sources at r² = −6αρ are therefore expected from the matching conditions. These four-dimensional metrics h_µν do not correspond to any known GR solutions at large distance, and are similar to the unphysical spherical solutions of Hořava gravity [59] in the case of detailed balance [60]. Although BI and Hořava theory are radically different, both have been shown to suffer from strong coupling problems [61], [54].
The total space is, in the end, a warped product of a constant-curvature two-space and a four-dimensional Lorentzian space. This particular black string solution was first discussed in [62].
Six-dimensional straight black strings
We finally consider the special case of Class-III solutions, with a timelike local Killing vector and an undetermined horizon geometry. The only constraint on the internal geometry comes from the scalar equation (3.24), i.e.,

0 = −2Λβ⁴ + β²R⁽⁴⁾ + αĜ⁽⁴⁾,    (5.27)

where β is a constant "warp factor" and 1 + 4αΛ = 0. As in the previous section, we consider a Wick-rotated version in which the internal space is Lorentzian, and we assume the same particular ansatz (5.24) for h⁽⁴⁾_µν. It then follows from (5.27) that the metric functions are fixed up to two integration constants, µ and q. These have been rescaled so that, in the minus branch, the metric resembles the Reissner-Nordström solution far from the source, provided β² is set to two-thirds. The resulting six-dimensional metric is an unwarped product of a constant-curvature two-dimensional space and a four-dimensional unwarped brane admitting Schwarzschild as a limit in one of the branches of solutions, with β² = 2/3. This coincides with the Kaluza-Klein black hole reported in [22], provided β² = 1. We should emphasize here that, as an equation for h⁽⁴⁾_µν, (5.27) is underdetermined. In particular, had we considered a generic spherically symmetric ansatz, a free metric function would have appeared in the internal geometry.
Conclusions
We have found the general solution⁷ for the metric (2.8) and have investigated generalizations of Birkhoff's theorem in six-dimensional Einstein-Gauss-Bonnet (Lovelock) theory. Our analysis significantly generalizes previous treatments in five and six dimensions, as well as cases where spherical symmetry of the horizon is imposed from the beginning. Furthermore, the analysis undertaken here agrees with [50], where staticity is assumed. Permitting the Weyl tensor of the internal space to enter the equations of motion through the combination C_αβγµ C^αβγ_ν = Θ δ^µ_ν leads to severe restrictions. We analyzed the way this new contribution modifies the available solutions, and we distinguish three categories.
The so-called Class-I solutions lead both to an underdetermined system of equations and to a specific condition among the parameters of the theory. We find two possibilities: • the internal space is a constant-curvature space (with Θ = 0) and one of the metric functions in transverse space is undetermined (Ia); • the internal space is not necessarily Einstein (and generically Θ ≠ 0) and all metric functions can be determined (Ib).
That the system of equations becomes underdetermined for a particular choice of parameters hints at the presence of an enhanced "symmetry" in such a case. Class-I solutions do not obey any variant of Birkhoff's theorem, i.e., static solutions are not unique in this context. Class-II solutions, on the other hand, give rise to a generalized Birkhoff's theorem: static solutions are unique, provided some conditions related to the structure of the internal space are satisfied: • the internal space is Einstein with a constant four-dimensional Gauss-Bonnet charge and constant curvature (IIa); • the internal space is not necessarily Einstein but is constrained by a scalar equation (3.22), and the BI condition holds (IIb).
The Class-III case corresponds to unwarped metrics, and Birkhoff's theorem also holds in some specific subcases: • 1 + 4αΛ ≠ 0 and the internal space is Einstein (IIIb), or • 1 + 4αΛ = 0, the internal space is not Einstein, and it may or may not be constrained by the scalar equation (3.24) (IIIc).
⁷ The case of Class Ib still demands the resolution of (3.8).
A third case exists in which Birkhoff's theorem does not hold: when the horizon is Einstein and the condition 1 + 4αΛ = 0 is applied simultaneously (IIIa). We summarize our results in Table 2. For the Class-II solutions, for which the generalized staticity theorem holds, we studied some examples of non-trivial horizon geometries. The spaces we considered are in general anisotropic, such as the S² × S² product space and the Euclidean Bergman geometry. The latter can be obtained as the appropriate limit of either an AdS Taub-NUT or Taub-Bolt space with infinite nut charge. Bergman space has the squashed 3-sphere (Berger sphere) as its conformal boundary and is thus anisotropic.
It would be interesting to investigate further cases of suitable horizon geometries satisfying the requirements of Birkhoff's theorem and also to study the general conditions under which a class of such solutions may arise. A consistent generalization to higher dimensions would require the inclusion of higher order Lovelock densities in the action. In this case one could consider as possible candidate horizon solutions the Bohm metrics [37], which are known to be admissible if only the Gauss-Bonnet term is taken into account. Apparently, higher-order curvature invariants other than Θ would be involved in distinguishing compatible horizon metrics, potentially requiring a more systematic classification.
The most interesting departure from General Relativity arises due to the non-vanishing of the constant Θ. The latter appears, at the level of the static black hole potential, as a novel integration constant or "charge" and is directly related to the Gauss-Bonnet scalar of the 4-dimensional horizon, a quantity whose integral yields a topological invariant: the relevant Euler-Poincaré characteristic. We saw that the presence of this constant imposes particular and non-trivial asymptotic conditions and certainly a particular topology. Since it can even give rise to novel horizons, it would be interesting to investigate whether this constant can be interpreted as the conserved charge of some Killing symmetry of spacetime and what its physical meaning actually is. | 2009-09-07T09:11:09.000Z | 2009-06-26T00:00:00.000 | {
"year": 2009,
"sha1": "f7c741204fd5f175321cc47eaba0fac2a95a0a83",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0906.4953",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "0594bea08f47231a5791a19922b55c9148276014",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
39663258 | pes2o/s2orc | v3-fos-license | Cannabinoids Induce Glioma Stem-like Cell Differentiation and Inhibit Gliomagenesis*
Glioma stem-like cells constitute one of the potential origins of gliomas, and therefore, their elimination is an essential factor for the development of efficient therapeutic strategies. Cannabinoids are known to exert an antitumoral action on gliomas that relies on at least two mechanisms: induction of apoptosis of transformed cells and inhibition of tumor angiogenesis. However, whether cannabinoids target human glioma stem cells and their potential impact in gliomagenesis are unknown. Here, we show that glioma stem-like cells derived from glioblastoma multiforme biopsies and the glioma cell lines U87MG and U373MG express cannabinoid type 1 (CB1) and type 2 (CB2) receptors and other elements of the endocannabinoid system. In gene array experiments, CB receptor activation altered the expression of genes involved in the regulation of stem cell proliferation and differentiation. The cannabinoid agonists HU-210 and JWH-133 promoted glial differentiation in a CB receptor-dependent manner as shown by the increased number of S-100β- and glial fibrillary acidic protein-expressing cells. In parallel, cannabinoids decreased the cell population expressing the neuroepithelial progenitor marker nestin. Moreover, cannabinoid challenge decreased the efficiency of glioma stem-like cells to initiate glioma formation in vivo, a finding that correlated with decreased neurosphere formation and cell proliferation in secondary xenografts. Gliomas derived from cannabinoid-treated cancer stem-like cells were characterized with a panel of neural markers and evidenced a more differentiated phenotype and a concomitant decrease in nestin expression. Overall, our results demonstrate that cannabinoids target glioma stem-like cells, promote their differentiation, and inhibit gliomagenesis, thus giving further support to their potential use in the management of malignant gliomas.
Malignant gliomas remain the most deadly human brain tumors, with poor prognosis despite years of research in antitumoral therapeutic strategies. A hallmark characteristic of gliomas is their molecular and cellular heterogeneity (1,2), which is considered one of the reasons for their high malignancy and recurrence. Moreover, even morphologically or histologically related tumors may behave very differently. Neoplastic transformation of differentiated glial cells was for many years the most accepted hypothesis to explain the origin of gliomas (1,2). However, recent findings support the existence of a stem cell-derived origin for different types of cancers such as gliomas and hematopoietic, breast, and prostate tumors (2,3). In particular, glioma-derived stem-like cells (GSCs)⁴ have been isolated from both human brain tumors (4-8) and several glioma cell lines (6,9,10). GSCs are crucial for the malignancy of gliomas (8-10) and may represent the consequence of transformation of the normal neural stem cell compartment (11). These findings are in line with the observation that gliomagenesis is frequently associated with adult brain germinal zones, in particular the subventricular zone (2,3). It is therefore imperative that the development of new therapeutic strategies for the management of gliomas takes into account their cellular diversity and origin. Among those strategies, cannabinoid-based drugs may represent an alternative to other established chemotherapeutics (12).
The discovery of an endogenous cannabinoid system (13), together with the great improvement in our understanding of the signaling mechanisms responsible for cannabinoid actions (12,13), has fostered the interest in the potential therapeutic applications of cannabinoids (14). Several studies have demonstrated a significant antitumoral action of cannabinoid ligands in animal models (12). Thus, cannabinoid administration to nude mice curbs the growth of different tumors, including gliomas, lung adenocarcinomas, thyroid epitheliomas, lymphomas, and skin carcinomas (12). The antitumoral action on gliomas relies on at least two mechanisms: induction of apoptosis of tumor cells (15,16) and inhibition of tumor angiogenesis (17). Besides their wide distribution in tumor cells, cannabinoid receptors are expressed and functionally active in neural progenitors, in which they regulate cell proliferation and differentiation (18,19). This background prompted us to investigate the actions of cannabinoids on human GSCs and their impact in gliomagenesis. Our results show that GSCs express cannabinoid receptors and that cannabinoid stimulation reduces glioma initiation in vivo, a finding that correlates with increased cell differentiation. These findings provide further support for cannabinoid-based antitumoral therapies that are able to target the brain tumor stem cell compartment.
GSC Culture and Gliomagenesis in Vivo-GSCs were obtained from human brain tumor biopsies digested with collagenase (type Ia, Sigma) in Dulbecco's modified Eagle's medium at 37°C for 90 min (17) and grown under non-adherent conditions (19) in neural stem cell culture medium composed of Dulbecco's modified Eagle's and Ham's F-12 media supplemented with B-27 (Invitrogen), 50 mM Hepes, 2 µg/ml heparin, 20 ng/ml EGF, 20 ng/ml FGF-2, and 20 ng/ml leukemia inhibitory factor. Of eight biopsies employed, five rendered cells that fulfilled stem-like cell criteria (3). Representative results are shown for one of these GSC lines and were also extended to GSCs derived from the classical human glioma U87MG and U373MG cell lines. Cell lines were grown as described (15,17) and inoculated in vivo, and after tumor digestion, GSCs were cultured. Clonal neurospheres were grown at 1000 cells/ml and analyzed by flow cytometry for the expression of different stem cell markers, including CD133 and the stem cell factor receptor c-Kit, and for their ability to exclude Hoechst 33342 (side population analysis). Differentiation experiments were performed in polyornithine-coated plates, and adherent GSCs were grown in neural stem cell culture medium without growth factors. GSCs were cultured in the presence of the indicated stimuli after overnight growth factor deprivation.
Differentiation experiments with at least three independent cultures were performed by quantification of the percentage of total cells that expressed the indicated neural antigens or that were highly positive for histone H3 trimethylated at Lys9. A minimum of 10 fields were scored in a double-blinded manner to minimize subjective interpretations. Quantified fields were selected randomly by visualizing total cells with a microscope Hoechst filter. Stock solutions of cellular effectors were prepared in Me₂SO, and the concentrations employed were selected based on previous studies on cannabinoid regulation of neural progenitors (19). No significant influence of Me₂SO on any of the parameters determined was observed at the final concentration used (0.1%, v/v). Control incubations included the corresponding vehicle content. Gliomagenesis was induced by subcutaneous flank inoculation into athymic nude mice of U87MG-GSCs or glioblastoma multiforme (GBM) GSCs in 100 µl of phosphate-buffered saline supplemented with 0.1% glucose (16). In some experiments, 1.5 mg/kg JWH-133 or the corresponding vehicle was administered daily to subcutaneous gliomas. Tumor growth was measured with an external caliper, and volume was calculated as (4/3) × (width/2)² × (length/2).
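The caliper formula above models the tumor as an ellipsoid whose two short semi-axes both equal half the measured width. A minimal sketch of the calculation, implemented exactly as stated in the text (the function name and example measurements are ours, not from the paper):

```python
def tumor_volume(width_mm: float, length_mm: float) -> float:
    """Caliper-based tumor volume estimate (mm^3) per the formula in the
    text: (4/3) * (width/2)**2 * (length/2)."""
    return (4.0 / 3.0) * (width_mm / 2.0) ** 2 * (length_mm / 2.0)

# Hypothetical caliper readings: 6 mm wide, 10 mm long.
print(round(tumor_volume(6.0, 10.0), 1))  # → 60.0
```

Because only two axes are measured, the unseen depth is approximated by the width, which tends to overestimate the volume of flat subcutaneous tumors.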
mRNA Detection and Quantification-mRNA was obtained with an RNeasy Protect kit (Qiagen Inc., Valencia, CA) using an RNase-free DNase kit. cDNA was subsequently obtained using a SuperScript first-strand cDNA synthesis kit (Roche Applied Science), and amplification of cDNA was performed with the following primers: human CB 1 , CGT GGG CAG CCT GTT CCT CA (sense) and CAT GCG GGC TTG GTC TGG (antisense; 403-bp product); human CB 2 , CGC CGG AAG CCC TCA TAC C (sense) and CCT CAT TCG GGC CAT TCC TG (antisense; 502-bp product); human fatty acid amide hydrolase, TGG GAA AGG CCT GGG AAG TGA ACA (sense) and GCC GCA GAT GCC GCA GAA GGA G (antisense; 458-bp product); human monoacylglycerol lipase, ACC CTG GGC TTC CTG TCT TCC TTC (sense) and TTC CTG CCG TGG CTG TCC TTT GAG (antisense; 564-bp product); human TRPV1 (transient receptor potential cation channel, subfamily V, member 1), CGC CGC CAG CAC CGA GAA (sense) and ACC GAG TCC CTG GCG CTG ATG TC (antisense; 546-bp product); human glyceraldehyde-3-phosphate dehydrogenase, GGG AAG CTC ACT GGC ATG GCC TTC C (sense) and CAT GTG GGC CAT GAG GTC CAC CAC (antisense; 318-bp product); human Musashi-1, GAT GGT CAC TCG GAC GAA GAA (sense) and CAA ACC CTC TGT GCC TGT TG (antisense; 149-bp product); human nestin, GAG AGG GAG GAC AAA GTC CC (sense) and TCC CTC AGA GAC TAG CGC AT (antisense; 128-bp product); human NOTCH1, GCC GCC TTT GTG CTT CTG TTC (sense) and CCG GTG GTC TGT CTG GTC GTC (antisense; 251-bp product); human OCT4, GAC AAC AAT GAA AAT CTT CAG GAG A (sense) and TTC TGG CGC CGG TTA CAG AAC CA (antisense; 217-bp product); and human SOX2, GCA CAT GAA CGG CTG GAG CAA CG (sense) and TGC TGC GAG TAG GAC ATG CTG TAG G (antisense; 206-bp product). 
CB1 and CB2 receptor PCR amplifications were performed under the following conditions: 93°C for 1 min; two rounds at 59°C for 30 s, 72°C for 1 min, and 93°C for 30 s; two rounds at 57°C for 30 s, 72°C for 1 min, and 93°C for 30 s; and 35 cycles at 55°C for 30 s, 72°C for 1 min, and 93°C for 30 s. Finally, after a final extension step at 72°C for 5 min, PCR products were separated on 1.5% agarose gels. Real-time quantitative PCR was performed with TaqMan probes (Applied Biosystems, Foster City, CA). Amplifications were run in a 7700 real-time PCR system, and the values obtained were adjusted using 18S RNA levels as a reference.
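The text states only that target values were adjusted to 18S RNA; a standard way to express such normalization in real-time PCR is the 2^−ΔΔCt method. The sketch below is our illustration of that general method (function name and Ct values are hypothetical), not necessarily the authors' exact pipeline:

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative mRNA level by the common 2**-ddCt method: the target Ct is
    first normalized to a reference transcript (e.g. 18S RNA), then to the
    vehicle-treated control sample."""
    d_ct_sample = ct_target - ct_ref             # dCt, treated sample
    d_ct_control = ct_target_ctrl - ct_ref_ctrl  # dCt, control sample
    return 2.0 ** -(d_ct_sample - d_ct_control)

# Hypothetical Ct values: the target amplifies one cycle earlier in the
# treated sample, i.e. a two-fold higher relative expression.
print(relative_expression(24.0, 12.0, 25.0, 12.0))  # → 2.0
```

This assumes roughly 100% amplification efficiency for both target and reference; efficiency-corrected variants exist for assays that deviate from doubling per cycle.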
cDNA Arrays-Total RNA was extracted from vehicle- or HU-210-treated GBM-GSCs, and poly(A)+ RNA was isolated with Oligotex resin (Qiagen Inc.) and reverse-transcribed with Moloney murine leukemia virus reverse transcriptase in the presence of 50 µCi of [α-³³P]dATP for the generation of radiolabeled cDNA probes. Purified radiolabeled probes were hybridized to stem gene array membranes (GEArray Q series, SuperArray Bioscience Corp., Frederick, MD) according to the manufacturer's instructions.⁵ Hybridization signals were detected using a PhosphorImager and analyzed using Phoretix software, with housekeeping genes on the blots as internal controls for normalization. The selection criteria were set conservatively throughout the process, and the genes selected were required to exhibit at least a 50% change in expression.
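The "at least a 50% change in expression" criterion can be applied as a simple ratio filter on the normalized signals. This is one plausible reading of the criterion (gene names and signal values below are hypothetical, for illustration only):

```python
def passes_50pct_change(treated: float, control: float) -> bool:
    """One plausible reading of the 'at least a 50% change' criterion:
    normalized treated/control ratio >= 1.5 (up) or <= 0.5 (down)."""
    ratio = treated / control
    return ratio >= 1.5 or ratio <= 0.5

# Hypothetical normalized hybridization signals: (treated, control).
signals = {"MAP2": (1.8, 1.0), "CDK4": (0.45, 1.0), "ACTB": (1.05, 1.0)}
selected = [gene for gene, (t, c) in signals.items()
            if passes_50pct_change(t, c)]
print(selected)  # → ['MAP2', 'CDK4']
```

A stricter alternative reading would treat a 50% decrease as ratio ≤ 2/3 (the reciprocal of 1.5), so the thresholds used should always be stated alongside the gene list.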
Western Blotting-Western blot analysis was performed as described previously (19). Cleared cell extracts were subjected to SDS-PAGE and transferred to polyvinylidene difluoride membranes. Following incubation with primary antibodies, blots were developed with horseradish peroxidase-coupled secondary antibodies using an enhanced chemiluminescence detection kit. Loading controls were performed with anti-α-tubulin antibody. Densitometric quantification of the luminograms was performed using a GS-700 imaging densitometer (Bio-Rad) and MultiAnalyst software.
Identification of Endocannabinoids in GSCs-Samples were dissolved in 1 volume of high pressure liquid chromatography-grade methanol and precipitated with 1 volume of acetone, and non-miscible material was filtered. The supernatant was evaporated, and the residue was partitioned between chloroform and water. A preparative TLC plate (Silica Gel 60 F254, 1 mm) was pre-developed with chloroform/methanol (1:1, v/v). Then, the residue of the chloroform layer was redissolved and applied to a Finnigan LCQ MS detector. Detection was performed using the electrospray ionization technique in full-scan mass spectrometric mode, providing a full spectrum of samples between m/z 100 and 550. The rest of the mixture was loaded onto a TLC plate (40:6:1 (v/v) chloroform/petroleum ether/methanol) with synthetic anandamide and 2-arachidonoylglycerol as references. The plate was scraped, and after methanol extraction, filtrates were analyzed by gas chromatography-mass spectrometry with an electron impact detector (Hewlett-Packard G1800 GCD HP-5971) after derivatization with the silylating agent N,O-bis(trimethylsilyl)trifluoroacetamide to form trimethylsilyl ethers at free hydroxyl groups.
Immunofluorescence and Confocal Microscopy-Immunofluorescence was performed in 10-µm tumor sections or in cultured cells as described previously (17,18). Samples were incubated with the indicated antibodies and their corresponding secondary antibodies, either anti-rabbit or anti-mouse antibody highly cross-adsorbed with Alexa Fluor 488 (Molecular Probes). CB receptor expression was determined with anti-rabbit secondary antibody highly cross-adsorbed with Alexa Fluor 594. The number of positive cells was normalized to the total cell number identified by counterstaining with TOTO-3 iodide or Hoechst 33342. GSC differentiation was determined in a minimum of five tumor sections. The human origin of the cells immunostained with the different neural markers was confirmed by double labeling with anti-human nucleus antibody (Chemicon). CD133 staining of tumor sections was performed with non-conjugated antibody (Miltenyi Biotec) as described (7), and for cells in vitro after incubation with 50 mM NH4Cl to reduce autofluorescence. Preparations were examined using Leica software and a Leica SP2 acousto-optical beam splitter microscope with two passes of a Kalman filter and a 1024 × 1024 collection box.
Statistical Analysis-The results shown represent the means ± S.E. of the number of experiments indicated in each case. Statistical analysis was performed by analysis of variance, and post hoc analysis was performed using Student's t test.
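As a small illustration of the "mean ± S.E." reporting used throughout the paper, a pure-Python sketch (the triplicate values below are made up for the example, not data from the study):

```python
import statistics

def mean_sem(values):
    """Mean and standard error of the mean (sample stdev / sqrt(n))."""
    mean = statistics.mean(values)
    sem = statistics.stdev(values) / len(values) ** 0.5
    return mean, sem

# Hypothetical triplicate measurement (e.g. % marker-positive cells).
m, s = mean_sem([49.0, 54.0, 44.0])
print(f"{m:.1f} ± {s:.1f}")  # → 49.0 ± 2.9
```

Note the use of the sample (n − 1) standard deviation, which is the usual choice for small numbers of independent cultures.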
Glioma-derived Stem Cells Express CB Receptors-To investigate the potential effects of cannabinoids on GSCs, we first analyzed whether these cells express CB receptors. GSCs derived from GBM biopsies and the human glioma U87MG and U373MG cell lines were cultured and generated neurosphere structures equivalent to those formed by normal neural stem cells (Fig. 1A). Clonal GSC cultures were subjected to successive neurosphere passages and showed unlimited self-renewal ability (supplemental Fig. 1A). Thus, we characterized their stem-like cell characteristics in detail. Immunostaining evidenced high expression of the neural stem cell markers Musashi-1 and nestin (Fig. 1A), with many cells coexpressing both proteins (supplemental Fig. 1B). In addition, flow cytometry analysis showed a CD133-positive cell population (supplemental Fig. 1C), the size of which depended on the tumor of origin. The CB1 and CB2 receptors were shown by double immunofluorescence to colocalize in nestin-positive cells both in vitro (Fig. 1A) and in glioma xenografts (Fig. 1B); the CB1 and CB2 receptors were present in 49 ± 5 and 31 ± 7% of the nestin-positive cells, respectively. Western blotting of GSC cultures was also used to analyze the presence of CB receptors (Fig. 1C). Reverse transcription-PCR analysis confirmed the expression of stemness markers, including CD133, nestin, Musashi-1, SOX2, and NOTCH1, and the pluripotency embryonic stem cell marker OCT4 in GSCs (Fig. 1D). These findings correlated with enhanced CB receptor expression in GBM- and U87MG-GSC populations compared with their respective differentiated counterparts. In particular, GSCs were significantly enriched in CB2 receptors at both the transcript and protein levels, in line with the correlation between astrocytoma malignancy and CB2 receptor expression (16).
Other elements of the endocannabinoid (eCB) signaling system were also present in GSCs, including TRPV1 and the hydrolases monoacylglycerol lipase and fatty acid amide hydrolase, enzymes responsible for eCB degradation (Fig. 1D). Finally, GSCs were evaluated for their ability to produce endogenous cannabinoid ligands upon incubation with the calcium ionophore A23187 (5 µM, 2 min) and subsequent identification by LCQ mass spectrometry (supplemental Table 1). Several major species of 2-monoacylglycerols were detected, specifically palmitoylglycerol and stearoylglycerol. In addition, 2-arachidonoylglycerol and N-arachidonoylglycine, a metabolite of anandamide, were present in the samples at trace levels. Gas chromatography-mass spectrometry analysis after derivatization with N,O-bis(trimethylsilyl)trifluoroacetamide confirmed the data obtained by LCQ mass spectrometry (supplemental Table 1).
CB Receptor Activation Contributes to the Regulation of Glioma Stem Cell Gene Expression-To identify the potential actions of cannabinoids on GSCs, we investigated the changes in gene expression induced by the synthetic cannabinoid agonist HU-210 (30 nM) in GBM-GSCs. HU-210 significantly altered the expression of 11 of the 266 genes analyzed (Fig. 2A). Among them, seven genes involved in regulation of the cell cycle and cell proliferation (CDK4 (cyclin-dependent kinase 4), CDKN1B (cyclin-dependent kinase inhibitor 1B), FGFR1 (FGF receptor 1), FGFR3 (FGF receptor 3), EGF receptor, EGF, and integrin α4) were down-regulated by cannabinoid stimulation. In addition, the transcript levels of neuronal MAP2 and the tumor suppressor RBL1 (retinoblastoma-like 1) were increased. These results were confirmed by real-time quantitative PCR analysis of three transcripts (Fig. 2B) and suggest that CB receptor activation regulates essential GSC functions such as cell proliferation and differentiation. However, GSC proliferation and self-renewal were not affected by cannabinoid stimulation during several neurosphere passages (data not shown). The impact of CB receptor activation on differentiation-related genes was therefore determined by quantitative PCR analysis. HU-210 treatment increased the mRNA levels of the glia-specific markers GFAP and S-100β in a CB1 receptor-dependent manner, as evidenced by SR141716 antagonism (Fig. 3, A and B, respectively). In addition, CB1 receptor activation resulted in increased expression of the early neuronal marker β-tubulin III (Fig. 3C).
MARCH 2, 2007 • VOLUME 282 • NUMBER 9
JOURNAL OF BIOLOGICAL CHEMISTRY 6857
Cannabinoids Promote Neural Differentiation of Glioma Stem Cells-On the basis of the results of cannabinoid regulation of differentiation-related genes, we next analyzed the regulation of GBM-GSC differentiation by the CB1 and CB2 receptor agonists HU-210 and JWH-133, alone or in combination with the receptor antagonists SR141716, SR144528, and capsazepine. Activation of the CB1 and CB2 receptors, as demonstrated by the use of their respective antagonists, decreased the nestin-positive cell population (Fig. 4A) and increased the more differentiated cell populations expressing the glial markers GFAP and S-100β (Fig. 4, B and C, respectively) or the neuronal marker β-tubulin III (Fig. 4D). Capsazepine did not modify the cannabinoid-induced decrease in nestin-positive cells or the induction of GFAP- and β-tubulin III-positive cells, although it counteracted the increase in S-100β-positive cells (supplemental Fig. 2). Histone methylation status has been shown to correlate with changes in neural progenitor cell differentiation (20), and thus, methylation of histone H3 at Lys9 was monitored. HU-210 and JWH-133 increased the number of trimethyl-histone H3 (Lys9)-labeled cells in a CB receptor-dependent manner, as evidenced by SR141716 and SR144528 antagonism (Table 1). In summary, these experiments confirmed the neural progenitor ability of GSC cultures, which, under differentiation conditions, attach and recapitulate their endogenous differentiation program. Aberrant GSC differentiation was observed, as previously reported (5,6,21). Similarly, cannabinoids induced both neuronal (β-tubulin III) and glial (GFAP) gene expression (Fig. 3). Many cells were observed to coexpress both glial and neuronal markers (supplemental Fig. 1D), which, in the case of normal neural stem-derived cells, segregate into different cell compartments.
Cannabinoids Inhibit Gliomagenesis Initiated by Glioma Stem Cells-Stem-like cells are considered to be the initiating cell population of tumorigenesis. Thus, when injected into nude mice, U87MG- and GBM-GSCs induced tumor formation at cell numbers that were 30- and 10-fold lower, respectively, than their differentiated counterparts (0.25 × 10⁶ U87MG-GSCs and 1 × 10⁶ GBM-GSCs were injected to initiate tumor formation). As cannabinoid receptor activation regulates GSCs, we sought to determine its impact on the ability of GSCs to initiate glioma generation in vivo. U87MG-GSCs previously cultured in the presence of 30 nM HU-210 or JWH-133 were less efficient as tumor-initiating cells (Table 2). Moreover, cannabinoid-treated GSCs generated tumors with a lower growth rate, resulting in smaller tumor size compared with vehicle-treated cells (Fig. 5A and Table 2). Similarly, HU-210- and JWH-133-treated GBM-GSCs were less efficient in initiating gliomagenesis (Fig. 5B and Table 2). In particular, HU-210 notably reduced tumor growth, whereas in the case of JWH-133, tumors were visible only 2 months after the rest of the animals had been killed (Fig. 5B and data not shown). Samples of GSC-derived gliomas were obtained, and their ability to form primary spheres was determined. Tumors generated by cannabinoid-treated GSCs showed decreased neurosphere-forming activity (Fig. 6A) and reduced cell proliferation (Ki-67-positive cells) (Fig. 6, B and C). These observations confirm that cannabinoids inhibit stem-like cell-initiated gliomagenesis.
To analyze cannabinoid regulation of the differentiation status of GSC-derived tumors, progenitor markers were analyzed by immunofluorescence. Tumors derived from cannabinoid-treated cells showed decreased nestin immunoreactivity (Fig. 7A), a finding that was also observed in gliomas treated in vivo with JWH-133 (54 ± 7% relative immunoreactivity versus 100 ± 8% in vehicle-treated tumors). In addition, the expression of vimentin, a progenitor marker that has been correlated with glioma malignancy (22,23), was also decreased (48 ± 5% relative immunoreactivity in JWH-133-treated tumors versus 100 ± 9% in vehicle-treated tumors). Next, we analyzed the expression of differentiation markers in gliomas derived from cannabinoid-treated cells. CB receptor activation increased the expression of the neuronal markers MAP2 and β-tubulin III (Fig. 7, B and C) and increased S-100β glial immunoreactivity (Fig. 7D). In agreement with a previous report (6), GFAP immunoreactivity could not be detected, but an increase in its transcript levels was observed (250 ± 55% upon HU-210 treatment and 160 ± 20% upon JWH-133 treatment versus 100 ± 10% upon vehicle treatment).
DISCUSSION
The recent discovery of brain cancer stem cells has important implications both for the development of new therapeutic strategies for glioma management and for the evaluation of potential pitfalls and benefits of currently available treatments (1-3). Here, we show that GSCs express different elements of the eCB system, including G protein-coupled receptors (CB1 and CB2), the ionotropic receptor TRPV1, and eCB-degrading enzymes (fatty acid amide hydrolase and monoacylglycerol lipase), and that cannabinoid agonists target the stem-like cell compartment of brain tumors, promote GSC differentiation in a receptor-dependent manner, and reduce gliomagenesis in vivo. Although no overt differences between CB1 and CB2 receptor-mediated actions in GSCs were evident in this study, potential variations in their molecular mechanisms of action may occur. For instance, it is known that the CB1 receptor (but not the CB2 receptor) is coupled to the modulation of various Ca²⁺ and K⁺ channels (13) and that CB1 receptor activation is selectively regulated by cholesterol-enriched membrane microdomains (24). In addition, the different molecular species of eCBs differ in their affinity for the two CB receptor types. Thus, 2-arachidonoylglycerol rather than anandamide has been proposed as the preferential ligand for CB2 receptors (25). Furthermore, the selective enrichment of CB2 receptors in GSCs might help to explain the observed correlation between CB2 receptor expression and glioma cell malignancy (16). On the other hand, capsazepine was able to prevent some of the cannabinoid actions on GSC differentiation, suggesting that these ligands may also affect TRPV1 function by as yet unknown mechanisms.
Altogether, these observations support the notion that coexpression of different CB receptors in GSCs allows these cells not only to be pharmacologically targeted by CB2 receptor-selective non-psychotropic ligands (14), but also to respond differentially depending on the molecular composition of the eCB tone present in the tumor niche. Those eCB molecules may be produced by GSCs and by surrounding cells of neuronal and glial origin (13). In this respect, altered levels of eCBs have been reported in human GBM biopsies compared with normal brain tissue, suggesting that this family of extracellular lipid cues may be involved in the endogenous antitumoral response (26,27).
The malignancy of human brain tumors inversely correlates with their degree of differentiation, as shown by increased nestin, Musashi-1, and doublecortin expression (28-30), whereas their mitotic activity inversely correlates with the expression of mature glial and neuronal markers (22,23,31).

TABLE 2 Cannabinoid regulation of glioma stem-like cell-initiated gliomagenesis
HU-210- and JWH-133-treated cells were injected subcutaneously into mice, and the efficiency of tumorigenesis (number of mice that generated tumors relative to total injected mice and the corresponding percentages), relative tumor growth, and tumor weight at the end of the experiment were evaluated.

Thus, compared with GBM, low-grade astrocytomas, oligodendrogliomas, and neuroblastomas have a better prognosis and much more efficient therapeutic management (1,2). Genetic modeling of glioma origin has shown that, in addition to differentiated astrocytes (32), neural stem cells constitute a potential niche for malignant transformation that may be more permissive for malignization (11,33). These findings were followed by the identification of brain tumor-initiating cells through their selective expression of CD133 (7,8) or their ability to exclude Hoechst 33342 (9,10). The existence of a brain tumor-initiating cell phenotype with stem cell features may lead in the future to potential therapeutic strategies based on enforced stem cell differentiation aimed at decreasing brain tumor-initiating ability (8,34). In this context, a strong correlation between poor glioma prognosis and the expression of a signature of neurogenesis-related genes has been reported recently (23). Similarly, radial glial progenitor cells constitute the putative origin for ependymoma (35), and neural stem cells may cause cerebellar tumors (34). Finally, brain stem-like cells have been shown to reproduce a brain tumor phenotype in a more reliable manner than differentiated transformed cells (36). However, it should be kept in mind that, in addition to much evidence supporting a role for stem-like cells in gliomagenesis and tumor biology (3), new studies are required to provide definitive proof of the concept (37).
In particular, a better understanding of the molecular mechanisms responsible for the transformation of normal neural stem cells (38,39) and dedifferentiation of neural cells (32,33) and of the alternative origins of GSCs (40) is required. Cannabinoids are known to exert an antitumoral action against gliomas (15)(16)(17), an effect that has been extended to a variety of tumors of different origins (12). Our new data support that, in addition to inducing apoptosis of differentiated transformed cells (12), cannabinoids promote differentiation of GSCs and inhibit tumor initiation. These observations are in line with recent findings demonstrating that cannabinoid stimulation promotes differentiation of non-transformed adult neural progenitors (18,41). Moreover, the expression of CB receptors by brain tumor-initiating cells may reflect their normal developmentally regulated expression pattern by non-transformed neural stem cells such as subventricular zone progenitors (42) and cortical radial progenitors and hippocampal nestin type I cells (19,41). These findings are thus related to the link between the normal neural stem cell compartment and tumor development (2,3). In conclusion, our results demonstrate the action of cannabinoids | 2018-04-03T04:35:44.491Z | 2007-03-02T00:00:00.000 | {
"year": 2007,
"sha1": "78d728c29149c4ec2abf09255cda0885d4ee97eb",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/282/9/6854.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "d9338f8bc38c02f7a075d655609fddf2775be5ee",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
243974310 | pes2o/s2orc | v3-fos-license | Rosemary Essential Oils as a Promising Source of Bioactive Compounds: Chemical Composition, Thermal Properties, Biological Activity, and Gastronomical Perspectives
Rosemary (Rosmarinus officinalis L.) is a plant cultivated worldwide mainly for its essential oils and extracts and as a spice. Up-to-date results show diversity in the composition of the essential oils, which may influence their quality, biological activity, and thermal properties. Therefore, the aim of this study was to investigate the chemical composition, antimicrobial activity, and thermal properties of rosemary essential oils originating from Serbia and Russia. Additionally, the oils were added to sunflower oil to investigate their possible antioxidant activity during frying. Investigation of the chemical profile identified α-pinene, eucalyptol, and camphor as the most abundant compounds in both oils. However, the differences in overall composition were such that the Russian oil showed significantly higher antimicrobial activity, while the Serbian oil proved to be the better antioxidant during frying of sunflower oil. These differences strongly affect the possible applications of the oils, which could be used as antioxidant agents to extend food shelf life or as antimicrobial agents against different microbial strains.
Introduction
Rosmarinus officinalis L. (rosemary) is a plant of the Lamiaceae family, genus Rosmarinus L. [1,2]. It is cultivated worldwide for its essential oils and extracts, as a spice, and for its diverse biological activities [3]. The essential oils of this plant possess many pharmacological properties [2]. Regarding the chemical profile, there are differences related to region, season, environmental and agronomic conditions, and varieties of rosemary itself [1][2][3][4]. In most cases, α-pinene, eucalyptol, and camphor are the major compounds in rosemary essential oil [1,2,4,5]. However, other compounds such as
GC/MS Analysis
Analysis of the essential oil (EO) samples was performed with an ion-trap GC-MS (Thermo Fisher, MA, USA), using a previously described method [17,18]. A TR WAX-MS (30 m × 0.25 mm, 0.25 µm) capillary column was used; samples were dissolved in methylene chloride and injected into the GC (2 µL) via a TriPlus AS autosampler. The temperature program was as follows: initial temperature 45 °C (held 8 min), then ramped at 8.0 °C/min to 230 °C (held 10 min). The carrier gas was helium (1 mL/min), and the injector was operated in split mode (80:1). The injector, MS transfer line, and ion source temperatures were 250 °C, 200 °C, and 220 °C, respectively. Data acquisition was conducted in the m/z range of 30-300. Compounds were identified by combining the NIST 08 MS database and the MS spectra of analyzed standards (matching factors higher than 850). Results were first expressed as relative percentages (%). Quantitative analysis was performed by creating calibration curves for each analyzed compound in the concentration range of 1.0-500.0 µg/mL. The final content of each compound was expressed as milligrams per gram of EO (mg/g EO).
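The external-calibration quantification step can be sketched in a few lines. The calibration points, peak areas, and injected EO concentration below are hypothetical placeholders for illustration, not values from this study:

```python
import numpy as np

def calibrate(concs_ug_ml, areas):
    """Fit a linear external-calibration curve (area = a*conc + b)
    by least squares and return the fit coefficients."""
    a, b = np.polyfit(concs_ug_ml, areas, deg=1)
    return a, b

def quantify_mg_per_g(area, a, b, inj_conc_mg_ml):
    """Convert a peak area to mg analyte per g of essential oil.

    inj_conc_mg_ml: concentration of the injected EO solution (mg EO/mL).
    """
    conc_ug_ml = (area - b) / a            # analyte conc. in the injected solution
    return conc_ug_ml / inj_conc_mg_ml     # (ug/mL) / (mg EO/mL) = ug/mg = mg/g

# Hypothetical calibration points, spanning the 1.0-500.0 ug/mL range
concs = np.array([1.0, 10.0, 50.0, 100.0, 500.0])
areas = np.array([1.2e4, 1.2e5, 6.0e5, 1.2e6, 6.0e6])
a, b = calibrate(concs, areas)
print(quantify_mg_per_g(6.0e5, a, b, inj_conc_mg_ml=1.0))  # ~50 mg/g
```

The same two-step pattern (fit once per compound, then invert the curve for each sample) applies to every compound quantified against its own standards.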
Contents of Major and Trace Elements
Digestion of the essential oil samples was performed on a microwave digestion system (Advanced Microwave Digestion System, Ethos 1, Milestone, Italy) equipped with an HPR-1000/10S high-pressure segmented rotor. About 0.5 g of sample was weighed (accuracy ± 0.1 mg) into quartz inserts and mixed with 5 mL of HNO3 (65 wt.%, Suprapur®, Merck KGaA, Darmstadt, Germany). The temperature program of the microwave oven was as follows: ramping to 180 °C over 15 min, holding for 20 min, followed by rapid cooling to room temperature. The obtained solution was diluted with ultrapure water to 25 mL in a volumetric flask. The presence and content of elements and minerals in the samples were determined by ICP-OES (iCAP 6500 Duo ICP, Thermo Fisher Scientific, Cambridge, UK). Element concentrations were expressed in mg/kg.
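The back-calculation from the ICP-OES reading of the diluted digest to the element content of the oil follows directly from the 0.5 g sample mass and the 25 mL flask volume given above; a minimal sketch of that arithmetic:

```python
def icp_mg_per_kg(reading_mg_per_l, flask_ml=25.0, sample_g=0.5):
    """Back-calculate element content of the oil (mg/kg) from the
    ICP-OES reading of the diluted digest (mg/L)."""
    mass_mg = reading_mg_per_l * flask_ml / 1000.0   # mg of element in the flask
    return mass_mg / (sample_g / 1000.0)             # per kg of oil sample

# With 0.5 g digested to 25 mL, a digest reading of 2.0 mg/L
# corresponds to 100 mg/kg in the oil (dilution factor of 50)
print(icp_mg_per_kg(2.0))  # 100.0
```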
Thermal Analysis
A Q1000 differential scanning calorimeter and a Q500 thermogravimetric analyzer (TA Instruments, New Castle, DE, USA) were used for thermal analysis of the Russian (RF) and Serbian (SRB) EOs. Acquired thermograms were analyzed using TA Advantage Universal Analysis 2000 software (version 5.5.24).
Thermal Characterization of EO
The temperature range of the DSC experiments was 0 to 350 °C. All experiments were conducted under an inert atmosphere (nitrogen) at a flow of 50 mL/min. Samples (3.0 ± 0.3 mg) were heated at a rate of 5 °C/min in hermetic Al pans. Thermogravimetric experiments were performed under non-isothermal and isothermal conditions. Sample masses were 10.0 ± 0.5 mg. These experiments were also conducted under an inert atmosphere (nitrogen at a flow of 60 mL/min). Under non-isothermal conditions, the samples were heated to 160 °C at a rate of 5 °C/min; under isothermal conditions, the samples were kept at 60 °C. Friedman's non-isothermal isoconversional method [19] was used to calculate the activation energy (Ea) of the evaporation of the tested EOs. Five heating rates were used for this purpose (2, 5, 10, 15, and 20 °C/min). The ICTAC Kinetics Committee recommendations for collecting kinetic data and performing kinetic computations [20] were followed in the kinetic studies.
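Friedman's isoconversional method regresses ln(dα/dt) against 1/T at a fixed conversion across the heating-rate runs; the slope equals −Ea/R. A minimal sketch with synthetic data (the temperatures and pre-exponential factor are illustrative, not measured values):

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def friedman_ea(temps_K, rates):
    """Friedman isoconversional estimate of Ea at one fixed conversion alpha.

    temps_K: temperature at which alpha is reached in each heating-rate run
    rates:   the instantaneous rate dalpha/dt measured in each run
    Returns Ea in kJ/mol from the slope of ln(dalpha/dt) vs 1/T."""
    x = 1.0 / np.asarray(temps_K)
    y = np.log(np.asarray(rates))
    slope, _intercept = np.polyfit(x, y, 1)
    return -slope * R / 1000.0

# Synthetic check: rates generated with Ea = 60 kJ/mol are recovered exactly
Ea_true = 60e3
T = np.array([380.0, 395.0, 410.0, 425.0, 440.0])   # one T per heating rate
rates = 1e5 * np.exp(-Ea_true / (R * T))
print(friedman_ea(T, rates))  # ~60.0 kJ/mol
```

Repeating this regression at a grid of α values yields the Ea(α) curve discussed in the Results (Figure 2).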
Oxidative Stability of Sunflower Oil with Different Share of EO
To examine the effect of adding EOs on the oxidative stability of sunflower oil, five concentrations (0.1, 0.5, 1, 5, and 10% (w/w)) of RF and SRB EO in sunflower oil were prepared. Sunflower oil without added EO was used as a control sample. The oxidative stability of all sunflower oil samples was determined by measuring the oxidation induction time (OIT) using the DSC method [21], at 140 °C and under an oxygen flow of 50 mL/min. Open aluminum pans were used, and the sample mass was 3.0 ± 0.3 mg. OIT represents the time from the start of heating of the oil sample at a given isothermal temperature to the onset of its oxidation. Higher OIT values indicate that the analyzed oil sample is more oxidatively stable.
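The OIT read-out from an isothermal DSC trace can be illustrated with a simple onset criterion. Real instruments use a tangent-intersection construction, so the threshold rule below is only an assumption for illustration, and the trace is synthetic:

```python
import numpy as np

def oxidation_induction_time(time_min, heat_flow, baseline_n=20, k=10.0):
    """Estimate OIT (min) as the time at which the exothermic heat-flow
    signal first departs from the initial isothermal baseline.

    A threshold of baseline mean + k standard deviations stands in for the
    tangent-intersection construction used in practice (an assumption)."""
    hf = np.asarray(heat_flow, dtype=float)
    base = hf[:baseline_n]
    limit = base.mean() + k * base.std()
    idx = int(np.argmax(hf > limit))   # first index above the threshold
    return float(time_min[idx])

# Synthetic trace: flat, slightly wavy baseline, then an exotherm at ~15 min
t = np.arange(0.0, 30.0, 0.1)
hf = 0.01 * np.sin(t) + np.where(t > 15.0, (t - 15.0) * 0.5, 0.0)
print(oxidation_induction_time(t, hf))  # ~15.1 min
```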
Antimicrobial Activity of Samples
In this study, the antimicrobial activity of the tested R. officinalis EOs was assessed against four bacteria, two Gram-negative (E. coli (ATCC 25922) and P. aeruginosa (ATCC 27853)) and two Gram-positive (B. cereus (ATCC 11778) and S. aureus (ATCC 25923)). Moreover, the antimicrobial potential of the selected oils on eukaryotic cells was examined using S. cerevisiae (ATCC 9763) and A. brasiliensis (ATCC 16404). All strains were obtained from the American Type Culture Collection, and the cultures were kept frozen at −80 °C in cryovials with the addition of glycerol as a cryoprotectant.
Two methods were employed to assess the antimicrobial activity of the R. officinalis EOs: the disc diffusion method and the microdilution method for determination of the minimum inhibitory concentration (MIC). Both methods have been described in detail previously [22,23].
Bacterial strains were grown on Müller-Hinton agar (HiMedia, Mumbai, India) at 37 °C for 24 h, or at 30 °C for 18 h in the case of Bacillus cereus ATCC 11778. Yeast strains were grown on Sabouraud maltose agar (HiMedia, Mumbai, India) at 25 °C (Saccharomyces cerevisiae ATCC 9763) or at 37 °C (Candida albicans ATCC 10231) for 48 h. Cells were suspended in a sterile 0.9% NaCl solution, and the suspensions were adjusted to a concentration of 1 × 10^6 cfu/mL (estimated by DensiChek; BioMérieux, Marcy-l'Étoile, France). Afterwards, 2 mL of the prepared suspension for inoculation was homogenized with 18 mL of melted (45 °C) medium (the same as used for suspension preparation) and poured into Petri dishes. After solidification, four sterile discs (6 mm in diameter) (HiMedia, Mumbai, India) were placed onto the inoculated agar plates. The discs were impregnated with 15 µL of the EO dissolved in dimethyl sulfoxide (50 mg/mL). Dimethyl sulfoxide was used as the negative control, while chloramphenicol, tetracycline, and actidione were used as positive controls. After the incubation period, the diameter of the inhibition halo zone was measured for each disc using a HiAntibiotic Zone Scale™ (HiMedia, Mumbai, India). Each experiment was performed in triplicate (n = 3).
Minimal inhibitory concentrations were assessed using the microdilution method in sterile flat-bottom 96-well microtiter plates. The suspensions for inoculation were prepared as described above for the disc diffusion method. One milliliter of the prepared suspension (1 × 10^6 cfu/mL) was homogenized with 9 mL of Müller-Hinton broth (HiMedia, Mumbai, India). To obtain the final concentration in each well (n = 3), 100 µL of inoculated medium was mixed with 100 µL of the EO dilutions. Each test microtiter plate included a positive control (inoculated medium without EO) and a negative control (100 µL of medium mixed with 100 µL of EO dilution). All test plates were incubated for 24 h at 37 °C, or at 30 °C for the Bacillus strains. Afterwards, a 100 µL aliquot was poured into Petri dishes and homogenized with Plate Count agar (HiMedia, Mumbai, India). The Petri dishes were incubated under the same conditions as the microtiter plates, and the colonies were enumerated by viable count after the incubation period.
Minimal inhibitory concentration (MIC) is defined as the lowest concentration of an antimicrobial agent that, under defined in vitro conditions, prevents the appearance of visible growth of a microorganism within a defined period of time. Growth inhibition is calculated as 100 × (Nc − Nt)/Nc (%), where Nc and Nt are the cell counts of the positive control and the treatment, respectively; the MIC corresponds to the lowest tested concentration at which growth is inhibited.
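The inhibition formula and the MIC read-out can be sketched as follows; the viable counts and the 90% inhibition cut-off are hypothetical illustrations, not data from this study:

```python
def inhibition_pct(n_control, n_treated):
    """Growth inhibition relative to the positive control, in percent:
    100 * (Nc - Nt) / Nc."""
    return 100.0 * (n_control - n_treated) / n_control

def mic(counts_by_conc, n_control, threshold=90.0):
    """Lowest tested concentration whose inhibition meets the threshold.

    counts_by_conc: {concentration: viable count after treatment}
    threshold: inhibition cut-off in % (an assumption for illustration).
    Returns None if no tested concentration reaches the threshold."""
    for conc in sorted(counts_by_conc):
        if inhibition_pct(n_control, counts_by_conc[conc]) >= threshold:
            return conc
    return None

# Hypothetical dilution series; control grows to 1e6 cfu/mL
counts = {6.3: 2.0e5, 12.5: 5.0e4, 25.0: 1.0e3, 50.0: 0.0}
print(mic(counts, n_control=1.0e6))  # 12.5
```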
Statistical Analysis
All measurements in this study were performed in triplicate. A t-test and analysis of variance (ANOVA) followed by Tukey's HSD test (p < 0.05) were used for the statistical analysis. For the OIT results, we analyzed whether the added EO had an antioxidative or prooxidative effect compared to sunflower oil without EO, and how the concentration and the type of the added EO affected the OIT values. All samples with a statistically significantly higher or lower OIT value (antioxidative and prooxidative effect, respectively) than pure sunflower oil were marked with an asterisk in superscript. Different uppercase letters for the same EO indicate a significant difference in OIT depending on the concentration of added EO. Different lowercase letters at the same concentration of added EO indicate a significant difference in OIT depending on the type of EO. XLSTAT (version 2014.5.03, Addinsoft, New York, NY, USA) and a statistics add-in for MS Excel were used to perform the above-mentioned statistical calculations.
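The comparison of triplicate means described above can be reproduced with standard routines. The OIT values below are hypothetical, and scipy's one-way ANOVA and t-test stand in for the XLSTAT workflow (the pairwise Tukey HSD step, e.g. statsmodels' `pairwise_tukeyhsd`, would follow the ANOVA):

```python
from scipy import stats

# Hypothetical OIT triplicates (min): sunflower oil alone and with 1% of each EO
oit_control = [20.1, 19.8, 20.3]
oit_srb_1pct = [21.4, 21.9, 21.6]
oit_rf_1pct = [19.9, 20.2, 20.0]

# One-way ANOVA across the three groups
f_stat, p_anova = stats.f_oneway(oit_control, oit_srb_1pct, oit_rf_1pct)

# Pairwise t-test of one EO sample against the control
t_stat, p_t = stats.ttest_ind(oit_control, oit_srb_1pct)

print(f"ANOVA p = {p_anova:.4f}, control vs SRB 1% p = {p_t:.4f}")
```

With these illustrative numbers the SRB group differs clearly from the control, so both p-values fall below 0.05.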
Chemical Profiles of Essential Oils
Both the Serbian (SRB) and Russian (RF) rosemary oils were analyzed to assess their chemical profile and the composition of terpenes, minerals, and elements. The results of the GC/MS analysis are given in Table 1, and the chromatograms in Figure S1 (Supplementary Data).
The values are presented as mean ± SD; different superscripts within the same row indicate significant differences of means according to the t-test (p < 0.05). * N.D., not detected; "-", not determined.
The results showed that three compounds predominated in both essential oils: α-pinene (23.00% and 17.76% in the SRB and RF oils, respectively), eucalyptol (17.79% and 23.40%), and camphor (14.39% and 17.17%). Both eucalyptol and camphor were found at higher percentages in the RF oil than in the SRB oil. However, quantification showed that all three compounds were present in higher absolute amounts in the SRB oil. Besides these, several other compounds were detected in appreciable amounts: camphene, β-pinene, limonene, p-cymene, borneol, bornyl acetate, and trans-β-caryophyllene.
The results also revealed certain diversity in chemical profile between the samples. Thus, α-terpineol and terpinen-4-ol were found only in the SRB sample, as were several other compounds, such as γ-terpinene, α- and β-phellandrene, and terpinolene. On the other hand, m-cymene, α-pinane oxide, pinocarvone, carvone, and several other compounds were detected only in the RF sample (Table 1). This diversity in profile is expected to influence the oils' behavior and biological activity.
Previously reported studies on the chemical composition of rosemary essential oils have likewise shown certain diversity in chemical profile and composition. Pellegrini et al. (2018) found camphor to be the principal compound (22.07%), followed by α-pinene (16.64%), eucalyptol (15.71%), and borneol (11.99%) [5]. Interestingly, those authors reported the absence of limonene in their sample, while borneol was significantly higher than in our samples. An investigation of seasonal diversity in the composition of rosemary oil showed changes in the content and overall profile of the analyzed samples [2]. Despite these changes, camphor was reported as the principal compound (24.38-35.93%), followed by eucalyptol (19.26-22.68%) and myrcene (9.55-15.25%). Similar results were obtained by Zaouali et al. [1], but with eucalyptol as the principal compound in most cases. Bajalan et al. (2007) investigated the composition of rosemary oils isolated from seven Iranian populations of the plant and, despite the differences in the EO sources, confirmed the prevalence of camphor, eucalyptol, and α-pinene [4]. The same was true for the study by Jordan et al. (2013), which investigated the influence of phenological stage on the chemical composition of rosemary essential oil; the major compounds were again α-pinene (13.0-15.5%), eucalyptol (18.9-21.2%), and camphor (17.0-18.6%) [3]. Bousbia et al. (2009) applied two different approaches for the isolation of the essential oil, hydrodistillation and microwave hydrodiffusion and gravity, and compared the chemical profiles of the obtained samples. The two profiles were similar to each other, with α-pinene as the principal compound, followed by camphor and verbenone. However, the authors did not report the presence of eucalyptol, one of the main compounds in our samples [8]. Karakaya et al. (2014) also investigated the effects of different extraction techniques (hydrodistillation and microwave-assisted hydrodistillation) on the composition of rosemary oil. They found eucalyptol to be the principal compound, followed by camphor, α-pinene, borneol, and camphene [7]. A study of a commercial essential oil [6] likewise reported camphor as the main compound (35.5%), followed by eucalyptol (18.2%). Surprisingly, the authors reported a rather high content of bornyl acetate (13.4%) and a lower content of α-pinene (4.9%) compared to the results of this study (Table 1).
The element and mineral contents are given in Table 2. SRB was rich in Fe, Ca, Na, and S, while RF was rich in Ca, Na, and S; comparing these elements, SRB had the higher contents. Arsenic was not found in either sample, while Co was found only in the SRB sample, at a trace level (0.032 mg/kg). Furthermore, Cd and Pb were detected only at trace levels, which makes these oils safe to use in the diet or as a supplement.
Several classifications of elements are available. One of them divides elements into four major groups: essential, beneficial, contaminating, and polluting [26]. According to Stephanos and Addison, the essential elements are certain nonmetals (C, H, O, N, S, P, Cl, and I), alkali and alkaline-earth metals (Na, K, Mg, and Ca), and transition elements such as Fe, Zn, Mn, Cu, Co, and Mo. The beneficial group contains various nonmetals, metalloids, and metals (F, Br, Se, Si, Sn, V, Cr, and Ni). The polluting elements are Hg, Cd, and Pb. The presence of certain elements, such as Fe, Ca, Cr, and Mg, is essential for the nutritive value of the essential oils. Bulk elements are necessary for the proper functioning of the organism and should be taken in daily. Iron is an essential microelement necessary for hemoglobin and myoglobin synthesis; it is also essential for cytochromes and some other enzymes, and its deficiency is known as anemia. Several types of enzymes, including hydrolases, peptidases, and oxidases, require zinc for proper functioning; this element also plays a significant role in gene expression and fold stabilization through zinc fingers [27]. Copper likewise has an important role in metabolism, namely in the electron transfer processes of Type III heme-copper oxidases and Type I blue-copper proteins [28]. All of these elements should be ingested daily. Because of their importance, daily intake levels are defined in the dietary reference intakes (DRI) created by the US Department of Agriculture [29]. According to the DRI, the daily intakes of Na, K, and Ca are measured in grams, while the intake of Mg is in milligrams. The daily intake of phosphorus is 1.25 g/day for both males and females up to 18 years of age; after this age, the recommended intake is lower (700 mg/day). Iron, zinc, and manganese should also be taken at milligram-per-day levels, while copper and chromium should be ingested at microgram-per-day levels [29].
The values are presented as mean ± SD; different superscripts within the same row indicate significant differences of means according to the t-test (p < 0.05). * N.D., not detected.
Antimicrobial Activity of Essential Oils
The next step of this study was to investigate whether the variation in chemical composition and geographical origin of the tested R. officinalis EOs affects their antimicrobial potential. Preliminary screening of the in vitro antimicrobial activity was performed by the disc diffusion method. According to the obtained results (Table 3), RF showed far greater antimicrobial potential than SRB. For RF, the maximum inhibition zone of 40.00 mm was registered for all tested microorganisms except P. aeruginosa (21.33 mm) and A. brasiliensis (33.00 mm), for which the activity can be rated as moderate to high. Additionally, it should be pointed out that the inhibition zones of RF were even larger than those of the positive controls (chloramphenicol 30 µg/disc, tetracycline 30 µg/disc, and actidione 30 µg/disc), indicating the potential of RF as a natural ingredient for combating microbial resistance to antibiotics and antimycotics. High antimicrobial activity (above 30.00 mm) was also observed for SRB against E. coli, S. aureus, and S. cerevisiae. However, this oil showed low to moderate activity against the other tested microorganisms. According to relevant research in this area, rosemary EO usually demonstrates moderate activity against such sets of microorganisms [1,2,6], which is consistent with the results obtained for SRB. To the best of our knowledge, antimicrobial performance of rosemary EO as high as that of RF has not been previously reported. In the available literature, there are opposing opinions about the carriers of the antimicrobial activity in EOs. Some authors attribute the antimicrobial properties to the dominant chemical components, such as camphor, eucalyptol, and α-pinene [1,30].
On the other hand, there are studies that emphasize the importance of minor components in EOs, as well as the synergistic effect between terpenoid and phenolic compounds, which may disrupt the cellular membrane and inhibit cell respiration and ion transport [3,31]. Table 3. Antimicrobial activity of R. officinalis EO from Serbia (SRB) and Russia (RF) (mean diameter of the inhibition zone (mm), including the 6 mm disc, ± standard deviation). Values are presented as mean ± standard deviation (n = 3); different lowercase superscripts within the same row indicate a significant difference of means according to Tukey's honest significant difference (HSD) test (p < 0.05). CHL, chloramphenicol; TET, tetracycline; DMSO, dimethyl sulfoxide. * nd, not detected.
Regarding the composition-activity relationship, i.e., structure-activity dependence, it has been shown that isomerism does not influence the antimicrobial activity, and that the position of the functional group in itself does not affect it either. However, the presence of a hydroxyl group in the structure has a significant impact on the antimicrobial activity [32]; alcohols are therefore more active than aldehydes [33,34]. Furthermore, terpinen-4-ol has proved to be a more potent antimicrobial agent than α-terpineol, the explanation being its capacity for hydrogen bonding: the position of the OH group in terpinen-4-ol increases its ability to form hydrogen bonds [32]. The cyclic monoterpenes β-pinene and limonene also show significant activity. Thus, β-pinene affects respiration and causes leakage of potassium and hydrogen ions in yeasts [35], while both compounds inhibit energy-dependent processes, such as respiration, in S. cerevisiae [36]. Certain terpenes, e.g., α- and β-pinene, γ-terpinene, and limonene, induce structural and functional changes in the membrane [37]. Previous results have indicated that properties such as hydrophobicity and lipophilicity also significantly influence antimicrobial potency, as they allow compounds to penetrate the membrane and consequently change its fluidity, permeability, protein properties, etc. [38]. There are also reports that mixtures of two or more terpenes show higher activity than each one separately [39]. Therefore, synergistic effects should also be taken into account when comparing the activity of these oils, especially because their chemical compositions differ, i.e., different compounds were found in SRB and RF. For example, p-cymene has been reported to increase the antimicrobial activity of other compounds [34].
This, besides synergy, could be one possible explanation for the significantly higher activity of RF compared to the SRB oil.
Besides the chemical composition, some papers confirm the influence of geographical origin [3,4], seasonal variation [2], and rosemary variety [1,4] on the antimicrobial performance of rosemary EO.
After the satisfactory results of the preliminary examination by the disc diffusion method, the microdilution method was applied to investigate the antimicrobial activity further. From the results presented in Table 4, SRB showed good antimicrobial activity against bacteria (MIC ≤ 50%), while against the eukaryotic microorganisms its activity was moderate (MIC > 50%). The moderate activity against eukaryotic organisms can be attributed to their more complex cell structure [40]. In contrast, a very low MIC (below 6.3%) of RF was noted for all selected microorganisms. The obtained results, indicating high antimicrobial activity of SRB and RF, correlate well with previously reported studies [2,3]. Such strong antimicrobial performance of the tested EOs may support their use in reducing foodborne pathogens and extending the shelf life of food products, or as potential natural, green replacements for synthetic antibiotics, antimycotics, and preservatives in the food and cosmetics industries.
Thermal Properties
After the initial evaluation (chemical composition and antimicrobial activity), the next step was to determine the thermal properties. These are important data, because the application of these oils depends on their stability and evaporation. The obtained results are listed in Table 5, and the corresponding curves are shown in Figure 1a. The shapes of the curves were almost identical for both EOs, indicating that the analyzed EOs have similar thermal characteristics. This is expected, given that the most prevalent components were the same in both EOs (Table 1). One wide endothermic peak, in the range of about 150 to 250 °C for SRB and about 170 to 260 °C for RF, appeared on both DSC curves, corresponding to the process of evaporation [17,22]. The main step of this process (the temperature range from Ton to Toff) was wider for the SRB EO than for the RF EO, which was expected, because more components were detected in SRB by the GC-MS analysis. The evaporation process in the SRB EO began at a lower temperature than in the RF EO (Ton,SRB < Ton,RF, p < 0.05), and in both EOs it ended at approximately the same temperature. The boiling temperatures of the predominant components of both samples range from about 166 to 264 °C [41], in accordance with the temperature range of the evaporation process determined by DSC.
One mass loss and one corresponding peak were detected on the TG and differential TG (DTG) curves for both samples, indicating that the evaporation occurred in one step (Figure 1b). Both EOs evaporated almost completely by about 120 °C, leaving a residue of 1.5 to 2%. The peak temperature (Tp) on the DTG curve represents the temperature at which the evaporation process is fastest. Tp of the SRB EO was lower than Tp of the RF EO (Table 5, p < 0.05), indicating that the evaporation process in the SRB EO reached its maximum rate at a lower temperature. This is consistent with the DSC result that evaporation in the SRB EO begins at a lower temperature than in the RF EO. Table 5. Results of thermal analysis (DSC and TGA) of the Russian and Serbian rosemary essential oils.
The thermal characteristics of the analyzed EOs were also examined under isothermal conditions at 60 °C (Figure 1c). The EOs showed almost identical thermal properties under these conditions, too. About 35% of both EOs evaporated by the time the isothermal conditions were reached (about 2.5 min), and both EOs evaporated completely in about 25 min (the extent of conversion, α, reached a value of 1). The rate of the evaporation process under isothermal conditions (dα/dt) was maximal at the beginning and decreased with time, indicating a decelerating type of kinetic model for the evaporation process [42]. The explanation for this behavior is that the more volatile components evaporate first; as they leave the system, the less volatile compounds remain in the EO, causing the evaporation rate to decrease over time. This is in accordance with literature data for laurel, sage, and coriander EOs, whose evaporation also follows a decelerating kinetic model [17,22].
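The conversion curve α(t) is obtained directly from the TG mass record as α = (m0 − m)/(m0 − mf). A minimal sketch with a hypothetical isothermal mass-loss trace, whose per-step change in α decreases monotonically (i.e., decelerating kinetics):

```python
import numpy as np

def extent_of_conversion(mass, m_final=None):
    """Extent of conversion alpha(t) from a TG mass-loss record.

    mass: recorded sample mass over time (first point = initial mass).
    m_final: residue mass; defaults to the last recorded point."""
    mass = np.asarray(mass, dtype=float)
    m0 = mass[0]
    mf = mass[-1] if m_final is None else m_final
    return (m0 - mass) / (m0 - mf)

# Hypothetical isothermal TG trace (mg): evaporation slows with time
m = [10.0, 6.5, 4.0, 2.5, 1.5, 1.0, 0.8, 0.8]
alpha = extent_of_conversion(m)
rate = np.diff(alpha)                      # per-step change in alpha
print(alpha[0], alpha[-1])                 # 0.0 1.0
print(bool(np.all(np.diff(rate) <= 0)))    # rate keeps decreasing: True
```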
The activation energies (Ea) of evaporation obtained by the Friedman method [19] ranged from 52.5 to 72.6 kJ/mol for the RF EO and from 53.8 to 67.9 kJ/mol for the SRB EO. In the literature, the activation energy of evaporation of pure substances is associated with the enthalpy of vaporization [43,44]. The evaporation enthalpies of the most prevalent compounds range from 37.9 to 52.8 kJ/mol for both oils [41], slightly lower than the experimentally obtained activation energies of the essential oils' evaporation. However, it should be kept in mind that essential oils are complex systems consisting of dozens of components, which can interact physically and thereby affect one another's evaporation. Such interactions can therefore shift the activation energy of the essential oil's evaporation away from what would essentially be the average of the evaporation enthalpies of its individual components. The Ea values did not vary significantly with the increase in the extent of conversion (α) for either EO (Figure 2), implying that the evaporation is a single-step process, consistent with the non-isothermal TGA results. The average activation energies of the tested EOs, 57.5 ± 5.4 kJ/mol for RF and 56.9 ± 3.4 kJ/mol for SRB, were not significantly different from each other, which is a further confirmation that these oils have very similar thermal properties.
The Effect of Added RF and SRB EO on the Oxidative Stability of Sunflower Oil
Finally, the investigated essential oils were added to sunflower oil at different percentages to examine the possibility of using EOs as antioxidant agents during the frying process. The effect of the essential oils on the oxidative stability of sunflower oil was investigated by determining the oxidation induction time (OIT) using DSC under isothermal conditions at 140 °C. DSC is a suitable technique for this purpose because it simulates the real conditions under which edible oils are used at high temperatures during the heat treatment of foods. The DSC curves of the oxidation process for pure sunflower oil and the oil samples containing 1% EO are shown in Figure 3. The effect of five different concentrations (0.1, 0.5, 1, 5, and 10% (w/w)) was examined, and the obtained results are shown in Table 6. At concentrations of 0.1, 0.5, and 1%, RF did not have a significant effect on the OIT values of sunflower oil (p < 0.05). Values are presented as mean ± standard deviation (n = 3); different uppercase superscripts within the same column indicate a significant difference of means, different lowercase superscripts within the same row indicate a significant difference of means, and an asterisk (*) indicates a significant difference of means between a sample and sunflower oil without essential oil, according to Tukey's honest significant difference (HSD) test (p < 0.05).
Concentrations of 5 and 10% significantly reduced the OIT value compared to pure sunflower oil (p < 0.05). The OIT value was reduced about 1.7 times by adding 5% of RF EO and as much as 4 times by adding 10% of RF EO, indicating that increasing the concentration of RF EO significantly reduced the quality of sunflower oil in terms of oxidative stability. In the case of the SRB EO, concentrations of 0.1, 5, and 10% had no significant effect on the OIT values compared to pure sunflower oil, while concentrations of 0.5 and 1% slightly increased the OIT value (p < 0.05), indicating that they improved the oxidative stability of the sunflower oil. Based on these results, it can be concluded that adding SRB EO at an appropriate concentration can improve the oxidative stability of sunflower oil and thus its quality, whereas RF EO at low concentrations does not affect the oxidative stability and at higher concentrations can significantly impair it. The reason for these differing effects could be the presence of different minor components in the analyzed EOs, since RF and SRB have similar contents of the predominant components.
A positive effect of rosemary essential oil on the stability of hazelnut and poppy oils has previously been reported [45], as has a high protective activity of rosemary oil against oxidation in sunflower oil [46]. However, those authors assessed the antioxidant effect by periodic determination of the peroxide value, a simple volumetric (titration) method, after exposing the samples at 50 °C. Here, the investigation was performed at 140 °C, which is more suitable because it simulates the cooking processes that involve sunflower oil, conditions under which antioxidant activity is particularly important. It has been reported that carvone, myrcene and γ-terpinene scavenge DPPH radicals very quickly, and that terpenes with conjugated double bonds have very high antioxidant potency [47]. Such compounds were detected in both oils, but SRB showed better activity, presumably because of its higher content of these compounds.
Conclusions
Investigation of the influence of geographical origin on the chemical profile of rosemary essential oil revealed discrepancies in both the composition and the content of the identified compounds in the analyzed samples. Although the same compounds were the most abundant in both oils (α-pinene, eucalyptol and camphor), each oil contained specific compounds that could be detected in only one sample and not in the other. This diversity significantly influenced the properties of the oils: the Russian oil (RF) showed significantly higher antimicrobial activity against all tested strains, whereas, when the oils were added to sunflower oil, the Serbian oil (SRB) proved to be the more potent antioxidant agent, while RF did not affect stability at minor concentrations but decreased the oxidative stability of sunflower oil at higher concentrations. The application of an essential oil therefore depends strongly on its chemical composition, and oils have to be properly investigated before a field of application is chosen. Nevertheless, the results presented herein show the high potential of rosemary as an antioxidant and antimicrobial agent. This implies possible applications for different purposes, such as a natural preservative agent to be used instead of artificial agents that may be harmful to human health.
Conflicts of Interest:
The authors declare no conflict of interest.
Contralateral cerebello-thalamo-cortical pathways with prominent involvement of associative areas in humans in vivo
In addition to motor functions, it has become clear that in humans the cerebellum also plays a significant role in cognition, through connections with associative areas of the cerebral cortex. Classical anatomy indicates that neo-cerebellar regions are connected with the contralateral cerebral cortex through the dentate nucleus, superior cerebellar peduncle, red nucleus and ventrolateral anterior nucleus of the thalamus. The anatomical existence of these connections has been demonstrated ex vivo using retrograde virus transport techniques in monkeys and rats. In this study, using advanced diffusion MRI tractography, we show that it is possible to calculate streamlines to reconstruct the pathway connecting the cerebellar cortex with the contralateral cerebral cortex in humans in vivo. Corresponding areas of the cerebellar and cerebral cortex encompassed a similar proportion (about 80 %) of the tract, suggesting that the majority of streamlines passing through the superior cerebellar peduncle connect the cerebellar hemispheres, through the ventrolateral thalamus, with contralateral associative areas. This result demonstrates that this kind of tractography is a useful tool for mapping connections between the cerebellum and the cerebral cortex and, moreover, could be used to support specific theories about abnormal communication along these pathways in cognitive dysfunctions in pathologies ranging from dyslexia to autism.
Introduction
The cerebellum is a brain structure forming complex large-scale connections, whose integrative functions are still poorly understood. Besides a well-known role in motor learning and control (Holmes 1939; Evarts and Thach 1969), recent works have demonstrated a crucial role of the cerebellum in a number of other functions including cognition (Middleton and Strick 1994; Schmahmann and Caplan 2006). Tract-tracing and functional investigations in both non-human primates and humans have shown projections from the dentate nucleus of the cerebellum to prefrontal and posterior parietal cortices via the thalamus, supporting the hypothesis of a significant role for the cerebellum in higher cognitive and emotional processes (Middleton and Strick 1994; Schmahmann and Pandya 1995; Kelly and Strick 2003; Ramnani 2006; Strick et al. 2009). However, evidence in humans is more limited than in non-human primates due to the technical challenges of assessing in vivo the long polysynaptic connections between the cerebellum and the cerebral cortex (Snider and Eldred 1952; Nandi et al. 2002).
Recent developments in MRI technology have enabled the study of anatomical cerebellar connections in vivo in humans using diffusion tensor imaging (DTI) and tractography (Habas and Cabanis 2007a, b; Jissendi et al. 2008; Doron et al. 2010; Anderson et al. 2011; Hyam et al. 2012). These techniques have already provided a visualization of afferent and efferent projections through the superior cerebellar peduncles (SCPs), the red nuclei (RN) and the thalamic projections to the cortex (Behrens et al. 2003a; Salamon et al. 2005). A major problem of these studies is that the diffusion tensor model has intrinsic limitations; in particular, it does not directly resolve crossing fibre structures (Alexander et al. 2001, 2002; Tuch et al. 2002; Jissendi et al. 2008; Tournier et al. 2011). The consequence is that tractography methods based on diffusion tensor (DT) properties allow only partial reconstruction of cerebellar white matter tracts, and therefore have limited capability to reveal complex anatomical cerebello-thalamo-cortical circuits (Salamon et al. 2007). Some investigations have used alternative techniques that overcome the intrinsic limitations of the DT model: diffusion spectrum imaging (DSI) (Wedeen et al. 2005) was used to study the intra-cerebellar connections in vivo in humans (Granziera et al. 2009), while multi-tensor reconstruction (Behrens et al. 2007) and constrained spherical deconvolution (CSD) (Tournier et al. 2007) were used to identify the dentate-rubro-thalamic pathway, originating from the dentate nucleus in the cerebellum and terminating in the contralateral ventrolateral (VL) and ventroanterior (VA) nuclei of the thalamus (Kwon et al. 2011; Van Baarsen et al. 2013; Akhlaghi et al. 2013).
However, no study has yet reconstructed the cerebello-thalamo-cortical pathway in a cohort of healthy subjects while respecting the predicted decussation that occurs just after the pathway exits the SCP and leads it to the contralateral thalamus and cerebral cortex.
In this paper, we used advanced diffusion imaging methods to reconstruct, in humans in vivo, the pathway connecting the cerebellar cortex to the contralateral cerebral cortex, passing through the SCP, the RN and the thalamus. Figure 1 shows a schematic view of the most important connections that we expect to find in the cerebello-cerebral circuit. While recognizing that tractography provides only indirect evidence of anatomical connectivity between regions and cannot distinguish between direct connections and pathways involving synapses (like the cerebello-thalamo-cortical pathway) (Catani et al. 2012;Jones et al. 2013), we aimed to assess the usefulness of tractography for investigations of such large-scale neural circuits. In particular, we aimed to ascertain whether (1) pathways connecting the cerebellar cortex with the contralateral cerebral cortex can be reconstructed from in vivo diffusion data; (2) there is a consistency of the tract involvement in cortical areas of the cerebrum and cerebellum with similar function or anatomical meaning; (3) the majority of streamlines passing through the SCP connects the cerebellar hemisphere with contralateral associative areas, as has been hypothesized based on the supposed parallel evolution of these two brain structures (Sultan 2002). Achieving these aims would support the hypothesis that the cerebellum takes part in central circuits involved in higher brain functions and cognitive processing (Habas et al. 2009;Krienen and Buckner 2009;Buckner et al. 2011), and underpin future studies of abnormal communication along these pathways, which could be implicated in pathologies recently shown to involve the cerebellum, such as dyslexia and autism (Schmahmann and Caplan 2006;D'Angelo and Casali 2013).
Materials and methods
In this paper, the reconstruction of the contralateral cerebello-thalamo-cortical pathway was achieved by combining two advanced diffusion techniques: tract reconstruction based on CSD, which can model multiple fibre populations within a voxel and is able to resolve the decussation of the trans-hemispheric connection, and super-resolution maps based on track-density imaging (TDI) (Calamante et al. 2010), which allowed accurate seed and target region placement. TDI maps improve resolution and white matter contrast compared with conventional DTI maps (such as mean diffusivity, MD, and fractional anisotropy, FA) and can be generated from high angular resolution diffusion imaging (HARDI) datasets (Calamante et al. 2010). After reconstruction of the cerebello-thalamo-cortical connections, a number of "tractography metrics" were defined in an attempt to quantify the pattern of the connections for specific cerebral and cerebellar cortical regions. Given the well-known challenges in quantifying connectivity based on tractography (Jones et al. 2013), we defined two simple metrics that, while imperfect, should nonetheless provide sufficiently robust evidence to support our conclusions. These are the proportion of each cortical region that is reached by the tractography algorithm and the proportion of the total cortical volume reached by the tractography algorithm that is contained within each cortical region; neither of these is expected to be overly influenced by streamline count. For completeness, we also report the streamline counts reaching each cortical region.
Subjects
The study was carried out on 15 right-handed healthy adults (7 males and 8 females; mean age 36.1 years and range 22-64 years) with no previous history of neurological symptoms. All participants gave written informed consent. The study protocol was approved by the local institutional research ethics committee.
MRI acquisition
All data were acquired on a Philips Achieva 3T MRI scanner (Philips Healthcare, Best, The Netherlands) using a 32-channel head coil. The HARDI scan consisted of a cardiac-gated SE echo-planar imaging (EPI) sequence acquired axial-oblique and aligned with the anterior commissure/posterior commissure line, for a total scan time of approximately 20 min. The imaging parameters were TR ≈ 24 s (depending on the cardiac rate), TE = 68 ms, SENSE factor = 3.1, acquisition matrix = 96 × 112, 2 mm isotropic voxels and 72 axial slices with no gap. The diffusion weighting was distributed along 61 optimized non-collinear directions with a b value of 1,200 s/mm² (Cook et al. 2007). For each set of diffusion-weighted data, 7 volumes with no diffusion weighting (b0) were acquired. For anatomical reference, a whole-brain high-resolution 3D sagittal T1-weighted (3DT1w) fast field echo (FFE) scan was acquired using the following parameters: TR = 6.9 ms, TE = 3.1 ms, TI = 824 ms, acquisition matrix = 256 × 256, 1 mm isotropic voxels, 180 sagittal slices, acquisition time 6 min 31 s.

Fig. 1 The most important connections in the cerebello-cortical circuit. Projections from the basal ganglia (through the subthalamic nucleus, STN) go mainly to the thalamic nuclei (VA/VL). The cerebellum sends its output through the superior cerebellar peduncle (SCP), the contralateral red nucleus (RN), and the VA/VL nuclei of the thalamus to various cerebral areas including the motor cortex (MC), the prefrontal cortex (PFC), the parietal cortex (PC), and the temporal cortex (TC). The decussation (d) of the cerebello-thalamo-cortical pathway is indicated by the yellow circle. Modified from D'Angelo and Casali (2013)

Diffusion analysis and fibre tracking

HARDI data were analysed using the FSL (FMRIB Software Library, http://fsl.fmrib.ox.ac.uk/fsl/fslwiki/) and MRtrix (http://www.brain.org.au/software/mrtrix/) software packages, following these steps:

1. Pre-processing: Eddy current correction and brain extraction (Smith 2006) were performed using FSL.

2. Structural-diffusion data alignment: The high-resolution 3DT1w volume was realigned to the diffusion data by inverting the full-affine transformation (12 degrees of freedom, FLIRT, FSL) (Jenkinson et al. 2002) from diffusion to high-resolution space.

3. Decussation realignment: For each participant, the 3DT1w volume in diffusion space (obtained in step 2) was realigned along the superior/inferior direction to the MNI-152 template using a rigid body transformation (6 degrees of freedom) with nearest neighbour interpolation. This transformation was chosen to align the decussation region between all subjects so that parameter values could be compared along the aligned tracts, while minimizing potential biases that could be introduced by non-linear registration of diffusion data; hence, we chose to perform the analysis in the individual subjects' space. The transformations were then applied to the diffusion-weighted data. This space is considered the subject's native space from this point onward, rather than the acquired space.

4. Whole-brain tractography: To generate the TDI maps, whole-brain tractography was performed with MRtrix using an algorithm that combines the CSD technique with probabilistic streamlines tractography; the relevant parameters were seed = whole brain, step size = 0.1 mm, maximum angle between steps = 10°, maximum harmonics order = 8, termination criteria: exit the brain or CSD fibre-orientation distribution amplitude <0.1. Streamlines were generated by randomly seeding throughout the whole brain until the desired total of 2.5 million streamlines had been selected.

5. TDI map: From the streamlines obtained in step 4, a TDI map was created as the total number of streamlines passing within each element of a user-defined super-resolution grid (Calamante et al. 2010); for this study, a 1-mm resolution grid was used.

6. Cerebello-thalamo-cortical pathways: Cerebello-thalamo-cortical pathways were reconstructed by combining the CSD algorithm with probabilistic tractography and by tracking the bundle passing through two regions of interest (ROIs) (Schmahmann et al. 1999; Habas and Cabanis 2007a, b; Kwon et al. 2011): the SCP and the contralateral RN. These pathways were reconstructed by randomly seeding streamlines throughout the SCP seed ROI (see step 7) until 3,000 streamlines were reconstructed. For clarity, from this point onward the word "tract" indicates the tractography reconstruction of the cerebello-thalamo-cortical connection. To compare this with the conventional diffusion tensor model, cerebello-thalamo-cortical pathways were also reconstructed using DTI-based deterministic streamline tractography by randomly seeding streamlines throughout the seed ROI defined by the SCP, using the following parameters: step size = 0.1 mm, maximum angle between steps = 4.5°, initial FA ≥0.2, termination criteria: exit the brain or FA <0.1; once again, a total of 3,000 streamlines were reconstructed. No contralateral target ROI was defined because with this approach tracts run only ipsilaterally.

7. Seed/target ROI placement: SCP and RN masks were placed using the high-resolution TDI images. The seed ROI was defined as a sphere with 2 mm radius centred on the SCP in each cerebellar hemisphere and was identified in the coronal plane, as described by Calamante et al. (2010), while the target ROI on the whole contralateral RN was recognized as a very hypointense region.

8. MNI normalization: 3DT1w images from all participants were normalized to the MNI-152 template using a non-linear registration algorithm with nearest neighbour interpolation from the FSL library (FNIRT) (Klein et al. 2009).

9. Atlas-diffusion data alignment: The atlas of Brodmann areas (BA) and of the cerebellum (SUIT) (Diedrichsen et al. 2009) was aligned to the native space of each subject by inverting the warping transformation obtained in step 8, to more accurately study the cerebellar pathways.

10. Parcellation of cerebral and cerebellar cortices: For all participants, in native space, the cerebral and cerebellar cortices were parcellated in two ways: one based on anatomical grounds, the other on a functional basis. Anatomical parcellation consisted of the following areas:
• Cerebrum: prefrontal cortex, frontal, parietal, temporal, occipital and limbic lobes (Brodmann 2006);
• Cerebellum: anterior, VI, lateral Crus I-II, VIIb/VIII and inferior lobules (Schmahmann et al. 1999).
Functional parcellation consisted of the following areas:
• Cerebrum: motor, associative, primary sensory, primary auditory and primary visual areas (Brodmann 2006);
• Cerebellum: primary motor, sensory motor and cognitive/sensory areas (Diedrichsen et al. 2009).
Deep grey matter nuclei were segmented using FIRST (FSL). The following areas were considered in the analysis: basal ganglia (caudate, putamen and pallidus), thalamus, nucleus accumbens, amygdala and hippocampus.

11. Quantification of cROI tract, i.e. the percentage of each cortical region (cROI) within the tract: as shown in Fig. 2, this index reflects the proportion of the parcellated cortical region under consideration (cROI) that is involved in the cerebello-thalamo-cortical pathway. For each parcellation and for each subject in native space, cROI tract was calculated as the percentage of voxels within the parcellation that were reached by any number of streamlines within the tract.

12. Quantification of trGM cROI, i.e. the proportion of the overall tract grey matter (GM) belonging to a specific cROI: as shown in Fig. 2, this index reflects the proportion of the total cortical GM involved in the tract that belongs to a particular parcellated cortical region. For each parcellation and for each subject in native space, trGM cROI was calculated as the percentage of all cortical voxels reached by any streamline in the tract that belong to the particular cROI of interest. This analysis was also performed for the deep GM nuclei, by computing the metric over all voxels within the deep GM regions.

13. Quantification of TSC, i.e. the total streamline count: this measure reflects the number of streamlines reaching the cortex rather than the number of tract voxels included in the cortex. For each cortical parcellation, TSC was calculated with MRtrix by isolating from a tract only the streamlines that entered a given region.

14. Mean cerebello-thalamo-cortical pathway: To assess the consistency of the tracts in MNI space and for display purposes, the tracts from all subjects were normalized using the same transformation calculated for the 3DT1w images in step 8. A mean image of tracts was calculated from the binarized tracts of each subject (Ciccarelli et al. 2003): each voxel was assigned the count of the number of subjects whose tract mask included that voxel. The mean tract image was thresholded to include voxels common to at least 20 % of subjects. The unthresholded left and right mean tracts in MNI space are available on request.

15. Thalamus parcellation: Since the thalamus is a synaptic relay, to assess whether the reconstructed cerebello-thalamo-cortical pathways actually reflected thalamo-cortical connectivity, the thalamus was parcellated as indicated by Behrens et al. (2003a, b) and the mean cerebello-thalamo-cortical pathway was superimposed in MNI space.
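As a concrete illustration of the metrics defined in steps 11-13, the following Python fragment computes cROI tract, trGM cROI and TSC from voxel-index sets. This is a toy sketch with hypothetical data: voxels are represented as plain integers rather than (x, y, z) coordinates, and in the actual pipeline the masks come from the parcellation and tract images while TSC is computed with MRtrix on the streamline file itself.

```python
# Toy illustration of the three tractography metrics (steps 11-13).
# Voxels are represented as integer indices; in practice they would be
# (x, y, z) coordinates taken from the parcellation and tract masks.

def croi_tract(croi, tract):
    """cROI tract: percentage of the cortical region reached by the tract."""
    return 100.0 * len(croi & tract) / len(croi)

def trgm_croi(croi, tract, cortex):
    """trGM cROI: percentage of the tract's cortical grey matter in this cROI."""
    tract_gm = tract & cortex
    return 100.0 * len(croi & tract_gm) / len(tract_gm)

def tsc(streamlines, croi):
    """TSC: number of streamlines with at least one voxel inside the cROI."""
    return sum(1 for sl in streamlines if set(sl) & croi)

# Hypothetical data: 100 cortical voxels, one cROI, one tract.
cortex = set(range(100))
prefrontal = set(range(40))                 # a cortical region (cROI)
tract = set(range(20, 70)) | {150}          # voxels reached by streamlines
streamlines = [[25, 26, 27], [60, 61], [150, 151]]

print(croi_tract(prefrontal, tract))        # 50.0 (% of cROI in the tract)
print(trgm_croi(prefrontal, tract, cortex)) # 40.0 (% of tract GM in cROI)
print(tsc(streamlines, prefrontal))         # 1 streamline enters the cROI
```

Note that the two voxel-based metrics deliberately ignore how many streamlines reach a voxel, which is why neither is expected to be strongly influenced by streamline count.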
Results
The reconstruction of the cerebello-thalamo-cortical pathway

The combination of the CSD algorithm and probabilistic tractography successfully reconstructed the cerebello-thalamo-cortical pathways in all subjects. Seeding from the SCP, streamlines were identified connecting the cerebellar cortex to the contralateral cortical hemisphere, passing through the contralateral RN. Figure 3 shows a comparison between the cerebello-thalamo-cortical pathway reconstructed using DTI with deterministic streamline tractography (Fig. 3a) and using a combination of CSD and probabilistic tractography (Fig. 3b) in a representative subject. As can be seen in Fig. 3a, the DTI approach fails to reconstruct contralateral connections, a problem that cannot be resolved even with the use of a contralateral target ROI. To select the contralateral connections it is therefore necessary to start from a non-tensor-based approach such as CSD, which produces streamlines running both ipsi- and contralaterally (Fig. 3b); to isolate just the contralateral pathway it is then necessary to add a target region, which we chose to be the contralateral red nucleus (Fig. 3c). Figure 4 also shows a 2D rendering of both cerebello-thalamo-cortical pathways from a representative subject. In particular, Fig. 4a shows the tracts colour-coded by direction in order to represent their anatomy, while Fig. 4b shows the same tracts using a single colour per tract in order to distinguish left- from right-side streamlines, highlighting their extensions into the cerebral and cerebellar cortices.
To highlight the extent of the cerebello-thalamo-cortical pathway, Fig. 5 shows different views of the average tract across all subjects in MNI space. Figure 5a shows the distribution of streamlines in the cerebral cortex: the reconstructed tracts reach the prefrontal, frontal and temporal cortices with a high density of streamlines. Figure 5b shows streamlines distribution in the cerebellar cortex: the highest density of streamlines is observed in lateral Crus I-II and in lateral lobules VIIb/VIII. Figure 5c shows that specific deep grey matter nuclei were reached by a high number of streamlines, especially the VA and VL nuclei in the thalami and the caudate nuclei (but also putamen and pallidus) in the basal ganglia. The tracts also show that most of the cerebellar streamlines are ipsilateral to the SCP seed, with a minimal portion of streamlines crossing contralaterally; the connection to the cortex, instead, runs contralaterally to the SCP seed (as imposed by the presence of the waypoint region of the contralateral RN) with a small number of streamlines running into the septum.
For completeness, we have also displayed the cerebello-thalamo-cortical pathway from the cerebellar cortex to the cerebral cortex by showing its extension slice by slice in the Supplementary Materials (Supplementary Figure 1). This also demonstrates connections between the cerebellum and septal regions. For simplicity, we have chosen to show only the cerebello-thalamo-cortical pathway seeded in the left SCP.
To highlight the extension of the tract in the thalamic relay, Fig. 6 shows different views of the average tract across all subjects overlaid onto the parcellated thalamus in MNI space. The highest density of streamlines is seen in the VA and VL nuclei of the thalamus, which correspond to areas principally connected with prefrontal and frontal (motor) cortices (Behrens et al. 2003a; Zhang et al. 2008; Mang et al. 2012). A few streamlines reached the anterior thalamic area and from there the temporal lobe. An even smaller contingent of streamlines reached the posterior thalamic nuclei and the pulvinar and from there the parietal and occipital lobes.

Fig. 3 Example of the cerebello-thalamo-cortical pathway from a representative subject. This is a 2D rendering of streamlines extending over a volume of 5 mm, but mapped to a section 1 mm thick. The same seed ROI (a-c) was placed on the left superior cerebellar peduncle. a The tract was reconstructed using DTI and streamline tractography; no target ROI was drawn. b The tract was reconstructed using a combination of the CSD algorithm and probabilistic tractography; no target ROI was drawn. c The tract was reconstructed as in b with a target ROI drawn on the whole contralateral red nucleus. d Details of the fibre-orientation distribution (FOD) within the decussation region. e Details of the FOD and tract within the decussation region

The destination of SCP streamlines in different brain structures, including the cerebellar and cerebral cortices and deep grey matter nuclei, was evaluated using three parameters: cROI tract, trGM cROI and TSC (see steps 11-13 of the "Materials and methods"). In turn, the cerebral and cerebellar cortices were parcellated into two sets of regions on either an anatomical or a functional basis (see step 10 of the methods) (Schmahmann et al. 1999; Diedrichsen et al. 2009).
Anatomical parcellation

Table 1 reports cROI tract, trGM cROI and TSC (averaged across subjects) for left and right tracts added together in all cerebral and cerebellar cortical areas as defined by the anatomical parcellation. In the cerebrum, the prefrontal cortex showed the highest value of all three tractography metrics. In the cerebellum, the area of lobule VIIb-VIII showed the highest value of cROI tract and Crus I-II showed the highest value of trGM cROI, while the anterior lobule showed the highest TSC value.
Functional parcellation

Table 2 reports cROI tract, trGM cROI and TSC (averaged across subjects) for left and right tracts added together in all cerebral and cerebellar cortical areas defined on their functional basis. In the cerebrum, the motor area showed the highest value of cROI tract, while the associative area showed the highest values of trGM cROI and TSC. In the cerebellum, the sensory motor area showed the highest value of cROI tract, while the cognitive and sensory area showed the highest value of trGM cROI and the primary motor area showed the highest value of TSC.

Fig. 5c Streamline distribution in deep grey matter nuclei: the thalami (violet), the caudate (light blue) and the putamen (fuchsia) show the greatest trGM cROI

Fig. 6 Extension of the left cerebello-thalamo-cortical pathway overlaid on the parcellated thalami in a representative subject. L indicates the left side of the brain. a 2D rendering: the highest density of streamlines is seen in the VA and VL nuclei of the thalamus, which correspond to areas principally connected with the prefrontal (yellow) and frontal (orange and blue) cortices. b Three-dimensional representation of the tract: the VA and VL nuclei of the right thalamus (yellow, orange and blue) are hidden by the tract

Deep grey matter parcellation

Table 3 reports cROI tract, trGM cROI and TSC (averaged across subjects) for left and right tracts added together in deep grey matter nuclei. The pallidi showed the highest value of cROI tract, while the thalami showed the highest values of trGM cROI and TSC.
Tractography suggests that the cerebello-thalamo-cortical pathway spreads out to many different areas of the brain. We also compared the proportions of the tract that reached the cerebellar and cerebral cortices in anatomically and functionally corresponding areas, providing evidence for the presence of structural connectivity between these regions. The results are visualized in Fig. 7, where the mean values of trGM cROI and of TSC are shown for each parcellation of the cerebral and cerebellar cortices.
Anatomical parcellation
The main findings from Table 1 are as follows:
• Correspondence between trGM cROI of the anterior cerebellum (lobules I-V and lobule VI) and the cerebral frontal lobe, with values of 14 % ± 4 % and 16 % ± 5 %, respectively.
• Correspondence between trGM cROI of the prefrontal cortex and the lateral Crus I-II, with values of 38 % ± 11 % and 48 % ± 4 %, respectively.
Functional parcellation
The main findings from Table 2 are as follows:
• The hemispheres of the cerebellum and the cortical associative areas have comparable trGM cROI values of 79 % ± 4 % and 80 % ± 8 %, respectively.
• The primary auditory and visual cortices show negligible trGM cROI values of 1 % ± 1 % and 3 % ± 3 %, respectively.
(Note to Tables 1-3: data are expressed as mean (SD) for each brain area; left and right measures were added together; bold values represent maxima. cROI tract, percentage of each cortical region within the tract; trGM cROI, proportion of the overall tract grey matter belonging to a specific cortical region; TSC, total streamline count.)
Discussion
In this study, we show that tractography can realistically be used to map the pathway connecting the cerebellar hemispheres with the contralateral cerebral cortex passing through the SCP, RN and thalamus, despite the complications introduced by synapses and by the decussation just after the SCP. Exploiting the greater detail of the TDI maps, the SCP was used as seed and the contralateral RN as target. The combination of a CSD algorithm with probabilistic tractography then allowed the reconstruction of the cerebellar connections towards several regions of the cerebral cortex. Quantitative analysis of tract projections passing through the SCP and RN showed that the cerebellar hemispheres on one side and the associative cerebral cortex on the other encompassed about 80 % of the tract. Our findings provide a structural reconstruction, in humans in vivo, of the crossed-fibre pathways from the cerebellar to the cerebral cortex, in accordance with predictions from ex vivo anatomical investigations (Voogd 2003; Standring 2008), and support the hypothesis of prominent connectivity of the lateral cerebellum with contralateral associative areas (Buckner et al. 2011). It is, however, important to emphasize that tractography methods are inherently incapable of distinguishing between single-neuron pathways and connections involving synapses, such as those assessed in this study. Nevertheless, tractography is currently the only method for investigating the structural connectivity of specific systems in vivo in humans, and it can be used to obtain reliable results provided certain conditions are met; these are discussed in detail in the "Limitations of the present study" section below.
The reconstruction of the cerebello-thalamo-cortical pathway

Recent studies have assessed cerebellar tract reconstruction using different types of tractography in vivo in human subjects (Salamon et al. 2007; Habas and Cabanis 2007a, b; Jissendi et al. 2008; Granziera et al. 2009; Doron et al. 2010; Anderson et al. 2011; Kwon et al. 2011; Hyam et al. 2012). Most of these studies have investigated cerebello-thalamo-cortical pathways (Salamon et al. 2007; Habas and Cabanis 2007a, b; Jissendi et al. 2008; Doron et al. 2010; Anderson et al. 2011) and a few others have reconstructed intra-cerebellar pathways (Granziera et al. 2009; Takahashi et al. 2013; Dell'Acqua et al. 2013). Several of these studies used the DT model and showed pathways passing through the SCPs and running ipsilaterally towards the cerebral cortex (Salamon et al. 2007; Habas and Cabanis 2007a, b; Jissendi et al. 2008; Doron et al. 2010; Anderson et al. 2011; Hyam et al. 2012). Some other studies instead exploited more complex models to reconstruct portions of the pathway or the intra-cerebellar connections in vivo (Granziera et al. 2009; Kwon et al. 2011; Van Baarsen et al. 2013; Akhlaghi et al. 2013) and post-mortem (Takahashi et al. 2013; Dell'Acqua et al. 2013). The decussation of the SCP is expected from classical neuroanatomical descriptions (Voogd 2003; Standring 2008), but its reconstruction using MRI techniques in vivo in humans has only been achieved by a few studies using advanced diffusion approaches (Tuch et al. 2002; Tuch 2004; Wedeen et al. 2008; Tournier et al. 2012; Fernandez-Miranda et al. 2012; Van Baarsen et al. 2013; Akhlaghi et al. 2013). To the best of our knowledge, only two studies have used advanced techniques, e.g. CSD and probabilistic tractography, to reconstruct the dentate-rubral and dentate-thalamic pathways in pathological conditions. Akhlaghi et al.
(2013) demonstrated that the dentate-thalamo-cortical tracts of patients with Friedreich ataxia showed a decreased FA value and an increased MD value compared with controls, while Van Baarsen et al. (2013) demonstrated, in a single patient with cerebellar mutism, changes in FA and MD values along the dentate-rubro-thalamic tract, alterations that might be the cause of the mutism. Both of these studies assessed how specific pathologies affected structural characteristics of the tracts of interest rather than investigating cerebellar involvement in cognitive processes, therefore offering complementary information to our findings. These observations confirm the importance of anatomical and functional studies of cerebellar connections in understanding pathologies.
In this paper we have shown contralateral connections between the cerebellum and the prefrontal, frontal and parietal cortices via the thalamus in humans in vivo, which we achieved by implementing a pipeline with two key elements: the selection of a non-Gaussian diffusion model and the definition of a seed and a target ROI (Palesi et al. 2013). We chose to combine a method based on CSD with probabilistic tractography, because this approach has been shown to allow tracking through complex crossing-fibre regions (Akhlaghi et al. 2013). Using this approach we could reconstruct the contralateral cerebello-thalamo-cortical pathways originating from both left and right SCPs, which are completely missed using DT-based tractography methods. In fact, streamline DTI tractography techniques are unable to resolve the convergence of differently oriented tracts into the same area, as occurs in the white matter of the medullary core of the cerebellum. This intrinsic limitation can only be partially overcome using probabilistic tractography. To better represent the fibre structure, non-tensor models must be used that are known to address these fibre-crossing issues. The further use of a seed and a contralateral target ROI, placed on high-resolution TDI images, ensured the selection of streamlines crossing at the decussation point, as expected from anatomical knowledge. Notice that the use of a seed and contralateral target ROIs cannot compensate for the lack of crossing streamlines when using the DT model.
The pathways that we generated show anatomical consistency between subjects, involving several areas of the cerebellar cortex, cerebral cortex and deep grey matter nuclei, passing through the VA and VL nuclei of the thalamus, caudate and putamen. In particular, most cerebellar streamlines are ipsilateral to the SCP seed, while a minimal proportion of streamlines cross over to the contralateral side. These findings are anatomically plausible (Watt and Mihailoff 1983;Noda et al. 1990).
Tractography metric results
Having shown that the cerebello-thalamo-cortical pathway reconstructed by tractography was in accordance with findings from tract-tracing studies, as discussed above, we introduced metrics that reflect how different grey matter areas could be involved in the reconstructed tracts, allowing us to make further observations regarding the characteristics of the cerebello-thalamo-cortical pathway. In particular, comparable proportions of cortex were reached by the tract in anatomically and functionally corresponding areas of the cerebellar and cerebral cortices. Thus, areas expected to be connected from functional studies are also characterized by similar tractography metrics, in accordance with suggestions of strong links between the developments of corresponding regions (Sultan 2002).
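The tract-connected grey matter metric described above can be sketched numerically. The snippet below is a minimal illustration, not the study's actual pipeline: it assumes a boolean mask of voxels visited by streamlines and an integer parcellation volume, and computes the percentage of one parcel's voxels reached by the tract; the function name, array shapes and the toy "frontal" parcel are hypothetical.

```python
import numpy as np

def tr_gm_croi(tract_mask, roi_labels, label):
    """Percentage of an ROI's grey matter voxels visited by the tract
    (a sketch of a trGM-per-cortical-ROI style metric)."""
    roi = roi_labels == label
    return 100.0 * np.count_nonzero(tract_mask & roi) / np.count_nonzero(roi)

# Toy volumes: a 4x4x4 grid with one parcel (label 1) of 32 voxels,
# of which the tract visits 16.
labels = np.zeros((4, 4, 4), dtype=int)
labels[:2] = 1
tract = np.zeros_like(labels, dtype=bool)
tract[0] = True

coverage = tr_gm_croi(tract, labels, 1)  # 50.0 %
```

Comparable percentages in anatomically corresponding cerebellar and cerebral parcels, computed this way per subject, would then support the correspondence arguments made in the following paragraphs.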
From the anatomical parcellation point of view (Table 1), our results agree with the classical literature in finding that the anterior cerebellum (lobules I-VI) and the cerebral frontal lobe were similarly involved in the tract (trGM cROI was 14 and 16 %, respectively). These findings support a correspondence between these two areas, in line with the expected topography of primary motor and premotor areas (Snider and Eldred 1952;Grodd et al. 2001;Kelly and Strick 2003). Indeed, these regions are known to be reciprocally connected and to subserve motor and premotor functions (Schmahmann et al. 1999;Diedrichsen et al. 2009;Krienen and Buckner 2009).
From a cognitive point of view we would expect to find anatomical correspondence between prefrontal cortex and the lateral Crus I-II (Habas et al. 2009;Krienen and Buckner 2009). Our data show that indeed there is similar involvement between lateral Crus I-II and prefrontal areas (trGM cROI was 48 ± 4 and 38 ± 11 %, respectively). Furthermore, our results are supported by recent tract-tracing and electrophysiological studies on primates and rats demonstrating that the cerebellum is effectively linked to the prefrontal cortex forming "closed-loop" connections (Middleton and Strick 2001;Mittleman et al. 2008;Arguello et al. 2012;Watson et al. 2014). Indeed, the cerebellar pathway extended considerably into prefrontal cortical areas in agreement with ex vivo anatomical determinations, which have shown that the cerebellum is reciprocally connected with the medial prefrontal cortex (PFC) (Watson et al. 2009), the dorsolateral PFC (Kelly and Strick 2003), and the anterior PFC (Krienen and Buckner 2009). The medial PFC is important in saccadic movements and cognitive control (Ridderinkhof et al. 2004) and is strongly involved in determining behaviour on the basis of expectations (Amodio and Frith 2006). Moreover, this cortical area plays a key role in fear extinction processes (Morgan et al. 1993;Milad and Quirk 2002). The dorsolateral PFC is particularly important in working memory (Petrides 2000), mental preparation for imminent actions (Pochon et al. 2001), and procedural learning (Pascual-Leone et al. 1996) and its functional alteration is involved in major psychoses (Weinberger et al. 1986, 1988;Dolan et al. 1993). The anterior PFC is less well understood (Ramnani and Owen 2004) but its main function could be that of integrating multiple distinct cognitive processes during goal-directed complex behaviours.
Therefore, the fact that there is a possible correspondence of tractography metrics between cortices with similar functional roles, as reported here, supports the hypothesis of a route through which the cerebellum can influence both cognitive tasks through connections with various areas of the PFC and sensory and motor tasks through connections with frontal and parietal cortices (Schmahmann and Pandya 1993;D'Angelo and Casali 2013).
The observation that the parietal cortex only encompassed 4 % of the tract-connected GM is likely due to the low number of streamlines connecting between the cerebellum and the posterior thalamic nuclei (Fig. 6). Indeed studies focused on thalamic connectivity (Behrens et al. 2003b;Zhang et al. 2008;Mang et al. 2012) have demonstrated that the VA and VL nuclei of the thalamus are mainly connected with motor areas and the prefrontal cortex rather than the parietal cortex, which is in turn principally connected with the posterior thalamic nuclei receiving somatosensory information from pathways ascending from the spinal cord and brainstem. From ex vivo experiments it is known that the cerebellum sends outputs through the posterior VL of thalamus to the inferior parietal lobe (Clower et al. 2001), which is involved in responding to the sight of an object as well as to the act of grasping it during reach-to-grasp arm movements (Tunik et al. 2005), and in the creation of cross-modal sensorial representations of objects (Grefkes et al. 2004). Therefore, the combination of findings from imaging studies on thalamo-cortical connectivity and from ex vivo experiments suggests the existence of a physiological connection between the cerebellum and the parietal cortex through the posterior VL thalamic nucleus. Our findings are in agreement with this hypothesis because the cerebello-thalamo-cortical pathway that we reconstructed mainly connects the cerebellum with the VA and VL thalamic nuclei. Further evidence of the coherence between our results and the literature is represented by the scarce connection we observed between the cerebellum and the parietal cortex through the posterior VL thalamic nucleus. Moreover, indications from literature suggest functional connectivity between the cerebellum and the parietal cortex (Buckner et al. 2011) but the existence of a direct anatomical pathway is still debated (Clower et al. 2005).
The temporal lobe encompassed 35 % of the tract-connected GM. Although the exact nature of connections between the temporal lobe (including the hippocampus and amygdala) and the cerebellum is still unclear, this connectivity is in line with studies showing that the temporal cortex indeed contributes to the cortico-pontine pathway both in humans and in macaque monkeys (Ramnani 2006). Indeed, fMRI resting-state (He et al. 2004) and dynamic causal modelling (Booth et al. 2007) studies have revealed functional connectivity between the cerebellum and temporal areas, although this may in part depend on connections emitted by the fastigial nuclei through the middle cerebellar peduncle (at least in monkeys and cats) (Heath and Harper 1974).
Of the functional parcellation results (Table 2), the most striking finding was that the hemispheres of the cerebellum and the cortical associative areas encompassed 79 and 80 % of the tract GM, respectively. The associative cortex comprises the prefrontal (BA 25, 46-47), parietal (except BA 1-3) and temporal (except BA 41-42) cortices and the limbic lobe. Since prefrontal, limbic and parts of parietal and temporal cortices are known to be involved in cognitive processes at different levels of complexity (D'Angelo and Casali 2013), our results support the theory that lateral areas of the cerebellum are also involved in higher cognitive processes (Schmahmann et al. 1999;Strick et al. 2009;Diedrichsen et al. 2009;Habas et al. 2009;Krienen and Buckner 2009;Watson et al. 2014).
The primary auditory and visual cortices only constituted 1 and 3 % of the tract-connected GM (Table 2), respectively, in line with the results from Buckner et al. (2011) who, using fMRI, have shown that primary auditory and visual cortices did not appear functionally connected with the cerebellum. On the other hand, these streamlines may be underestimated in this tractography study due to their relative position with respect to the cerebellum. Indeed, fibres connecting the cerebellum with visual and auditory areas (located in the occipital and temporal lobes, respectively) might have high curvature and therefore be partially undetected by tractography methods (e.g. see discussion in Buckner et al. 2011). A fairly recent DTI study (Doron et al. 2010) suggests that the cerebellum is strongly connected with the precentral gyrus and the superior frontal gyrus, which take part in motor and oculomotor processes as well as in the processing of spatial working memory (Du Boisgueheneuc et al. 2006). However, the very important role played by the cerebellum in controlling the execution of saccades, in elaborating the visuospatial information concerning the eye target (Tilikete et al. 2006;Guerrasio et al. 2010) and in controlling vestibulo-ocular reflexes, depends on connections emitted by the fastigial and vestibular nuclei through the inferior and middle cerebellar peduncles, which cannot be detected by placing a seed in the SCP.
An additional observation is the presence of a conspicuous number of streamlines connecting the cerebellum to the basal ganglia via the RN and the thalamus (Middleton and Strick 2002). Although the anatomo-functional relationship between basal ganglia and cerebellum remains unclear, a fast synaptic connection has recently been reported between these two structures (Chen and Khodakhah 2012), which also show coherent activity in fMRI recordings (Mastropasqua et al. 2013). Moreover, basal ganglia are secondarily affected by atrophy in the presence of cerebellar damage (Olivito et al. 2013). It has been postulated that a functional relationship between basal ganglia and cerebellum could be important for controlling movement (Amaral 2000). However, although our observation is in line with this concept, we have to point out that the present technique cannot be used to determine either the direction of the streamlines (projecting to or from the thalamus) or whether there are effective synaptic connections allowing communication between cerebellum and basal ganglia through the thalamus. Therefore, the nature of observed streamlines apparently connecting cerebellum and basal ganglia remains to be clarified.
Finally, our analysis also revealed streamlines reaching the septum. Again, although connections between the cerebellum and deep parts of the limbic system (including the septum) have been suggested (Heath et al. 1978), the synaptic nature and directionality of this pathway as well as functional evidence in humans await experimental confirmation.
Limitations of the present study
While tractography is compelling in being applicable in vivo non-invasively, and hence in human subjects, it suffers from well-documented shortcomings. Although these have already been reported in several tractography publications, these limitations are discussed here in the context of the specific pathway under investigation in this study.
First of all, MRI tractography cannot distinguish between efferent and afferent fibres, since water diffuses equally in both anterograde and retrograde directions. The present results therefore cannot be used to inform models that rely on the determination of the direction of axon potential propagation.
Second, tractography methods cannot at present discriminate between direct and indirect connections between regions, since the diffusion weighted signal is influenced by the average microstructural architecture over the scale of an imaging voxel, and not by the directionality of the signalling process or the presence of synapses. Indeed, the cerebello-thalamo-cortical pathways are known to be polysynaptic and are not expected to form a direct connection between the cerebral and cerebellar cortices. In particular, it must be acknowledged that the connections to/from the VA and VL nuclei of the thalamus are complex, including not only fibres from the SCP but potentially also fibres from the basal ganglia. However, while tractography cannot identify regions of synapses, it is a mathematical algorithm with predefined rules and as such it may nonetheless be able to delineate onward connections if these rules are respected; for example, fibre-tracking algorithms typically require a certain degree of alignment between the fibre orientations estimated in neighbouring voxels; provided this requirement is satisfied, the algorithm will proceed through a region of synapses. If on the other hand the fitting of the fibre orientation distribution is noisy or does not capture the correct microstructure, the tractography algorithm could terminate even if there is continuity of the underlying biological tract.
One further limitation of tractography studies in general is that diffusion MRI data are rarely acquired at resolutions higher than 2 mm isotropic; this low spatial resolution is a considerable limitation when reconstructing pathways that converge onto a small structure and subsequently diverge towards a wider area of the brain. Here, we used a combination of CSD and super-resolution track-density imaging at 1 mm resolution to minimize this problem.
Another issue is that tractography algorithms preferentially generate streamlines with minimal bending, and that tract volume depends on path length and on the tractography algorithm itself. Connections from the VA thalamic nucleus towards the prefrontal cortex have the highest trGM cROI and TSC and are also characterized by minimal bending. Moreover, the anterior lobule of the cerebellum has the highest value of TSC and it is also the closest cerebellar region to the SCP seed point of the tract. However, the observed anatomical difference among these areas matches the expected difference in functional connectivity (Habas et al. 2009;Krienen and Buckner 2009), suggesting that, for this specific application, streamline connectivity revealed by our technique is not critically affected by anatomical constraints.
A final consideration on the validation of tractography results (Mori and van Zijl 2002) is that while tractography can indeed provide macroscopic neuroanatomical information on white matter pathways by reconstructing fibre structures that contain bundles of axons running along the same orientation, it cannot distinguish individual axonal pathways, whose diameter is typically less than 10 μm. For this reason, tractography cannot claim that the reconstructed tracts are anatomically accurate, and in fact results should be validated using other techniques. The most common way to infer information about axonal connectivity is using retrograde virus transport and chemical tract-tracing techniques in animals (Middleton and Strick 2002;Kelly and Strick 2003;Clower et al. 2005). The principal issue is that these techniques provide information at cellular level that cannot be compared directly to MR-derived results. Moreover, tract-tracing techniques cannot be applied to humans, where most information has come from post-mortem histological data (McNab et al. 2009;Miller et al. 2011;Seehaus et al. 2013;Dell'Acqua et al. 2013). An approach combining post-mortem dissection with advanced tractography seems best suited to characterize white matter architecture in humans and validate tractography results (Catani et al. 2012;Dell'Acqua et al. 2013), but requires the use of non-conventional scanners. A further way to validate tractography results is to compare the core of major white matter tracts with classical anatomical knowledge, because trajectories and locations of these tracts are fairly well known. However, the subcortical portions of the reconstructed tracts remain problematic due to the high uncertainty of fibre direction at the grey/white matter border.
Recent developments in tracking methods (e.g. Smith et al. 2012, 2013) may help minimize some of these effects in future work, and thus provide a more accurate estimate of the connections between cerebellar and cerebral cortices. Nonetheless, most of these limitations are inherent to diffusion MRI and will invariably need to be taken into consideration when interpreting any tractography results.
Conclusions
We have shown that our advanced imaging methods allow visualization of the pathway connecting the cerebellar hemispheres with the contralateral cerebral cortex, passing through the SCP, red nucleus and VL and VA nuclei of the thalamus. The demonstration of congruent trGM cROI of the cerebral and cerebellar cortices in functionally corresponding areas bears relevant functional implications. First, this result supports the coevolution of the two structures proposed on the basis of comparative cortical surface measurement across vertebrates (Sultan 2002). Secondly, since the cerebellar network has almost identical structure in all its sections and is organized in parallel poorly interacting modules (Standring 2008), it is possible that a similar computational cerebellar algorithm is applied to different cortical functions, ranging from motor control to sensory perception and cognition. This observation has special relevance for the generation of computational schemes and models of cerebro-cerebellar network loops (Ito 2008). Given that our advanced imaging analysis was successful using high-quality data acquired on standard clinical scanners, this method has immediate potential in the assessment of cerebellar structural connectivity in neurological conditions, for example in dyslexia and autism (e.g. Bauman and Kemper 2005;Boso et al. 2010) for which a cerebellar origin has been proposed (for review see D'Angelo and Casali 2013).
"year": 2014,
"sha1": "4b517dc658c2bb05c755914a826b443f4bdabc65",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00429-014-0861-2.pdf",
"oa_status": "HYBRID",
"pdf_src": "SpringerNature",
"pdf_hash": "4b517dc658c2bb05c755914a826b443f4bdabc65",
"s2fieldsofstudy": [
"Biology",
"Psychology"
],
"extfieldsofstudy": []
} |
Phase-slip induced dissipation in an atomic Bose-Hubbard system
Phase slips play a primary role in dissipation across a wide spectrum of bosonic systems, from determining the critical velocity of superfluid helium to generating resistance in thin superconducting wires. This subject has also inspired much technological interest, largely motivated by applications involving nanoscale superconducting circuit elements, e.g., standards based on quantum phase-slip junctions. While phase slips caused by thermal fluctuations at high temperatures are well understood, controversy remains over the role of phase slips in small-scale superconductors. In solids, problems such as uncontrolled noise sources and disorder complicate the study and application of phase slips. Here we show that phase slips can lead to dissipation for a clean and well-characterized Bose-Hubbard (BH) system by experimentally studying transport using ultra-cold atoms trapped in an optical lattice. In contrast to previous work, we explore a low velocity regime described by the 3D BH model which is not affected by instabilities, and we measure the effect of temperature on the dissipation strength. We show that the damping rate of atomic motion in the confining parabolic potential (the analogue of electrical resistance in a solid) fits well to a model that includes finite damping at zero temperature. The low-temperature behaviour is consistent with the theory of quantum tunnelling of phase slips, while at higher temperatures a cross-over consistent with the transition to thermal activation of phase slips is evident. Motion-induced features reminiscent of vortices and vortex rings associated with phase slips are also observed in time-of-flight imaging.
These results clarify the role of phase slips in superfluid systems. They may also bear on outstanding questions regarding dissipation in other bosonic systems, such as the source of metallic phases observed in thin films 7,8 , and might serve as a test bed for theories of bosonic dissipation based upon variants of the BH model 9 .
Although believed by many to be the simplest model that captures the relevant features of boson physics in a variety of physical systems, the BH model is not integrable, and therefore a full characterization of its features is a challenging theoretical problem. The BH model is described by the Hamiltonian

H = −J Σ_⟨i,j⟩ b_i† b_j + (U/2) Σ_i n_i(n_i − 1) + Σ_i ε_i n_i,   (1)

where b_i removes an atom from site i, n_i = b_i† b_i is the number of bosons on site i, and ε_i is the energy cost for a boson to occupy site i. Particles in the BH model move by tunnelling with energy J between adjacent lattice sites i and j and interact pairwise on the same site with energy U. In our experiment we can directly test this model because ultra-cold atoms trapped in an optical lattice are a realization of the BH model (for sufficient lattice potential depth) 10,11 . While material and electronic parameters are not easy to independently configure in solid systems, we are able to quantitatively probe the BH model by precisely determining and controlling parameters such as J and U.
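As a minimal numerical illustration of Eq. 1 (not part of the paper's analysis), the sketch below builds the BH Hamiltonian for a toy one-dimensional chain by exact diagonalization, with the site energies ε_i passed in explicitly; real calculations for the 3D experimental system require vastly larger Hilbert spaces.

```python
import itertools
import numpy as np

def bose_hubbard(L, N, J, U, eps=None):
    """Dense BH Hamiltonian for N bosons on an open L-site chain:
    H = -J sum_<i,j> b_i^† b_j + (U/2) sum_i n_i(n_i-1) + sum_i eps_i n_i."""
    eps = np.zeros(L) if eps is None else np.asarray(eps)
    # Enumerate all occupation-number states with total atom number N.
    basis = [s for s in itertools.product(range(N + 1), repeat=L) if sum(s) == N]
    index = {s: k for k, s in enumerate(basis)}
    H = np.zeros((len(basis), len(basis)))
    for k, s in enumerate(basis):
        n = np.array(s)
        # On-site interaction and site-energy terms (diagonal).
        H[k, k] = 0.5 * U * np.sum(n * (n - 1)) + np.dot(eps, n)
        # Nearest-neighbour hopping: b_a^† b_b moves one boson from b to a.
        for i in range(L - 1):
            for a, b in ((i, i + 1), (i + 1, i)):
                if s[b] > 0:
                    t = list(s); t[b] -= 1; t[a] += 1
                    H[index[tuple(t)], k] += -J * np.sqrt((s[a] + 1) * s[b])
    return H

H = bose_hubbard(2, 2, J=1.0, U=0.0)
```

With U = 0 both bosons occupy the symmetric single-particle orbital, so the lowest eigenvalue of this toy H is −2J; switching on U or a parabolic ε_i profile shifts the spectrum accordingly.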
We study mass transport, the equivalent of charge transport for neutral bosons.
Charge transport is studied in solids by using an electric field to apply a uniform force to the charge carriers. In contrast, we use the spatially inhomogeneous restoring force from a parabolic confining potential to excite damped harmonic motion of the atom gas center-of-mass (COM). The confining potential is included in Eq. 1 through the site energies ε_i = k r_i²/2, where r_i is the distance from site i to the centre of the harmonic potential and k is the spring constant for the restoring force. We measure the COM motion damping rate for small COM velocities. In this regime, analogous to the linear, or ohmic, regime for conductivity experiments on solids, the damping rate is independent of velocity. In analogy to experiments on solids, we measure the response of the damping rate to changes in the temperature of the atom gas. We also extend measurements possible in solids by examining how the damping rate depends on the ratio J/U, controlled by tuning the lattice potential depth.
Previous work on transport of BECs in optical lattices has focused on regimes not described by the BH model 12 , on low-dimensional systems 13,14 , on relatively high-velocity transport 15 , and on probing the Landau and dynamic instabilities [16][17][18][19] . In contrast to much of that work, we work at low velocity and do not study phenomena associated with either instability. The maximum COM velocity in our data is controlled to be smaller than the critical velocity for the Landau and dynamic instabilities 20 , including the effect of strong interactions 21 . We do not observe any phenomena characteristic of these instabilities, such as significant change in the condensate fraction, strong non-linear damping, or excitations of the condensate similar to those observed in ref. 17. Measurements of temperature-dependent dissipation in solids have proven to be a powerful tool in understanding phenomena such as phase slips 5 and other sources of resistance 7 ; our work is the first systematic investigation of the effect of temperature on transport in an optical lattice.
The experimental sequence is shown in Fig. 1; three atoms or fewer are confined on each lattice site for the data in this paper. The strength of the lattice potential is characterized by a dimensionless parameter s, defined by the lattice potential depth sE_R along each lattice direction (E_R = h²/2mλ², where λ is the lattice laser wavelength, is the recoil energy and m is the atomic mass). After transfer into the lattice, COM motion is generated by applying a rapid impulse to the BEC along the vertical direction z. The COM velocity is measured using time-of-flight (TOF) imaging after the motion is allowed to freely evolve for up to 200 ms.
The COM motion we observe is described well by damped harmonic motion; Fig. 2 shows representative data. We fit the time evolution of the BEC COM velocity assuming the equation of motion m* d²z/dt² + 2γm* dz/dt + kz = 0 for the COM coordinate z in the impulse direction, where m* is the effective mass. The damping rate γ, the oscillation frequency ω, and the initial velocity (which ranges from ~0.8-1.8 mm/sec for the data in this paper) are left as free parameters in the fit. Our model of the COM motion assumes that the restoring force -kz from the harmonic confinement and a dissipative force act on the BEC COM. The damping rate γ, which is the exponential decay rate for the amplitude of the motion, is the equivalent of electrical resistance in a solid. This can be understood by considering that Ohmic resistance in a material, regardless of the source of dissipation, leads to a force on the charge carriers proportional to -ρv, where ρ is the resistivity and v is the charge carrier velocity.
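The fitting procedure described above can be sketched with synthetic data: a damped-cosine velocity model with the damping rate, frequency and initial velocity free, as in the text. The model function, parameter values and initial guesses below are illustrative assumptions, not the experiment's actual numbers.

```python
import numpy as np
from scipy.optimize import curve_fit

def com_velocity(t, v0, gamma, omega, phi):
    # Damped harmonic motion: the velocity amplitude decays at rate gamma.
    return v0 * np.exp(-gamma * t) * np.cos(omega * t + phi)

t = np.linspace(0.0, 0.2, 400)                          # seconds, ~200 ms window
v = com_velocity(t, 1.2e-3, 12.0, 2 * np.pi * 30, 0.0)  # synthetic "data" (m/s)

p0 = [1.0e-3, 10.0, 2 * np.pi * 29, 0.0]                # rough initial guess
popt, _ = curve_fit(com_velocity, t, v, p0=p0)          # popt = [v0, gamma, omega, phi]
```

In an analysis of real TOF data the same fit would return γ and ω for each oscillation trace, with γ playing the role of the resistance analogue.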
The temperature dependence of the damping rate γ is shown in Fig. 3. We measure the ratio of temperature T to the critical temperature T c for condensation in the magnetic trap by determining the fraction of atoms in the BEC before transfer into the lattice. In Fig. 3 we show the measured γ for s=2 and 6, lattice depths which sample only the superfluid region of the BH phase diagram at zero temperature 9 . Equation 1 requires beyond-tight-binding corrections to the tunnelling energy for s=2 that must be considered for detailed comparison between theory and the data shown in Fig. 3(b).
The data are shown vs. inverse temperature because we fit the data to a model of thermally activated damping, γ = γ0 + γ1 e^(-T0/T), which permits a finite zero-temperature damping rate γ0. Consistency with a model that includes finite dissipation at zero temperature is, by definition, the equivalent of metallic behaviour in a solid: metals, superconductors, and insulators are defined as materials that possess finite, zero, and infinite resistance at zero temperature, respectively 7 . We have verified that the levelling off of γ at low temperature is not caused by T/T c saturating in the lattice by measuring BEC fraction after release from the lattice. The BEC fraction does not change significantly after transfer into the lattice for the range of s and T/T c in Fig. 3.
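A thermal-activation model with a constant residual offset behaves as follows; the parameter values and symbols (γ0 for the residual rate, an activated term A e^(-T0/T)) are assumptions for this sketch, chosen only to illustrate the cross-over between the two regimes.

```python
import numpy as np

def damping_model(T, gamma0, A, T0):
    """gamma(T) = gamma0 + A*exp(-T0/T): finite zero-temperature damping
    plus a thermally activated contribution."""
    return gamma0 + A * np.exp(-T0 / T)

gamma0, A, T0 = 2.0, 50.0, 1.0        # arbitrary illustrative units

# Cross-over temperature: the activated term equals the residual rate here,
# so on a gamma-vs-1/T plot the curve flattens below T_cross.
T_cross = T0 / np.log(A / gamma0)
```

Below T_cross the model is dominated by the temperature-independent term, mirroring the levelling off seen in the data.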
The data shown in Fig. 3 are qualitatively consistent with a predicted cross-over between quantum tunnelling and thermal activation of phase slips 21 . Phase-slip models of dissipation were first used to describe the intrinsic critical velocity of superfluid 4 He [1] and the appearance of resistance in a superconducting wire 2 . In the context of this work, phase slips permit the BEC COM velocity to relax through generation of topological phase structures such as vortices and vortex rings. The COM motion of the BEC is metastable because there is an activation barrier to a phase slip event occurring and driving the system toward the zero velocity ground state. Phase slips occur when the system tunnels through the activation barrier or when thermal fluctuations produce the required activation energy. The dissipation rate should therefore cross-over from thermally activated behaviour (i.e., exponentially dependent on the inverse temperature) to temperature independent at a characteristic temperature for any system in which phase slips are the dominant dissipation mechanism. The data in Fig. 3 display this behaviour. In the quantum tunnelling regime, the phase slip rate is predicted to be proportional to e^(-S), where S is an action characterizing the process of quantum fluctuations driving the system to lower velocity 2,27 . A generic scaling law for the action is derived for the BH model in the appendix to ref. 21. In Fig. 4 we plot the measured damping rate for different values of J/U at two temperatures in the temperature-independent damping regime. The data are fit to a line on the log-lin scale, which is equivalent to the model predicted for quantum tunnelling of phase slips; the data show excellent agreement with the predicted scaling law. The systematic increase in the damping rate for the higher temperature data in Fig. 4 is consistent with a residual rate of thermally activated phase slips.
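Testing an e^(-S) scaling law of this kind reduces to a straight-line fit of log γ against the tuning variable, as in the log-lin fit described above. The numbers below are purely synthetic, and the variable x is a stand-in for whatever function of J/U the action is predicted to be linear in.

```python
import numpy as np

x = np.linspace(1.0, 3.0, 8)           # stand-in tuning variable (a function of J/U)
gamma = 40.0 * np.exp(-1.7 * x)        # gamma ∝ e^(-S), with S linear in x here

# A line on a log-lin plot: slope gives the action's dependence on x,
# intercept gives the attempt-rate prefactor.
slope, intercept = np.polyfit(x, np.log(gamma), 1)
```

Agreement between the fitted slope and the theoretically predicted coefficient is what "excellent agreement with the predicted scaling law" amounts to operationally.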
A vortex ring viewed on edge and a single vortex observed along its core are marked in red; these features were not detectable at the shorter expansion time used for the data in Fig. 3 and Fig. 4. Vortex rings and single vortices nucleated by phase slips must lie in a plane perpendicular to the direction of COM motion; single vortex lines may be oriented along any direction in this plane. Vortex rings will therefore be detected on edge by our imaging system, but single vortices will only rarely align with the imaging axis and be clearly resolved. We observe these features with approximately 20% probability at s=8, consistent with a random generation process such as phase slips; we do not observe vortex features if COM motion is not excited. We find that vortices and vortex rings are most likely to be detected close to the edge of the BEC where the energy gap to nucleation is smallest.
In conclusion, we observe temperature-dependent damping of COM motion in an optical lattice that is consistent with dissipation caused by phase slips. The parabolic potential used to confine the atoms gives rise to an inhomogeneous density distribution, which may enhance the effect of phase slips because the activation barrier is suppressed at the edge of the BEC. This system may therefore be comparable to thin superconducting wires and strips, in which vortices entering and leaving at the boundaries strongly influence current flow 30 . The effects of inhomogeneous density and finite size on phase slip dynamics in this system remain to be conclusively addressed theoretically. The technique used in this paper can be extended to probe transport properties in models that are relevant to solid materials, such as two-dimensional and disordered BH models. Direct imaging of vortex rings and vortices nucleated by phase slips may be used to address the microscopic dissipation dynamics in this system.
Methods Summary
The optical lattice is created using three pairs of orthogonally polarized laser beams at 812 nm. These beams are weakly focused to a 120 μm waist and slightly frequency offset to eliminate residual cross-dimensional interference resulting from imperfect polarization. COM motion is excited by changing the strength of a confining magnetic potential for 5 ms. The oscillation data used to measure the damping rate were checked in two ways for non-linear response. First, no significant change in the fitted damping rate was measured if the first period of motion was excluded from the fit. Secondly, fitting the data to a non-linear damping model typically increased the reduced χ² for the fit by 10-20%. We found no clear dependence of the fitted non-linear parameter on T/T_c or J/U, and its value averaged across all of the data was 0.68±0.07. Ultimately, our sensitivity to weak velocity dependence is limited by the finite signal-to-noise ratio in measurements of the COM velocity. The condensate fraction used to determine T/T_c (e.g., T/T_c = 0.6) is measured by taking two images. One image is partially repumped and used to determine the total OD of the BEC. The second image is fully repumped and is used to determine the total OD of the thermal component by fitting the low-OD region of the image. We compare OD from the two images by calibrating the fractional change in OD for partial repumping. The calibration is performed using a thermal gas and taking images at two free expansion times.
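The two-image comparison described above can be sketched as follows. This is only an illustration of the arithmetic as we read it, with made-up OD values: the repumping fraction is calibrated on a thermal gas, the total OD is recovered by rescaling the partially repumped image, and the condensate fraction follows from the thermal OD fitted in the fully repumped image.

```python
# Calibration: with a purely thermal gas, the ratio of partially to fully
# repumped optical depth (OD) gives the repumping fraction f.
def repump_fraction(od_partial_thermal, od_full_thermal):
    return od_partial_thermal / od_full_thermal

# Condensate fraction: total OD from the partially repumped image (rescaled
# by f), minus the thermal OD fitted in the fully repumped image.
def condensate_fraction(od_partial_total, od_full_thermal_fit, f):
    od_total = od_partial_total / f
    return 1.0 - od_full_thermal_fit / od_total

# Hypothetical numbers: 30% of atoms repumped; partial-image total OD 0.24
# implies true total OD 0.8; fitted thermal OD 0.32.
f = repump_fraction(0.3, 1.0)
n0 = condensate_fraction(0.24, 0.32, f)
print(n0)  # 0.6
```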
We measure T/T_c using the BEC fraction because techniques for determining T in an optical lattice have not been proven. Interactions may change T/T_c for BECs loaded into the lattice at the highest value of s in Fig. 4 [32][33][34]; the extent to which this effect plays a role in experiments has not been resolved. The temperature of the gas was controlled by altering the evaporative cooling procedure, resulting in varying BEC number and T_c at fixed s. The average T_c for the data in this paper is 0.13 μK, determined from the total number of atoms and the magnetic trap oscillator frequency; T_c spans a 0.07 μK range for the data in Fig. 3. The heating caused by the dissipation observed in our experiment cannot be detected within our experimental uncertainty in condensate fraction or absolute temperature.
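Determining T_c from atom number and trap frequency presumably uses the standard ideal-gas result for a harmonic trap, k_B T_c = (N/ζ(3))^(1/3) ħω̄. A sketch with hypothetical values (the paper's actual atom number and trap frequencies are not given here):

```python
import math

hbar = 1.054571817e-34  # J*s
kB = 1.380649e-23       # J/K

def tc_harmonic(N, omega_bar):
    """Ideal-gas BEC transition temperature in a harmonic trap:
    k_B * T_c = (N / zeta(3))**(1/3) * hbar * omega_bar, zeta(3) ~ 1.202."""
    zeta3 = 1.2020569
    return (N / zeta3) ** (1.0 / 3.0) * hbar * omega_bar / kB

# Hypothetical values: 2e5 atoms, geometric-mean trap frequency 2*pi*30 Hz.
Tc = tc_harmonic(2e5, 2 * math.pi * 30)
print(f"T_c ~ {Tc * 1e9:.0f} nK")
```

With these assumed numbers the result is of order 0.1 μK, the same scale as the T_c values quoted above.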
Technical noise. We rule out several technical noise sources as dissipation mechanisms that could explain damping of COM motion. Anharmonicity in the dipole potential may effectively damp COM motion for large s. To check for anharmonic behaviour, we measure COM motion when the retro-reflected lattice laser beams are removed, which eliminates the lattice potential and reduces the depth of the dipole potential by a factor of ~2. Using this technique we measure a damping rate consistent with zero for lattice laser intensities corresponding to s=9 (s=18 if the retro-reflected beams were present), eliminating trap anharmonicity as an effective dissipation source for the data in Fig. 3.
Relative motion between the lattice and harmonic potential or fluctuations in s (caused by retro-reflecting mirror motion and lattice laser intensity fluctuations, respectively) can lead to dephasing of dipole mode motion by transferring atoms into states with different m* in excited bands 35 ; we do not, however, observe population outside of the lowest-energy band. The total spontaneous emission rate per atom is less than 0.3 Hz for s=6, so momentum diffusion caused by scattering light from the optical lattice laser beams is insignificant. The lattice depth varies by less than 3% across the BEC, so spatial variation in the effective mass can play no role in the dissipation timescales measured in our data. | 2016-05-01T19:34:11.743Z | 2007-08-22T00:00:00.000 | {
"year": 2007,
"sha1": "80ef3f02906eef02deea75833e2744d284c13a2f",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0708.3074",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "80ef3f02906eef02deea75833e2744d284c13a2f",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Medicine",
"Physics"
]
} |
4539188 | pes2o/s2orc | v3-fos-license | Friedreich Ataxia: Clinical Report of an Uncommon Point Mutation (R165C)
Introduction: Friedreich ataxia (FRDA) is currently the most common hereditary ataxia. It is inherited as an autosomal recessive disease. Most patients are homozygotes, with an expansion of a GAA triplet in both alleles of the first intron of the frataxin gene (FXN, 9q13) (95-98% of patients). The remaining patients are heterozygotes, with an expansion in only one allele and a point mutation in the other. These cases are more difficult to diagnose because of their low prevalence and the need for extended molecular testing. Case Report: An ambulant 42-year-old man was referred to our hospital because of gait instability that had started 7 years earlier. Clinical examination showed gait ataxia, areflexia, decreased vibration sense, scoliosis, and pes cavus. Results and Discussion: Laboratory tests, neuroimaging and neurophysiologic studies had been performed since symptom onset without relevant findings. Somatosensory evoked potentials described a sensory axonal neuropathy with involvement of the posterior columns of the spinal cord. Genetic testing found 300-350 GAA repeats in one allele and the point mutation R165C in the other, which confirmed the diagnosis. Conclusion: This case report highlights that, although most patients with Friedreich ataxia are homozygotes, a small number are heterozygotes and can have different phenotypes; it is important to identify them in order to give genetic counselling and to detect new complications that pose a risk to their lives. *Corresponding author: Rosa María García Tercero, Department of Neurology, Hospital General University of Alicante, Alicante, Spain, Tel: 636920304; E-mail: roxamary_ab@hotmail.com Received January 21, 2018; Accepted February 22, 2018; Published February 26, 2018 Citation: Tercero RMG, Heras JG, Urrea CD, Benitez PB, Pérez AH, et al. (2018) Friedreich Ataxia: Clinical Report of an Uncommon Point Mutation (R165C). J Neurol Disord 6: 376.
doi:10.4172/2329-6895.1000376 Copyright: © 2018 Tercero RMG, et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Introduction
Friedreich ataxia (FRDA) is the most common hereditary ataxia and it is inherited as an autosomal recessive disease. Most patients are homozygotes, with an expansion of a GAA triplet in both alleles of the first intron of the FRDA gene (FXN, 9q13), but the rest are heterozygotes, with an expansion in one allele and a point mutation in the other (2%-5%) [1,2]. Missense, nonsense and splicing mutations have been described; although the latter two can produce severe manifestations, missense mutations can be less severe, with a milder and atypical clinical phenotype. Some exonic deletions have also been reported, but they are extremely rare [3]. Apart from that, clinical expression is related to the number of GAA repetitions, and an inverse relation with the age of onset has been observed. The size of the expansion is variable, ranging from 67 to 1700 repeats (the normal range in humans is 7-40 repeats), and it reduces frataxin protein levels through a lack of expression of the FRDA gene [1,4].
This protein is involved in mitochondrial iron metabolism, and its dysfunction produces an increase in oxygen free radicals that causes intracellular damage. Frataxin is highly expressed in the spinal cord, cerebellum and heart, which can explain the typical clinical manifestations: involvement of the nervous system, skeletal and foot deformities, optic disk pallor, cerebellar dysarthria and ataxia, as well as associated conditions such as diabetes and cardiomyopathy [1][2][3]. Most of these patients have a reduced life expectancy, mainly related to cardiac problems. As yet there is no curative treatment, and management focuses on monitoring and treating the symptomatic manifestations. Considering that Friedreich ataxia is a genetic disease, genetic counseling is necessary [1]. We describe the clinical case of a heterozygous patient with a missense mutation (R165C) and an unusual clinical phenotype.
Case Report
An ambulant 42-year-old man was referred to our hospital because of gait instability. His past medical history was unremarkable, although he reported some clumsiness in his childhood that did not prevent him from practicing sports. There was no consanguinity in his family. The patient was not on any treatment. Symptom onset was at 36 years of age with gait disturbances; he attended a private center where the neurological examination described only pes cavus and areflexia. He was studied with magnetic resonance imaging (MRI) and electrophysiological studies that did not show abnormalities. Two years later he came back because of worsening of his gait, and a new MRI showed a small lumbar disk protrusion that did not explain the clinical picture. A new electrophysiological study showed a decrease of amplitude in the right peroneal nerve, and an otorhinolaryngology evaluation described global imbalance without vestibular involvement. When he was 42 years old he came to our hospital because of the persistence of the gait disturbances.
We reviewed all the studies and clinical reports that had been done previously. The new neurological examination revealed gait ataxia, areflexia, decreased vibration sense, scoliosis, and pes cavus. The systemic examination was normal. Laboratory analysis including ions (with copper and ceruloplasmin), blood count, vitamins, hormones, proteins, serology, and autoimmunity studies was normal. Somatosensory evoked potentials described a sensory axonal neuropathy with involvement of the posterior columns of the spinal cord. Taking into account the evolution of the patient and the results of the studies performed, several diagnoses were considered, pointing to a genetic neuropathy with posterior column involvement.
In Friedreich ataxia (FRDA) there is not such clinical homogeneity as in other recessive disorders. Given the atypical presentation of our patient, other sensory neuropathies were taken into account and a differential diagnosis was made (hereditary sensory and autonomic neuropathy, Fabry disease, familial amyloidotic polyneuropathy, adrenomyeloneuropathy, etc.), but the normal results of the tests performed and the lack of other symptoms related to these diseases led us to suspect Friedreich ataxia. This case highlights that not all patients have an early onset or a severe phenotype, and a thorough neurologic examination is important to recognize them. Once the diagnosis is made, patients need to be followed in clinic, paying attention to heart problems and providing genetic counseling.
We extended the work-up with a study of long chain fatty acids, which was normal, and a genetic study of the TTR and FRDA (FXN) genes. No mutation was found in TTR, but an expansion of 300-350 GAA repeats was found in one allele of FXN. Given these findings, and because the clinical examination was compatible with Friedreich ataxia, a new genetic test was requested to look for a point mutation, which was finally detected in the other allele (c.493C>G; p.(Arg165Cys)). The expansion and this point mutation in compound heterozygosity confirmed the diagnosis of Friedreich ataxia. A transthoracic echocardiogram (TTE) and an ECG were performed without abnormalities, and genetic counseling was given.
Discussion
Although most patients with Friedreich ataxia are homozygotes, a small percentage are heterozygotes. Forty-four different mutations have been described in the FXN gene, including point mutations, insertions and/or deletions. Missense mutations usually affect protein structure and/or function [1,5]. It is also known that the number of GAA repetitions reduces frataxin expression; consequently, large expansions cause an early onset [1,5]. In our patient, a relatively small expansion of 300-350 GAA repeats was found in one allele, which might have contributed to the late onset. Our patient was diagnosed when he was 42 years old. He was heterozygous and had a missense mutation (R165C), a substitution of arginine by cysteine at this position. This mutation is not located in the carboxy-terminal domain of frataxin, which produces a less severe phenotype, as Palau's article describes [1]. Searching the literature, there are not many cases described with this kind of mutation. Forrest et al. reported a woman with a relatively mild presentation (onset at 27 years old) without cardiac involvement; the size of her GAA expansion was not reported [6].
In the study by Galea et al. it is reported that this mutation has a mild to moderate effect on protein stability, but it causes an important reduction in its binding to other molecules [7]. The discovery of mutations in FRDA has permitted the diagnosis of this disease in patients with an atypical clinical picture and a later onset than the traditional forms [8]. In the literature there are two main forms of late-onset atypical presentation: late-onset FRDA (LOFA) (onset between 25-39 years) and very late-onset FRDA (VLOFA) (onset after 40 years). In these forms gait and limb ataxia are present, and dysarthria appears later during the disease. Other manifestations such as scoliosis, pes cavus, cardiomyopathy and diabetes are less frequent in atypical patients, although an abnormal electrocardiogram can appear [9].
Conclusion
Despite being inherited as a recessive disease, Friedreich ataxia | 2019-03-17T13:11:27.113Z | 2018-02-26T00:00:00.000 | {
"year": 2018,
"sha1": "c57bf3234b58460c251c96e2c8b26db1bd677c08",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.4172/2329-6895.1000376",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "bb879ba00307463269f816ac1c701b4873fcaff4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
196206024 | pes2o/s2orc | v3-fos-license | Using social network and semantic analysis to analyze online travel forums and forecast tourism demand
Forecasting tourism demand has important implications for both policy makers and companies operating in the tourism industry. In this research, we applied methods and tools of social network and semantic analysis to study user-generated content retrieved from online communities which interacted on the TripAdvisor travel forum. We analyzed the forums of 7 major European capital cities over a period of 10 years, collecting more than 2,660,000 posts written by about 147,000 users. We present a new methodology for the analysis of tourism-related big data and a set of variables which could be integrated into traditional forecasting models. We implemented Factor Augmented Autoregressive and Bridge models with social network and semantic variables, which often led to a better forecasting performance than univariate models and models based on Google Trends data. Forum language complexity and the centralization of the communication network, i.e. the presence of eminent contributors, were the variables that contributed most to the forecasting of international airport arrivals.
Introduction
The tourism industry represents an extremely complex business scenario, where companies carrying out very different activities integrate their products and services - these comprise travel agencies, tour operators, restaurants, hotels, transportation providers, etc. Products and services can be sold either individually or in holiday packages [1]. Accessing local knowledge is a fundamental step when people are planning a trip. This information can be provided by travel agencies, personal acquaintances, guide books, or the web. With the rapid evolution of the internet and connected devices, such as laptops and mobile phones, the information that people can access on the web has dramatically increased [2], also producing a revolution in the tourism industry. New technologies and online services changed the way tourists relate to travel agents and the way they organize new trips [3]: for example, people can now easily use the web to look for the cheapest flights, compare thousands of hotels, book their access to a museum, or reserve a table at a restaurant. Consequently, the number of clients in the industry increased, as did the amount of information they can access [4,5]. Moreover, operators can now offer their products and services without intermediaries, thus having the possibility to reduce the final price.
Competition is increasingly strong, and marketing strategies can leverage better knowledge of the consumer to offer personalized products [6]. Companies can now increase their profits through insights coming from the analysis of search queries on Google, or of the content of online reviews [7]. Consumers, in turn, are now smarter and more aware of the tricks behind some marketing campaigns.
Therefore, many people prefer to rely on the judgement provided by their peers more than on the information they find on companies' websites. Online interaction on social networks, or on dedicated platforms, makes people feel part of a group [8]; many of them get a sense of reward when they can share their knowledge and help others [9]. Accordingly, online reviews and user-generated content have acquired great importance and have made the success of very well-known websites like TripAdvisor, also confirming their usefulness for making tourism demand predictions [10][11][12]. Big data shared on online social networks can help anticipate rapid changes in tourist preferences and popularity trends of destinations and local attractions; this can be achieved by both analyzing the topics emerging from the online discourse and studying the interaction dynamics among users [13][14][15][16][17].
Following this trend, we propose the analysis of the online travel forums included in one of the world's leading tourism platforms, TripAdvisor, using methods and tools from social network and semantic analysis [18,19]. The objective is to discuss the usefulness of variables extracted from the study of online communities for forecasting international arrivals at the airports of European capital cities. Our contribution is based on the investigation of both the content of people's posts and their social interactions, with the idea that a more active online community, where knowledge-sharing is supported by functional social dynamics, can be predictive of a higher number of arrivals. We present new variables that are relatively easy to extract and monitor from online sources and which could be integrated into other existing forecasting models to improve their accuracy. In this study, we test our methodology considering the last 10 years of the online discourse on TripAdvisor's forums, focusing our attention on 7 major European capital cities. To be consistent in the analysis of language use, we limited our sample to posts written in English. Nonetheless, future research could replicate our methodology considering other online sources, different languages, and other prediction targets (such as the number of visitors to museums or other specific tourist attractions). It is important to consider that our study is partly exploratory. We prove the informative value of semantic and social network indicators, without the ambition of providing a full explanation of the reasons behind their influence on the forecasts made for each city. This would require new dedicated research, which we advocate for the future.
Forecasting tourism demand has significant policy implications; insights from our analysis are useful both for decision makers at a regional and country level and for companies operating in the tourism industry [20,21]. Better predictions can help local companies and policy makers to allocate resources, define pricing policies and implement business plans. More accurate predictions reduce the risk of misplanning, and can be vital for the growth of tourism-dependent economies, both at a local and at a national level [22,23]. Our study also contributes to the literature about tourism forecasting, presenting a new methodological approach and new metrics -based on the social network and semantic analysis of big data -which go beyond the study of online reviews or web search activity [22,24].
Forecasting Tourism Demand
Big data and the development of information and communication technologies are of great importance for the tourism industry, as the internet is a preferred knowledge source for tourists and one of the most important drivers of tourism demand [4,25,26]. Accordingly, new buzzwords are emerging, such as 'smart tourism' - a concept used to "describe the increasing reliance of tourism destinations, their industries and their tourists on emerging forms of ICT that allow for massive amounts of data to be transformed into value propositions" [27]. New data can now be acquired by analyzing tourist interactions on social media websites or their use of mobile applications which enhance their travel experience [28][29][30]. Big data analytics can provide new knowledge about destination choices [31], support strategic decision-making in tourism destination management [32], and help the forecasting of new arrivals [33,34]. In this context, social media and online reviews play a significant role, as they support information search, decision-making and knowledge exchange for tourists [34]. For the companies operating in the tourism industry, social media represent a means to communicate with customers and a place for the implementation of a good part of the marketing strategy [35]. Online travel forums are used by tourists who have specific questions, which are not usually answered in common reviews of tourist attractions: forums reveal specific information needs and their link with prospective destinations [36].
In this study we follow a big data approach to extract information from the TripAdvisor travel forum and measure new variables which could help in forecasting tourist arrivals. Forecasting tourism demand has been a major topic of research in the past decades [37][38][39]; scholars have used a wide range of techniques, with no single model succeeding in outperforming the others in all situations [20].
Some studies focused their attention on the effects that new communication channels, especially social media, have on tourist decisions and choice of destinations [10,40] - for example, Sparks and Browning [24] studied the impact of online reviews on hotel bookings; other researchers investigated the information needs that bring people to post questions on online travel forums [36].
Tourism demand can be measured using different proxies, such as the number of nights spent in accommodation establishments or the number of visa applications. Many studies focused on tourist arrivals and provided predictions based on time series and seasonal trends [41,42]. Considering online sources to help these predictions is not new. Some scholars inferred tourism demand from an analysis of search engine and web traffic data [43,44]. Li, Pan, Law and Huang [45] developed a composite search index to more efficiently analyze search query volumes and improve the forecasting accuracy of Chinese tourism demand. Similarly, Yang, Pan, Evans and Lv [33] used autoregressive models combined with search query data. Artola, Pinto and de Pedraza García [46] showed that traditional models can be improved by using data from Google Trends. Choi and Varian [47] carried out very similar research, again using Google Trends to predict visitors to Hong Kong. Bangwayo-Skeete and Skeete [22] also supported the idea that Google Trends can help outperform conventional time series models. Gunter and Önder [48], instead, used Google Analytics to predict city arrivals in Vienna.
Recent works proposed methods which combine different data sources and techniques to improve model accuracy [49]. Sun et al. [50], for example, combined data mining and models based on Markov chains. Other scholars examined big data, combining multiple online sources - such as price levels and web traffic - to make predictions [51]. We agree on the importance of using combined approaches [52] and data sources, and maintain that there is a need for new variables which can be integrated into existing models; these variables should also be reasonably easy to extract in near real time.
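One simple way to combine a univariate autoregression with an online indicator, in the spirit of the augmented models discussed above, is to add the indicator as an exogenous regressor. The sketch below uses synthetic data and plain least squares; it is only an illustration, not the Factor Augmented or Bridge specification used in this study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic monthly series: "arrivals" driven by their own lag plus a
# standardized online-activity indicator (e.g., forum posts), plus noise.
T = 120
x = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.6 * y[t - 1] + 0.8 * x[t] + rng.normal(scale=0.3)

# AR(1)-X estimated by least squares: y_t = c + phi*y_{t-1} + beta*x_t.
X = np.column_stack([np.ones(T - 1), y[:-1], x[1:]])
coef, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
c, phi, beta = coef
print(f"phi ~ {phi:.2f}, beta ~ {beta:.2f}")
```

If the indicator carries real information about arrivals, beta is estimated away from zero and the augmented model forecasts better than the univariate AR alone.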
Exploring online community dynamics to predict tourism demand
Fewer studies used online travel forum data to make predictions. Dali and Yutaka [53], for example, looked at the most recurring words in a Chinese forum to forecast Chinese people traveling to Japan. To the best of our knowledge, there are also few studies dealing with social network analysis and the prediction of tourism demand. Indeed, the use of social network analysis in tourism is still scarce and recent [54]. With this research, we try to fill this gap. We discuss the role of social network and semantic variables that can be extracted from online big data sources - in our case, the TripAdvisor travel forum - to support the forecasting of international airport arrivals.
We chose to analyze online forums instead of TripAdvisor's reviews for two main reasons: firstly, to study the discourse about European capital cities overall, without limiting our attention to single tourist services or attractions; secondly, because the effects of reviews on tourist behavior have already been explored by many scholars [55][56][57][58]. Indeed, the study of online reviews sometimes has to face the problem of deceptive content, generated by people who share false experiences and judgements to promote local businesses [59].
The success of an online community depends on many factors, such as its level of activity, the presence of rotating leaders and the speed at which users get answers to their questions [19,60]. A community with many active members and posts, where more answers are given to people's questions, is usually more popular than a group with lower participation. Koh and Kim [61] proved that knowledge-sharing activity predicts both community participation and promotion. In addition, if the online content is accessible without registration, this leads to better indexing on search engines, thus attracting more members [62]. Knowledge-sharing activities can also be supported by the presence of informal moderators, who keep different social groups together and offer eminent contributions to the discourse [63]. In general, when the users' level of expertise is higher, one could expect more rapid and effective answers to people's questions [64]. In terms of social network structure, the presence of eminent contributors usually translates into higher network centralization [60,65]. In terms of rotating leadership and democratic participation in community life, the picture is still open to debate: on one hand, Antonacci et al. [60] proved the importance of rotating leaders to support participation and growth of virtual communities of practice; on the other hand, Gloor et al. [66] showed that, in more operational contexts, the presence of steady leaders - who keep static positions and use a simple language - is appreciated by knowledge-seeking clients. The use of language is another dimension worth exploring, not only with regard to complexity. Yin, Bond and Zhang [67] showed that the analysis of positive and negative emotions embedded in review texts can be far more informative than ratings.
Salehan and Kim [68] showed that online reviews with a neutral sentiment are perceived as more useful. Accordingly, we expect that forum posts with an overly positive sentiment could be perceived as suspicious and less informative by prospective tourists.
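Language-use variables such as complexity can be approximated with very simple text statistics. The sketch below shows two crude proxies (average sentence length and lexical diversity) that stand in for the more sophisticated measures a real analysis would use; the sample post is invented.

```python
import re

def avg_sentence_length(text):
    """Average words per sentence - a crude proxy for language complexity."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words_per_sentence = [len(s.split()) for s in sentences]
    return sum(words_per_sentence) / len(words_per_sentence)

def type_token_ratio(text):
    """Distinct words / total words - a crude lexical-diversity proxy."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return len(set(tokens)) / len(tokens)

post = "The museum is great. Book tickets online to skip the queue."
print(avg_sentence_length(post), round(type_token_ratio(post), 2))
```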
Given the influence that online travel communities can have on the choices of prospective tourists [53], it is important to understand and measure their dynamics, to see if information can be extracted to make meaningful forecasts. In this study, we use the framework proposed by Gloor and colleagues [19,60,66], which suggests considering three dimensions for a comprehensive analysis of online social interactions: degree of interactivity, degree of connectivity and language use. This implies using methods and tools of Social Network and Semantic Analysis to investigate: the social structure of interaction, i.e. the shape of relationships among community members and, for example, the presence of central leaders; the evolution of this structure over time and metrics of interactivity, such as the average response time to received messages; and the style of the language used in online conversations, measuring, for example, its positivity or complexity.
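Network centralization, one of the structural dimensions mentioned above, can be computed with Freeman's degree centralization. A minimal pure-Python sketch on a toy reply network follows (the edge lists are invented examples, not forum data):

```python
def degree_centralization(edges):
    """Freeman degree centralization of an undirected interaction network:
    sum of (max degree - degree_i), normalized by (n-1)*(n-2)."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    n = len(deg)
    dmax = max(deg.values())
    return sum(dmax - d for d in deg.values()) / ((n - 1) * (n - 2))

# A star network (one eminent contributor answering everyone) is maximally
# centralized; a ring (everyone equally connected) is not.
star = [("hub", f"u{i}") for i in range(1, 6)]
ring = [(f"u{i}", f"u{(i + 1) % 6}") for i in range(6)]
print(degree_centralization(star), degree_centralization(ring))
```

The star scores 1.0 and the ring 0.0, matching the intuition that centralization captures the presence of eminent contributors.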
Compared to the research about online reviews, the study of online communities to forecast tourism demand is relatively new. As a consequence, we carried out an exploratory analysis to discover the most significant variables which could be used to forecast international airport arrivals.
Methodology
We looked for online data which could be relatively easy and fast to crawl and which could be helpful in predicting the number of visitors to touristic destinations in Europe. Specifically, we focused our experiment on the forecasting of international visitors to seven European capitals, analyzing the online forums of the TripAdvisor website. We chose TripAdvisor as this is the leading tourism online platform, active since February 2000 and used all over the world. In 2017 it counted 535 million users and included reviews and information about 7.3 million restaurants, accommodations, airlines and tourism attractions 1 . The website, available in multiple languages, counts more than 455 million unique visitors every month and has the power to significantly drive and influence tourist decisions. This platform includes an online forum (also accessible to non-registered users) where people can interact by exchanging travel tips and opinions and by sharing personal experiences. This forum deals with topics tightly connected to our research question, it is rich in information and user interaction, and has a high number of posts: as a result, it is a suitable candidate for our analysis [69].
In order to extract forum data, we developed a specific web crawler using the Java programming language. The crawler was able to parse HTML pages and extract the information of interest, with associated timestamps, to allow a longitudinal analysis. We conducted our experiment analyzing more than 2,660,000 forum posts, written by more than 147,000 users, over a time period of ten years (from January 2007 to December 2016). We did not collect earlier posts: although the first forum interactions date back to September 2004, we wanted to be sure to skip the forum startup phase. Our analysis was restricted to posts written in English for two main reasons: firstly, to be consistent in the measurement of semantic variables; secondly, because English was the most used language for the exchange of opinions among tourists of different nationalities. In addition to forum interactions, we analyzed profile pages where information about participants -such as their gender, age and number of posts/reviews -was available.
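The crawler itself was written in Java, but its core task -parsing HTML pages into (user, timestamp, text) records -can be illustrated with a short, self-contained Python sketch. The tag and class names below ("post", "username", "postDate", "postBody") are hypothetical placeholders; TripAdvisor's real markup differs and changes over time, which is exactly the data-quality issue described for Athens, London and Rome.

```python
from html.parser import HTMLParser

class ForumPostParser(HTMLParser):
    """Collect (username, timestamp, text) records from a forum page.

    The class names ("post", "username", "postDate", "postBody") are
    hypothetical -- the real markup differs and changes over time.
    """
    def __init__(self):
        super().__init__()
        self.posts, self._field, self._current = [], None, {}

    def handle_starttag(self, tag, attrs):
        cls = dict(attrs).get("class", "")
        if cls == "post":                      # a new post container starts
            self._current = {}
        elif cls in ("username", "postDate", "postBody"):
            self._field = cls                  # remember which field follows

    def handle_data(self, data):
        if self._field:                        # store text for the open field
            self._current[self._field] = data.strip()
            self._field = None
            if len(self._current) == 3:        # record complete: keep a copy
                self.posts.append(dict(self._current))

html = ('<div class="post"><span class="username">alice</span>'
        '<span class="postDate">2016-05-01</span>'
        '<div class="postBody">Any tips for the metro?</div></div>')
parser = ForumPostParser()
parser.feed(html)
print(parser.posts)
# [{'username': 'alice', 'postDate': '2016-05-01', 'postBody': 'Any tips for the metro?'}]
```

Keeping the timestamp with each record is what makes the later monthly, longitudinal aggregation of the variables possible.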
For the selection of the seven European capitals, we considered the top European nations according to the EUROSTAT 2 ranking on the number of nights spent in tourist accommodation establishments for the year 2016. Subsequently, we selected those capital cities for which we found a significant number of forum posts on TripAdvisor over the past ten years (more than 100,000 posts overall, at least 10,000 per year). The cities selected with this procedure would have been the same had we considered the European capital cities with the highest number of international airport arrivals 3 . Due to data quality issues, we could not analyze three cities we originally selected: Athens, London and Rome. For these cities the crawler produced a significant amount of incomplete or inconsistent data, as the website API returned errors or because the HTML structure of the webpages proved inconsistent (or changed) during the collection process. Therefore, to avoid introducing biases in the analysis, we preferred working on a sample of 7 cities for which we could collect verified data of good quality. The capitals included in the study were: Amsterdam, Berlin, Lisbon, Madrid, Paris, Prague and Vienna. We analyzed 7 separate datasets, as each city had a dedicated travel forum on the online platform, organized in forum topics. Users could either open new topics or comment on existing ones. Table 1 shows the total number of forum posts and users for each city, as extracted by the crawler. We see that Paris had the highest participation. Below we present the list of variables we could measure and include in the study. The measurement of each variable was repeated on a monthly basis, for each capital city.
Table 1. Total number of forum posts and users for each city.
Percentage Male. It is the proportion of male users who posted in the forum.
Average Age. It is the average age of users who posted in the forum.
Users Level. Each user's activity on TripAdvisor is rewarded with a specific number of points -for example, users get 100 points for writing a review, 30 points for uploading a photo and 20 points for writing a forum post. Points translate into levels (ranging from 0 to 6, where level 1 is obtained at 300 points and level 6 at 10,000 points or more). Users who largely contribute to the website are awarded a higher level, which reflects their reputation and, partially, their expertise. Users Level is calculated as the sum of the individual levels of the users interacting in a city forum.
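The points-to-level mapping described above can be sketched as follows. The per-contribution points and the level-1 and level-6 thresholds come from the text; the intermediate cut-offs in the sketch are illustrative placeholders, not TripAdvisor's actual values.

```python
# Points awarded per contribution type, as described in the text.
POINTS = {"review": 100, "photo": 30, "forum_post": 20}

# Level thresholds: the text states level 1 starts at 300 points and
# level 6 at 10,000 points or more; the intermediate cut-offs below
# are illustrative placeholders only.
LEVEL_THRESHOLDS = [300, 1000, 2500, 5000, 7500, 10000]  # levels 1..6

def user_points(n_reviews, n_photos, n_posts):
    """Total points earned from a user's contributions."""
    return (n_reviews * POINTS["review"]
            + n_photos * POINTS["photo"]
            + n_posts * POINTS["forum_post"])

def user_level(points):
    """Level 0..6: count how many thresholds the point total reaches."""
    return sum(1 for threshold in LEVEL_THRESHOLDS if points >= threshold)

pts = user_points(n_reviews=2, n_photos=5, n_posts=10)  # 200 + 150 + 200
print(pts, user_level(pts))  # 550 1
```

The study's Users Level variable is then simply the sum of `user_level(...)` over all users active in a given city forum in a given month.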
Users Photos. It is the sum of the total number of photos uploaded on TripAdvisor by the users who were active in a city forum.
As a proxy for the number of international tourists traveling to a capital city, we considered the number of international arrivals at that city's airport (excluding transit passengers), as extracted from the EUROSTAT 4 database. Even if airport arrivals have been used in previous studies [70] and air transport and tourism have proved to be interlinked [71], our choice has some potential limitations, as people could be traveling for work rather than for tourism-related reasons. Moreover, tourists could reach a capital city by other means of transport. Some of these limitations are common to other possible proxies for the level of tourism: for example, if the number of nights spent in tourist accommodation establishments is taken into account, there is the problem of including people staying in hotels for work purposes. Moreover, the number of nights spent in a city does not necessarily reflect the number of people who visited that city, due to the variability of the time spent in the city by each tourist [70]. Another indicator -which has been used in the past [72] -is the number of visa requirements, which is however very difficult to associate with the number of visitors to a specific city and is therefore more appropriate when carrying out an analysis at the country level. In addition, European tourists often do not need a visa to access other countries in Europe. Accordingly, we maintain that our choice of international airport arrivals as the dependent variable of our study is not completely free from possible biases, but it still represents a good proxy of tourism demand.
This choice is consistent with other studies [70] which already showed that the level of tourism is associated with airport arrivals [73].
Social Network Data
Collecting forum data was important as it allowed us to map the interaction dynamics within the online communities. Thanks to our crawler, we were able to extract the social network of each city on a monthly basis (analyzing a single network covering the whole ten-year period was computationally not viable, given its very big size). Network size is consistent with the rankings reported in Table 1, with Paris having the most active forum. The contribution offered by this research is based on the exploration of online social interaction in travel forums to identify variables which can help forecast tourist arrivals. Specifically, we investigated social dynamics according to the framework proposed by Gloor and colleagues [66], which is based on the measurement of the degree of connectivity and interactivity in online communities and on the analysis of language use.
Social structure (connectivity) was studied considering two well-known metrics: Group Degree Centrality and Group Betweenness Centrality [18]. Degree centrality is a measure of the number of direct connections of each user; it answers the question: "with how many other users did he/she interact directly?". When measured at the group level, it shows how much variation there is in the degree centrality scores of individuals. If a network is dominated by a central actor, connected to all others who share no connections among themselves, the group degree centrality is maximal and equal to 1 [18]. Betweenness centrality, on the other hand, is a measure that goes beyond direct links and shows how frequently a node lies on the paths that interconnect the other nodes; this measure can often be considered a proxy of the amount of information that passes through a specific social actor [18,74].
Similarly to group degree centrality, group betweenness centrality expresses the heterogeneity of betweenness centrality scores, and it reaches the maximum value of 1 if the network is close to a star graph, where a central actor interconnects all of his/her peers [18].
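The group-level (Freeman) version of degree centrality described above can be sketched in a few lines of Python. This is a didactic, stdlib-only illustration of the normalization that makes a star graph score 1 and a regular graph score 0; group betweenness centrality follows the same pattern with betweenness scores in place of degrees.

```python
def group_degree_centrality(adjacency):
    """Freeman group degree centrality of an undirected network.

    `adjacency` maps each node to the set of its neighbours. Returns 1
    for a star graph (one hub connected to otherwise-isolated peers)
    and 0 when every node has the same degree.
    """
    degrees = {node: len(neigh) for node, neigh in adjacency.items()}
    n = len(degrees)
    d_max = max(degrees.values())
    # Freeman's normalisation: the numerator is maximised by a star
    # graph, whose total deviation from the hub degree is (n-1)*(n-2).
    return sum(d_max - d for d in degrees.values()) / ((n - 1) * (n - 2))

star = {"hub": {"a", "b", "c", "d"},
        "a": {"hub"}, "b": {"hub"}, "c": {"hub"}, "d": {"hub"}}
ring = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {2, 0}}
print(group_degree_centrality(star))  # 1.0 -- one dominant central actor
print(group_degree_centrality(ring))  # 0.0 -- perfectly even participation
```

In the study, a value close to 1 for a monthly city network would thus signal a forum dominated by a few central contributors, while values near 0 indicate evenly distributed interaction.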
Interactivity was studied by considering the number of new users, the levels of activity and the Average Response Time (ART) taken by users to answer comments or questions (measured in hours).
Activity counts the number of network links generated by the users' posts. The New Users variable counts the number of new users joining an online city forum.
In addition, we calculated a group-level metric which expresses the Rotating Leadership of community members, operationalized as the count of their oscillations in betweenness centrality [75]. A community where members occupy static positions -for example, due to the presence of eminent contributors who share their unique knowledge -has zero or few oscillations; on the other hand, when community members support the active participation and involvement of other users, they rotate more, sharing their leadership and making the interactions more 'democratic'.
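Counting oscillations in a member's betweenness centrality can be sketched as counting direction changes (local maxima and minima) in the centrality time series. This is a simplified illustration; the exact operationalization in [75] may differ in detail.

```python
def count_oscillations(series):
    """Count direction changes (local maxima and minima) in a time
    series of a member's betweenness centrality -- a simple way to
    operationalise rotating leadership.
    """
    oscillations = 0
    for prev, cur, nxt in zip(series, series[1:], series[2:]):
        if (cur > prev and cur > nxt) or (cur < prev and cur < nxt):
            oscillations += 1
    return oscillations

static_member = [0.9, 0.9, 0.9, 0.9, 0.9]          # eminent, static contributor
rotating_member = [0.1, 0.8, 0.2, 0.7, 0.1, 0.6]   # leadership rotates
print(count_oscillations(static_member))    # 0
print(count_oscillations(rotating_member))  # 4
```

Summing these counts over all members of a city forum in a month yields a group-level rotating-leadership score: high when leadership is shared, low when a few contributors dominate.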
The use of language was studied along the dimensions of language Sentiment and Complexity.
Sentiment is a measure expressing the positivity or negativity of community posts; it ranges from 0 to 1, where 0 represents very negative posts and 1 very positive ones. The calculation was made using the machine learning algorithm included in the software Condor [19]; we used the same software to calculate language complexity, based on the likelihood distribution of words within a post, as illustrated in the work of Brönnimann [76]. Briefly, complexity weighs each word by its probability of appearing in the text, based on the term frequency/inverse document frequency (TF-IDF) information retrieval metric:

complexity = (1/n) Σ_{w ∈ V} q(w) · log(1/p(w))

where n is the total number of words within a post, V is the vocabulary of words that appear in the post, q(w) is the frequency of word w, p(w) is the probability of word w appearing in a post, and log 1/p(w) is the inverse document frequency of word w in the corpus.
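The complexity formula above can be implemented directly. The toy corpus probabilities below are invented for illustration, and this sketch will differ in detail from Condor's exact implementation; it only shows the mechanics of the TF-IDF-style average surprisal.

```python
import math
from collections import Counter

def complexity(post_tokens, corpus_word_probs):
    """Average TF-IDF-style surprisal of a post, following the formula
    above: (1/n) * sum over the vocabulary of q(w) * log(1/p(w)).
    `corpus_word_probs` maps each word to its corpus probability p(w).
    """
    n = len(post_tokens)
    freqs = Counter(post_tokens)                      # q(w), per word
    return sum(q * math.log(1.0 / corpus_word_probs[w])
               for w, q in freqs.items()) / n

# Toy corpus probabilities: "the" is common, "catacombs" is rare.
p = {"the": 0.5, "visit": 0.2, "louvre": 0.2, "catacombs": 0.1}

common = ["visit", "the", "louvre"]
rare = ["visit", "the", "catacombs"]
print(complexity(common, p) < complexity(rare, p))  # True
```

Rarer words carry a larger log(1/p(w)) term, so posts that introduce new or unusual vocabulary score as more complex -- which is precisely why the authors read high complexity as a signal of new, informative content entering the forum.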
Lastly, in order to compare the outcomes of our model with past research, we collected two additional variables, named Google Trend Flights and Google Trend Holidays; these variables correspond to the Google Trends search volume index for the search queries made of the name of a city followed (or preceded) by the word "flights" or the word "holidays", respectively. This choice is consistent with previous studies [77][78][79], and is detailed in the work of Artola et al. [46], who also examined its limitations. Artola and colleagues showed that using these variables can significantly improve the prediction of tourism inflows. Here we do not dwell on these variables and findings, but use them for comparative purposes. Table 2 summarizes the variables which we used to forecast international arrivals.
Table 2. Variables used to forecast international arrivals (excerpt):
- Users Photos: sum of the total number of photos uploaded on TripAdvisor by the users who were active in a city forum.
- Users Level: level attributed to each user on TripAdvisor (summed for each city forum). Depends on user experience (number of reviews, forum posts, uploaded photos). Users who largely contribute to the website are awarded a higher level, which reflects their reputation and, partially, their expertise.
- Percentage Male: percentage of male users in a forum.
- Average Age: average age of users who posted in the forum.
Forecasting Model
Let us suppose that the scalar time series to forecast, y_t, is generated by the following autoregressive (AR) model:

y_{t+h} = α + ρ_1 y_t + ρ_2 y_{t-1} + … + ρ_p y_{t-p+1} + ε_{t+h}    (1)

where y_t is the target series, h represents the number of steps ahead to forecast, ρ_i represents the i-th coefficient of the autoregressive part of the model of order p, and ε_t is a serially uncorrelated error term with E(ε_t) = 0, E(ε_t^2) = σ^2 and E(ε_t^4) < ∞, such that E(ε_{t+h} | y_t, y_{t-1}, …) = 0.
Let us also suppose that a large number (M) of indicators, X_t, are available. In general, we refer to all the Socio-Semantic Indicators (SSI) presented in Sections 3 and 3.1, considered with their lags. Given the high dimension of M, that is M > T, the series X_t cannot be included in the model separately.
However, the objective remains to extract useful information from X_t in order to improve the forecasting ability of (1). This task can be accomplished by reducing the number of regressors. A standard solution to this problem is to impose a factor structure on the predictors, in order to extract a small number of components from a large set of variables, so that the relevant estimation model can be reformulated as a factor-augmented autoregressive (FAAR) model:

y_{t+h} = α + ρ_1 y_t + … + ρ_p y_{t-p+1} + ξ′F_t + ε_{t+h}    (2)

where F_t represents an R × 1 vector of factors and ξ a coefficient vector.
Put differently, the h-step-ahead forecast is given by the following equation:

ŷ_{T+h} = α̂ + ρ̂_1 y_T + … + ρ̂_p y_{T-p+1} + ξ̂′F_T    (3)

To estimate the forecasting model (3) we followed a three-step algorithm, as suggested by Girardi, Guardabascio and Ventura [82]. In particular, the factors F_t are computed by using Partial Least Squares (PLS) [83] between y_t and X_t. Differently from Principal Components, PLS incorporates information from both the target variable and the predictors in the definition of scores and loadings. In this regard, de Jong [84] shows that the scores and loadings can be chosen in a way which describes as much as possible of the covariance between the dependent variable and the regressors. We implemented the PLS algorithm on the residuals obtained at the first step mentioned above. The idea is that the residuals contain the part of y_t which is unexplained; thus, we tried to add information to the explanatory variables by means of PLS applied to the SSI indicators. Moreover, the orthogonality between the residuals and the hard indicators preserves the orthogonality between the factors and the AR component.
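The core of the PLS step -choosing a factor direction that maximizes covariance with the target, rather than the unsupervised variance criterion of Principal Components- can be sketched as a single-component, pure-Python illustration. The actual estimation in [82,83] uses multiple components and the residual-based scheme described above.

```python
def pls_first_component(X, y):
    """First PLS component: the weight vector w is proportional to X'y,
    i.e. the direction of maximum covariance with the target, and the
    factor scores are the projections of each row of X onto w.
    X is a list of rows (observations), y the target series; both are
    assumed centred. Single component only -- a didactic sketch.
    """
    n, p = len(X), len(X[0])
    w = [sum(X[i][j] * y[i] for i in range(n)) for j in range(p)]  # X'y
    norm = sum(v * v for v in w) ** 0.5
    w = [v / norm for v in w]                                      # unit length
    scores = [sum(X[i][j] * w[j] for j in range(p)) for i in range(n)]
    return w, scores

# Column 0 tracks the target, column 1 is unrelated noise: the PLS
# weight concentrates on column 0, which an unsupervised component
# would not be guaranteed to do.
X = [[1.0, 0.3], [-1.0, -0.2], [2.0, 0.1], [-2.0, -0.2]]
y = [1.0, -1.0, 2.0, -2.0]
w, scores = pls_first_component(X, y)
print(abs(w[0]) > abs(w[1]))  # True
```

In the FAAR model, these scores play the role of F_t: a handful of supervised summaries of the many SSI columns, small enough to enter equation (2) alongside the AR terms.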
Forecasting Procedure
All the variables included in our models were recorded on a monthly basis, covering the ten-year span from January 2007 to December 2016. To compare the forecasting performance of the different models, we referred to the mean square forecast error:

MSFE_h = (1/N) Σ_t (y_{t+h} − ŷ_{t+h})^2

where N is the number of months in the forecast sample and h = 1, 3, 6, 12. Finally, we found the set of models that forecasted equally well, relying on the model confidence set analysis of Hansen, Lunde and Nason [85]. The test for the null hypothesis of equal predictive ability at the 10% significance level was implemented using a block bootstrap scheme with 5,000 resamples.
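The MSFE comparison can be sketched directly; the arrivals numbers below are invented for illustration. A competing model beats the AR benchmark when the ratio of its MSFE to the benchmark's MSFE falls below 1, which is the convention used in the results tables.

```python
def msfe(actual, forecast):
    """Mean square forecast error over the evaluation sample."""
    assert len(actual) == len(forecast)
    return sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual)

# Two hypothetical models evaluated against the same arrivals series.
arrivals   = [100.0, 120.0, 110.0, 130.0]
ar_bench   = [ 98.0, 124.0, 105.0, 126.0]  # AR benchmark forecasts
faar_model = [101.0, 119.0, 111.0, 129.0]  # factor-augmented forecasts

ratio = msfe(arrivals, faar_model) / msfe(arrivals, ar_bench)
print(ratio < 1)  # True: the FAAR-style model improves on the benchmark
```

The model confidence set procedure of Hansen, Lunde and Nason then tests whether such MSFE differences are statistically distinguishable, rather than comparing the raw ratios alone.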
Results
This research was conceived with the idea of exploring the social dynamics of communities in online travel forums, to find new variables that could help in the prediction of international tourist arrivals at major European city airports. Figure 2 shows the time series of international airport arrivals for each city. We can notice a similar seasonality and an often positive time trend. With regard to the gender distribution of users, men were predominant in almost all forums, except for Paris. As regards users' age, we see that the majority of community members were between 35 and 64, with average ages probably varying depending on the tourist attractions of each city. We think that a high average age can reflect a tendency of young users to look for tourism information using other online sources -such as Twitter, Facebook groups or Google Maps for transportation. However, we cannot exclude the possibility that some young users read the forums without posting. In addition, it was not a surprise to see that the people who provided tips and comments mostly lived in those very cities [10].
Even if formally modeling the major topics discussed in each forum was not among the objectives of our research, we could notice that in general users were asking for information about restaurants, hotels and tourist attractions (such as museums). One of the most recurrent topics was local means of transport, with an associated negative sentiment. The sentiment of opinions about restaurants was generally more positive than that of comments about hotels. Discussions about museums generally carried positive sentiment, except when ticket prices and the time spent in entrance queues were discussed. Table 3 shows the overall mean and standard deviation scores of our variables. In addition, the table reports a preliminary analysis with the results of the Dumitrescu & Hurlin test [86]. We performed this test after the removal of seasonal components, to check whether our variables could Granger-cause tourist arrivals up to three months in advance. Seasonal components were removed using the "STL" package in R, i.e. with a procedure based on the loess smoother [87]. *p < .05; **p < .01; ***p < .001.
Table 3. Descriptive Statistics and Dumitrescu & Hurlin Test
Test results show a potentially significant association of international airport arrivals with all our social network and semantic metrics, apart from average response time and group betweenness centrality at lags 1 and 2. Users level, activity and complexity are the variables that exhibit the strongest associations with international arrivals. Table 4 compares the performance of the different models over four forecasting horizons, reporting the ratio of each model's mean square forecast error to that of the AR benchmark. Accordingly, the performance of one model is better than the AR if the corresponding ratio is lower than 1. In the table, we also report the average Root Mean Squared Error for each model and forecasting horizon.
Asterisks in the table mark those models which are included in the superior set at the 10% significance level. Table 4 shows that information coming from semantic and social network variables can significantly improve the forecasting performance for international airport arrivals provided by AR or BRIDGE-GF models. Indeed, models which included these new variables were the best choice in 79% of cases and were included in the superior set 93% of the time. The models with the AR component and our predictors (FAAR), without Google Trend Flights, could outperform the other models in 54% of cases. The worst performance was obtained for six-month forecasts. At this horizon, AR models represented a better choice for 5 cities out of 7 (Lisbon, Madrid, Paris, Prague and Vienna), even if FAAR models were still included in the superior set for all these cities apart from Prague. On the other hand, our predictors led to better forecasts, even at h = 6, for Amsterdam and Berlin. We only used the Google Trend Flights indicator, as it always performed better than Google Trend Holidays; in general, the inclusion of the latter variable in the models did not lead to better results. We found a potential collinearity issue -due to the high correlation of activity, user level and new contacts -which was however efficiently handled by our factor models.
In order to better evaluate the robustness of our models, we tested several other approaches and combinations of predictors, to see whether good forecasts could be obtained using simpler metrics, without the need to calculate SSI indicators. In particular, we tried to understand whether the metric of activity could be substituted by two simpler metrics: the number of posts and the average number of replies per thread. Similarly, we considered the heterogeneity of user levels -i.e. their standard deviation -as a possible replacement for group degree and group betweenness centrality, which measure heterogeneity in user centralities. We wanted to be sure that the effort put into the calculation of SSI indicators was justified. Consistently with the results in Table 4, these simpler alternatives led to no better results. Table 5. Analysis of FAAR weights.
Discussion, Limitations and Future Research
The level of tourism impacts the economy of a country [88]. Forecasting tourism demand has important implications for policy makers, company managers working in the tourism industry and several other stakeholders. At the same time, information and opinions exchanged among tourists can influence the number of visitors to specific destinations or attractions and the image formation of places tourists have not yet visited [89,90]. In our case study, we present a ten-year analysis of the TripAdvisor travel forum, carried out by developing a specific web crawler and combining methods and tools from social network and semantic analysis. Descriptive statistics indicate that male users were predominant, and that the majority of forum participants were between 35 and 64 years old. Among the most recurring topics, we found requests for information about local means of transport, associated with a lower average sentiment. This is also due to the fact that users' comments about local means of transport are commonly shared in forums and not in separate reviews -as can happen for hotels, airlines, museums and restaurants.
Our findings indicate that variables coming from the analysis of online forums can significantly improve forecasting models which consider the volume of online search queries (measured by the Google Trends index). Social network and semantic variables could improve forecasts in 79% of cases, and models including them were in the superior set in 93% of cases. There were exceptions to this improved performance at the six-month forecasting horizon, where for 5 cities our predictors could not improve on the results of AR models.
Overall, forum language complexity and the centralization of communication (group degree and betweenness centrality) emerged as the most important predictors. Higher complexity seems to anticipate more arrivals. It can be indicative of a more informative language [19], as this measure is higher when new words are used in forum posts. Therefore, one explanation of the link between complexity and arrivals could be that prospective tourists look for information about their destinations, posting questions which demand new knowledge; answers to these questions bring new content into the forums, making the language more complex. Similarly, a higher centralization of online interactions can be a signal of the presence of eminent contributors, i.e. informal moderators -probably local experts -who share their knowledge with prospective tourists.
The percentage of male users could also contribute to the improvement of forecasting performance. The number of forum posts (activity), on the other hand, was less informative than the number of new users joining the forum. Even if significant in our preliminary analysis of Granger causality, other variables -such as average response time and users level -were less important for forecasting purposes. Rotating leadership, which seems to have a pivotal role for online community growth [60], had high model weights in 50% of cases, with a declining trend for longer forecasting horizons. It seems that the presence of local experts who dominate conversations can support prospective tourists more than democracy in interactions and plurality of opinions.
In general, we maintain that traditional models for the prediction of tourism demand -based on more conventional metrics, such as the univariate analysis of tourist arrivals -can be improved by extracting big data from online sources. In this sense, the variables presented in this study have a potential which should be studied further, using different forecasting techniques. In addition, the theoretical reasons behind the different contributions of each of them could be investigated in dedicated future research.
This study also has other limitations which we plan to address in future studies. The choice of the number of international airport arrivals as the dependent variable does not consider visitors entering a country by other means of transport; this variable is also not suitable for identifying people traveling for work. Moreover, a part of international travelers might land at a city airport and then move on to other places. As already discussed in Section 3, the selection of other measures would imply other limitations. For example, if we counted the number of visa requirements, we would need to take into account that citizens of the European Union do not need a visa to travel in EU countries. Similarly, counting the number of nights spent in accommodation establishments is not always a good proxy for the number of tourists [70]. In general, we remind the reader that the main objective of this study is to give evidence of the potential of new variables which can be useful for tourist arrival predictions, and which can be relatively easy to extract from the web. Accordingly, even if international airport arrivals are not a perfect proxy of tourism demand, we maintain this is a reasonable indicator for the purpose of our research. This choice is also consistent with previous research [70,73]. We advocate future research to study online travel communities interacting in Facebook groups or on other social media platforms, where the number of users younger than 35 years old is potentially larger. Could SSI variables measured on other online platforms be more informative than those measured on the TripAdvisor travel forum? It would also be interesting to study the effects of language sentiment and complexity considering languages other than English. Scholars might want to test additional dependent variables, or forecast tourism demand at different levels (for example, analyzing online communities to predict the number of visitors to specific museums or attractions).
We advocate future research to assess the predictive power of SSI indicators considering cities outside Europe and smaller, less popular, cities which are not capitals. The development of a more resilient and flexible crawler is also in our future plans, in order to reduce/eliminate data quality issues and examine interactions taking place on other web platforms. Lastly, in-depth analysis of the major topics of each city forum could provide further insights.
Conclusions
The use of big data in tourism creates new challenges [91]. We show that applying social network and semantic analysis to big data extracted from online travel forums can help make predictions of international tourist arrivals. Our metrics prove their value in increasing the forecasting accuracy of models which consider the volume of online search queries. These findings contribute to the research on tourism forecasting, presenting a new approach and new metrics. Past research mostly considered other sources of online data -such as web search queries or online reviews [22,24,[77][78][79] -whereas interaction dynamics in travel forums were less explored. Moreover, the use of social network analysis in tourism is recent [54]. This research has practical implications for researchers, policy makers and business managers working in the tourism industry -who could, for example, adjust prices or make more accurate sales forecasts. Similarly to Song and Witt [92], we maintain that accurate forecasts are vital for: efficient planning of tourism-related businesses dealing with extremely perishable products (to avoid, for example, overbookings or empty hotel rooms); adequate appraisal of public projects and planning of investments in destination infrastructures; and appropriate support of governmental decision-making processes regarding the allocation of resources and the formulation of medium-to-long-term tourism strategies.
"year": 2021,
"sha1": "6418fd7b7c44f38f0664dd96c4bfc56e4a9093f6",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2105.07727",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "33aebbde87bbe8e15107abbb8a948f9734879640",
"s2fieldsofstudy": [
"Computer Science",
"Business"
],
"extfieldsofstudy": [
"Economics",
"Computer Science"
]
} |
Preserved skeletal muscle protein anabolic response to acute exercise and protein intake in well-treated rheumatoid arthritis patients
Introduction Rheumatoid arthritis (RA) is often associated with diminished muscle mass, reflecting an imbalance between protein synthesis and protein breakdown. To investigate the anabolic potential of both exercise and nutritional protein intake we investigated the muscle protein synthesis rate and anabolic signaling response in patients with RA compared to healthy controls. Methods Thirteen RA patients (age range 34–84 years; diagnosed for 1–32 years, median 8 years) were individually matched with 13 healthy controls for gender, age, BMI and activity level (CON). Plasma levels of C-reactive protein (CRP), interleukin (IL)-6 and tumor necrosis factor (TNF)-α were measured using enzyme-linked immunosorbent assay (ELISA) in resting blood samples obtained on two separate days. Skeletal muscle myofibrillar and connective tissue protein fractional synthesis rate (FSR) was measured by incorporation of the amino acid 13C6-phenylalanine tracer in the overnight fasted state for 3 hours (BASAL) and 3 hours after intake of whey protein (0.5 g/kg lean body mass) alone (PROT, 3 hrs) and in combination with knee-extensor exercise (EX) with one leg (8 × 10 reps at 70 % of 1RM; PROT + EX, 3 hrs). Expression of genes related to inflammatory signaling, myogenesis and muscle growth/atrophy were analyzed by real-time reverse transcriptase-polymerase chain reaction (RT-PCR). Results CRP was significantly higher in the RA patients (2.25 (0.50) mg/l) than in controls (1.07 (0.25) mg/l; p = 0.038) and so was TNF-α (RA 1.18 (0.30) pg/ml vs. CON 0.64 (0.07) pg/ml; p = 0.008). Muscle myofibrillar protein synthesis in both RA patients and CON increased in response to PROT and PROT + EX, and even more with PROT + EX (p < 0.001), with no difference between groups (p > 0.05). The gene expression response was largely similar in RA vs. CON, however, expression of the genes coding for TNF-α, myogenin and HGF1 were more responsive to exercise in RA patients than in CON. 
Conclusions The study demonstrates that muscle protein synthesis rate and muscle gene expression can be stimulated by protein intake alone and in combination with physical exercise in patients with well-treated RA to a similar extent as in healthy individuals. This indicates that moderately inflamed RA patients have maintained their muscle anabolic responsiveness to physical activity and protein intake. Electronic supplementary material The online version of this article (doi:10.1186/s13075-015-0758-3) contains supplementary material, which is available to authorized users.
Introduction
Rheumatoid arthritis (RA) is a systemic, inflammatory, autoimmune disease primarily affecting the joints [1]. Patients with RA are often characterized by having a lower muscle mass than their peers [2] and one of the causal mechanisms has been suggested to be related to the chronic inflammatory state [3]. Rat studies show that the development of low-grade inflammation negatively affects muscle mass and attenuates the muscle protein synthesis response to feeding [4,5]. Moreover, plasma from cachectic patients (cancer and septic shock), characterized by high levels of inflammatory markers, can induce inflammatory signaling and loss of muscle protein in cultured muscle cells [6][7][8]. Likewise, an increased level of systemic inflammation may contribute to the muscle loss observed in relation to other diseases like cancer, chronic obstructive pulmonary disease (COPD) and diabetes [8][9][10][11][12][13][14][15]. Evidently, the loss of muscle mass leads to muscle strength deficits and in addition, RA patients may have reduced muscle strength due to greater intramuscular fat infiltration [16] along with pain-related limitations. In addition to the repeatedly reported reduction in muscle strength in RA patients [16][17][18], metabolic changes occur in both preclinical and later RA stages, including deterioration of blood lipid profile and insulin sensitivity [19][20][21] which may increase cardiovascular disease risk, summing up to a reduced life span [22]. All of these conditions could be rejuvenated by improving skeletal muscle mass and quality by means of exercise and nutritional interventions, highlighting the importance of understanding the molecular regulation of muscle mass in RA.
Resistance exercise enhances protein turnover rate, thus increasing both protein synthesis and breakdown rates. However, a concomitant intake of dietary protein further stimulates muscle protein synthesis, resulting in net protein synthesis and thus protein accretion. When repeated, this constitutes a strategy to counteract loss of muscle mass and strength.
In the present study, we aimed to investigate skeletal muscle mass regulation in methotrexate-treated RA patients, measuring leg muscle protein synthesis and the expression of genes involved in myogenesis, inflammatory signaling and growth/atrophy in response to resistance exercise and whey protein supplementation in RA patients compared with control subjects. Each RA patient was carefully matched with a control subject based on age, gender, activity level and body mass index (BMI) to rule out direct effects of these parameters and to focus on effects specifically related to the RA disease. The age and activity matching was important, since an impaired anabolic response -anabolic resistance -has been reported with aging [23], and since, even within the normal range of inflammatory indicators, both age and level of physical activity play a role and could contribute to the reductions in muscle mass with RA.
In a rat model of RA, adjuvant-induced arthritis, remarkable changes in skeletal muscle have been demonstrated [24][25][26][27][28][29][30], including increased mRNA expression of tumor necrosis factor (TNF)-α, muscle ring-finger protein (MuRF1), atrogin1, insulin-like growth factor (IGF)-1, MyoD and myogenin in relation to muscle wasting and a reduced body weight gain during growth. Similar alterations in RA patients may underlie the muscle deteriorations observed in these patients. However, whether expression of such anabolic or proteolytic pathway genes is altered in muscle of RA patients and how these are regulated by exercise and protein intake is to our knowledge currently unknown.
In recent years, treatment of RA patients has improved, resulting in better quality of life for most patients. Therefore, we included well-treated RA patients receiving the anchor disease-modifying antirheumatic drug (DMARD), methotrexate, which is the first-line medical treatment for RA patients. Patients receiving biological anti-TNF-α or steroid therapy were excluded, in order to obtain a homogenous experimental group and because glucocorticoids are expected to markedly affect skeletal muscle per se [31]. Although antirheumatic treatment somewhat reduces the level of systemic inflammation in RA, it has been shown not to lower it completely to that of healthy peers [32,33], and we anticipated that the well-treated RA patients included in this study would still have increased levels of systemic inflammation [17,32,[34][35][36][37][38].
This study reports for the first time an anabolic response of myofibrillar and collagen protein synthesis and gene expression to acute resistance exercise and protein feeding in RA patients, similar to that of healthy controls. However, expression of the genes coding for TNF-α, myogenin and hepatocyte growth factor 1 (HGF1) was more responsive in RA patients compared with controls.
Subjects
Thirteen patients diagnosed with RA according to the American College of Rheumatology (ACR) classification criteria from 1987 and not receiving anti-TNF-α or steroid therapy (6 months and 6 weeks washout, respectively) were included. All patients received methotrexate. Time since RA diagnosis ranged from 1 to 32 years (median 8 years). Exclusion criteria were: type 2 diabetes, BMI above 38, cardiovascular disease, cancer or known infections. The Disease Activity Score in 28 joints (DAS28) ranged from 1.8 to 4.6 (mean 2.6, SD 1.0, n = 6), and 50 % of the patients were seropositive. Each patient (RA) was carefully matched with a healthy control subject (CON) based on gender, age (+/− 2 yrs) and BMI (+/− 2 units). RA patients were classified into one of four groups (1-4) of physical activity level according to the physical activity part of the Copenhagen City Heart Study questionnaire (Østerbroundersøgelsen, [39]). Matching CON subjects had to fit into the same activity group as the corresponding RA patient. For inclusion of matching controls, 150 candidates were screened by telephone interviews. The study was approved by the Research Ethics Committees of the Capital Region of Denmark (H-4-2011-028) and conformed to the Declaration of Helsinki. All subjects gave written informed consent before participation. Subjects were asked to refrain from caffeine and alcohol for 1 and 3 days, respectively, prior to the experiment and to avoid exercise for the last 2 days before the experimental day.
Pretest day
Prior to the experimental day, subjects met for a pretest day for anthropometric measures, strength tests, dual-energy X-ray absorptiometry (DXA) scanning, blood sampling, interview, etc. Height was measured to the nearest centimeter and weight to the nearest 100 g, wearing light clothes and without shoes. Waist circumference was measured as the smallest circumference between the anterior superior iliac spine and the lower ribs, and hip circumference as the largest circumference around the hips, both to the nearest centimeter. Body mass index (BMI) and waist/height ratio were calculated from these measurements. Following 10 min of supine rest, blood pressure and 'resting' heart rate were measured (Table 1a). Physical activity level of included subjects was recorded by use of the Physical Activity Scale (PAS) and converted to metabolic equivalent of task (MET) values as described by Aadahl and Jorgensen [40]. A brief dietary interview was performed to ensure that all included subjects consumed adequate protein and energy.
One repetition maximum (1 RM) was measured in a knee extension device (Technogym, Superexecutive Line, Gambottola, Italy) over a range of motion of 20°-100° (0° corresponds to full leg extension), following individual adjustment and a brief warm-up consisting of low loads.
Maximal voluntary contraction (MVC) in isometric knee extension was determined for each leg at 70° of knee flexion using the Good Strength device (Version 3.14 Bluetooth; Metitur Ltd, Jyväskylä, Finland) after a 5-min warm-up on a stationary bike. The subjects were seated and fastened in a rigid chair with hips and knees flexed, and a leg cuff connected to a strain gauge was fitted to the lower leg (see Table 1c for strength data). Strength is expressed as moment in Nm, that is, corrected for lever arm length. The recorded moment was corrected for the effect of gravitational pull on the lower leg and foot by calibration before each measurement.
Resting blood samples were obtained by venipuncture for direct analysis of inflammatory cells and blood lipid profile at the Clinical Biochemistry Department, Bispebjerg Hospital, Copenhagen, using standard laboratory procedures. Furthermore, ethylenediaminetetraacetic acid (EDTA) plasma was stored at −80°C pending analyses as described below.
Experimental protocol
On the experimental day, subjects arrived fasted in the morning by taxi to the Institute of Sports Medicine, Bispebjerg Hospital, Copenhagen, Denmark, where they were placed supine and remained rested. A catheter was inserted into an antecubital vein of each arm; one was used for tracer infusion and one for collection of blood samples throughout the study. The trial design and sampling protocol are shown in Fig. 1. After obtaining a background blood sample, the ring-13C6-phenylalanine tracer (sterile and pyrogen-free; Cambridge Isotopes Laboratories, Andover, MA, USA) was administered as a primed (8 μmol/kg LBM), continuous (8 μmol/kg LBM/hr) infusion. The tracer, which was mixed in sterile saline and sterilized through a 0.2-μm sterile disposable filter (Minisart, Sartorius Stedium Biotech GmbH, Goettingen, Germany), was infused throughout the experimental day (total infusion time 8 hrs). After 1½ hrs of tracer infusion, the first muscle biopsy was obtained from the resting leg (B; baseline). At 4 hrs the subjects moved to the exercise equipment and (after one set of warm-up knee extensions consisting of eight repetitions at 35 % of 1 RM) performed one-legged knee extension exercise consisting of ten sets of eight repetitions at 70 % of 1 RM separated by 1-min breaks, during which subjects remained seated in the knee-extensor device. Subjects were randomized to perform the exercise with their dominant or non-dominant leg. The exercise session was completed in approximately 30 min and was supervised by the experiment leader. The contralateral leg remained rested. Immediately after cessation of the exercise session (4½ hrs of tracer infusion), biopsies were obtained from both legs, at least 4 cm away from the previous biopsy.
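For orientation, the LBM-scaled tracer dosing described above can be sketched as follows; this is a minimal illustration, and the 50 kg LBM value is a hypothetical example, not a study value.

```python
def tracer_doses(lbm_kg, prime_umol_per_kg=8.0, rate_umol_per_kg_hr=8.0, hours=8.0):
    """Primed, continuous tracer infusion amounts scaled to lean body mass (LBM).

    Defaults follow the protocol described above: an 8 umol/kg LBM priming
    bolus and an 8 umol/kg LBM/hr continuous infusion over 8 hrs.
    """
    prime = prime_umol_per_kg * lbm_kg                     # priming bolus, umol
    total_infused = rate_umol_per_kg_hr * lbm_kg * hours   # continuous total, umol
    return prime, total_infused

# Hypothetical subject with 50 kg lean body mass:
prime, total = tracer_doses(50.0)
print(prime, total)  # 400.0 umol prime, 3200.0 umol infused over 8 hrs
```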
Immediately hereafter, a protein drink consisting of 0.5 g intact whey protein isolate (Lacprodan-9224, Arla Foods Ingredients, Viby, Denmark)/kg LBM (12.5 % enriched with ring-13C6-phenylalanine) dissolved in 190 ml water was consumed (total amount in RA and CON groups: 25.3 (0.7) and 25.7 (1.2) g (mean (SEM)), respectively). Three hours later, bilateral biopsies were obtained, at least 4 cm away from any previous biopsies. The order of biopsies along one leg, and whether the exercised or rested leg was biopsied first, was randomized among RA patients, whereas identical procedures were followed in the matched CON subject.

Fig. 1 Study design. The stable isotope labeled amino acid 13C6-phenylalanine was infused throughout the study period, and fractional synthesis rate (FSR) of muscle protein (vastus lateralis) was measured in the resting, fasted (BASAL) state and after intake of whey protein alone (PROT) and in combination with unilateral resistance exercise (EX) (PROT + EX). The upper line of grey boxes represents one leg, the lower line the contralateral leg. Muscle biopsy time points are marked with B
Blood samples and analyses
Venous blood samples were drawn into EDTA tubes and cooled on ice for 10 min, followed by centrifugation (10 min at 3060 g at 4°C), and the plasma phase was stored at −80°C until analysis.
Plasma levels of the inflammatory markers C-reactive protein (CRP), interleukin (IL)-6 and TNF-α were measured using enzyme-linked immunosorbent assay (ELISA) in blood samples obtained on the pretest day and the basal blood sample from the experimental day. The mean value of these is presented. ELISA kits (CRP DuoSet DY1707, IL-6 Quantikine HS ELISA Kit HS600B and TNF-α HSTA00D) were from R&D Systems (Minneapolis, MN, USA) and procedures have been described previously [43]. Inflammatory cell profile and blood lipid profile (triglycerides, total cholesterol, high-density lipoprotein (HDL) and low-density lipoprotein (LDL) cholesterol) in basal blood samples was measured at the Clinical Biochemistry Department, Bispebjerg Hospital, Copenhagen, as previously described [43].
During the experimental day, blood samples (see Fig. 1) were obtained at time points 0, 15, 90, 150, 210 and 270 min after commencement of isotope infusion and at time points +15, +30, +60, +90, +120 and +180 min after consumption of the protein drink, for determination of ring-13C6-phenylalanine enrichment and plasma glucose. Blood glucose was measured immediately using an Accu-Chek Inform II (Roche Diagnostics, Basel, Switzerland).
Muscle biopsies
As shown in Fig. 1, a total of five biopsies (B) were obtained: baseline, rest, exercise (EX) 0, PROT 3 and PROT + EX 3 (at time points corresponding to 1½, 4½ and 7½ hrs of isotope infusion), providing three intervals for fractional protein synthesis rate (FSR) calculations: BASAL FSR (rest), PROT FSR (0-3 hrs after protein drink in the rested leg) and PROT + EX FSR (0-3 hrs after protein drink in the exercised leg). The same five muscle biopsies were used for gene expression analyses by real-time reverse transcriptase polymerase chain reaction (RT-PCR) as described below. The muscle biopsies were obtained under local anesthetic (1 % lidocaine), through separate incisions at least 4 cm apart, in order to separate them as much as possible while still obtaining biopsies from reasonably similar areas of muscle; this has previously been shown not to affect FSR measurements [44]. The percutaneous needle biopsy technique with a 5-mm biopsy needle [45] and manual suction was used. The biopsy was freed from visible fat and connective tissue; one piece was immediately snap frozen (for later gene expression analyses) and another piece of approximately 30 mg (range 15.1-57.5) for stable isotope enrichments was wiped clean of blood in ice-cold saline, weighed, and snap frozen. Muscle biopsies were stored at −80°C until analyses.
Stable isotope analyses Protein fractionation
Raw skeletal muscle specimens were homogenized (Fastprep, 120A-230; Thermo Savant, Holbrook, NY, USA) for 4 × 15 sec in 1 ml homogenization buffer (0.02 M Tris, pH 7.4, 0.15 M NaCl, 2 mM EDTA, 0.5 % Triton X-100, 0.25 M sucrose), left overnight at 5°C, then homogenized once again on day 2 and left at 4°C for 1 hr before centrifugation (20 min, 1600 g, 4°C). The supernatant was discarded and 1.5 mL of high-salt buffer (0.7 M KCl and 0.1 M pyrophosphate) was added to the pellet, which was vortexed for 30 sec and left at 4°C overnight. After a spin (20 min, 1600 g, 4°C), the supernatant (myofibrillar protein fraction) was transferred to new vials, and the pellet (connective tissue fraction) was washed once more with high-salt buffer, left for 2 hrs and centrifuged (20 min, 1600 g, 4°C) again, after which the supernatant was discarded. The myofibrillar proteins in the supernatant were precipitated by adding 3.45 mL ice-cold 99 % ethanol and left at 4°C for 30 min. After spinning (20 min, 1600 g, 4°C), the supernatant was discarded. Both myofibrillar and connective tissue protein pellets were added to 1 mL of 6 M HCl and left at 110°C overnight to hydrolyze the proteins. The analysis of protein-bound tracer abundances was carried out on GC-C-IRMS equipment (Finnigan Delta Plus, Bremen, Germany) as described in more detail elsewhere [46].
Precursor enrichment
Plasma-free amino acids were purified on resin columns (AG 50W-X8 resin; Bio-Rad Laboratories, Hercules, CA, USA). After being washed, eluted and dried down under a stream of nitrogen, the purified amino acids were derivatized using N-methyl-N-(tert-butyldimethylsilyl)trifluoroacetamide (MTBSTFA) + 1 % tert-butyl-dimethylchlorosilane (Regis Technologies, Morton Grove, IL, USA) mixed 1:1 with acetonitrile. The MTBSTFA derivative of phenylalanine was separated on a CP-Sil 8 CB capillary column (30 m, 0.32 mm ID; coating, 0.25 μm) (ChromPack; Varian, Palo Alto, CA, USA) and the isotope ratios were analyzed on a triple-stage quadrupole mass spectrometer (TSQ Quantum; Thermo Scientific, San Jose, CA, USA) operated in electron ionization mode. Chromatogram integration was carried out in MassRatio 2.72 (FBJ Engineering, Frederiksvaerk, Denmark) and the tracer-to-tracee ratio (TTR) was calculated by subtracting the isotope ratio of a background sample from that of each enriched sample.
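The background-subtraction step for the TTR can be sketched as follows; the isotope ratio values are hypothetical.

```python
def tracer_to_tracee_ratio(sample_isotope_ratio, background_isotope_ratio):
    """TTR as described above: the isotope ratio of a background sample is
    subtracted from that of each enriched sample."""
    return sample_isotope_ratio - background_isotope_ratio

# Hypothetical isotope ratios from the mass spectrometer:
ttr = tracer_to_tracee_ratio(0.145, 0.015)
print(round(ttr, 3))  # 0.13
```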
Fractional synthesis rate calculations
The ring-13C6-phenylalanine enrichment of the myofibrillar and connective tissue muscle protein fractions measured by GC-C-IRMS (Hewlett Packard 5890-Finnigan gas chromatography-combustion III-Finnigan Delta plus isotope ratio mass spectrometry; Thermo Finnigan MAT, Bremen, Germany) was used to calculate the fractional synthesis rate (FSR) in percent per hour. Calculations are based on the incorporation rate of ring-13C6-phenylalanine into muscle proteins using a standard precursor-product model as follows:

FSR (%/hr) = ΔEproduct / (Eprecursor × Δt) × 100

where ΔEproduct is the change in tracer enrichment of protein-bound ring-13C6-phenylalanine in two biopsies from the same leg taken with a time interval of Δt, and Eprecursor is the mean precursor 13C6-phenylalanine enrichment during that time interval. Here, we used venous plasma tracer enrichments as a surrogate estimate of the precursor enrichment.
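A minimal sketch of this precursor-product calculation; all enrichment values below are hypothetical.

```python
def fractional_synthesis_rate(e_bound_start, e_bound_end, e_precursor_mean, dt_hr):
    """Standard precursor-product model:
    FSR (%/hr) = (delta E_product / (E_precursor * delta t)) * 100
    """
    delta_e_product = e_bound_end - e_bound_start  # change in protein-bound enrichment
    return delta_e_product / (e_precursor_mean * dt_hr) * 100.0

# Hypothetical protein-bound and plasma (precursor) enrichments over a 3-hr interval:
fsr = fractional_synthesis_rate(0.00020, 0.00050, 0.13, 3.0)
print(round(fsr, 3))  # 0.077 %/hr
```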
Whole-body rate of appearance (Ra) of 13C6-phenylalanine (an estimate of whole-body protein breakdown rate) was calculated as:

Ra = F / Eplateau

where F is the tracer infusion rate (μmol/kg LBM/hr) and Eplateau is the weighted average of venous plasma enrichment throughout the basal or protein (PROT) + exercise (EX) periods, or the two combined.
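Assuming the standard steady-state dilution model, in which Ra equals the tracer infusion rate divided by the plateau enrichment, the calculation can be sketched as follows; the plateau TTR below is hypothetical.

```python
def whole_body_ra(infusion_rate_umol_kg_hr, e_plateau):
    """Whole-body rate of appearance under the steady-state dilution model:
    Ra = F / E_plateau, with F the tracer infusion rate (umol/kg LBM/hr)
    and E_plateau the weighted mean plasma enrichment (TTR)."""
    return infusion_rate_umol_kg_hr / e_plateau

# Infusion rate from the protocol (8 umol/kg LBM/hr); plateau TTR hypothetical:
ra = whole_body_ra(8.0, 0.125)
print(ra)  # 64.0 umol/kg LBM/hr
```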
RNA extraction
RNA was extracted as described in [47]. Essentially, approximately 15 mg of frozen muscle tissue from each biopsy was homogenized in TRI Reagent (Molecular Research Center, Cincinnati, OH, USA), using 1-bromo-3-chloropropane for phase separation and isopropanol to precipitate RNA. The RNA pellet was washed in ethanol and dissolved in RNase-free water. RNA concentrations were determined by spectroscopy at 260 nm. Quality of the RNA was checked by gel electrophoresis and spectrophotometer ratios at 260/240 nm (acceptable range 1.2-1.6 at pH 8) and 260/280 nm (1.8-2.0 at pH 7.5-8.0).
Real-time RT PCR analysis
Expression of a total of 26 different genes was measured (see Table 2 for the full list), belonging to the following groups: satellite cell regulators and inflammation; heat shock proteins; myogenic regulatory factors; atrogenes; and cytokines and their receptors.
Total RNA (500 ng from each muscle sample) was converted into cDNA in 20 μl using the OmniScript reverse transcriptase (Qiagen, Valencia, CA, USA) according to the manufacturer's protocol.
For each target mRNA, 0.25 μl cDNA was amplified in a 25-μl SYBR Green PCR reaction containing 1 × Quantitect SYBR Green Master Mix (Qiagen) and 100 nM of each primer (Table 2). The amplification was monitored in real time using the MX3005P real-time PCR machine (Stratagene, Santa Clara, CA, USA). The threshold cycle (Ct) values were related to standard curves made with PCR products to determine the relative difference between the unknown samples, accounting for the PCR efficiency. The specificity of the PCR products was confirmed by dissociation curve analysis after amplification. All mRNA data were normalized to ribosomal protein, large, P0 (RPLP0). For the normalization control, glyceraldehyde-3-phosphate dehydrogenase (GAPDH), see Additional file 1. Baseline data (1.5 hrs rest, relative to mean CON) are shown in Additional file 2. mRNA expression data are presented as fold changes relative to individual baseline values.
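A sketch of the standard-curve quantification with reference-gene normalization described above; the curve parameters and Ct values are hypothetical (a slope of -3.32 corresponds to roughly 100 % PCR efficiency).

```python
def quantity_from_ct(ct, slope, intercept):
    """Relative quantity from a standard curve Ct = slope * log10(Q) + intercept."""
    return 10.0 ** ((ct - intercept) / slope)

def fold_change(ct_target, ct_ref, ct_target_base, ct_ref_base,
                curve_target=(-3.32, 35.0), curve_ref=(-3.32, 33.0)):
    """Target quantity normalized to a reference gene (e.g. RPLP0 in this study),
    expressed relative to the individual baseline biopsy. Curve parameters
    (slope, intercept) are hypothetical placeholders."""
    q = quantity_from_ct(ct_target, *curve_target) / quantity_from_ct(ct_ref, *curve_ref)
    q0 = quantity_from_ct(ct_target_base, *curve_target) / quantity_from_ct(ct_ref_base, *curve_ref)
    return q / q0

# Target gene comes up one cycle earlier than at baseline, reference gene stable:
print(round(fold_change(24.0, 20.0, 25.0, 20.0), 2))  # 2.0 (about a 2-fold increase)
```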
Statistical analysis
Results are reported as mean (SE) unless otherwise stated. Protein synthesis and gene expression data were analyzed by two-way repeated-measures (RM) ANOVA (SigmaPlot 12.3, Systat Software Inc, San Jose, CA, USA) and the Student-Newman-Keuls (SNK) post hoc test with correction for multiple comparisons. Significant effects of group (CON vs. RA) or biopsy and interaction (group × biopsy) are shown on graphs at a significance level of p < 0.05. P values ≤0.1 are also shown on graphs, for trends. All mRNA data were log-transformed for statistical analyses and are shown as geometric mean ± back-transformed SE. For some mRNA targets, the pattern of missing data led to exclusion of subject pairs from the statistical analyses, resulting in exclusion of the following numbers of subject pairs: IL1β = 4; IGF-IEc = 1; IL1R = 1; cmet = 1. All other data (with only a single data point per subject) were compared by a paired two-tailed t test (Prism 6.02 for Windows, GraphPad Software Inc, La Jolla, CA, USA). Plasma ELISA data were log-transformed for statistical analyses.
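The log-transform-then-back-transform reporting used for the mRNA data can be sketched as follows; the input values are hypothetical.

```python
import math
import statistics

def geometric_mean_and_se_factor(values):
    """Analyze on the log scale, report on the raw scale: returns the
    geometric mean and a multiplicative SE factor, so the (asymmetric)
    error bars on the raw scale are gm / factor and gm * factor."""
    logs = [math.log(v) for v in values]
    gm = math.exp(statistics.mean(logs))
    se_factor = math.exp(statistics.stdev(logs) / math.sqrt(len(logs)))
    return gm, se_factor

# Hypothetical fold-change data spanning two orders of magnitude:
gm, factor = geometric_mean_and_se_factor([1.0, 10.0, 100.0])
print(round(gm, 6))  # 10.0
```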
Baseline characteristics
As shown in Fig. 2, CRP was higher in the RA patients (2.25 (0.50) mg/l) than in CON subjects (1.07 (0.25) mg/l; p = 0.038), TNF-α was higher in RA (1.18 (0.30) pg/ml) than CON (0.64 (0.07) pg/ml; p = 0.008) and IL-6 tended to be higher in RA (RA 2.89 (0.68) pg/ml; CON 1.74 (0.32) pg/ml; p = 0.065). Although going in the same direction, these differences in inflammatory markers between RA patients and healthy CON individuals were not as pronounced as previously reported in a majority of studies [32,35,37,38,48,49], but were similar to the moderate levels observed by Crowson et al. [50]. Most likely, the limited elevation of systemic inflammatory markers emphasizes the very well-functioning state of the RA patients participating in this study. The overview of subject characteristics in Table 1 reveals many similarities between the RA and CON groups. However, waist/hip ratio (Table 1a) tended to be higher in RA than CON (p = 0.07). Among RA patients, six were smokers (5-20 cigarettes/day, mean 10) and among controls four were smokers (1-17 cigarettes/day, mean 10). Use of medication in the two groups is shown in Table 3. These records show that all RA patients used medication of some type, mostly DMARDs like methotrexate (all 13 patients) and salazopyrin (5/13 patients). Pain relief medication was frequently used by RA patients, mostly paracetamol (7/13 patients) and nonsteroidal antiinflammatory drugs (NSAIDs) (3/13), whereas only a single CON participant reported use of each of these drugs.

Fig. 2 Systemic inflammatory markers. Plasma levels of the inflammatory markers tumor necrosis factor α (TNF-α), interleukin 6 (IL-6) and C-reactive protein (CRP) measured in controls (CON) and RA patients (RA). Individual data (mean of two sampling days) were log-transformed for statistical analyses and data are shown on log scales with line at geometric means. Significance level of paired comparisons is given on graphs
Most RA patients had some comorbidity (see Additional file 3 for the complete list); only two patients had none. No subjects in the CON group used any antirheumatic drugs, and they had only a minor intake of painkillers. Seven CON subjects were completely free from comorbidities. These differences in comorbidities and medication could confound the systemic inflammation data (Fig. 2), since some comorbidities may contribute to an elevation of inflammatory markers while antirheumatic medication is likely to reduce these markers. Body composition measures (Table 1b) were not different between RA and CON; likewise, no differences in knee extensor muscle strength (Table 1c), measured as one repetition maximum (1 RM) and maximal voluntary contraction (MVC), were observed between groups. The total number of kg lifted during the experimental acute exercise session was not different between groups (Table 1c). When estimated by PAS, physical activity level turned out to be higher in CON than in RA (p = 0.026). The similarities in body composition between RA patients and CON were somewhat surprising and in contrast to previous reports of reduced muscle mass [2,16,42] and increased fat accumulation in RA patients [51,52]. Furthermore, the similar muscle strength between RA and CON indicates that the patients were well-functioning in comparison to those participating in previous studies [16,17,38,53,54]. Data on blood lipid profile and circulating inflammatory cells are given in Table 4. Although metabolic changes are usually reported at all stages of RA disease [19], we detected no differences between RA and CON in blood lipid profile, blood pressure, metabolic syndrome biomarkers or fasting glucose, nor in the circulating inflammatory cell profile, for all of which changes have previously been reported in RA patients [19,48,50]. Again, this reflects the clinically well-controlled condition of the participating patients.
Throughout the experimental day, blood glucose level was not different between groups and was stable around 5 mmol/l ( Fig. 3 and Table 4b).
Muscle protein synthesis
Fractional synthesis rate (FSR) of muscle myofibrillar and connective tissue protein is shown in Fig. 4. Myofibrillar protein synthesis was enhanced in response to protein intake (p < 0.05) and was further increased when combined with heavy resistance exercise (p < 0.001). This response was similar in the CON and RA groups. Connective tissue protein synthesis was increased after exercise combined with protein intake (p < 0.001), but not by protein intake alone (p > 0.1). Irrespective of state (fasting, protein fed alone or in combination with exercise), connective tissue FSR tended to be higher in RA than CON (p = 0.060). Plasma tracer enrichment (Fig. 5) was lower in RA vs. CON throughout the infusion period (p = 0.028), ranging between 0.11 and 0.14 in RA and 0.12 and 0.18 in CON. Whole-body protein breakdown rate (rate of tracer amino acid appearance) was reduced following protein intake (PROT) and one-legged resistance exercise (EX) (BASAL = 64.4 (SE 3.8) and 71.5 (SE 3.2) and PROT + EX = 58.5 (SE 2.4) and 63.6 (SE 2.1) μmol/kg LBM/hr in CON and RA, respectively; time p < 0.001) and tended to be higher in RA than in CON as an average over the entire study period (TOTAL; p = 0.11). However, the whole-body assessment of protein turnover is neither protein nor tissue specific, and we cannot say whether the tendency to a higher protein turnover rate in RA patients is a general phenomenon or may be related to a specific tissue (i.e., skeletal muscle) or protein type. We showed a comparable basal muscle protein synthesis rate in RA and CON (Fig. 4a and b). Muscle protein turnover in patients with RA has to our knowledge been investigated in only one previous human study [55], showing that the resting, fasted FSR in RA patients not receiving steroid therapy was similar to that of osteoarthritis patients serving as controls.
Further, the present study shows for the first time an anabolic response (elevated myofibrillar FSR) to acute whey protein feeding alone, which was amplified when combined with acute resistance exercise, in patients with RA. Additionally, this response was not different from that observed in healthy control subjects matched for age, gender, BMI and physical activity. Apparently, in our RA patients the connective tissue fraction was less responsive to nutritional intervention than the myofibrillar fraction, which is in accordance with previous findings [56]. For further details of the anabolic response to acute exercise and protein feeding, protein expression and signaling analyses of targets of the Akt-mTOR signaling pathway would have been relevant; however, since we observed similar FSR responses in RA and CON, we chose to focus on transcriptional regulation of genes involved in other aspects of muscle adaptation, as described in the following section.
Muscle gene expression
In the present study, expression of genes related to inflammatory signaling, myogenesis and muscle growth/atrophy as well as heat shock proteins responded similarly in RA and CON. No differences were observed in basal gene expression levels between RA and CON (Additional file 2). Changes in mRNA expression from baseline are shown in Figs. 6, 7, 8, 9 and 10, divided into subgroups related to satellite cell (SC) regulators and inflammation (Fig. 6), heat shock proteins (Fig. 7), myogenic regulatory factors (Fig. 8), atrogenes (Fig. 9), as well as cytokines and receptors (Fig. 10). As shown in Fig. 6, HGF1 expression was overall higher in RA vs. CON (Group, p = 0.026); specifically, HGF1 expression was higher in RA patients than CON at EX0 and PROT + EX3 (p < 0.001 and p = 0.004, respectively). The higher expression of HGF1 in RA than CON indicates an increased sensitivity toward signaling via this pathway in RA patients, which could be localized to the skeletal muscle stem cells, the SCs, since HGF1 signaling is involved in activation of SCs [57,58], although our gene expression analysis is not specific to the SCs. HGF1 activates SCs via binding to the cmet receptor [59].
However, no significant changes in gene expression of the HGF1 receptor, cmet, were observed, indicating that this is not the regulatory site for this pathway. Macrophage chemoattractant protein 1 (MCP-1, also known as CCL2) was induced by exercise (but not protein feeding), both acutely (EX 0, p = 0.02) and even more 3 hrs later (p < 0.001), in both groups combined. This indicates that it is involved in the adaptive response to resistance exercise. Potentially, it plays a role in crosstalk between inflammatory cells (macrophages) and SCs, as indicated by its colocalization with these cells [60]. Cyclooxygenase 2 (COX2) expression was induced immediately after exercise (p = 0.011), in line with previous reports [61,62], although at later time points. In contrast, we and others have previously reported COX2 induction only when exercise was combined with COX inhibition [47,63]. Taken together, we observed some indications of involvement of HGF1, MCP-1 and COX2 in the adaptive response to exercise; however, differential regulation between RA and CON was only observed for HGF1.

Fig. 3 Blood glucose levels. Level of blood glucose throughout the experimental day. Subjects arrived fasted in the morning, and ingested only the protein drink (0.5 g whey/kg lean body mass) as marked by the arrow. Mean ± SE, n = 6-12

Fig. 4 Fractional synthesis rate (FSR) of muscle myofibrillar and connective tissue protein. FSR of muscle myofibrillar (a) and connective tissue (b) protein given in %/hr in control (CON) and rheumatoid arthritis (RA) patient groups, measured in the resting, fasted (BASAL) state and after intake of whey protein alone (PROT) and in combination with unilateral resistance exercise (EX) (PROT + EX). Black bars denote rheumatoid arthritis patients (RA, n = 13) and grey bars healthy controls (CON, n = 12-13). Letters a, b and c denote significant differences between sampling time points (two-way RM ANOVA)
In Fig. 7, mRNA expression of heat shock proteins (HSPs) is shown. All three HSPs (HSP70, HSP27 and αB-crystallin) were induced by exercise both immediately after (EX 0, p < 0.001) and 3 hrs later (PROT + EX 3, p < 0.001), but not by protein intake. The induction of HSPs a few minutes after exercise (EX0) suggests that the HSP response to unaccustomed exercise is even more acute than previously shown (as discussed in [64]) and that muscle of RA patients is as responsive as that of CON.
Myogenic regulatory factors (Fig. 8) were induced by exercise combined with protein intake but not by protein intake alone. Myogenin expression was higher in RA than CON (Group; p = 0.021), pointing at an increased responsiveness in RA patients, although this was not apparent for the other myogenic regulatory factors Myf6 and MyoD. Myf6 expression was increased both immediately after exercise (EX 0, p < 0.001) and 3 hrs later (PROT + EX 3, p < 0.001). Expression of MyoD was increased only 3 hrs after exercise (PROT + EX 3, p < 0.001). In general, myogenic regulatory factors were induced by exercise, mainly after 3 hrs compared with immediately after, which is in line with previous observations [65,66], and the response was not different between RA and CON. Nor did we observe a difference in resting gene expression between RA and CON (Additional file 2) in the current study. In muscle from a rat model of RA (adjuvant-induced arthritis), both protein and mRNA expression of MyoD and myogenin were increased at rest; however, this was not investigated in relation to exercise [24][25][26].

Fig. 5 Enrichment and rate of appearance of 13C6-phenylalanine. a Plasma 13C6-phenylalanine (13C6-Phe) enrichment (tracer-to-tracee ratio (TTR)) during the entire infusion period. Mean ± SE, n = 13. b Rate of appearance (μmol/kg LBM/hr) of 13C6-Phe (Ra) during the basal period (BASAL, 3 hrs), after protein intake and exercise (PROT + EX, 3 hrs) and over the two periods combined (TOTAL, 6 hrs). Over the total period, Ra tended to be higher in rheumatoid arthritis patients (RA) vs. healthy controls (CON) (p = 0.110). Individual data are shown with line at mean ± SE, n = 13
Expression of myostatin and the atrogenes Atrogin1 and MuRF1 is shown in Fig. 9. The negative regulator of muscle mass, myostatin, was downregulated 3 hrs after exercise + protein (PROT + EX 3, p < 0.001) and responded to exercise in an overall similar manner in RA patients and CON. Atrogin1 was downregulated 3 hrs after protein intake alone (PROT 3, p < 0.001) and in combination with exercise (PROT + EX 3, p < 0.001), whereas MuRF1 was downregulated 3 hrs after protein intake alone (PROT 3, p < 0.001), but upregulated 3 hrs after exercise combined with protein intake (PROT + EX 3, p < 0.001). Also for the atrogenes, no impact of RA could be observed. In muscle of the rat model of RA, mRNA expression of MuRF1 and atrogin1 was markedly increased [27,29,30]; however, this difference was not apparent in our human subjects. Similarly, COX2 expression was markedly increased in muscle of arthritic rats, which was not reproduced in the RA patients of the present study either (Fig. 6). Figure 10 displays mRNA expression of selected cytokines and their receptors. TNF-α expression was higher in RA than in CON across all biopsy points (Group, p = 0.036) and was induced immediately after exercise (EX 0, p < 0.001); the former is in line with the increased mRNA expression of TNF-α found in gastrocnemius muscle of rats with adjuvant-induced arthritis [27,29], indicating a more responsive TNF-α expression in muscle from RA patients. TNF-α is believed to be a central mediator of muscle wasting in rheumatoid arthritis via alteration of the balance between muscle protein synthesis and breakdown.

Fig. 6 Satellite cell regulators and inflammation. mRNA expression relative to 1.5 hrs (baseline). For hepatocyte growth factor 1 (HGF1), letters a-b denote significant differences within controls (CON)
Via inhibition of signaling from the insulin receptor [67] and the IGF-1 receptor via JNK and IRS-1 [68], TNF-α can reduce peripheral insulin action and interfere with IGF-1 signaling, leading to a reduction in anabolic responsiveness. Anti-TNF-α therapies have proven effective in RA, although loss of muscle mass is not necessarily reversed by anti-TNF-α treatment [32]. However, the anabolic response to a positive energy balance was improved by anti-TNF-α treatment, seen as a larger gain of fat-free mass compared with methotrexate treatment [35], supporting a role for TNF-α in the regulation of muscle mass. Interestingly, the higher TNF-α expression in the present study did not result in such differences in muscle mass or acute anabolic response, leaving the significance of this differential TNF-α expression an open question.
Fig. 7 Heat shock proteins. mRNA expression relative to 1.5 hrs (baseline).
TNFR1 expression was slightly increased 3 hrs after protein + exercise (PROT + EX 3, p < 0.001), whereas no changes were observed for TNFR2 or IL-1β. Induction of TNFR expression 3 hrs after exercise + protein has, to our knowledge, not been reported before, although higher levels of TNFR1 gene expression were recently reported in older (61 and 76 years) compared to younger subjects (40 years) [69], both at rest and 24 hrs after acute resistance exercise. Generally, regulation of TNF receptors in human muscle is not well understood. Both TNFR1 and TNFR2 were expressed at high levels in the present study, and a correlation was observed between TNFR1 and TNFR2 expression (r = 0.57, p = 0.003, data not shown). Inflammatory signaling via IL-6, IL-6R and IL-1R was induced by exercise, with an early upregulation of IL-6 immediately after exercise (EX 0, p < 0.001) and to a lesser extent in the resting leg (REST, p < 0.001), whereas the receptors were upregulated 3 hrs after exercise + protein (PROT + EX 3, p < 0.001).
None of these responses were different between RA and CON, indicating a normal cytokine response to acute resistance exercise in RA patients.
At baseline no differences in mRNA expression between RA and CON were observed for any of the investigated target genes (Additional file 2), although in skeletal muscle of arthritic rats, marked changes in gene expression induced by the disease have been consistently reported.
Thus, our human data from RA patients do not confirm the upregulation of muscle regulatory, inflammatory and catabolic markers found in animal models of RA, which is in line with the overall healthy state and preserved anabolic response of RA patients in the present study.
In contrast to the present findings in RA patients, remarkable differences in gene expression between elderly and young muscle have previously been reported, including elevated expression of inflammatory genes [69,70], atrogenes [71] and MRFs [65] in resting skeletal muscle. Within the same time frame as used in the present study, Atrogin1 is induced by exercise only in old muscle [71], and IL-6 induction and myostatin downregulation by resistance exercise are more pronounced in old compared to young muscle [70,72]. Together, these results from elderly muscle indicate increased muscle inflammation susceptibility [69] and an altered acute muscle adaptive response to exercise in elderly muscle, which could also contribute to the muscle deficits in RA patients. However, apart from a more pronounced induction of TNF-α, HGF1 and myogenin in RA vs. CON, this was not the case in the present study. Previously, knowledge about regulation of muscle gene expression in RA has relied only on animal studies, but with the current study we can now add human data.
Fig. 8 Myogenic regulatory factors. mRNA expression relative to 1.5 hrs (baseline). Myogenin expression at REST was significantly different from baseline.
Taken together, our gene expression data indicate that specific targets involved in muscle and SC regulation (HGF1, myogenin and TNF-α) are induced to a larger extent in RA patients than in healthy CON subjects; however, the majority of genes investigated showed similar responses in RA and CON, indicating that skeletal muscle tissue of RA patients responds as well to an acute exercise stimulus as that of healthy CON subjects.
Limitations
Keeping in mind that the results may not apply to RA patients in general, the present study indicates that skeletal muscle of RA patients does not differ markedly from healthy control muscle and that it responds similarly to protein intake alone and in combination with exercise. The patients participating in the present study were a selected group of well-functioning RA patients, and thus no changes in either muscle strength or muscle mass were detected. This reduces the external validity of the study and leaves open the question of how RA patients with highly elevated systemic inflammation and/or cachexia are characterized with respect to molecular (signaling) regulators of muscle mass and muscle protein turnover in response to the same interventions.
Conclusions
In conclusion, muscle protein synthesis and transcriptional regulation can be stimulated by both protein intake and physical exercise in patients with RA to a similar degree as in healthy individuals. These findings show that characteristics inherent to RA disease do not affect the muscle protein synthesis and gene expression response to acute exercise and protein intake when factors like BMI, age and activity level are controlled for by carefully matching each patient with a corresponding healthy control subject.
Variables and Mechanisms Affecting Electro-Membrane Extraction of Bio-Succinic Acid from Fermentation Broth
The production of succinic acid by fermentation is a promising approach for obtaining building-block chemicals from renewable sources. However, the limited bio-succinic acid yield from fermentation and the complexity of purification have kept bio-succinic acid production uncompetitive with petroleum-based succinic acid. Membrane electrolysis has been identified as a promising technology in both the production and separation stages of fermentation processes. This work focuses on identifying the key operational parameters affecting the performance of an electrolytic cell for separating succinic acid from fermentation broth through an anionic exchange membrane. Indeed, while efforts have mainly focused on studying the performance of integrated fermenter-electrolytic cell systems, an understanding of how to tune the electrolytic cell itself, and of which parameters are most important, is still lacking. The results show that a single electrolytic cell with an operating volume of 250 mL was able to extract up to 3 g L−1 h−1 of succinic acid. The production of OH− ions by water electrolysis can act as a buffer for the fermenter, and it can be tuned as a function of the extraction rate. Furthermore, as the complexity of the solution in terms of the quantity and composition of ions increased, the energy required for the separation process decreased.
Introduction
A biorefinery is a promising alternative for overcoming the dependency on fossil fuels while at the same time addressing several contemporary challenges such as environmental problems, the depletion of petroleum resources, waste management, and political concerns [1]. Nowadays, worldwide efforts are being made to produce chemicals via biological routes. In this regard, succinic acid is widely recognized as a key building block for deriving both commodity and specialty chemicals [2]. Succinic acid is in fact one of the top 12 most used platform chemicals, being the precursor to 30 commercially valuable products, such as plasticizers, lubricants and solvents, pharmaceutical intermediates, and food and beverage additives [3]. The succinic acid market is projected to reach USD 182.8 million by 2023, increasing at a CAGR of 6.8% from 2018 [4]. During recent years, four companies (Biosuccinium (formerly Reverdia), Succinity, BioAmber, and Myriant) have played a major role in the commercialization of succinic acid based on microbial fermentation [5]. However, despite efforts to make bio-succinic acid economically competitive, most of the succinic acid today is still produced from petrochemically derived sources [6], indicating that a higher market share of bio-succinic acid can only become possible by decreasing the total production costs. The major challenges lie in the availability of feedstock to secure a long-term biomass supply [7], the low productivity associated with fermentation [4], and the cost-intensive purification processes [8]. Regarding the feedstock challenge, intensive research has been done recently in terms of process synthesis and techno-economic analysis to find the best feedstock options. While recent studies have given promising results regarding the integration of membrane electrolysis with the fermenter, no studies have shown how to tune the electrolytic cell for such use.
Our work thus focuses on characterizing an electrolytic cell for bio-succinic acid extraction by analyzing the different variables involved in the extraction rate. These variables include the applied voltage, the initial concentration and distribution of organic acids in the electrolytic cell, the membrane area, the nature of the ions, and the batch versus continuous setup. The experiments were performed in a 300 mL handmade electrolytic cell in batch mode with a solution of pure succinic acid. The complexity of the solution was then increased to a mixture of acids, a synthetic broth of A. succinogenes, and a real fermentation broth of A. succinogenes. Finally, continuous extraction from a fermentation broth of A. succinogenes was simulated by recirculating the cathode solution through a 5 L volume of fermentation broth.
Theory

The electrolytic cell consists of two electrodes that act as the active sites for electron transfer in reactions that take place in an ionic solution, which acts as an electrolyte and conducts electricity. Inside the electrolytic cell, two phenomena occur: electrolysis and the movement of charged ions towards the oppositely charged electrode. The electrolysis of water produces molecular hydrogen and OH− in the cathode chamber and oxygen and H+ at the anode side [15].

Figure 1. Schematic representation of succinic acid extraction in an electrolytic cell. The negatively charged succinate in the cathode chamber is driven through an anionic exchange membrane into the anode chamber, where it is protonated to succinic acid. The applied voltage is the driving force that, as a side effect, brings about electrolysis of water to produce molecular hydrogen and hydroxide ions at the cathode and oxygen and protons at the anode side of the membrane.

The equation describing the flux of ions under both a concentration gradient and an electric field is the Nernst-Planck equation:

J_i = C_i·ν − D_i·(dC_i/dx) − (z_i·F/(R·T))·D_i·C_i·(dφ/dx)

where J_i is the ionic flux (mol m−2 h−1), ν is the convective velocity of the solvent (m s−1), D_i is the diffusion coefficient of the ion (m2 s−1), F is the Faraday constant (C mol−1), R is the ideal gas constant (J mol−1 K−1), φ is the electric potential (V), T is the temperature (K), C_i is the molar concentration, z_i is the valence of the ion, and x is a directional coordinate. The first term represents the convective transport of ions across the membrane, the second term is associated with diffusion caused by the concentration difference, and the third term is associated with migration caused by the applied voltage [16].
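To get a feel for the relative sizes of the diffusion and electromigration terms, the short Python sketch below evaluates both for a succinate-like divalent anion. The diffusion coefficient, concentration, gradient length, and applied voltage are illustrative assumptions, not measured values from this work.

```python
# Sketch: compare the diffusion and electromigration terms of the
# Nernst-Planck equation for a divalent anion (succinate-like).
# All numerical inputs below are illustrative assumptions.

F = 96485.0      # Faraday constant, C/mol
R = 8.314        # ideal gas constant, J/(mol K)
T = 298.15       # temperature, K
z = -2           # valence of succinate
D = 7.0e-10      # assumed diffusion coefficient, m^2/s

C = 400.0                 # local concentration, mol/m^3 (~0.4 M)
dC_dx = C / 0.06          # assumed gradient over the 6 cm electrode spacing
dphi_dx = 30.0 / 0.06     # assumed field: 30 V over 6 cm, V/m

J_diff = -D * dC_dx                           # diffusion term
J_mig = -(z * F / (R * T)) * D * C * dphi_dx  # electromigration term

print(f"diffusion term:        {J_diff:.3e} mol/(m^2 s)")
print(f"electromigration term: {J_mig:.3e} mol/(m^2 s)")
print(f"|migration/diffusion| ratio: {abs(J_mig / J_diff):.0f}")
```

Under these assumptions the electromigration term dominates by roughly three orders of magnitude, which is consistent with the negligible extraction observed later in the absence of applied voltage.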
In two ionic solutions separated by an ion perm-selective membrane, such as an anionic exchange membrane, a phenomenon called the Donnan effect (or the Gibbs-Donnan equilibrium) arises. Without a selective membrane between two solutions, the electric potential (ϕ) from the charged ions is in equilibrium (ϕ = 0); this is also called electroneutrality. When two solutions containing different concentrations of ions are separated by a fixed-charge membrane, the two solutions are not in equilibrium and they create an electric potential gradient, either positive or negative [17]. This results in the transfer of counter-ions through the membrane to reach an equilibrium between the solutions.
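For intuition on the magnitude of the Donnan potential, an ideal (Nernst-type) estimate for a divalent anion can be sketched as follows; the concentrations are illustrative assumptions, not measurements from this work.

```python
import math

# Sketch: ideal Donnan (Nernst-type) potential across a perm-selective
# membrane for a divalent anion. Concentrations are illustrative.
F = 96485.0   # Faraday constant, C/mol
R = 8.314     # ideal gas constant, J/(mol K)
T = 298.15    # temperature, K
z = 2         # |valence| of succinate

c_cathode = 0.42   # mol/L, assumed succinate activity, cathode side
c_anode = 0.042    # mol/L, assumed activity, anode side (10x lower)

E_donnan = (R * T / (z * F)) * math.log(c_cathode / c_anode)
print(f"Donnan potential: {E_donnan * 1000:.1f} mV")
```

For a tenfold concentration ratio and z = 2 this gives about 30 mV, i.e., tens of millivolts, far below the applied voltages used in this work.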
The relationship between current and voltage, the resistance, is an important factor to investigate, since the power needed by the system is directly proportional to the resistance, according to the formula:

P = I²·R

where P is the power (W), I is the current (A), and R is the resistance (Ω). The resistance is related to both voltage and current according to Ohm's law:

V = I·R

where V is the voltage (V). The resistance is also directly proportional to the resistivity of the conducting path, in this case the ionic solution and the membrane, and to the distance between the electrodes, and it is inversely proportional to the cross-sectional area [18]. The conductivity of an ionic solution increases with the concentration of ions in the solution and with the temperature, in step with an increase in the mobility of the ions [19].
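As a minimal numerical illustration of these relations, the sketch below computes the cell resistance and power draw for an assumed operating point of 30 V and 192 mA (192 mA is one of the current set points used later in this work; pairing it with 30 V is an assumption for illustration only).

```python
# Sketch: resistance and power draw of the cell from Ohm's law,
# for an assumed operating point (30 V, 192 mA).
V = 30.0      # applied voltage, V (assumed)
I = 0.192     # current, A (one of the set points used in this work)

R_cell = V / I          # Ohm's law: R = V / I
P = I ** 2 * R_cell     # power dissipated: P = I^2 * R (equivalently V * I)

print(f"R = {R_cell:.1f} ohm, P = {P:.2f} W")
```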
In this work the different terms of the Nernst-Planck equation, the Donnan effect, and the relationship between voltage and current were investigated experimentally to establish their influence on the extraction rate of succinic acid.
Experimental Organic Acid and Broth Solutions
The experiments were conducted using solutions of increasing complexity, from pure succinic acid to real broth of A. succinogenes. In total, four kinds of organic acid solutions were used, with different components and concentrations. The first solution consisted of different concentrations of pure succinic acid. For the second solution, the complexity was increased to a mixture of succinic acid, formic acid, acetic acid, and pyruvic acid at a concentration ratio of 5:1:1:0.5, respectively. The reason behind this choice was that fermentation broths produced by A. succinogenes typically contain this kind of mixture and concentration of organic acids [20]. The third solution was similar to the previous solution but was modified by adding typical nutrients of an A. succinogenes fermentation, as shown in Table 1. The fourth solution was a real fermentation broth of A. succinogenes based on Ferone [21]. However, the concentrations of succinic acid, formic acid, acetic acid, and pyruvic acid in this broth were adjusted to match their concentrations in the previous solutions in order to make the results comparable. A simple schematic representation of the solutions used for the experiments is shown in Figure 2. Sodium hydroxide was used as a pH neutralizer for the solutions prior to the experiments to ensure that all the organic acids in the solutions were in ionic form and the solutions were electrically conductive. The chemicals mentioned above were obtained from Sigma Aldrich (Saint Louis, MO, USA).
Electrolytic Cell and Experimental Setup
The design of the cell was based on the work of Thygesen [22] and consisted of two cylindrical acrylic chambers of the same size connected by a circular tube in which the anionic exchange membrane was installed, as shown in Figure 3. Each chamber had an internal diameter of 5 cm and a height of 14.5 cm and could accommodate a maximum volume of 300 mL. A custom-made stainless-steel cathode (source: in-house workshop, Technical University of Denmark) and an iridium-coated titanium dioxide anode from Magneto Special Anodes B.V. (Schiedam, the Netherlands) were used to drive the electrolytic reactions. Both the anode and cathode had an area of 7.8 × 3.8 cm2 and were placed at a distance of 6 cm from each other. The electrodes were attached to a power supply (TTi EL302RT triple power supply). The anionic exchange membrane was a polystyrene membrane cross-linked with divinylbenzene from Membranes International Inc. (Ringwood, NJ, USA), pre-treated accordingly, and the maximum membrane diameter used was 3.4 cm.
The four types of solutions containing organic acids were used initially in batch mode. Each chamber of the electrolytic cell was filled with 250 mL of solution that was recirculated between the bottom and top parts of each respective chamber using a peristaltic pump (Watson-Marlow, Ringsted, Denmark, model 523Du), with the liquid exiting the bottom section and entering the upper section of the chamber for mixing purposes, as shown in Figure 4. The experiments with real broth of A. succinogenes were performed in both batch and continuous mode. The continuous mode setup is illustrated in Figure 5. This setup was designed as far as possible to mimic coupling of the electrolytic cell with a fermentation tank. In practice, the solution in the cathode chamber was recirculated between a 5 L tank containing the fermentation broth of A. succinogenes, while the anode chamber contained 250 mL of solution as in batch mode. In this way the decrease in the concentration of succinic acid at the cathode was negligible because the succinic acid was extracted at the anode. In both setups, samples were taken from a sampling port attached to the recirculation tubes using single-use syringes.
The organic acid or broth solution was recirculated within each chamber by peristaltic pumps while the electrodes were attached to a power supply to create an electrical driving force from the cathode to the anode chamber. An anionic exchange membrane was inserted between the electrode chambers to ensure perm-selective extraction of succinic acid and other anions in the solution.
Methods

The performance of the electrolytic cell was assessed based on multiple parameters. The key parameters were the extraction rate of succinic acid (g L−1 h−1) as a function of the current variation, concentration variation, ion composition and distribution, solution complexity, membrane area, and batch/continuous mode configuration. The experiments were performed as duplicates and the duration of each experiment was 3 h with sampling every hour. The first sample was taken at time = 0 and was not exposed to applied voltage. All the remaining samples were exposed to applied voltage during the experiment. For both anode and cathode chambers, the same solution was used. The reason for not using an inorganic anolyte, which could assist the Gibbs-Donnan effect, was to avoid risking unwanted reactions and introducing a new variable. We arrived at this choice after demonstrating that the extraction rate was not improved by using a common anolyte such as sodium sulphate instead of succinic acid as anolyte. Table 2 shows the types of experiments performed with the different kinds of solutions.

The aim of this experiment was to assess whether the concentration gradient between the two chambers of the electrolytic cell has a relevant effect on the extraction rate of succinic acid in the absence of applied voltage, and also to determine the influence of the second term in the Nernst-Planck equation. The initial concentration of succinic acid was 50 g L−1, adjusted with sodium hydroxide to reach pH 7 at the cathode, and ultrapure water was present at the anode. However, the Nernst-Planck equation does not consider the Gibbs-Donnan effect, which can contribute to diffusion of the ions from one chamber to the other in the absence of electric driving forces. In fact, at the cathode at time 0 there is a solution of negative ions (succinate and OH− from sodium hydroxide) that is locally neutral.
If the succinate crosses the anionic exchange membrane to pass to the anode chamber, the cathode chamber becomes positively charged due to the loss of these negatively charged ions. At this point a potential gradient is established across the membrane, with succinate in the anode chamber being attracted back to the cathode chamber. To demonstrate the Gibbs-Donnan effect, an experiment was performed in which initially the cathode chamber contained 50 g L−1 of succinic acid at pH 7 and the anode chamber contained a solution of sodium chloride instead of ultra-pure water. The concentration of sodium chloride was 24.76 g L−1, which corresponded to a molar concentration twice that of succinate because succinate carries a double negative charge. In this situation, the negative chloride ions could substitute for succinate ions by crossing the anionic exchange membrane from the anode to the cathode chamber and theoretically maintain local charge neutrality.
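Throughout these methods, cell performance is quantified as an extraction rate of succinic acid in g L−1 h−1, obtained from the hourly anode-chamber samples. A minimal sketch of that calculation is shown below; the concentration values are made-up illustrative numbers, not measured data.

```python
# Sketch: estimate the extraction rate (g L^-1 h^-1) from hourly anode
# samples via a least-squares slope. Concentrations are illustrative.
t = [0.0, 1.0, 2.0, 3.0]       # sampling times, h
c = [5.0, 7.8, 10.9, 13.8]     # anode succinic acid conc., g/L (assumed)

n = len(t)
t_mean = sum(t) / n
c_mean = sum(c) / n
slope = sum((ti - t_mean) * (ci - c_mean) for ti, ci in zip(t, c)) \
        / sum((ti - t_mean) ** 2 for ti in t)

print(f"extraction rate ~= {slope:.2f} g/L/h")
```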
Variation in the Distribution of Ions
Two experiments were performed and compared to assess what initial distribution of ions inside the two chambers was necessary to maximize the extraction rate of succinic acid. In one experiment the initial concentration of succinic acid in the cathode chamber was 50 g L −1 while the initial concentration of succinic acid in the anode chamber was 5 g L −1 .
In the other experiment this distribution was reversed, so that the higher succinic acid concentration was in the anode chamber. The current was kept at 192 mA.
Current Variation
This investigation aimed to assess the extraction rate as a function of the current. The study consisted of five different experiments performed for 3 h with the same initial succinic acid concentration of 50 g L−1 in the cathode chamber and 5 g L−1 in the anode chamber. The potentiometer attached to the electrodes allows the voltage to be controlled and the current to be read, or vice versa. The authors chose to fix the current and to read the voltage, which fluctuated slightly during the experiments. This phenomenon might be due to the fact that the local concentration of ions changes continuously with time as the succinic acid migrates, causing the conductivity of the solution to change. As a consequence, the solution resistance, to which the current and voltage are correlated, also changes. Another source of fluctuations could be the change in temperature over time due to the exothermic oxidation reaction at the anode; indeed, the conductivity of an ionic solution increases with temperature. The currents at which the above-mentioned study was performed were 7 mA, 36 mA, 96 mA, 192 mA, and 420 mA.
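Since the same current can be imposed through membranes of different size, it can be useful to express these set points as membrane current densities. The sketch below does this using the 3.4 cm membrane diameter reported in the setup description.

```python
import math

# Sketch: membrane current density for the tested current set points,
# using the 3.4 cm membrane diameter from the setup description.
diameter_m = 0.034
area_m2 = math.pi * (diameter_m / 2) ** 2   # ~9.1e-4 m^2

currents_mA = [7, 36, 96, 192, 420]
densities = {i: (i / 1000) / area_m2 for i in currents_mA}
for i, j in densities.items():
    print(f"{i:4d} mA -> {j:7.1f} A/m^2")
```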
Concentration Variation
In this part of the study, a comparison between different concentrations was performed at a fixed current of 36 mA: in one configuration, 50 g L−1 of pure succinic acid in the cathode chamber and 5 g L−1 in the anode chamber; in the other, 5 g L−1 in the cathode chamber and 0.5 g L−1 in the anode chamber. The aim of this experiment was to assess the importance of the initial concentration in the electrolytic cell for succinic acid extraction.
Organic Acids Variation
The solutions of mixed acids were composed of four organic acids: succinic acid 5 g L−1, formic acid 1 g L−1, acetic acid 1 g L−1, and pyruvic acid 0.5 g L−1. This mixture was similar to the composition of a real broth of A. succinogenes without the medium, based on a study by Lin [20]. The aim of this experiment was to investigate whether and how the addition of other monovalent ions changes the extraction rate of succinic acid with respect to the experiments conducted with pure succinic acid only (see Section 3.1). It was also important to learn whether this configuration changes the voltage needed to maintain the same current density as in the pure succinic acid experiments. The experiments were conducted for 3 h at 36 mA. The compositions of the mixed acid solutions are given in Table 3.

Table 3. Composition of the mixed acid solutions at the onset of the experiments in the anode and cathode chambers.
[Table 3 column headings: Chamber | Succinic Acid | Pyruvic Acid | Acetic Acid | Formic Acid]

An experiment was performed with equimolar concentrations of the different organic acids to understand whether there is a difference in the extraction rate between the different organic acids. The aims of the experiment also included examining whether a divalent ion such as succinate has a higher extraction rate than monovalent ions such as acetate, pyruvate, and formate. The experiment was performed with an initial concentration of 40 mmol L−1 of the four organic acids in the cathode chamber and 20 mmol L−1 in the anode chamber, which corresponded to approximately 5 g L−1 of succinic acid in the cathode and 2.5 g L−1 in the anode, at 36 mA.
Variation of Membrane Area
In this experiment the diameter of the membrane was reduced from 3.4 cm to 2.4 cm, an approximate 30% decrease in diameter size. This was done by covering parts of the membrane with impermeable, non-conductive polystyrene (PP) plastic that blocked the passage of ions and solute. The intention was to observe whether the extraction rate of succinic acid decreased proportionally and what voltage was required to achieve the same current.
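A quick check shows why the extraction rate should not be expected to scale with the diameter itself: reducing the diameter from 3.4 to 2.4 cm (about a 29% smaller diameter) roughly halves the open membrane area.

```python
import math

# Sketch: a 3.4 -> 2.4 cm reduction in membrane diameter (~29% smaller
# diameter) roughly halves the open membrane area.
d_full, d_masked = 3.4, 2.4   # cm
a_full = math.pi * (d_full / 2) ** 2
a_masked = math.pi * (d_masked / 2) ** 2

print(f"diameter reduction: {1 - d_masked / d_full:.0%}")
print(f"area reduction:     {1 - a_masked / a_full:.0%}")
```

So if extraction is proportional to the open membrane area, this masking should cut the rate by about half rather than by 30%.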
Synthetic Broth of A. succinogenes: Ion and Composition Variation
The synthetic broth of A. succinogenes is a solution of mixed organic acids with the addition of the medium nutrients that constitute the typical synthetic fermentation broth of A. succinogenes. The aim of the experiment was to evaluate whether the addition of minerals and other compounds has a significant influence on the succinic acid extraction rate compared to the simpler solutions examined previously. The composition of the synthetic medium, based on the study of Ferone [21], is given in Table 4.

Table 4. Initial composition and concentration of the synthetic broth of A. succinogenes, based on Ferone [21], at the onset of the experiment in the anode and cathode chambers.
Real Broth of A. succinogenes
For these experiments real fermentation broth of A. succinogenes was used, based on the paper of Ferone [21]. The broth was first centrifuged to remove cells and solids, and the composition was adjusted to a concentration of 5 g L−1 succinic acid, 1 g L−1 acetic acid, 1 g L−1 formic acid, and 0.5 g L−1 pyruvic acid to facilitate comparison with the previous experimental setups. The experiments were performed in both batch and continuous mode. For the continuous mode, the solution in the cathode chamber was recirculated through a 5 L bottle of fermentation broth that had the same composition as that used in batch mode.
Effect of Concentration on Succinic Acid Extraction
The experiment with a 50 g L −1 initial concentration of succinic acid in the cathode chamber and ultra-pure water in the anode chamber aimed to assess the influence of the concentration gradient on succinic acid extraction in the absence of an applied voltage. At the same time the second term of the Nernst-Plank equation was investigated. The results showed that the concentration gradient is a very small driving force whose effect may be considered negligible within the scope of the experiment. In fact, no substantial amount of succinic acid was detected in the anode chamber at the end of the experiment, which is in accordance with the Nernst-Plank equation here: A brief comparison of the order of magnitude of the second term, which is associated with the concentration gradient, with the third term of the equation, which is associated with the diffusion of ions due to applied voltage, showed that the main driving force is associated with the voltage applied. For an initial succinic acid concentration of 50 g L −1 and an applied voltage of 30 V, the ration between the third and second is examined in the example below: The above calculation shows that the contribution of the diffusion of ions due to a concentration gradient to ions transport is very low compared to electromigration, and that this outcome is valid for any concentration gradient chosen. The first term of the Nernst-Plank equation associated with convection was not investigated because the convection was not present in the system investigated.
However, the Gibbs-Donnan effect still seemed to be relevant in the absence of an applied voltage. In fact, repeating the previous experiment with a solution of NaCl in the anode chamber instead of ultra-pure water gave different results in terms of the extraction rate of succinic acid. Figure 6 shows that the concentration gradient is more relevant if there are negatively charged ions in both chambers of the cells, which can be exchanged across the membrane, and up to 0.2 g L −1 of succinic acid was extracted in 3 h. This was due to the fact that the succinic acid crossing the anionic exchange membrane from the cathode to the anode can be substituted by chloride ions, which are also negative, passing from the anode to the cathode, which enables the equilibrium to be maintained locally according to the Donnan law. This result did not occur for the experiments where ultrapure water was present in the anode chamber. In that situation, the succinic acid crossing the membrane from the cathode to the anode could not be substituted by any other anion. As a result, a potential gradient developed across the membrane with a direction from the (4) A brief comparison of the order of magnitude of the second term, which is associated with the concentration gradient, with the third term of the equation, which is associated with the diffusion of ions due to applied voltage, showed that the main driving force is associated with the voltage applied. For an initial succinic acid concentration of 50 g L −1 and an applied voltage of 30 V, the ration between the third and second is examined in the example below: Applied voltage term The above calculation shows that the contribution of the diffusion of ions due to a concentration gradient to ions transport is very low compared to electromigration, and that this outcome is valid for any concentration gradient chosen. 
However, the Gibbs-Donnan effect still seemed to be relevant in the absence of an applied voltage. Repeating the previous experiment with a solution of NaCl in the anode chamber instead of ultra-pure water gave different results in terms of the extraction rate of succinic acid. Figure 6 shows that the concentration gradient becomes more relevant when negatively charged ions that can be exchanged across the membrane are present in both chambers of the cell: up to 0.2 g L−1 of succinic acid was extracted in 3 h. This is because succinate crossing the anion exchange membrane from the cathode to the anode can be replaced by chloride ions, which are also negative, passing from the anode to the cathode, so that electroneutrality is maintained locally on each side in accordance with the Donnan law. This did not occur in the experiments with ultra-pure water in the anode chamber, where the succinate crossing the membrane could not be replaced by any other anion. As a result, a potential gradient developed across the membrane, directed from the anode to the cathode, hindering the diffusion of succinate.
Figure 6. Measured concentration of succinic acid (SA) in the anode chamber during the concentration gradient experiment in the absence of applied voltage. The initial concentration in the cathode chamber was 50 g L−1 and the initial concentration of NaCl was 24.75 g L−1, which corresponds to twice the molar concentration of succinic acid because succinate is doubly charged. The succinic acid was extracted into the anode chamber due to the Gibbs-Donnan effect.
The Donnan potential acts in the opposite direction to the applied voltage and can be considered almost negligible under an applied external voltage. In fact, for the current experiments, the Donnan potential, on the order of tens of millivolts [17], was lower by orders of magnitude than the external voltage applied in the range of 1.5-30 V. A schematic representation of the main driving forces occurring in the electrolytic cell is shown in Figure 7.

Figure 7. Schematic representation of the electrolytic cell and the main driving forces across the membrane: concentration gradient, Donnan potential, and external applied voltage. Sodium succinate (the orange doubly negatively charged circles) and sodium (the blue circles) are assumed to be present in the cathode chamber. Locally, the cathode chamber is neutral if no external voltage is applied. In the anode chamber sodium chloride is assumed to be present, with negatively charged chloride ions (the green circles). The anode chamber is locally neutral in the absence of applied voltage.
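The magnitude quoted above can be checked with the standard Donnan potential expression E_D = (RT/zF) ln(c1/c2). The 10:1 concentration ratio below is an illustrative assumption mirroring the 50 vs 5 g L−1 experiments, not a measured value:

```python
# Rough estimate of the Donnan potential across the anion exchange membrane,
# E_D = (R T / z F) * ln(c1 / c2). The 10:1 ratio is an assumed value
# mirroring the 50 vs 5 g/L experiments, not a measurement.
import math

R, T, F = 8.314, 298.0, 96485.0
z = 2            # succinate valence
c_ratio = 10.0   # catholyte/anolyte succinate concentration ratio (assumed)

E_donnan = R * T / (z * F) * math.log(c_ratio)
print(f"Donnan potential ~ {E_donnan * 1e3:.0f} mV")  # tens of mV
```

The result, about 30 mV, is indeed orders of magnitude below the 1.5-30 V applied externally.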
The electrolytic reaction occurs inside the cell if an electrolyte is present in both the anode and cathode chambers. Both anolyte and catholyte solutions were prepared using succinic acid adjusted with sodium hydroxide to reach a pH of 7. However, it was not initially clear what distribution of ions inside the two chambers maximizes the extraction rate of succinic acid. To determine the best initial ion distribution, an experiment was designed with initial succinic acid concentrations of 50 g L−1 in the cathode chamber and 5 g L−1 in the anode chamber. The results were then compared with another experiment in which this distribution was reversed, so that the anode chamber contained the higher succinic acid concentration. These experiments show a decrease in the succinic acid extraction rate of around 38% when the anode chamber contained a higher succinic acid concentration than the cathode chamber. This outcome is likely due to the fact that the more dilute catholyte became even less concentrated over time as ions were transported across the membrane to the anode chamber, decreasing the conductivity of the solution in the cathode chamber and thus slowing the whole extraction process. These results convinced us to perform the subsequent experiments with the cathode chamber solution more concentrated than the anode chamber solution.
Once we established that operating the electrolytic cell with the cathode chamber more concentrated than the anode chamber increases the extraction rate of succinic acid, it was then necessary to establish the optimal anolyte and catholyte concentrations inside the cell. For this purpose, two experiments were designed and compared: one with an initial concentration of 5 g L−1 succinic acid in the cathode chamber and 0.5 g L−1 in the anode chamber, and one with 50 g L−1 succinic acid in the cathode chamber and 5 g L−1 in the anode chamber. The cathode concentration in the latter experiment was close to the maximum solubility of succinic acid in water, which is 58 g L−1 at 20 °C. The current was kept constant at 36 mA and the voltage was measured.
The results, shown in Figure 8, demonstrate that the extraction rate of succinic acid was 81% higher for the more concentrated solution, in line with the Nernst-Planck equation, which describes the ionic flux as directly proportional to ion concentration. This means that, to maximize the extraction rate of succinic acid, it is preferable to operate the cell at a high catholyte concentration. However, it should be noted that succinic acid fermentation does not usually produce such a high concentration of succinic acid; for example, Pateraki et al. [11] reported a productivity of 0.3 g L−1 h−1 in a fed-batch fermenter. Thus, in the case of a coupled continuous fermenter-electrolytic cell operation, an optimal residence time of succinic acid inside the fermenter must be defined before starting the extraction. With regard to the voltage required to maintain a current of 36 mA, an increase from 5 to 20 V was observed when passing from the more concentrated to the less concentrated initial solution. This indicates that the conductivity of the solution increases when more ions are available, decreasing the resistance and consequently the voltage and power required. Therefore, working with a high initial ionic concentration is beneficial both for the extraction rate of succinic acid and for the energy consumption.
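The energy argument can be checked directly: at the constant 36 mA, the power drawn follows from P = V·I, using the 5 V and 20 V figures reported above:

```python
# Power draw at the measured constant current for the two catholyte
# concentrations (5 V and 20 V are the values reported in the text).
I = 0.036                  # constant current, A
P_concentrated = 5.0 * I   # ~50 g/L catholyte
P_dilute = 20.0 * I        # ~5 g/L catholyte
print(P_concentrated, P_dilute)  # 0.18 W vs 0.72 W
```

The concentrated catholyte thus draws four times less power for a higher extraction rate.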
Extraction Rate of Succinic Acid as a Function of Current Variation
Since the concentration gradient and the Gibbs-Donnan effect were excluded as significant driving forces when a voltage difference was applied, the electric potential gradient was examined by observing the extraction rate of succinic acid as a function of the applied current. The initial concentration of succinic acid was 50 g L−1 in the cathode chamber and 5 g L−1 in the anode chamber, with the cathode chamber solution more concentrated in order to maximize the succinic acid extraction rate, based on the previous results.
However, in comparing the experimental results with the Nernst-Planck equation, it is important to remember that the equation involves the applied voltage rather than the current, so a relationship between voltage and current must first be established for the system under investigation. Figure 9A shows that the voltage varies linearly with the current over the full interval allowed by the potentiometer, which indicates that the resistance is constant for a given solution. Figure 9B shows that the extraction rate changes with the current. This result is in accordance with the Nernst-Planck equation, which states that the flux across the membrane increases linearly with the potential gradient; and since the relationship between current and voltage is linear, the extraction rate must also be linear in the voltage.
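The constant-resistance claim corresponds to an ordinary least-squares fit of voltage against current with a near-zero intercept. The data points below are hypothetical, generated to resemble a cell of roughly 550 Ω; they are not the measured values of Figure 9A:

```python
# Least-squares check that V varies linearly with I (constant cell resistance).
# The four (I, V) points are illustrative, not the measurements of Figure 9A.
def linear_fit(xs, ys):
    """Return slope and intercept of the ordinary least-squares line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

currents = [0.009, 0.018, 0.027, 0.036]   # A (hypothetical)
voltages = [5.1, 10.0, 15.1, 19.9]        # V (hypothetical, ~550 ohm cell)

R_cell, offset = linear_fit(currents, voltages)
print(f"fitted resistance ~ {R_cell:.0f} ohm, intercept ~ {offset:.2f} V")
```

A slope that stays constant across the whole current range, with a small intercept, is what "the resistance is constant for a given solution" means in practice.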
The electrolysis of water, a side effect of the applied voltage, produces molecular hydrogen and hydroxide ions in the cathode chamber and molecular oxygen and hydrogen ions in the anode chamber. This causes the pH to rise in the cathode chamber and decrease in the anode chamber as a function of the current or voltage applied, as shown in Figure 10. In both the anode and cathode chambers, the initial pH of the solution was seven. The pH at the cathode increased with the applied current, and the rate of increase was faster at higher current. The situation was particularly critical for the cathode chamber, where the pH increased greatly; because the chemical stability of the membrane lies within the range pH 1-10, subsequent experiments were performed at low current/voltage settings. The anode chamber pH was monitored only at the end of the experiments and, as expected, decreased with increasing applied current.
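The pH drift follows from the standard water-electrolysis half-reactions (textbook chemistry; the paper does not restate them):

```latex
\begin{align*}
\text{cathode (reduction):}\quad & 2\,\mathrm{H_2O} + 2e^- \;\rightarrow\; \mathrm{H_2} + 2\,\mathrm{OH^-} && \text{(pH rises)}\\
\text{anode (oxidation):}\quad & 2\,\mathrm{H_2O} \;\rightarrow\; \mathrm{O_2} + 4\,\mathrm{H^+} + 4e^- && \text{(pH falls)}
\end{align*}
```

Hydroxide accumulates at the cathode and protons at the anode, which is exactly the asymmetric pH drift observed in Figure 10.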
Mixed Organic Acid: Effect of Ion Valence on Acids Extraction Rate and the Pyruvic Acid Phenomenon
According to Ferone [21], in a typical fermentation of Actinobacillus succinogenes the other main organic acids produced, apart from succinic acid, are acetic acid, formic acid, and pyruvic acid. Only succinic acid is divalent; the other three organic acids are monovalent. According to the Nernst-Planck equation, this should result in a succinic acid extraction rate twice as high as that of the monovalent acids, because the ionic flux is directly proportional to the ion valence. An experiment with equimolar concentrations of the four organic acids indeed resulted in a succinic acid extraction rate at the anode chamber twice that of acetic acid, in accordance with the Nernst-Planck equation. However, the extraction rate of formic acid in the anode chamber was approximately three times lower than that of acetic acid, even though both acids are monovalent. One hypothesis that may explain this phenomenon is that formic acid in the electrolytic cell could react to form other products not detected by the HPLC, since the total mass of formic acid had decreased slightly by the end of the process. Interactions between ions could also contribute to the different extraction rates of monovalent ions [23].
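The factor-of-two prediction follows from the electromigration term of the Nernst-Planck equation, J ∝ z D c dφ/dx. The sketch below assumes equal diffusivities and an equimolar mixture, as in the experiment; the numerical values of D, c and the field are illustrative only:

```python
# Electromigration flux from the Nernst-Planck equation, J = (z F / R T) D c dphi/dx.
# Assuming equal diffusivities and equimolar concentrations, the
# succinate/acetate flux ratio reduces to the valence ratio.

def electromigration_flux(z, D, c, dphi_dx, T=298.0):
    """Magnitude of the electromigration term, (z F / R T) D c dphi/dx."""
    F, R = 96485.0, 8.314
    return z * F / (R * T) * D * c * dphi_dx

# Same (illustrative) diffusivity, concentration and field for both anions.
J_succinate = electromigration_flux(z=2, D=1e-9, c=50.0, dphi_dx=100.0)
J_acetate = electromigration_flux(z=1, D=1e-9, c=50.0, dphi_dx=100.0)
print(J_succinate / J_acetate)  # 2.0: divalent succinate migrates twice as fast
```

In reality the diffusivities of the different anions are not identical, which is one candidate explanation for the anomalously low formate flux noted above.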
Regarding pyruvic acid, its mass balance was noted not to remain constant with time. In particular, its concentration in the cathode chamber decreased significantly with time, while its concentration in the anode chamber did not increase. This indicates that pyruvic acid was lost or converted into another product at the cathode. No significant volume change was observed in the chambers throughout the experiment. A possible explanation is that pyruvate was converted to lactate in the presence of hydrogen according to the following reaction:

CH3COCOO− + H2 → CH3CH(OH)COO−
Indeed, the HPLC showed a significant amount of lactic acid in both the cathode and the anode chambers, indicating that pyruvate was first converted into lactate, which in turn was extracted through the membrane because lactate is a negatively charged ion. This interpretation is supported by the literature [24], which shows that pyruvate is converted into lactate in the presence of hydrogen, although this process does not appear to have been performed previously in an electrolytic cell. However, only 31% of the pyruvic acid that disappeared from the cathode chamber was converted into lactic acid; according to the literature, the unaccounted-for pyruvic acid could have been reduced further to propylene glycol or other diols [25].
Reducing the membrane area by 30% led to a 33% decrease in the extraction rate of succinic acid in a mixed acid solution, while the voltage required to maintain the same current of 36 mA increased by 22%. This result suggests that a good strategy would be to use a larger membrane area, allowing both a higher extraction rate and a lower voltage for the same current.
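A back-of-envelope interpretation of these numbers (our own reasoning, not stated in the text): if the membrane were the only resistance in the cell, shrinking its area by 30% would raise the voltage by 1/0.7 − 1 ≈ 43% at constant current; the observed 22% rise then suggests the membrane accounts for roughly half of the total cell resistance, the remainder being area-independent solution and electrode resistance:

```python
# Back-of-envelope split of the cell resistance implied by the area experiment.
# If only the membrane resistance scaled as 1/area, a 30% smaller area would
# raise the voltage by 1/0.7 - 1 ~ 43% at constant current; only 22% was seen.
area_factor = 0.70      # membrane area reduced by 30%
observed_rise = 0.22    # measured relative voltage increase at 36 mA

membrane_share = observed_rise / (1.0 / area_factor - 1.0)
print(f"membrane ~ {membrane_share:.0%} of total cell resistance")
```

This simple partition also supports the design recommendation in the Conclusions: both a larger membrane area and a smaller electrode gap should lower the total resistance.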
Comparison of the Extraction Rate and Energy Requirements of the Different Solutions
The extraction rate of the desired compound, succinic acid, was compared using different solutions and experimental setups, as described in the Materials and Methods section. Specifically, 5 g L−1 of pure succinic acid was compared with a solution of the mixed acids typically found in a fermentation broth of A. succinogenes (5 g L−1 succinic acid, 1 g L−1 lactic acid, 1 g L−1 formic acid, and 0.5 g L−1 pyruvic acid) and with a synthetic A. succinogenes broth containing the same components at the same concentrations as the mixed acid solution but with added nutrients. The final comparison was performed using a real fermentation broth of A. succinogenes, with the electrolytic cell operated both in batch mode and in continuous mode. In continuous mode, the solution was recirculated between the cathode chamber of the electrolytic cell and a 5 L bottle containing the fermentation broth to mimic a coupled fermenter-electrolytic cell system.
For the same current and initial concentration of succinic acid, the extraction rate of succinic acid was highest for the pure succinic acid solution. This is reasonable, because the presence of other organic acids and ions can induce competition between the ions and decrease the extraction rate of succinic acid itself at the same current. In fact, the lowest extraction rate was obtained for the synthetic A. succinogenes broth, due to the presence of many other ions in the solution. However, similar extraction rates were obtained for the batch and continuous modes. To highlight the competition between ions in solution, the ratio between the total negative ions in the test solution (succinic acid solution and synthetic broth) and the succinate ions extracted after 3 h was calculated as a percentage. As expected, this ratio decreased with increasing complexity of the solution, owing to the higher probability of competition between ions. The real broth was not included in this comparison because of the uncertainty over its total ion concentration and composition.
With regard to the voltage, and thus the energy requirements, the results indicate that the electrical resistance was lower in the presence of more ions in solution, similar to what was observed with the more concentrated initial succinic acid solution. The lower voltage in continuous mode compared to batch mode probably reflects the recirculation of ions between the cathode chamber and the 5 L tank, so that the ion concentration in the catholyte did not fall as succinic acid was extracted, unlike in batch mode. Table 5 summarizes the results discussed.
Conclusions
The aim of this study was to assess the performance of an electrolytic cell with an anion exchange membrane for succinic acid extraction across a range of parameters: the applied current, the initial ion concentration and ion distribution inside the electrolytic cell, the nature of the ions and the complexity of the initial solution, the membrane area, and batch versus continuous mode. We demonstrated that an electrolytic cell configured in this way is able to extract succinic acid, and that the extraction rate of succinic acid at a constant current decreases with increasing complexity of the solution, probably as a result of competition between the ions in solution. However, the voltage needed to maintain the same current in the cell also decreases, because the initial solution becomes more electrically conductive, meaning that less energy is required. There was no evidence that continuous extraction is more advantageous than batch extraction in terms of extraction rate. Other significant advantages of continuous extraction must be considered, however, such as the avoidance of product inhibition and the reduced use of buffer. Indeed, the experiments showed that a very high pH is eventually reached in the cathode chamber due to water electrolysis, which could potentially be exploited to control pH. Another advantage is that the voltage required in continuous mode was lower than in batch mode. Future work could determine the optimal voltage needed to maximize the extraction rate and control the pH of a fermenter in a coupled electrolytic cell-fermenter system. To reduce electrical resistance and power demands, a future cell should be designed with a larger membrane area and a smaller gap between the electrodes. Furthermore, the membrane choice must be carefully investigated to ensure that the membrane is chemically stable over the whole operational pH range.
Even though this study focused on succinic acid, the proposed methodology can possibly be extrapolated to other carboxylic acids produced by fermentation, such as acetic acid, and potentially to other ionic products.
The importance of distinguishing COVID-19 from more common respiratory illnesses
We recruited 1591 patients who presented to our fever clinics from 23 January 2020 to 16 February 2020. The differences in imaging findings between COVID-19 pneumonia and influenza A and influenza B virus pneumonia were also investigated. Most patients were infected by influenza A and B viruses during the flu season. A laboratory kit that can test for different viruses simultaneously is urgently needed. Computed tomography can help screen suspected patients with COVID-19 early and differentiate between different virus-related pneumonias.
The 2019 novel coronavirus (COVID-19) outbreak that began in Wuhan (Hubei province, China) has attracted intense attention and caused panic around the world [1]. Thanks to the timely and effective control measures implemented by the Chinese Government, the outbreak of COVID-19 in China has been largely brought under control. However, the situation is worsening in other countries, such as Italy, America and Spain [2]. The World Health Organization (WHO) has raised its assessment of the risk of spread and the risk of impact of COVID-19 to very high at a global level [2]. It is therefore urgent to understand the epidemic characteristics of COVID-19 and to devise appropriate strategies against the outbreak. Specific clinical symptoms (i.e. fever and leucopenia) and a history of exposure to Wuhan or close contact with confirmed cases can substantially help in diagnosing the disease [3][4][5]. However, other viral infections, such as influenza A and influenza B viruses, can cause similar symptoms, especially during this flu-susceptible season, which makes the clinical diagnosis of COVID-19 difficult. The laboratory test remains the standard diagnostic procedure. Nevertheless, clinicians face the problems of false negatives and of tests turning positive again after being identified as negative. Computed tomography (CT) has been shown to provide valuable diagnostic information in clinical practice [6,7] and may be able to distinguish COVID-19 pneumonia from influenza A and influenza B virus pneumonia.
This retrospective study was approved by our Medical Ethics Committee (Approval Number KL-2020001), which waived the requirement for patients' informed consent. We recruited patients who presented to our fever clinics from 23 January 2020 to 16 February 2020. Our institution was one of the 10 institutions authorised to diagnose COVID-19 in Hunan province (and one of five in Changsha City, Hunan province, China). Patients usually came to our designated fever clinics for the following reasons: (1) they had symptoms of lung infection (i.e. fever and cough), or (2) they had a history of exposure to Wuhan or close contact with confirmed cases. All available clinical and epidemic characteristics were collected.
Our fever clinics received and diagnosed 1591 patients from 23 January to 16 February 2020. In total, 1581 of the 1591 patients underwent laboratory examination for influenza A virus, influenza B virus and COVID-19 simultaneously. The distributions of the different virus-related pneumonias are presented in Figure 1. The incidence of COVID-19 was 2% (31/1581), lower than that of influenza A virus (2.5%, 40/1581) and influenza B virus (4.6%, 73/1581). These statistics indicate that influenza A and B viruses (78.5%, 113/144) remained the most common viruses in this special flu season. Among the 31 COVID-19 patients, 27 were diagnosed as COVID-19 positive at the first reverse transcription polymerase chain reaction (RT-PCR), whereas one and three patients were diagnosed as positive at the second and fourth RT-PCR with throat swab samples, respectively. Eight patients tested COVID-19 positive using anal swab samples.
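The reported percentages can be reproduced directly from the raw counts given in the text:

```python
# Reproducing the reported incidences from the raw counts in the text.
n_tested = 1581
counts = {"COVID-19": 31, "influenza A": 40, "influenza B": 73}

incidence = {k: round(100 * v / n_tested, 1) for k, v in counts.items()}
print(incidence)  # {'COVID-19': 2.0, 'influenza A': 2.5, 'influenza B': 4.6}

# Share of virus-positive patients (144 total) accounted for by influenza A or B.
flu_share = round(100 * (40 + 73) / (31 + 40 + 73), 1)
print(flu_share)  # 78.5
```
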
All 31 patients underwent chest CT scans before treatment. Six of the 31 patients had no abnormal chest CT findings. Twenty-four of the remaining 25 patients (96%) presented ground glass opacities (GGOs) and vascular enlargement (Table 1, Fig. 2). Regarding lesion distribution, 20 of the 25 patients (80%) showed a predominantly peripheral distribution (versus central or no transverse predilection) and bilateral involvement (versus unilateral). Seven of the 40 patients with influenza A virus and 11 of the 73 patients with influenza B virus underwent chest CT scans; six of the 11 patients with influenza B virus had normal CT findings. Patients with influenza A and B viruses were more likely to present consolidations and pleural effusions (Fig. 2). All the patients who tested COVID-19 positive using anal swab samples had abnormal CT findings.
Timely diagnosis and treatment of patients infected with different viruses, and especially the identification of patients with COVID-19, is important for controlling the COVID-19 outbreak. RT-PCR is considered the gold standard for diagnosis of 2019-nCoV [5,8]; however, it has inherent disadvantages, namely false negatives and a long turnaround time. In our institution, one and three of the 31 patients were diagnosed COVID-19 positive only on the second and fourth RT-PCR tests, respectively, with throat swab samples. All four of these patients had abnormal findings on their initial CT scans, which helped us to identify them for further laboratory testing. CT therefore plays a vital role in the screening of suspected 2019-nCoV-infected patients [6][7][8][9]. In our cohort, the incidence of COVID-19 (2%) was lower than that of influenza A virus (2.5%) and influenza B virus (4.6%), which reminds us to pay attention to other virus-related pneumonias. However, most diagnostic kits can test for only one virus at a time, which hampers the timely differentiation of virus types. It is clinically important to separate COVID-19-infected patients from those infected by other viruses, to avoid both cross transmission and mixed virus infection. At the China-Japan Friendship Hospital, one patient was diagnosed with COVID-19 using bronchoalveolar lavage samples after three negative COVID-19 tests using swab samples; of note, this patient had also been identified as having influenza A virus infection. When mixed virus infection occurs, treatment may be more complex and difficult. However, different viral pneumonias may present the same imaging features [9], making them very difficult to differentiate. Patients with COVID-19 have typical imaging features of GGOs, consistent with previous studies [6,7,9,10,11]. The GGOs reflect the pathologic changes seen in gross specimens [7,12].
However, patients with influenza A and B viruses are more likely to present consolidations and pleural effusions, which can help distinguish them from patients with COVID-19 pneumonia. Moreover, eight patients (72.7%) with influenza B virus had no exudative lesions on CT images, indicating a relatively lower incidence of lung involvement.
In conclusion, attention should be paid to common respiratory illnesses during the flu season. A laboratory kit that can test for different viruses simultaneously is urgently needed. CT can help with early screening of patients suspected to have COVID-19 and with differentiating the various virus-related pneumonias.
Financial support. The authors acknowledge financial support from the Key Emergency Project of Pneumonia Epidemic of novel coronavirus infection (2020SK3006), Emergency Project of Prevention and Control for COVID-19 of Central South University (160260005) and Foundation from Changsha Scientific and Technical bureau, China (kq2001001). The funder of the study had no role in study design, data collection, data analysis, data interpretation or writing of the report.
Conflict of interest. The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Data availability. Our Medical Ethics Committee has imposed data sharing restrictions because the data used in our study contain potentially identifying or sensitive patient information. To request data access, please contact the corresponding author at: junliu123@csu.edu.cn.
Life in lockdown: a longitudinal study investigating the impact of the UK COVID-19 lockdown measures on lifestyle behaviours and mental health
Background: The COVID-19 pandemic led to the UK government enforcing lockdown restrictions to control virus transmission. Such restrictions present opportunities and barriers for physical activity and healthy eating. Emerging research suggests that in the early stages of the pandemic, physical activity levels decreased, consumption of unhealthy foods increased, and levels of mental distress increased. Our aims were to understand patterns of diet, physical activity, and mental health during the first lockdown, how these had changed twelve months later, and the factors associated with change.

Methods: An online survey was conducted with UK adults (N = 636; 78% female) during the first national lockdown (May–June 2020). The survey collected information on demographics, physical activity, diet, mental health, and how participants perceived lifestyle behaviours had changed from before the pandemic. Participants who provided contact details were invited to complete a twelve-month follow-up survey (May–June 2021); 160 adults completed the survey at both time-points. Descriptive statistics, t-tests and McNemar chi-square tests were used to assess patterns of diet, physical activity, and mental health at baseline and change in behaviours between baseline and follow-up. Linear regression models were conducted to explore prospective associations of demographic and psycho-social variables at baseline with change in healthy eating habit, anxiety, and wellbeing respectively.

Results: Between baseline and follow-up, healthy eating habit strength, and the importance of and confidence in eating healthily, reduced. Self-rated health (positively) and confidence in eating healthily (negatively) were associated with change in healthy eating habit. There were no differences between baseline and follow-up for depression or physical activity. Mean anxiety score reduced, and wellbeing increased, from baseline to follow-up. Living with children aged 12–17 (compared to living alone) was associated with an increase in anxiety, while perceiving mental health to have worsened during the first lockdown (compared to staying the same) was associated with reduced anxiety and an increase in mental wellbeing.

Conclusions: While healthy eating habits worsened in the 12 months since the onset of the pandemic, anxiety and mental wellbeing improved. However, anxiety may have increased for parents of secondary school aged children.

Supplementary Information: The online version contains supplementary material available at 10.1186/s12889-022-13888-1.
Solomon-Moore et al. BMC Public Health (2022) 22:1495

Background

The COVID-19 pandemic has led to unprecedented measures worldwide to control virus transmission. In the UK, on March 23rd 2020, the government announced a nationwide lockdown ordering the public to stay home and leave only for a limited number of reasons, including for exercise (once a day only), to purchase household essentials, for a medical emergency, or to go to work if classed as a key worker (e.g., emergency services, healthcare workers, food delivery drivers). All non-essential businesses were closed and visiting family or friends outside the individual's household was prohibited. Over time, the initial lockdown was eased in stages, with home-nation variations, including being allowed to leave the house to exercise more than once a day, the opening of non-essential shops, and the opening of the hospitality sector. Additional regional and national lockdowns and restrictions were enforced throughout Autumn–Winter 2020–2021, with lockdown restrictions gradually eased across Spring 2021 and almost all restrictions removed in July 2021.
Several studies have observed a reduction in physical activity levels through the start of the pandemic [14][15][16]. A large study using daily step count measurements from smartphone accelerometers, provided by 455,404 users from 187 countries within 30 days of the pandemic being declared, identified a 27.3% decrease in mean steps worldwide [16]. Regional variation was evident: in Italy, which declared a nationwide lockdown, a 48.7% maximal decrease in steps was found, whereas in Sweden, where social distancing and limitations on gatherings were advocated rather than legally enforced, there was a 6.9% maximal decrease. Even in countries that did not institute lockdowns, people still exhibited decreases in overall step count, suggesting that social distancing measures, or concerns for health related to the pandemic, may have had a negative effect on overall physical activity [16]. A cross-sectional survey of Italian adults (n = 2524) suggested that self-reported physical activity decreased in all age groups during the first phase of the COVID-19 pandemic (Mean: 2429 vs. 1577 metabolic equivalent task minutes per week, p < 0.0001) [15]. However, that study was limited by its reliance on participant recall of physical activity behaviour from before the COVID-19 pandemic. Overall, the emerging research signalled that in the early stages of the pandemic, physical activity levels decreased.
Looking at physical activity alongside other lifestyle factors including diet, an international cross-sectional survey examined lifestyle changes that occurred during COVID-19 lockdowns in 1047 adults primarily from Western Asia, North Africa and Europe [14]. This study found self-reported levels of physical activity and alcohol binge drinking decreased, while sedentary time, consumption of unhealthy food, eating out of control, and snacking between meals increased during the lockdowns [14]. In an observational retrospective study, Pellegrini and colleagues [17], examined changes in weight and nutritional habits in 150 Italian adults with obesity during the COVID-19 lockdown period. Mean self-reported weight gain was 1.5 kg, with lower education levels, self-reported anxiety/depression, and not consuming healthy foods positively associated with weight gain [17]. Another study examined dietary changes during the COVID-19 lockdown in Spain by examining food purchases, finding that energy intake increased by 6% while nutritional quality decreased by 5% compared to pre-COVID-19 eating patterns [18]. At the time of writing, however, few published studies have focused on physical activity, diet and mental health in combination.
The pandemic and control measures have had an impact on people's mental wellbeing. In a secondary analysis of the UK Household Longitudinal Study (UKHLS) panel (n = 42,330), population prevalence of clinically significant levels of mental distress in adults rose from 18.9% in 2018-19 to 27.3% in April 2020, 1 month into UK lockdown [19]. Increases in mental distress were also found to be greatest for those aged between 18 and 34 years old, women, and people living with young children [19]. In an online survey of 1005 Austrian adults, depressive symptoms (21%) and anxiety symptoms (19%) were higher during the COVID-19 lockdown compared to a large Austrian survey conducted before COVID-19 [20]. Similarly, in a survey of 1210 adults in China [21], 53.8% rated the psychological impact of the outbreak as moderate or severe, 16.5% reported moderate to severe depressive symptoms, and 28.8% reported moderate to severe anxiety symptoms. While there is some research available on how COVID-19 lockdown restrictions have had an impact on mental health for UK adults [19,22], data are limited, and not enough is known about potential long-term effects of the pandemic.
The emerging evidence highlights the impact of the varied COVID-19 restrictions on lifestyle behaviours and mental health across the globe. However, much of the research to date has relied on cross-sectional data collected in the immediate aftermath of the pandemic's onset; it would therefore be useful to explore how diet, physical activity and mental health have changed throughout the course of the pandemic, in order to understand and respond to the likely long-term impact on health and wellbeing. The current study therefore aimed to use longitudinal survey data to explore the following research questions: a) what were the patterns of lifestyle behaviour in the UK during the initial COVID-19 lockdown measures?; b) how did diet, physical activity, and mental health change between the first UK lockdown and twelve months later?; and c) what factors were associated with change in diet, physical activity and mental health between baseline and twelve-month follow-up?
Methods
All methods were carried out in accordance with relevant guidelines and regulations. An online survey focusing on physical activity, diet and mental health was hosted using JISC Online Surveys (see supplementary materials). The survey was promoted through social media (Twitter and Facebook), a press release and interviews with local radio stations. The survey was open to all adults aged 18 years and over living in the UK through the COVID-19 lockdown measures as long as they could read, write and understand written English and had capacity to provide informed consent to participate. Upon accessing the survey link, participants were asked to read the information sheet and complete an online consent form to access the survey. The survey was open during the first national lockdown from May 7th to June 14th 2020, with the closing date reflecting a change in lockdown restrictions with non-essential shops opening on June 15th 2020.
Participants were able to choose to complete the survey anonymously or, if they were interested in completing any additional elements of the study (the follow-up survey 12 months from baseline or a semi-structured qualitative interview), they could provide their contact details at the end of the baseline survey. Participants who provided their contact details were emailed a link to the diet recall and an invitation to contact the team if they were interested in taking part in an interview. The methods and results from the qualitative interview study are presented elsewhere. The study received ethical approval from the University of Bath Research Ethics Approval Committee for Health (REACH).
A follow-up survey was scheduled to take place 12 months after the initial survey. On January 6th 2021, with COVID-19 cases rising, England entered its third national lockdown. The government set out a roadmap to gradually ease restrictions, including groups of six being able to meet outdoors (March 29th 2021), non-essential retail and outdoor hospitality reopening (April 12th 2021), increased social contact indoors and outdoors and indoor hospitality reopening (May 17th 2021), and a planned removal of all social contact restrictions (June 21st 2021), although this was delayed (July 19th 2021). Data were collected for the twelve-month follow-up survey between May 23rd and June 20th 2021, where indoor socialising was permitted but some restrictions were still in place. Participants who provided their contact details when completing the baseline survey were emailed a link to the follow-up survey. Participants were provided with an anonymised ID number that they were instructed to enter when completing the follow-up survey so that their data could be matched with their baseline survey data.
Baseline survey measures
The baseline survey was used to collect demographic information and self-reported physical activity, diet, and mental health during the first UK lockdown, as well as how participants perceived these lifestyle behaviours had changed from before the pandemic.
Demographic measures
Demographic questions included gender, age category, ethnic group, and number/relationship of other people living in the household. Participants provided their postcode to determine which part of the UK they resided in, and this was also used to assign Indices of Multiple Deprivation (IMD) scores, based upon the English Indices of Deprivation (http://data.gov.uk/dataset/index-of-multiple-deprivation). Participants were asked to report: their general health on a five-point scale (from excellent to poor); whether they are classed as high risk for COVID-19; and their working situation during the initial COVID-19 lockdown measures (i.e., not working, working from home, working outside of home but socially distanced, or a frontline NHS or key worker not able to socially distance).
Physical activity measures
Physical activity behaviour was self-reported using the nine-item International Physical Activity Questionnaire – Short Form (IPAQ-SF) [23]; participants reported the time they spent engaging in walking, moderate-intensity, and vigorous-intensity physical activity across the last 7 days. The amount of time participants spent walking (at a brisk or fast pace) and engaging in moderate-to-vigorous-intensity physical activity per week was used to determine whether participants met current UK physical activity guidelines (i.e., 150 minutes per week of moderate-to-vigorous-intensity physical activity) [24]. Participants were asked to report whether their physical activity had changed during the initial lockdown, and if so, whether it had 'increased', 'decreased', or 'neither increased nor decreased, but was just different'. Additionally, participants were asked to rate how important they thought it was to be physically active during the lockdown period, on a scale from 1 'not at all important' to 10 'very important', as well as how confident they were that they could be physically active during the lockdown period, from 1 'not at all confident' to 10 'very confident'. These items were based on measures in the International Health and Behaviour Survey (adapted from [25]).
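As an illustrative sketch (not the authors' code), the guideline check described above reduces to a simple threshold on weekly minutes; the function name and example inputs here are hypothetical:

```python
# UK guideline: 150 minutes per week of moderate-to-vigorous physical activity [24].
GUIDELINE_MINUTES = 150

def meets_guidelines(brisk_walking_min: float, mvpa_min: float) -> bool:
    """Return True if weekly brisk/fast walking plus moderate-to-vigorous
    activity minutes reach the UK guideline threshold."""
    return (brisk_walking_min + mvpa_min) >= GUIDELINE_MINUTES

# Hypothetical example: 60 min brisk walking + 100 min MVPA per week
print(meets_guidelines(60, 100))   # True (160 >= 150)
print(meets_guidelines(30, 60))    # False (90 < 150)
```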
Diet measures
Participants were asked whether their diet had changed during the initial lockdown, and if so, whether it had 'improved during lockdown', 'worsened during lockdown', or 'neither improved nor worsened, just different'. The survey included a measure to assess participants' habit strength for healthy eating using the four-item Self-Report Behavioural Automaticity Index (SRBAI) [26], adapted for healthy eating. The SRBAI asked participants to rate their agreement with four statements (e.g., 'Deciding to eat healthy foods is something I do automatically') on a seven-point scale from 1 'completely disagree' to 7 'completely agree'. Scores for the individual items were averaged to create a mean healthy eating habit score (potential range 1-7), with higher scores representing a stronger healthy eating habit. Participants were also asked to rate how important they thought it was to eat a healthy diet during the initial lockdown period, on a scale from 1 'not at all important' to 10 'very important', as well as how confident they were that they could eat a healthy diet during the lockdown period, from 1 'not at all confident' to 10 'very confident' [25].
Mental health measures
To measure prevalence of current depression symptoms, the validated eight-item Patient Health Questionnaire depression scale (PHQ-8) was used [27]. The PHQ-8 measures depressive symptoms (e.g., little interest or pleasure in doing things) across the last 2 weeks on a four-point scale from 0 'not at all' to 3 'nearly every day'. The PHQ-8 has a total score range from 0 to 24, where scores of 5, 10, 15, and 20 represent cut-points for mild, moderate, moderately severe and severe depression. Participants were dichotomised into: < 10 'none to mild depression' and >= 10 'moderate to severe depression'. The PHQ-8 has shown good reliability and validity [27]. To measure current anxiety levels, the validated General Anxiety Disorder-7 scale (GAD-7) was used [28]. Participants responded to seven items on their anxiety symptoms (e.g., feeling nervous, anxious or on edge) across the last 2 weeks on a four-point scale from 0 'not at all' to 3 'nearly every day'. The total score range for the GAD-7 is 0-21, with scores of 5, 10, and 15 taken as cut-points for mild, moderate and severe anxiety. Participants were dichotomised into two categories: < 10 'minimal to mild anxiety' and >= 10 'moderate to severe anxiety'. The GAD-7 has shown good reliability and validity [28]. Wellbeing was measured using the Short Warwick-Edinburgh Mental Wellbeing Scale (SWEMWBS, © NHS Health Scotland, University of Warwick and University of Edinburgh, 2008, all rights reserved). The SWEMWBS asks participants to respond to seven statements (e.g., I've been feeling optimistic about the future) describing their experience over the last 2 weeks on a five-point scale from 1 'none of the time' to 5 'all of the time'. SWEMWBS item scores are summed, with total scores ranging from 7 to 35 and higher scores indicating higher positive mental wellbeing. Participant scores were dichotomised into two groups: >= 28 'high mental wellbeing' and < 28 'low to moderate mental wellbeing'.
Participants were also asked to report whether their mental health had changed during the initial lockdown, and if so, whether it had 'worsened', 'improved' or 'neither improved nor worsened, just different'. The SWEMWBS has shown good performance as an instrument to measure wellbeing, with good reliability and validity [29].
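The scoring and dichotomisation rules described above can be sketched in Python; the item responses below are hypothetical illustrations, not study data:

```python
# Sum each scale's items and apply the cut-points stated in the text.

def score_phq8(items):
    """8 items scored 0-3 (total 0-24); dichotomised at 10."""
    total = sum(items)
    return total, ("moderate to severe" if total >= 10 else "none to mild")

def score_gad7(items):
    """7 items scored 0-3 (total 0-21); dichotomised at 10."""
    total = sum(items)
    return total, ("moderate to severe" if total >= 10 else "minimal to mild")

def score_swemwbs(items):
    """7 items scored 1-5 (total 7-35); dichotomised at 28."""
    total = sum(items)
    return total, ("high" if total >= 28 else "low to moderate")

# Hypothetical respondents
print(score_phq8([1, 2, 1, 1, 2, 1, 1, 2]))   # (11, 'moderate to severe')
print(score_gad7([1, 1, 0, 1, 0, 1, 1]))      # (5, 'minimal to mild')
print(score_swemwbs([4, 4, 4, 4, 4, 4, 4]))   # (28, 'high')
```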
Twelve-month follow-up survey measures
The measures included in the follow-up survey closely matched the baseline survey. Participants were asked to rate their general health, and whether anything had changed regarding their household or working situation since the baseline survey. In terms of physical activity, participants were asked to report their physical activity behaviour using the IPAQ-SF [23], how important they felt it was to be physically active over the coming month, and how confident they were that they could be physically active over the coming month [25]. In relation to diet, participants were asked to report their habit strength for healthy eating (SRBAI) [26], how important they thought it was to eat a healthy diet over the coming month, and their confidence in eating a healthy diet over the coming month [25]. In terms of their mental health, participants were asked to repeat the PHQ-8 [27], GAD-7 [28], and SWEMWBS (©NHS Health Scotland) scales.
Data analysis
Descriptive statistics (means, standard deviations, proportions) were used to examine the distributions of demographic, diet, physical activity, and mental health variables for the baseline survey sample. For the participants who completed the survey at both time-points, paired sample T-tests (continuous variables) and McNemar Chi-Square tests (categorical variables) were conducted to test whether demographic, diet, physical activity, and mental and physical health variables differed between baseline and twelve-month follow-up.
Univariate and multivariate linear regression models were used to calculate prospective associations between predictor variables (i.e., demographic and psycho-social) at baseline and change in outcome variables (i.e., lifestyle behaviours and mental health) between baseline and twelve-month follow-up. Outcome variables were as follows: change in healthy eating habit (as a proxy for dietary behaviour [26]), change in physical activity (minutes per week of moderate-to-vigorous physical activity), change in depression (PHQ-8 summary score), change in anxiety (GAD-7 summary score), and change in mental wellbeing (SWEMWBS summary score). Change scores for the outcome variables were calculated by subtracting the baseline score from the twelve-month follow-up score, so a negative score indicates a reduction in the outcome of interest. Univariate analyses were used to model the effect of each predictor variable on each of the outcome variables. Any significant associations in the univariate models were then entered into multivariate models for each of the outcome variables. If the t-test statistic for the difference between baseline and twelve-month follow-up for an outcome variable was non-significant (p >= 0.05), prospective linear regression analyses were not conducted for that outcome. All analyses were conducted in STATA version 16 (StataCorp, 2019).
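A minimal sketch of the change-score approach described above, using simulated paired data rather than the study's data (SciPy's `ttest_rel` performs the paired-sample t-test; all numbers below are illustrative assumptions):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated scores for 160 participants at the two time-points
baseline = rng.normal(5.1, 5.0, size=160)                  # e.g., GAD-7 at baseline
follow_up = baseline + rng.normal(-1.0, 2.5, size=160)     # twelve-month follow-up

# Change score: follow-up minus baseline; negative values mean a reduction
change = follow_up - baseline

# Paired-sample t-test comparing the two time-points
t_stat, p_value = stats.ttest_rel(follow_up, baseline)
print(f"mean change = {change.mean():.2f}, t = {t_stat:.2f}, p = {p_value:.4f}")
```

Under the study's decision rule, a non-significant paired t-test (p >= 0.05) would mean the change score for that outcome is not carried forward into the regression models.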
Baseline sample characteristics
At baseline, 636 eligible participants completed the online questionnaire. Compared to the general UK population (50.6% female [30]; 8.4% living in South-West England [31]), most participants were female (78.0%) and from South-West England (75.3%), with most of the remaining participants from South-East England. Table 1 shows the characteristics of the adults who participated in the online survey at baseline. At baseline, participants reported moderate-to-strong healthy eating habits (Mean = 4.59, SD = 1.65), placed high importance on eating healthily during lockdown (Mean = 8.89, SD = 1.52), and reported moderate confidence in their ability to do so (Mean = 7.60, SD = 2.17; Table 1). At baseline, 23.1% perceived their diet had worsened, 57.9% perceived their diet had stayed the same, and 19.0% perceived their diet had improved.
Similar to diet, participants felt it was very important to be physically active during lockdown (Mean = 9.02, SD = 1.40), but had moderate levels of confidence for doing so (Mean = 7.10, SD = 2.61; Table 1). On average, participants reported engaging in high levels of moderate-to-vigorous-intensity physical activity, but there was wide variation between participants (Mean = 424.39, SD = 420.67). This equated to two-thirds of participants engaging in sufficient physical activity to meet the UK government's recommended guidelines [24]. The sample were roughly evenly split on whether they perceived their physical activity had decreased (33.6%), stayed the same (32.2%), or increased (34.1%) since the lockdown started.
One quarter of participants (25.3%) reported moderate-to-severe levels of depression, one fifth (20.5%) reported moderate-to-severe levels of anxiety, while 16.2% reported high levels of mental wellbeing (Table 1). Only 7.6% of participants perceived that their mental health had improved since lockdown restrictions started, while 58.7% perceived their mental health had remained the same, and 33.8% perceived their mental health had worsened (Table 1).
Longitudinal changes in diet, physical activity and mental health variables
At twelve-month follow-up, the 414 participants who provided their contact details at baseline were emailed the link to the follow-up survey, of whom 160 completed it (response rate: 38.6%). The twelve-month follow-up sample was generally representative of the baseline sample but tended to be older (20.0% compared to 10% aged 65+ years), was less likely to live in a deprived area (63.8% compared to 58.2% in less deprived quintiles 1 and 2), and was more likely to be classed as at high risk for COVID-19 (20.9% responded yes) (Table 2).
Physical activity
There was little change, and no significant differences, between baseline and twelve-month follow-up for importance of being physically active, confidence in being physically active, minutes per week of moderate-to-vigorous-intensity physical activity, or proportion meeting recommended physical activity guidelines (all p > 0.05; Table 2). Therefore, linear regression analyses to explore the factors associated with change in physical activity were not conducted.
Mental health
Neither the continuous PHQ-8 score (p = 0.121) nor the proportion of participants reporting moderate-to-severe levels of depression (p = 0.819; Table 2) differed significantly between baseline and twelve-month follow-up. Therefore, no further analyses were conducted with depression as the outcome variable. When measured continuously, mean anxiety score reduced from baseline (Mean = 5.14, SD = 5.07) to twelve-month follow-up (Mean = 4.15, SD = 4.80; T = 2.75, p = 0.007), but there was no difference in the proportion of participants reporting moderate-to-severe levels of anxiety (p = 0.088; Table 2). Mean mental wellbeing score increased between baseline and twelve-month follow-up, although there was no difference in the proportion of participants reporting high levels of mental wellbeing (p = 0.105).
Prospective associations of baseline variables with change in healthy eating habit and mental health Diet
Prospective associations of baseline variables with change in healthy eating habit and mental health

Diet

In the univariate models, very good/excellent (compared to poor/fair) self-rated health at baseline was associated with an increase in healthy eating habit strength. In the multivariate models, good (ß = 1.23, 95% CI = 0.20 to 2.25) and very good/excellent (ß = 1.71, 95% CI = 0.72 to 2.69) vs poor/fair self-rated health at baseline were associated with an increase in healthy eating habit strength at twelve-month follow-up; that is, the more highly people rated their health during the first lockdown, the stronger their healthy eating habits were after 12 months. People's perceived importance of (ß = −0.15, 95% CI = −0.30 to −0.01, p = 0.072) and confidence in (ß = −0.15, 95% CI = −0.29 to −0.02, p = 0.028) eating healthily during the first COVID-19 lockdown were both associated with a reduction in their healthy eating habits at twelve-month follow-up (Table 3); that is, the more important healthy eating felt, and the more confident people were about it, during the first lockdown, the weaker their healthy eating habits were after 12 months. This association with a reduction in healthy eating habit could indicate a ceiling effect, given that both importance and confidence were relatively high at baseline.
Anxiety
In the univariate models, perceiving your mental health to have worsened during the first lockdown (compared to staying the same) was associated with a reduction in symptoms of anxiety at twelve-month follow-up (ß = −3.25, 95% CI = −4.72 to −1.78). Living with children aged between 12 and 17 (vs living alone) during the first lockdown was associated with an increase in symptoms of anxiety at twelve-month follow-up (ß = 4.50, 95% CI = 1.13 to 7.88), with the model approaching significance at p = 0.060.
In the multivariate models, perceiving your mental health to have worsened during the first lockdown (compared to staying the same) was still associated with a reduction in symptoms of anxiety at twelve-month follow-up (ß = −3.05, 95% CI = −4.53 to −1.57). Living with children aged between 12 and 17 (vs living alone) during the first lockdown was also still associated with an increase in symptoms of anxiety at twelve-month follow-up (ß = 3.99, 95% CI = 0.77 to 7.21). However, the overall model was not significant (Table 4).
Mental wellbeing
In the univariate models, compared with being 18–34 years old, being 35–64 years old was associated with a decrease in mental wellbeing between baseline and follow-up (ß = −1.70, 95% CI = −3.32 to −0.08); however, the overall regression model was non-significant (p = 0.112). Perceiving your mental health to have worsened during the first lockdown (compared to staying the same) was associated with an improvement in mental wellbeing (ß = 2.55, 95% CI = 1.17 to 3.94). In the multivariate models, perceiving mental health to have worsened (compared to staying the same) was associated with an increase in mental wellbeing score at twelve months (ß = 2.35, 95% CI = 0.94 to 3.77) (Table 5); that is, people who felt their mental health had worsened during lockdown had improved mental wellbeing a year later.
Discussion
Concerning the first research question, the present study found that during the initial lockdown, participants were generally active and had good eating habits; however, at least one in five reported moderate-to-severe levels of depression or anxiety. For the second research question, over the 12 months we found that healthy eating habit strength, and the importance of and confidence in eating healthily, all reduced; conversely, anxiety scores reduced and wellbeing increased. For the third research question, we found that self-rated health and confidence in eating healthily at baseline were positively and negatively associated, respectively, with twelve-month change in healthy eating habits. Living with children aged 12–17 (compared to living alone) was associated with an increase in anxiety, while perceiving mental health to have worsened during the first lockdown (compared to staying the same) was associated with reduced anxiety. Perceiving mental health to have worsened initially (compared to staying the same) was also associated with an increase in mental wellbeing.
In this study, we found that in the 12 months since the start of the UK COVID-19 lockdown restrictions, the psycho-social variables related to healthy eating (habit, importance, and confidence) worsened across time. This is a concern, especially considering that a greater proportion of participants perceived their diet had worsened (compared to improved) at the start of the first lockdown restrictions (23.1% versus 19.0%). The associations between the change in the strength of healthy eating habits and participants' self-rated health at baseline suggested that this negative impact may be more prevalent for participants in fair or poor physical health at the outset. It is possible that some participants felt more confident in their ability to eat healthily when lockdown restrictions were tighter, when they had more time and opportunity for cooking healthy meals and there were fewer opportunities to eat out in social settings. Our data also indicated that participants living with secondary-school-aged children experienced worsening anxiety relative to people with younger or no children, who showed no change.
Early research during the COVID-19 pandemic suggested that consumption of unhealthy food, out-of-control eating, and snacking between meals all increased during the initial COVID-19 lockdown measures [14]. In contrast, we found that healthy eating habit strength, and the importance of and confidence in eating healthily, dropped between lockdown and 12 months later. This is also supported by a short-term longitudinal study of Italian adults (N = 728) examining eating styles and behaviours between April 2020 (during lockdown) and June 2020 (after lockdown). The researchers found that during lockdown, participants reported an increase in healthy food consumption and involvement in cooking, and a decrease in junk food consumption [33]. In the post-lockdown period, participants cut down their healthy food consumption and their involvement in food preparation but continued to reduce their junk food intake [33]. Time constraints and lack of willpower are well-known barriers to healthy eating [34]; removing these barriers may therefore result in healthier eating habits. However, when these barriers were restored as lockdown restrictions were eased, the opportunity for unhealthy habits to return increased. Our finding that higher perceived importance of and confidence in healthy eating was associated with weaker habit at 12 months was unexpected, as this contrasts with the usual direction of effect; further work to explore hypotheses for this pattern is warranted. Among our sample, there was variation in the degree to which participants believed their physical activity behaviour to have changed at the onset of COVID-19 lockdown restrictions, with approximately one third of participants perceiving their physical activity to have increased, stayed the same, or decreased, respectively. This somewhat contradicts some of the earlier published studies that observed reductions in physical activity at the start of the pandemic [14][15][16].
While many recreational and sports facilities were closed at the onset of the pandemic, which limited activity choice and opportunities, government messaging highlighted exercise as one of the only permissible reasons for leaving the house, which may have increased motivation to be active for some individuals [35]. Similar to our findings, an international cross-sectional survey study (N = 13,696) conducted in March-May 2020 found that 44.2% of participants reported no change, 23.7% reported a decrease, and 31.9% reported an increase in their exercise frequency during the COVID-19 pandemic [36]. The authors also developed a prediction model to estimate changes in exercise frequency in future lockdowns, with results suggesting that those who rarely exercise before a lockdown tend to increase their exercise frequency during it, while frequent exercisers tend to maintain theirs [36]. This variation in behaviour may explain why we found no difference in physical activity between baseline (during the first lockdown) and twelve months later (when restrictions had started to be eased); we did not have a pre-lockdown measure of physical activity to enable us to test this interaction. Future longitudinal research would be useful to explore how the pandemic and the subsequent lockdown restrictions have had differential effects on the physical activity behaviour of specific population sub-groups, to ensure interventions can be appropriately targeted.
The impact of COVID-19 lockdown restrictions on mental health has been a major concern [37], with research suggesting that levels of mental distress, anxiety, and depression increased at the onset of the pandemic [19-21, 33]. A cross-sectional study of UK adults (N = 3097) measuring mental health at the start of the pandemic (April 2020) found that 31.6% reported moderate-to-severe levels of depression and 26% reported moderate-to-severe levels of anxiety [38]. While levels of depression and anxiety were slightly lower in the present study (25.3% and 20.5%, respectively), both studies indicate that mean levels of depression and anxiety at the start of the COVID-19 pandemic exceeded previously published population norms [39,40]. Our study extends these data by demonstrating that levels of depression were consistent within our sample, but anxiety and mental wellbeing appeared to improve across time, suggesting that any negative effects of the pandemic on mental health may be reversible. Indeed, compared to those who perceived their mental health had stayed the same, participants who perceived their mental health had initially worsened during the first COVID-19 lockdown were more likely to report improvements in anxiety and mental wellbeing at the twelve-month follow-up. This 'bounce-back' effect for anxiety and mental wellbeing may have been due to the easing of restrictions, which enabled increased freedom to see family and friends, participation in hobbies, and a return to work for some individuals (lessening financial insecurity). There is a well-established link between physical activity and mental health which has remained during the COVID-19 pandemic [41]. However, the present study suggests that, given the lack of change in physical activity over 12 months, improvements in anxiety and mental wellbeing were not driven by physical activity.
A previous study in Canada showed that walking and exercise were cited among the top four activities that people engaged in during the COVID-19 pandemic meaning that people continued to find ways to be active, despite restricted opportunities [42].
However, our findings do suggest that anxiety deteriorated for people living with 12-17 year old children. Reasons for this warrant further investigation, including whether this reflects parents' concerns about the continued disruption of education that persisted throughout the 12 months following the first lockdown, their anxiety at having to manage home-education while fulfilling their own work commitments, or other factors such as concern about the long-term impact of the ongoing restrictions on their children's health and wellbeing.
Similar findings have been shown internationally. A large population-based survey study in China (N = 105,248) found that the prevalence of being at high risk for mental disorders decreased from 25.8% when lockdown restrictions were in place (early February 2020) to 20.9% when most COVID-19 restrictions were eased (mid-March 2020) [43]. However, it is still unknown whether this 'bounce-back' effect is present across all population sub-groups, or whether the mental health of certain groups remains negatively impacted by the COVID-19 pandemic.
Strengths and limitations
Strengths of this study include the longitudinal design, with data collected during the first UK COVID-19 lockdown restrictions and twelve-months later at the same time of year, overcoming the issue of the seasonal variation in physical activity, food intake, and mental health [44][45][46][47]. This study measured multiple domains of lifestyle behaviours and mental health using validated measures [23,27,28], as well as relevant psychosocial factors and demographic variables, allowing us to explore which groups were most susceptible to change in lifestyle behaviours.
The study sample was relatively homogeneous, being primarily female and of White British origin from South-West England, which limits the ability to extrapolate to other ethnic groups in more diverse areas of the UK. Only one quarter of the baseline sample completed the twelve-month follow-up survey. We were not able to analyse diet behaviour directly. However, healthy eating habit was included as a proxy for diet behaviour because it has previously been found to be strongly correlated with dietary behaviour [26]. While the IPAQ-SF has been found to have acceptable levels of validity and reliability [23], it typically overestimates physical activity behaviour [48], which may explain the high levels of physical activity among our sample. A further limitation of this study is that sleep was not assessed. Sleep quality has been positively associated with better mental health [49][50][51]. Furthermore, physical activity has been shown to benefit sleep quality and quantity [52]. Our sample showed higher than average levels of physical activity. However, we were unable to test whether their activity levels were associated with sleep and, in turn, mental health. We were also not able to capture lifestyle behaviours and mental health prior to the onset of the pandemic; thus we were reliant on participants' perceptions of how these variables changed at the onset of the COVID-19 lockdown measures, which provided us with an indication of the direction but not the magnitude of change. Finally, this study only took a snapshot of two points in time, which does not fully reflect the fluctuating nature of pandemic lockdowns over time.
Further follow-up and monitoring of diet, physical activity and mental health is needed to understand the long-term impact of the COVID-19 lockdown restrictions both in the UK and worldwide. More longitudinal studies are needed to investigate the factors associated with change in lifestyle behaviours and mental health, to highlight whether there are any specific population subgroups who have been particularly negatively impacted by the COVID-19 lockdown restrictions. Such data will help to identify relevant interventions and/or government policies that could be developed and implemented to combat any negative impacts of the COVID-19 pandemic and ensure that any positive impacts are capitalised on. Finally, more qualitative studies are needed to provide further insight into some of the key drivers of health behaviours both during and after lockdown.
Conclusions
To our knowledge, this is one of the first studies to report twelve-month follow-up data on the longitudinal impact of the UK COVID-19 lockdown measures on lifestyle behaviours and mental health. We provide evidence that healthy eating habits worsened in the 12 months since the pandemic started, while anxiety and mental wellbeing improved. Participants were more confident in their ability to eat healthily when lockdown restrictions were tighter, potentially due to increased opportunities for home cooking and fewer opportunities to eat out. Participants who perceived their mental health had worsened at the start of the lockdown restrictions were more likely to report positive changes in their level of anxiety and mental wellbeing twelve-months later, suggesting there may be a 'bounce-back' effect as restrictions were eased. More longitudinal research is needed into how lifestyle behaviours and mental health have changed since the start of the pandemic, and the factors associated with change, so that effective interventions and government policies can be developed and deployed. | 2022-08-05T13:23:00.866Z | 2022-08-05T00:00:00.000 | {
"year": 2022,
"sha1": "ba82be1ef7af1da00b0f147f2b43f56b371c5b7f",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "ba82be1ef7af1da00b0f147f2b43f56b371c5b7f",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
246026645 | pes2o/s2orc | v3-fos-license | Irisin Promotes Cardiac Homing of Intravenously Delivered MSCs and Protects against Ischemic Heart Injury
Abstract Few intravenously administered mesenchymal stromal cells (MSCs) engraft to the injured myocardium, thereby limiting their therapeutic efficacy for the treatment of ischemic heart injury. Here, it is found that irisin pretreatment increases the cardiac homing of adipose tissue‐derived MSCs (ADSCs) administered by single and multiple intravenous injections to mice with MI/R by more than fivefold, which subsequently increases their antiapoptotic, proangiogenic, and antifibrotic effects in rats and mice that underwent MI/R. RNA sequencing, Kyoto Encyclopedia of Genes and Genomes (KEGG) signaling pathway analysis, and loss‐of‐function studies identified CSF2RB as a cytokine receptor that facilitates the chemotaxis of irisin‐treated ADSCs in the presence of CSF2, a chemokine that is significantly upregulated in the ischemic heart. Cardiac‐specific CSF2 knockdown blocked the cardiac homing and cardioprotective abilities of intravenously injected irisin‐treated ADSCs in mice subjected to MI/R. Moreover, irisin pretreatment reduced hydrogen peroxide‐induced apoptosis of ADSCs and increased their paracrine proangiogenic effect. ERK1/2‐SOD2 and ERK1/2‐ANGPTL4 signaling are responsible for the antiapoptotic and paracrine angiogenic effects of irisin‐treated ADSCs, respectively. Integrin αV/β5 is identified as the irisin receptor in ADSCs. These results provide compelling evidence that irisin pretreatment can be an effective means to optimize intravenously delivered MSCs as a therapy for ischemic heart injury.
For the MI/R rat model, adult male and female Sprague-Dawley rats were randomly assigned to the following groups: Sham; MI/R+vehicle; MI/R+rADSCvehicle; and MI/R+rADSC-irisin.
ADSC preparation
ADSCs were isolated from adult male C57BL/6J mice and adult male Sprague-Dawley rats as we previously described [3] . Briefly, inguinal subcutaneous adipose tissue was removed under anesthesia. The adipose tissue was rinsed several times with phosphate buffered saline (PBS). Blood vessels were excised under a dissection microscope. The remaining adipose tissue was cut into fine pieces, digested with 0.1% collagenase type I at 37°C for 60 minutes, and centrifuged at 600 g for 10 minutes.
After red blood cell lysis with 1× lysis buffer, the cells were cultured in a 1:1 mixture of Dulbecco's modified Eagle's medium (DMEM) and F12 medium containing 10% fetal bovine serum (FBS) and penicillin-streptomycin. Six hours after cell plating, the medium was changed to remove nonadherent cells. Adherent cells were cultured in DMEM-F12-10% FBS and split several times for expansion. Cells from passages 2-3 were used in all experiments.
Adenovirus construction and transfection
The recombinant FNDC5 adenovirus was constructed by Likely Biotechnology (Beijing, China). Cells were infected with the FNDC5 adenovirus or adenovirus containing empty plasmids (control) for 24 hours at a multiplicity of infection (MOI) of 20-200. The medium was then replaced with fresh medium, and the cells were cultured for another 24 hours.
Adeno-associated virus serotype-9 construction and intramyocardial injection
Adeno-associated virus serotype-9 carrying CSF2 shRNA (AAV9-CSF2-shRNA) or scramble RNA (AAV9-control) was purchased from GenePharma. One month before the MI/R operation, AAV9-CSF2-shRNA or AAV9-control (2.6×10 10 vector genomes per mouse) was intramyocardially injected into the left ventricle free wall at three different sites. The experiments were performed by an investigator who was blinded to the group allocations.
Small interfering RNA (siRNA)-mediated gene knockdown
ANGPTL4 siRNA, CSF2RB siRNA, SOD2 siRNA, and their scramble RNAs were purchased from GenePharma. When ADSCs reached 80% confluence, siRNAs were transfected into the cells with RNAiMAX Transfection Reagent (Thermo Fisher Scientific, 13778075) according to the manufacturer's protocol (final siRNA concentration: 100 nM). After 8 hours of incubation (37°C), the transfection reagent-siRNA mixture was replaced with fresh growth medium. Successful knockdown was confirmed by Western blot analysis of the ANGPTL4, CSF2RB, and SOD2 protein levels.
ADSC labeling
Cultured ADSCs were labeled with the lipophilic dye CM-DiI (5 µM in DMEM-F12-10% FBS) for 20 minutes immediately before intravenous injection. ADSCs were also stained with cell plasma membrane staining kit with DiO, according to the manufacturer's protocol.
Flow cytometric analysis of the cardiac homing of ADSCs
The cardiac homing of ADSCs was evaluated by flow cytometry 1 day after the fifth intravenous injection (30 days after MI/R). Mice were heparinized (10 IU/g body weight), and 5 minutes later, they were anesthetized with an intraperitoneal injection of ketamine/xylazine mixture (100 mg/kg ketamine plus 10 mg/kg xylazine). Single cells were then enzymatically isolated from the hearts in 0.1% collagenase type II (30 minutes at 37°C) as we previously described [4] . A 'myocyte-depleted' cardiac cell population was prepared by filtering the single cells through a 40-μm mesh and was then stained with a PE/Cy5 anti-mouse/rat CD29 antibody (#102219, Biolegend). A PE/Cy5 Armenian hamster IgG isotype control antibody (#400909, Biolegend) was used as the negative control. The cardiac homing of ADSCs (CM-DiI+/CD29+) was determined with a flow cytometer (Epics MCL, Beckman).
The in vivo competitive homing assay
The competitive homing assay [5] was performed after a single intravenous injection (1 day after MI/R), and the distribution of ADSCs in organs was evaluated by flow cytometry. Vehicle-treated ADSCs (ADSC-vehicle) were labeled with DiO (a green dye), and irisin-treated ADSCs (ADSC-irisin) were labeled with CM-DiI (a red dye). Equal numbers of ADSC-vehicle-DiO and ADSC-irisin-DiI cells were then mixed before being intravenously injected into the post-MI/R mice. An aliquot of the cell mixture was kept and analyzed by flow cytometry. The mice were sacrificed 1 day after intravenous injection (2 days after MI/R). The hearts, lungs, and spleens were removed from the mice. The hearts were processed to obtain the 'myocyte-depleted' cardiac cell population. The lungs were minced and then digested in 0.1% collagenase type II (20 minutes at 37°C). Spleens were homogenized into a single-cell suspension using PBS.
After red blood cell lysis with 1× lysis buffer, the lung cells and splenocytes were resuspended in PBS. Flow cytometry assays were performed to detect ADSC-vehicle-DiO and ADSC-irisin-DiI in the 'myocyte-depleted' cardiac cell population and single-cell suspensions of lung and spleen.
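The readout of such a competitive assay is usually summarized as the label ratio in each organ normalized to the label ratio in the injected mixture, so that a value of 1.0 means no homing advantage. A minimal sketch of that normalization (the function name and event counts below are hypothetical, not data from this study):

```python
def competitive_homing_index(dil_organ, dio_organ, dil_input, dio_input):
    """Ratio of irisin-treated (CM-DiI) to vehicle-treated (DiO) ADSCs
    recovered from an organ, normalized to the injected mixture so
    that 1.0 means no homing advantage for either population."""
    return (dil_organ / dio_organ) / (dil_input / dio_input)

# Hypothetical flow-cytometry event counts from one heart and from
# the retained aliquot of the injected 1:1 cell mixture.
index = competitive_homing_index(dil_organ=520, dio_organ=100,
                                 dil_input=1010, dio_input=1000)
print(round(index, 2))  # >5: several-fold cardiac homing advantage
```

Normalizing to the retained input aliquot corrects for any imbalance in the "equal" mixture before injection.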
Echocardiography
M-mode images of mice and rats subjected to 1-2% isoflurane anesthesia were obtained via a VisualSonics 770 echocardiography machine (Canada) as previously described [6]. Hearts were viewed along the long and short axes between the two papillary muscles. The LV end-systolic diameter (LVESD) and LV end-diastolic dimension (LVEDD) were measured, and the LVEF was automatically calculated by the echocardiography software. Measurements were performed for at least five separate cardiac cycles per mouse.
Echocardiography was performed by a single experienced operator in a blinded fashion.
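The formula applied by the software is not reproduced in the text; M-mode packages commonly estimate LVEF from the two diameters under the cube assumption (chamber volume ∝ diameter³), i.e. LVEF = (LVEDD³ − LVESD³)/LVEDD³ × 100. A sketch of that assumed calculation with hypothetical diameters:

```python
def lvef_cube(lvedd, lvesd):
    """LV ejection fraction (%) from M-mode diameters, assuming
    chamber volume scales with the cube of the cavity diameter."""
    return (lvedd**3 - lvesd**3) / lvedd**3 * 100.0

# Hypothetical diameters (mm): a post-MI/R ventricle with a larger
# end-systolic diameter yields a lower computed ejection fraction.
print(round(lvef_cube(lvedd=4.0, lvesd=2.5), 1))  # healthy-like heart
print(round(lvef_cube(lvedd=4.5, lvesd=3.8), 1))  # post-MI/R-like heart
```

Whatever correction the vendor applies, LVEF depends only on the ratio LVESD/LVEDD in this class of formulas, which is why consistent cursor placement across cardiac cycles matters.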
Evaluation of angiogenesis
For the evaluation of angiogenesis, heart sections were deparaffinized and subjected to antigen retrieval in hot citric acid buffer. After cooling, the slides were permeabilized with 0.2% Triton X-100 for 15 minutes, blocked with 1% BSA in PBS for 2 hours, and incubated with an anti-CD31 primary antibody at 4°C overnight (#GB13063, Servicebio). A donkey anti-goat antibody conjugated with CY3 (#GB21404, Servicebio) served as the secondary antibody. Nuclei were stained with 4',6-diamidino-2-phenylindole (DAPI, GB1012, Servicebio). Images of immunostained sections were acquired with a Nikon Eclipse C1 microscope and Nikon DS-U3 camera. The myocardial capillary density was quantified by Image-Pro Plus 6.0 software (Media Cybernetics). The representative image for each group was selected based upon the mean value.
Determination of apoptosis
ADSC apoptosis was determined by TUNEL staining with a One Step TUNEL Apoptosis Assay Kit (Beyotime, C1090). Images were acquired with a Nikon Eclipse Ni microscope and Nikon DS-Ri2 camera. Apoptosis of cardiomyocytes in heart tissue was determined with a Roche In Situ Cell Death Detection Kit (Sigma, 11767305001 and 11767291910) according to the manufacturer's protocol. Images were acquired with a Nikon Eclipse C1 microscope and Nikon DS-U3 camera. The index of apoptosis was determined by the number of TUNEL-positive nuclei/total nuclei. The representative image for each group was selected based on the mean value.
Masson's trichrome staining
Hearts were harvested from anesthetized mice and rats and embedded in paraffin.
The heart tissue extending from just distal to the coronary ligation point to the apex was separated into different segments at 200 µm intervals. Serial 5-µm-thick sections were obtained from each segment for Masson's trichrome staining according to the manufacturer's protocol (Sigma, HT15). Microscopic images of mouse heart sections were obtained with a 1.25× object lens (Nikon, Japan). Images of rat heart sections were acquired with a Nikon Eclipse C1 microscope and Nikon DS-U3 camera. For quantification, measurements of 5 transverse heart sections were analyzed. Fibrotic size was determined as the average ratio of the fibrotic area to the LV area (fibrotic size %) with Image-Pro Plus 6.0 software (Media Cybernetics).
CM collection
We employed a modified method to prepare the CM from ADSCs [3]. CM was generated as follows: after growth to 90% confluence in 6-well dishes, ADSCs were pretreated with 100 ng/mL irisin or vehicle for 1 day. The culture medium was washed out and replaced with serum-free DMEM/F12 medium. Twenty-four hours later, the serum-free DMEM/F12 medium was collected and centrifuged at 1000 g for 10 minutes to obtain the supernatant (CM).
In vitro cardiomyocyte apoptosis assay
Both primary NRVCs and iPSC-CMs were used for the in vitro cardiomyocyte apoptosis assay. Primary cultures of NRVCs from 1-to 2-day-old Sprague-Dawley pups were prepared as described previously [7] . Human iPSC-CMs were purchased from HELP Therapeutics (#NC20010435). The differentiation and preparation of iPSC-CMs have been described previously [8] . For the evaluation of cardiomyocyte apoptosis, cultured NRVCs and iPSC-CM were subjected to 6 hours of H2O2 (200 µM) before treatment with recombinant irisin (100 ng/mL), ANGPTL4 (2 μg/mL), or CM derived from ADSCs.
Capillary-like tube formation assay
rCAECs were purchased from Procell (CP-R081, Wuhan, China) and cultured in low-glucose DMEM containing endothelial cell growth supplement (#211-GS, Cell Applications), 10% FBS, and penicillin-streptomycin. For the tube formation assay, Matrigel (BD Biosciences) was added to each well of a 48-well plate and allowed to polymerize at 37°C for 30 minutes. rCAECs were treated with CM derived from ADSCs for 24 hours. Images of the formation of capillary-like structures were obtained by computer-assisted microscopy. The total length per field was calculated from five random fields.
Immunohistochemistry
For fixed tissues, wax blocks were cut into 5-μm-thick sections and mounted on glass slides for staining. The slides were deparaffinized and subjected to antigen retrieval in hot citric acid buffer. After cooling, the slides were permeabilized with 0.2% Triton X-100 for 15 minutes, blocked with 1% BSA in PBS for 2 hours, and incubated overnight at 4°C with an anti-troponin T mouse monoclonal antibody (Thermo Fisher Scientific, MS-295-P0) (1/1,000). The primary antibody was visualized with a donkey anti-mouse IgG (H+L) secondary antibody conjugated with Alexa Fluor 488 (A77440, Yeasen, China). The frozen sections were fixed in acetone at 4°C for 15 minutes. The sections were blocked with PBS containing 1% BSA at room temperature. Sections were then incubated overnight at 4°C with an anti-TNNT2 rabbit polyclonal antibody (Affinity Biosciences, DF6261). The primary antibody was visualized with a goat anti-rabbit IgG (H+L) secondary antibody conjugated with Alexa Fluor 488 (33106ES60, Yeasen, China). Nuclei were stained with DAPI (GB1012, Servicebio). Images were acquired with a Nikon Eclipse C1 microscope and Nikon DS-U3 camera.
RNA sequencing analysis
Differential gene expression analysis was performed using RNAseq at Shanghai Biotree Biological Technology [9] . After treatment with vehicle or irisin for 24 hours, total RNA was extracted from ADSCs via an RNeasy Mini Kit (Qiagen, 74106) according to the manufacturer's protocol. RNA purity was assessed using a NanoPhotometer® spectrophotometer (Implen). RNA integrity was assessed using the RNA Nano 6000 Assay Kit on the Bioanalyzer 2100 system (Agilent Technologies).
A total of 1 µg of RNA per sample was used as input material for the RNA sample preparations. Sequencing libraries were generated using the NEBNext® Ultra™ RNA Library Prep Kit for Illumina® (NEB, USA) according to the manufacturer's recommendations, and index codes were added to attribute sequences to each sample.
Briefly, mRNA was purified from total RNA using poly-T oligo-attached magnetic beads. Fragmentation was carried out using divalent cations under elevated temperature conditions in NEBNext First Strand Synthesis Reaction Buffer (5X). First-strand cDNA was synthesized using random hexamer primers and M-MuLV Reverse Transcriptase (RNase H-). Second-strand cDNA synthesis was subsequently performed using DNA Polymerase I and RNase H. The remaining overhangs were converted into blunt ends via exonuclease/polymerase activities. After adenylation of the 3' ends of the DNA fragments, an NEBNext adaptor with hairpin loop structures was ligated to prepare for hybridization. For the preferential selection of cDNA fragments 250-300 bp in length, the library fragments were purified with the AMPure XP system (Beckman Coulter, Beverly, USA). Then, 3 µL of USER Enzyme (NEB, USA) was incubated with size-selected, adaptor-ligated cDNA for 15 minutes at 37°C and then at 95°C for 5 minutes prior to performing PCR. Then, PCR was performed with Phusion High-Fidelity DNA polymerase, universal PCR primers and Index (X) Primer. Finally, the PCR products were purified (AMPure XP system), and library quality was assessed on the Agilent Bioanalyzer 2100 system. The library preparations were sequenced on an Illumina NovaSeq platform, and 150 bp paired-end reads were generated. Differential expression analysis was performed using the DESeq2 R package (1.16.1). DESeq2 provides statistical routines for determining differential expression among digital gene expression data using a model based on the negative binomial distribution. The resulting P-values were adjusted using Benjamini and Hochberg's approach for controlling the false discovery rate. Genes with an adjusted P-value <0.05 found by DESeq2 were assigned as differentially expressed.
Gene Ontology (GO) enrichment analysis of differentially expressed genes was implemented by the clusterProfiler R package, during which gene length bias was corrected. GO terms with corrected P values less than 0.05 were considered to be significantly enriched by the differentially expressed genes. The KEGG database is a resource for understanding the high-level functions and utilities of biological systems, such as cells, organisms and ecosystems, based on molecular-level information, especially from large-scale molecular datasets generated by genome sequencing and other high-throughput experimental technologies (http://www.genome.jp/kegg/). We used the clusterProfiler R package to assess the statistical enrichment of differentially expressed genes in KEGG pathways.
The thermal cycling conditions were as follows: denaturation at 95°C for 5 minutes, followed by 40 cycles of 10 s at 95°C, 20 s at 55°C, and 20 s at 72°C.
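The Benjamini-Hochberg adjustment mentioned above can be written in a few lines. This standalone sketch mirrors what DESeq2 (via R's `p.adjust(method = "BH")`) does: each P-value is scaled by m/rank and the result is made monotone non-decreasing, working from the largest P-value downward:

```python
def bh_adjust(pvalues):
    """Benjamini-Hochberg adjusted P-values: p * (m / rank), with a
    running minimum from the largest P-value down so the adjusted
    values are monotone and capped at 1.0."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])  # ascending p
    adjusted = [0.0] * m
    running_min = 1.0
    for rank_from_top in range(m - 1, -1, -1):  # largest p first
        i = order[rank_from_top]
        value = pvalues[i] * m / (rank_from_top + 1)
        running_min = min(running_min, value)
        adjusted[i] = running_min
    return adjusted

# Genes with adjusted P < 0.05 would be called differentially expressed.
print(bh_adjust([0.01, 0.04, 0.03, 0.50]))
```

With these four raw P-values only the first gene survives the 0.05 cutoff, illustrating how the adjustment is stricter than the raw threshold.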
Western blot analysis
Total proteins were isolated from cells or heart tissues with 1× lysis buffer (CST #9803) supplemented with a protease inhibitor cocktail (Thermo Fisher Scientific, 78438). A total of 30-70 μg of protein per sample was separated via gel electrophoresis, transferred to a polyvinylidene fluoride membrane, and blocked with 5% milk for 1 hour. The membrane was incubated overnight with primary antibodies at 4°C. After incubation with a secondary HRP-conjugated anti-mouse antibody (Abbkine, A21010, 1/10,000) or anti-rabbit antibody (Abbkine, A21020, 1/10,000) at room temperature for 2 hours, the membranes were exposed to enhanced chemiluminescent (ECL) substrate.
Table I: Genes of the cytokine-cytokine receptor interaction pathway | 2022-01-18T06:17:23.346Z | 2022-01-17T00:00:00.000 | {
"year": 2022,
"sha1": "2efc4e8838eab3082b9af941fdc6d3a49cb22467",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/advs.202103697",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4ccb4522a30994dec48d750fd728f20fa82bd901",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
112729695 | pes2o/s2orc | v3-fos-license | Studies to evaluate the impact of tamper on the depth of improvement in dynamic compaction
Ground improvement techniques are widely adopted in geotechnical engineering practice for improving the strength and density of soil and/or reducing its drainage characteristics. Among the various options available for improving soil, dynamic compaction (also referred to as impact densification, heavy tamping, and dynamic consolidation) has evolved over the past decade into a widely accepted method for treating poor soils in situ. This method is often an economically attractive alternative that allows the use of shallow foundations and the preparation of subgrades for construction, as compared to other, more expensive conventional solutions such as pile foundations, excavation and replacement, densification, etc. Moreover, dynamic compaction has some unique applications, including treatment of reclaimed land and heterogeneous fill materials, liquefaction mitigation, displacement of unsuitable materials such as peat, and collapse of sinkholes. In general, the ultimate goals of dynamic compaction are to increase the bearing capacity of the soil and decrease the total and differential settlements within a specified depth of improvement. To date, the effective depth of improvement achieved through this technique has been restricted to about 5 m of soil. To increase the effectiveness of dynamic compaction, the soil condition and the energy configuration (which is decided by the surface area and the shape of the tamper used) have to be taken into account. In the present paper, an attempt has been made to investigate the influence of tamper base area on the depth of improvement during dynamic compaction of sandy soil. For this purpose, an innovative dynamic compaction set-up was developed in the laboratory for carrying out small-scale physical model tests on low energy dynamic compaction using circular steel tampers of three diameters (50 mm, 75 mm and 100 mm). 
This paper describes details of the dynamic compaction set-up developed and its advantages over other compaction set-ups developed to date for evaluating the dynamic compaction technique in the laboratory. In general, it was observed that the width of the area influenced by dynamic compaction is approximately 2.5 times the tamper diameter. However, the tamper base area was found to exhibit only a marginal influence on the depth of improvement, provided the impact energy intensity was kept constant in all three tests.
INTRODUCTION
The densification of loose soils by falling weights dates back to antiquity. The first known published reference on the subject involved a site in Germany. Not until 1969, however, was the technique finally promoted by Louis Menard as a routine method of site improvement. During dynamic compaction, repeated impacts are imparted to granular soil by means of a heavy weight hitting the ground surface, causing the soil particles to be rearranged into a denser state. This method is used for different types of civil engineering projects, including building structures, coal facilities, dockyards, highways and airports. In recent years, dynamic compaction has also emerged as an economically attractive method of ground remediation for loose cohesionless soils as a part of liquefaction hazard mitigation technique and densifying municipal solid waste landfills.
Full-scale tests and case studies on the use of dynamic compaction in the field have been reported by researchers such as Mayne et al. (1984) and Kumar Bonab and Zare (2014). However, some laboratory set-ups involve inserting steel rods into the soil to guide the falling mass, which contradicts actual field conditions. Moreover, the tamper is made to fall from a considerable height to impart the required energy to the soil surface, making the overall set-up heavy and cumbersome. In the present paper, low energy dynamic compaction was simulated in the laboratory using circular steel tampers of three diameters (50 mm, 75 mm and 100 mm). The purpose is to determine the effect of tamper base area on dynamic compaction while keeping the energy intensity (impact energy applied per unit area of affected soil) constant. A low energy compaction process was chosen because, in the field, studying dynamic compaction is time-consuming, costly and difficult owing to heterogeneous soils and problems related to instrumentation and data acquisition. An innovative dynamic compaction set-up was developed in the laboratory incorporating a metallic spring, so as to utilize the potential energy stored by the spring by virtue of its stiffness. This additional potential energy contributed a major share of the total energy to be imparted to the soil, which in turn reduced the required height of fall of the tampers, making the set-up compact and robust. Details of the developed set-up, and its advantages over existing set-ups with respect to ground improvement, are discussed in subsequent sections along with analyses of the model test results using GeoPIV as outlined in White et al. (2003).
MODEL SOIL USED IN THE STUDY
The sand used in the present study completely passed through the BSS 36 sieve (0.425 mm) and was retained on the BSS 200 sieve (0.075 mm). Based on its grain size distribution, the sand is classified as SP according to the Unified Soil Classification System (USCS). The properties of the sand as determined in the laboratory are tabulated in Table 1.

Figure 1 shows the cross-section of a model test package along with the developed dynamic compaction test set-up. Model tests were conducted in a container with internal dimensions of 720 mm (length) × 450 mm (breadth) × 410 mm (height). The front wall is a thick Perspex sheet, enabling the front elevation of the model to be viewed during the test. The developed set-up consists of a 'spring-mass' system to guide the tamper mass as it is lifted and dropped on the soil with a certain impact velocity. The spring-mass system is supported on a C-frame, which is fixed to the top of the container (Figure 1). Figure 2 illustrates the different components of the spring-mass system and the three tampers used in the study. The tampers are semicircular plates attached to a square rod (Figure 2), which passes through the hollow guide rod fixed at the base of the bottom plate (Figure 1). The lower end of the spring is welded to the bottom plate, which is rigidly fixed to the C-frame, while the upper, movable end is welded to the top plate and guided by steel rods. The spring was designed with a stiffness (k) of 4423 N/m, wire diameter (d) of 6 mm, mean coil diameter (dm) of 6.6 mm, pitch of 6 mm and 10 active coils (n). Figure 3 depicts the working mechanism of the developed dynamic compaction test set-up. As shown in Fig. 3, a steel wire is attached to the end of the rod holding the tamper. As the tamper is pulled up vertically by a distance h′, the rod touches the movable top plate of the spring and pushes it upwards.
This causes the spring to extend by the same distance h′ that the tamper traverses. After attaining h′, the steel wire is released, causing the tamper, along with the hollow square rod, to fall under the combined action of gravity (potential energy mgh′) and the restoring force of the extended spring (stored energy 0.5kh′²). In conventional set-ups, as the drop height increases, the length of the guiding steel rods must increase, making the overall arrangement costly and cumbersome. To overcome these difficulties, the metallic spring was introduced in the present set-up to contribute, by virtue of its stiffness, a major share of the overall energy to be imparted to the soil surface. The tamper derives a small fraction of its required energy from gravitational potential energy (mgh′) and the major fraction from the potential energy stored in the spring (0.5kh′²), thereby reducing the drop height from h to h′. Hence, the present set-up is more compact and robust, and can subsequently be used for small-scale physical model testing at higher gravities, where constrained space poses a serious issue.
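The energy bookkeeping above can be sketched numerically. A tamper raised by h′ stores mgh′ of gravitational potential energy plus 0.5kh′² in the extended spring, so a gravity-only rig would need a drop height of h = h′ + kh′²/(2mg) to deliver the same impact energy. The spring stiffness is the reported 4423 N/m; the 2 kg tamper mass and h′ = 0.10 m are illustrative assumptions, not values from the paper.

```python
G_ACC = 9.81  # gravitational acceleration, m/s^2

def impact_energy(m, k, h_prime):
    """Energy (J) delivered by a spring-assisted drop through h_prime (m)."""
    return m * G_ACC * h_prime + 0.5 * k * h_prime**2

def equivalent_drop_height(m, k, h_prime):
    """Drop height (m) a gravity-only set-up would need for the same energy."""
    return impact_energy(m, k, h_prime) / (m * G_ACC)

m, k, hp = 2.0, 4423.0, 0.10   # kg, N/m, m (mass and h' are assumed values)
E = impact_energy(m, k, hp)
print(f"impact energy = {E:.1f} J, "
      f"gravity-only drop needed = {equivalent_drop_height(m, k, hp):.2f} m")
```

With these numbers the spring supplies over 90% of the roughly 24 J impact energy, so a 0.10 m lift replaces a free drop of about 1.23 m, which is why the spring keeps the set-up compact.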
DETAILS OF DEVELOPED DYNAMIC COMPACTION SET-UP
The present set-up can simulate the falling of a tamper with energy up to 30 N·m. By using a spring of higher stiffness, higher energies can be imparted to the soil surface.
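The reported spring parameters can be cross-checked against the standard close-coiled helical spring formula k = Gd⁴/(8D³n). One caveat: a mean coil diameter of 6.6 mm is geometrically implausible for a 6 mm wire, and the formula reproduces the reported 4423 N/m almost exactly if the mean diameter is instead 66 mm; both the steel shear modulus of 79 GPa and that reading of the dimension are our assumptions, not statements from the paper.

```python
# Cross-check of the reported spring stiffness using the standard
# close-coiled helical spring formula. Assumptions: shear modulus
# G = 79 GPa (typical spring steel), and a mean coil diameter of
# 66 mm rather than the reported 6.6 mm, which appears to be a typo
# (6.6 mm barely exceeds the 6 mm wire diameter, and only 66 mm
# reproduces the reported stiffness of ~4423 N/m).

def spring_stiffness(G, d, D, n):
    """Stiffness (N/m): G shear modulus (Pa), d wire diameter (m),
    D mean coil diameter (m), n number of active coils."""
    return G * d**4 / (8 * D**3 * n)

k = spring_stiffness(G=79e9, d=0.006, D=0.066, n=10)
print(f"k = {k:.0f} N/m")  # within about 1% of the reported 4423 N/m
```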
TEST PROCEDURE
Three tests were performed by raising and dropping the semicircular steel tampers (T1, T2 and T3) 15 times on the surface of the sand deposit in each test. For all tests, the actual drop height (h) was kept constant at 0.55 m, while the tamper masses were varied such that the energy intensity (impact energy divided by the base area of each tamper) was the same in each test. After each impact on the model surface, a digital image of the deformed soil was captured using a Nikon digital camera with a resolution of 3072 × 2304 pixels. Figure 4 presents the front view of the deformed soil surface captured at different stages of tamping for Test 1. To ensure adequate illumination, two fluorescent lights were placed to the left and right of the camera, at a level higher than its optical axis. Details of the tests performed are presented in Table 2.
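The constant-energy-intensity condition can be sketched numerically: holding the energy per unit base area fixed means the required impact energy, and hence the tamper mass, grows with the square of the diameter. The 0.55 m drop height and the semicircular tamper shape are from the paper; the fixed spring contribution E_SPRING and the target intensity Q below are illustrative assumptions (the actual masses are those reported in Table 2, which is not reproduced here).

```python
import math

G_ACC, H_DROP = 9.81, 0.55   # m/s^2; drop height reported in the paper
E_SPRING = 4.0               # J, assumed fixed spring contribution per drop
Q = 6000.0                   # J/m^2, assumed target energy intensity

def tamper_mass(D):
    """Mass (kg) such that (m*g*h + E_SPRING) / base_area equals Q,
    for a semicircular tamper of diameter D (m)."""
    area = math.pi * D**2 / 8                 # semicircular base area, m^2
    return (Q * area - E_SPRING) / (G_ACC * H_DROP)

for D in (0.050, 0.075, 0.100):               # the three tamper diameters
    print(f"D = {D*1000:.0f} mm -> m = {tamper_mass(D):.2f} kg")
```

Doubling the diameter quadruples the base area, so under these assumptions the 100 mm tamper needs roughly ten times the mass of the 50 mm one once the fixed spring share is subtracted.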
RESULTS AND DISCUSSION
A typical image of the deformed soil surface captured after tamping (Test 2) is shown in Fig. 5, where heaving of soil along the periphery of the crater is clearly visible. The crater widths and depths recorded at various stages of tamping for the three tests are presented in Table 3. The variation of crater depth with the number of tamper drops is presented in Fig. 6, showing a steadily increasing trend.
Using the GeoPIV software, displacement vectors were obtained from the images of the Perspex-sheet plane captured during the tests; these were subsequently used to calculate strain contours following the method described in Hajialilue-Bonab and Rezaei (2009). Figure 7 shows the contours plotted at 1.5% strain for all three tests, each after 15 tamper blows. The 1.5% strain value was selected because this contour is considered to delimit the effective zone of influence of dynamic compaction (Hajialilue-Bonab and Rezaei, 2009). Figure 7 shows that the width of the influenced area is approximately 2.5 times the tamper diameter. The depth of improvement, however, is almost identical for all three tests, which may be attributed to the identical impact energy intensity used in each.
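The post-processing step can be sketched as follows: GeoPIV returns patch displacements on a regular grid, and strains follow from spatial gradients of that displacement field. The generic central-difference version below is not the exact scheme of Hajialilue-Bonab and Rezaei (2009), and the synthetic displacement field (a uniform 1.5% vertical strain, matching the contour level used above) is purely illustrative.

```python
import numpy as np

def strains(ux, uy, dx, dy):
    """Small-strain components from displacement grids.
    ux, uy: horizontal/vertical displacements (m) on a regular grid
    (rows vary with y, columns with x); dx, dy: grid spacing (m).
    Returns (eps_xx, eps_yy, gamma_xy)."""
    dux_dy, dux_dx = np.gradient(ux, dy, dx)   # gradients along y, then x
    duy_dy, duy_dx = np.gradient(uy, dy, dx)
    return dux_dx, duy_dy, dux_dy + duy_dx

# synthetic check: a uniform 1.5% vertical strain field
y = np.linspace(0.0, 0.1, 21)
x = np.linspace(0.0, 0.2, 41)
X, Y = np.meshgrid(x, y)
ux = np.zeros_like(X)
uy = 0.015 * Y                      # displacement grows linearly with depth
exx, eyy, gxy = strains(ux, uy, dx=x[1] - x[0], dy=y[1] - y[0])
print(round(float(eyy.mean()), 6))  # recovers 0.015 for this linear field
```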
CONCLUSIONS
In this paper, the design and development of the various components of a low energy dynamic compaction set-up for densification of loose, uniformly graded sandy soil is presented. The advantages of the developed set-up over previous dynamic compaction set-ups have been discussed, with special emphasis on the metallic spring incorporated in the design. Based on preliminary tests carried out, the developed set-up was found to give consistent results. It was then used to evaluate the influence of tamper base area on dynamic compaction. In general, it was observed that the width of the area influenced by dynamic compaction is approximately 2.5 times the tamper diameter. However, the tamper base area was found to have only a marginal influence on the depth of improvement, provided the impact energy intensity was kept constant in all three tests. Further tests are warranted to refine the set-up and to quantify the observed trends more fully. | 2019-04-14T13:07:05.652Z | 2016-01-31T00:00:00.000 | {
"year": 2016,
"sha1": "97454b5196e8b09527e649f6ea52ddb56a17166b",
"oa_license": null,
"oa_url": "https://www.jstage.jst.go.jp/article/jgssp/2/59/2_IND-20/_pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "dca0bae68a0fbe5c9e69dc27019287ffe63aa33a",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Geology"
]
} |
49866691 | pes2o/s2orc | v3-fos-license | Organic and inorganic carbon and their stable isotopes in surface sediments of the Yellow River Estuary
Studying the carbon dynamics of estuarine sediment is crucial to understanding the carbon cycle in the coastal ocean. This study evaluates the mechanisms regulating the dynamics of organic (TOC) and inorganic carbon (TIC) in surface sediment of the Yellow River Estuary (YRE). Based on data from 15 surface sediment cores, we found that TIC (6.3–20.1 g kg−1) was much higher than TOC (0.2–4.4 g kg−1). Both TOC and TIC were generally higher to the north than to the south, primarily due to differences in kinetic energy level (i.e., higher to the south). Our analysis suggested that TOC was mainly from marine sources in the YRE, except in the southern shallow bay where approximately 75% of TOC was terrigenous. The overall low levels of TOC were due to profound resuspension that could cause enhanced decomposition. On the other hand, high levels of TIC resulted partly from higher rates of biological production, and partly from decomposition of TOC associated with sediment resuspension. The isotopic signature of TIC implies that the latter process is dominant in forming TIC in the YRE, and that there may be transfer of OC to IC in the water column.
The rate of CO 2 build-up in the atmosphere depends on the rate of fossil fuel combustion and the rates of CO 2 uptake by the ocean and terrestrial biota. About half of anthropogenic CO 2 has been absorbed by the land and ocean. Large rivers that connect the land and ocean may play an important role in the global carbon cycle 1,2 . On the one hand, rivers can transport significant amounts of dissolved and particulate carbon from the land to the ocean, which are subject to recycling and sedimentation in estuaries, or to further transport to the marginal seas 3,4 . On the other hand, river waters may contain high levels of nutrients, which can enhance biological uptake of CO 2 and subsequent carbon burial in estuaries 5,6 .
The Yellow River, the second longest river in China after the Yangtze River, provides approximately 50% of the freshwater discharged into the Bohai Sea every year 7 . Studies of sedimentary organic carbon around the Yellow River Estuary (YRE) have mainly been conducted in the Yellow River Delta 1,8,9 and on the shelf of the Bohai Sea [10][11][12][13] . The limited studies available showed a large spatial variability (0.7 to 7.7 g kg −1 ) in total organic carbon (TOC) in the YRE 14 , with the highest contribution (40-50%) of terrestrial organic carbon near the delta 11 . However, little is known about TOC dynamics in the sediment of the transitional zone near the river mouth.
Only limited studies of inorganic carbon dynamics have been conducted in the YRE. An earlier study showed that particulate inorganic carbon (1.8% ± 0.2%) was significantly higher than particulate organic carbon (0.5% ± 0.05%) in the water column of the YRE 15 . A later analysis demonstrated that the rate of CaCO 3 precipitation was modestly higher than the rate of biological production in the waters of the estuary 16 . These findings suggest that more inorganic carbon (TIC) than TOC might accumulate in the sediment of the YRE, but there is no evidence to confirm this because little is known about the magnitude and variability of TIC in the YRE. On the other hand, recent studies have shown that there is a large amount of carbonate in the soils of the lower Yellow River Basin, and that high levels of carbonate are associated with high levels of organic carbon 17,18 . One might expect a similar phenomenon in the sediment of the YRE.
The sediment load of the Yellow River, the world's largest carrier of fluvial sediment, has decreased continually since the 1950s due to anthropogenic changes in water discharge and sediment concentration 19 . Meanwhile, climate change and human activities in the Yellow River basin have decreased the supply of fine sediment from the Loess Plateau and increased the scouring of coarse sediment from the lower river channel 20 . These changes may have profound impacts on the physical, biogeochemical and biological processes in the YRE. This study is the first to assess the dynamics of both TOC and TIC in the surface sediment of the YRE, focusing on the transitional zone near the river mouth 21 . The objective is to test the hypothesis that more TIC than TOC has accumulated in the sediment, and to explore the underlying mechanisms regulating the variability of TOC and TIC in the YRE.
Results
Physical characteristics. The sampling sites covered most parts of the YRE, with water depth ranging from 1.5 m to 13.5 m (Fig. 1a). Dry bulk density (DBD) ranged from 0.74 to 1.55 g cm−3, with an average of 1.02 g cm−3 (Table 1). Generally, DBD was much higher in the shallow water areas than in the deep water region, with high values occurring mainly on the south and north sides near the river mouth (Fig. 1b). Figure 2 shows the spatial distributions of the main granulometric variables of the surface sediment. In general, clay content was low, ranging from 1.4 to 10.8% (Table 1), with relatively higher values in the northern part than in the southern part. The highest clay content was found on the north side of the river mouth, and the lowest at the mouth section. Silt content was high (69.4 ± 21.1%), exhibiting a spatial distribution similar to that of clay. On the other hand, the highest sand content was found at the mouth (Fig. 2c), where clay and silt contents were lowest (Fig. 2a,b). As expected, the spatial distribution of d(0.5) was similar to that of sand, displaying the highest values in the shallow river mouth section and the lowest in the southern bay, indicating strong hydrodynamic effects in the former and weak effects in the latter.

Spatial distributions of TOC, TN, C:N and δ 13 C org . Concentration of TOC was highly variable, with higher values (3.2-4.4 g kg−1) in the northernmost section of the estuary and the eastern deep water area (Fig. 3a). There was also a high TOC value in the bay south of the river mouth. On the other hand, lower TOC concentrations (0.2-1.4 g kg−1) were observed in the south section. Similarly, TN varied widely, from 0.06 to 0.68 g kg−1, with the lowest values in the shallow water area near the river mouth and the highest in the northern deep water section (Fig. 3b). Overall, the spatial distribution of TN was similar to that of TOC, both showing higher values in the northern and eastern deeper water areas.
The C:N ratio ranged from 2.1 to 10.1 (Fig. 3c). In general, the C:N ratio was higher in the shallow water areas than in the deep water areas. The highest C:N ratios (8-10) were found in the southern bay, and the lowest (<4.5) in the shallow water area near the river mouth. Figure 3d shows considerable spatial variability in the δ 13 C org values, which ranged from −24.26‰ to −22.66‰. The δ 13 C org values were more negative near the river mouth and its adjacent southern bay, and less negative farther from the river mouth and the coastline.

Spatial distribution of TIC, δ 13 C carb and δ 18 O carb . There was a large spatial variation in TIC, as shown in Fig. 4a, ranging from 6.3 to 20.1 g kg−1, with higher concentrations in the northern deep sea area (>17 g kg−1) away from the mouth, and lower levels in the south section (<13 g kg−1). TIC also presented high values in the northern and eastern parts. Overall, the spatial distribution of TIC was similar to that of TOC. The values of δ 13 C carb and δ 18 O carb ranged from −4.89‰ to −3.74‰ and from −10.92‰ to −7.92‰, respectively (Table 1). Generally, the spatial distribution of δ 13 C carb exhibited more negative values in the northern and eastern deep sea areas, opposite to that of δ 18 O carb (Fig. 4b,c).
Discussion
Sources for TOC in the Yellow River Estuary. It is well known that human activities such as industrial and agricultural development increase riverine inputs of nutrients and organic materials, leading to enhanced estuarine productivity and TOC burial in the sediment [22][23][24] . There was evidence that δ 13 C org was less negative in the central Bohai Sea (−21‰ to −22‰) than in the nearshore zone (~−27‰) 11 , indicating more negative δ 13 C org in terrigenous OC. Given that the δ 13 C org values ranged from −24.26‰ to −22.66‰, organic carbon in the surface sediment of the YRE might be mainly from marine sources.
Since the C:N ratio is significantly smaller in marine particles than in terrestrial organic matter, one may use a two-end-member mixing model to quantify the different sources of OC; such an approach has been widely applied in studies of wetland and lake sediments [25][26][27] , and of offshore and marine sediments 28,29 . Given that the TOC:TN ratio was lower than 5.5 g:g at some sites in the YRE, it was reasonable to assume that there were terrestrial inputs of inorganic nitrogen. There was a significant correlation between TN and TOC (Fig. 5a), with an intercept of 0.0297 g N kg−1. Following Schubert and Calvert 30 , we calculated the total organic nitrogen (TON) concentration of each sample by subtracting 0.0297 g N kg−1 (the intercept) from TN. As shown in Table 2, the TOC:TON ratio was low (<7.1) in most sections, illustrating that TOC was mainly autochthonous in the surface sediment of the YRE. On the other hand, the mean TOC:TON ratio was 9.5 in the southern shallow bay; such a high C:N ratio, together with relatively more negative δ 13 C org values, points to a substantial terrigenous contribution there. To quantify the relative contributions of autochthonous and allochthonous OC in the surface sediments, we applied a two-end-member mixing model using the TOC:TON ratio and assuming 6.6 mol:mol as the marine end-member. Using the average C:N ratio (10.8 g:g) of the soils collected near the river mouth (Table 1), we estimated that 75% of TOC was from soil OC in the bay section, but only 12-28% in the other sections of the YRE (Table 3). However, this approach could introduce bias or uncertainty through the choice of end-member value for the soil C:N ratio. According to our recent study 31 , the soil C:N ratio varied from 9.5 to 13.4 in the middle-lower parts of the Yellow River Basin. If we chose 9.5 (or 13.4) as the soil C:N end-member, the terrigenous contribution would be increased (or decreased) by 4-25%. Nevertheless, TOC in the surface sediment was primarily autochthonous in most parts of the YRE.
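The two-end-member mixing model can be sketched as follows. TON comes from subtracting the inorganic-N intercept (0.0297 g N/kg) from TN; the marine end-member of 6.6 mol:mol is converted to mass units by the factor 12/14, which is our assumption about how the paper's mixed units were reconciled (this conversion reproduces the reported ~75% terrigenous share for the bay); the soil end-member is the reported 10.8 g:g.

```python
MARINE_CN = 6.6 * 12.0 / 14.0   # mol:mol -> g:g, ~5.66 (conversion assumed)
SOIL_CN = 10.8                  # g:g, soils sampled near the river mouth
N_INTERCEPT = 0.0297            # g N/kg, inorganic-N offset from Fig. 5a

def ton(tn):
    """Total organic nitrogen (g/kg) from total nitrogen."""
    return tn - N_INTERCEPT

def terrigenous_fraction(cn_sample, cn_marine=MARINE_CN, cn_soil=SOIL_CN):
    """Fraction of TOC from terrestrial (soil) sources, clipped to [0, 1]."""
    f = (cn_sample - cn_marine) / (cn_soil - cn_marine)
    return min(max(f, 0.0), 1.0)

# Bay section: mean TOC:TON = 9.5 g:g -> ~75% terrigenous, as reported
print(f"{terrigenous_fraction(9.5):.0%}")
```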
TOC variability in the Yellow River Estuary. The magnitude and spatial distribution of TOC in estuarine sediment may reflect multiple, complex processes 10,32 . As shown in Fig. 2, the surface sediments were finer to the north than to the south. In general, coarser (finer) sediment particles indicate a stronger (weaker) water energy environment 33,34 . These observations indicate that the relatively lower TOC values in the south section were attributable to higher kinetic energy levels. On the other hand, a significantly positive relationship (r = 0.71, p < 0.01) between the δ 13 C org value and water depth (Table 4) implied that the shallow sections of the YRE accumulated more terrigenous OC (with more negative δ 13 C org values).

Table 3. Relative contributions (%) of marine and terrestrial sources using different soil C:N ratios as the end-member. A, B and C denote the soil C:N ratios from our study, from the lower Yellow River Basin, and from the Chinese Loess Plateau, respectively.

There is evidence that the magnitude and variability of OC is largely influenced by primary productivity, followed by sediment resuspension and riverine input, in the Yellow-Bohai Sea 35 . In general, an increase in water productivity causes 13 C enrichment in carbonate 36,37 . However, we found a significantly negative correlation (p < 0.01, Table 4) between TOC and δ 13 C carb in the YRE, indicating that higher levels of TOC (with more negative δ 13 C carb ) were not a result of local biological production. Given that sediment resuspension plays a large role in regulating the spatial-temporal variability of POC in the Yellow-Bohai Sea 35,38 , we inferred that the current system would redistribute POC, and thus TOC, in the surface sediment. Therefore, more OC could deposit in the northern and eastern deep water areas (with lower kinetic energy levels) of the YRE.
Dynamics of TIC and underlying mechanisms.
The TIC concentration in the surface sediment of the YRE was relatively higher in the north section (16.2 g kg−1) than in the south section (12.8 g kg−1) (Table 2), consistent with TOC. As shown in Fig. 5b, there was a significantly positive correlation between TOC and TIC in the surface sediments of the YRE (r = 0.97, p < 0.01), implying a relationship between the two parameters. In general, OC production (i.e., uptake of CO 2 ) can change the chemical properties of the water column, which often leads to precipitation of carbonate 36,37,39 . Our analyses showed that the ratio of change between TIC and TOC (i.e., the slope of 2.93 in Fig. 5b) in the surface sediment of the YRE was close to the IC:OC ratio of 3.6 for particles in the water column reported by Gu et al. 15 , indicating that the spatial variability of TIC might be driven by the variability of POC.
While higher levels of TIC might be associated with higher levels of TOC, the large intercept (7.17 in Fig. 5b) of the TIC-TOC relationship in the surface sediment suggests that there were other processes of CaCO 3 formation not linked to biological production. If higher levels of TIC were a result of higher rates of biological production, one would expect an enrichment of 13 C in carbonate; on the other hand, higher rates of respiration/decomposition would lead to depleted 13 C in dissolved IC and thus in carbonate 36,37 . The significantly negative relationship (p < 0.01) between δ 13 C carb and TIC in the YRE (Table 4) indicated that higher levels of TIC (with more negative δ 13 C carb ) might result from high rates of decomposition of OC. Given that both TIC and TOC had significantly negative correlations (p < 0.01, Table 4) with δ 13 C carb in the YRE, we speculated that decomposition of TOC/POC associated with sediment resuspension would increase dissolved IC and thus promote carbonate precipitation and sedimentation.
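The regression behind the slope and intercept discussed above can be sketched with ordinary least squares. The TOC values below are hypothetical stand-ins for the 15 sites; TIC is placed exactly on the reported fit TIC = 2.93 × TOC + 7.17 (g/kg), so the fit recovers the paper's coefficients exactly (with the real, scattered data the correlation is r = 0.97 rather than 1).

```python
import numpy as np

# Hypothetical TOC values spanning the reported 0.2-4.4 g/kg range;
# TIC generated exactly on the reported TIC-TOC relationship.
toc = np.array([0.2, 0.8, 1.5, 2.2, 3.0, 3.7, 4.4])   # g/kg
tic = 2.93 * toc + 7.17                               # g/kg

slope, intercept = np.polyfit(toc, tic, 1)   # least-squares line
r = np.corrcoef(toc, tic)[0, 1]              # Pearson correlation
print(f"slope = {slope:.2f}, intercept = {intercept:.2f} g/kg, r = {r:.3f}")
```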
Comparisons with other studies. There have been many studies of TOC but only a few of TIC in estuarine sediments. Overall, TOC levels are lower in the surface sediments of most estuaries in China than in those of South and Southeast Asia 40,41 , Europe 42,43 , and North and South America 44,45 . In general, sedimentary TOC concentration is relatively lower in large river estuaries (e.g., the Yangtze River Estuary 46,47 and Pearl River Estuary 48,49 ) than in small river estuaries (e.g., the Luan River Estuary 50 , Licun Estuary 51 , Min River Estuary 52 and GQ Estuary 53 ), indicating that the weaker hydrodynamic environment of small estuaries favours the accumulation of organic carbon 11,54 .
For the surface sediment near the river mouth in the YRE, TOC concentrations were modestly lower in our study (0.2 to 4.4 g kg −1 ) than in previous reports of 0.7-7.7 g kg −1 14 and <1 to 6.0 g kg −1 11 , which may be attributable to the decline in the Yellow River's discharge over the past decade 19 . On the other hand, TOC levels near the Yellow River's mouth were significantly lower than those in other coastal areas of the Bohai Sea, e.g., north of the YRE (2.6-17.2 g kg −1 ) 55 and in Laizhou Bay (5.7-12.8 g kg −1 ) 13 , which may reflect the differing influences of kinetic energy level and terrigenous inputs.
The surface sediments of the YRE contained much lower TOC than those of other large estuaries in China (i.e., the Yangtze River Estuary 46,47 and Pearl River Estuary 48,49 ). Interestingly, primary productivity in the YRE was higher than in the Yangtze River Estuary 56,57 . On the other hand, our recent study indicated that POC in the Yellow River Estuary was comparable to that in the Yangtze River Estuary, and that there is profound, nearly year-round sediment resuspension in the Yellow-Bohai Sea, particularly in the shallow sections 38 , implying that the surface sediment is subject to frequent disturbance, transport and recycling, and thus decomposition, which might be partly responsible for the lower TOC levels in the YRE.
However, the YRE had much higher TIC values than those (3.3-8.2 g kg −1 ) in the Cochin Estuary 40 , the Vellar and Coleroon Estuary 58 , and Chilika Lagoon 41 of South Asia. The large difference may be attributable to factors such as water quality, net biological production and respiration, and sediment resuspension processes [59][60][61] . For example, conditions rich in calcium and magnesium ions, with strong exchange between salty and fresh waters, would lead to much more carbonate precipitation in the YRE 13,16 .
Conclusions and Implications
To the best of our knowledge, this study is the first to evaluate both TOC and TIC in the surface sediment of the YRE, and to explore the underlying processes determining their dynamics. We found that the TIC concentration (6.3-20.1 g kg −1 ) was much higher than TOC (0.2-4.4 g kg −1 ), and that both TOC and TIC were higher to the north (3.0 and 16.2 g kg −1 ) than to the south (1.7 and 12.8 g kg −1 ). The relatively lower TOC and TIC values in the south section were attributable to higher kinetic energy levels. Our analyses indicate that TOC in the surface sediment is mainly autochthonous, except in the southern bay where approximately 75% of TOC is probably of terrigenous origin. The overall low levels of TOC in the surface sediment of the YRE are mainly due to profound resuspension that can cause enhanced decomposition. On the other hand, the higher levels of carbonate in the surface sediment of the YRE result partly from higher rates of biological production, and partly from decomposition of POC/TOC associated with sediment resuspension. The isotopic signature of TIC implies that the latter process is dominant in forming TIC in the YRE, and that there may be transfer of OC to IC in the water column. Further studies with integrative and quantitative approaches are needed, not only to assess the spatial and temporal variations of the major carbon forms in the water column and sediments, but also to quantify the contributions of various sources and the transformations among the different carbon pools, in order to better understand the carbon cycle of the YRE in a changing environment.
Materials and Methods
Site description. The YRE is a typical river-dominated estuary with weak tides and a warm-temperate continental monsoon climate with distinct seasons (Fig. 6). In the YRE, the mean monthly water temperature is 4.1 °C in January and 26.7 °C in July, and the annual wind speed ranges from 3.1 to 4.6 m s −1 62 . The estuary is characterized by a high sediment load (mainly silt) in the water column, produced largely by erosion of China's Loess Plateau. Most of the sediment discharged from the modern Yellow River mouth is trapped in the subaqueous delta or within 30 km of the delta front by gravity-driven underflow 9,63 . In recent decades, the annual water and sediment fluxes have declined dramatically owing to regional climate change, reservoir construction, and irrigation-related withdrawals 16,19,62 .
Field sampling and analyses. During October 2016, we collected 15 short sediment cores (H series) from the YRE using a Kajak gravity corer, together with 10 surface soil samples from 7 sites (S1-S7) along its upstream wetland (Fig. 6b). Each sediment core was carefully extruded and cut into 1-cm intervals in the field, and the samples were then placed in polyethylene bags and kept on ice in a cooler during transport. In the laboratory, we took the top 2 cm of sediment and the surface soil samples and freeze-dried them for 48 h before analysis. Grain size was determined using a Malvern Mastersizer 2000 laser grain size analyzer. Following Yu et al. (2015), each sediment and soil sample (~0.5 g) was pretreated in a water bath (at 60-80 °C) with 10-20 ml of 30% H 2 O 2 to remove organic matter, and with 10-15 ml of 10% HCl to remove carbonates. The pretreated samples were then mixed with 2000 ml of deionized water and centrifuged after standing for 24 hours. The solids were dispersed with 10 ml of 0.05 M (NaPO 3 ) 6 and then analyzed for grain size (between 0.02 and 2000 μm). The Mastersizer automatically outputs the median diameter d(0.5) (μm), the diameter at the 50th percentile of the distribution, and the percentages of the clay (<2 μm), silt (2-64 μm) and sand (>64 μm) fractions.
Elemental analysis was performed using an Elemental Analyzer 3000 (Euro Vector, Italy) at the State Key Laboratory of Lake Science and Environment, Nanjing Institute of Geography and Limnology, Chinese Academy of Sciences. Freeze-dried samples were ground into a fine powder, placed in tin capsules, weighed and packed carefully. For the analysis of TOC/soil OC, a ~0.3 g sample was pretreated with 5-10 ml of 2 M HCl for 24 h at room temperature (to remove carbonate), washed with deionized water and then dried overnight at 40-50 °C. Total carbon (TC) and total nitrogen (TN) were analyzed without HCl pretreatment, and TIC/soil IC was calculated as the difference between TC and TOC/soil OC.
For the analyses of 13 C in TOC/soil OC (δ 13 C org ), approximately 0.2 g of the freeze-dried sample was pretreated with 5-10 ml 2 M HCl for 24 h at room temperature to remove carbonate, and then mixed with deionized water to bring the pH to 7, and dried at 40-50 °C before analyses. Each pre-treated sample was combusted in a Thermo elemental analyzer integrated with an isotope ratio mass spectrometer (Delta Plus XP, Thermo Finnigan MAT, Germany). Additionally, 13 C and 18 O in carbonate (δ 13 C carb and δ 18 O carb ) were measured following reaction with 100% phosphoric acid on a stable isotope ratio mass spectrometer (Thermo-Fisher MAT 253, Germany), at the Nanjing Institute of Geology and Paleontology, Chinese Academy of Sciences. All the isotope data were reported in the conventional delta notation relative to the Vienna Pee Dee Belemnite (VPDB). Analytical precision was 0.1‰ for δ 13 C org and δ 13 C carb , and 0.2‰ for δ 18 O carb .
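All isotope values above use the conventional delta notation relative to VPDB; a minimal sketch follows. The VPDB 13C/12C ratio of 0.0112372 is a standard literature value, and the sample ratio below is hypothetical, chosen only to land in the δ13Corg range reported for the YRE.

```python
R_VPDB_13C = 0.0112372   # 13C/12C ratio of the VPDB standard (literature value)

def delta_permil(r_sample, r_standard):
    """Delta value (per mil): (R_sample / R_standard - 1) * 1000."""
    return (r_sample / r_standard - 1.0) * 1000.0

# A hypothetical sample ratio 2.4% below VPDB gives delta13C of about
# -24 per mil, within the range reported for delta13Corg in the YRE.
r = R_VPDB_13C * (1 - 0.024)
print(round(delta_permil(r, R_VPDB_13C), 1))
```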
Statistical methods and mapping. Correlation analyses were performed in SPSS Statistics software (version 19, IBM, USA), and a Pearson test was used to determine the significance (p-value) of each correlation. Spatial distribution maps were generated with ArcGIS 10.2 software (http://www.esri.com/arcgis/about-arcgis). | 2018-07-18T13:48:58.207Z | 2017-08-28T00:00:00.000 | {
"year": 2018,
"sha1": "46d58c20cecefe80286f2de56c1bbba4b8d59c68",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-018-29200-4.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "46d58c20cecefe80286f2de56c1bbba4b8d59c68",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science",
"Medicine"
]
} |
213346686 | pes2o/s2orc | v3-fos-license | Reactive compatibilization as a proper tool to improve PA6 toughness
Polyamide 6 (PA6) is a widely applied thermoplastic; however, drawbacks such as low notched impact strength and high moisture absorption restrict some of its applications. Seeking improvements for PA6, this work investigated blends based on PA6 and maleic anhydride grafted ethylene propylene diene copolymer (EPDM-MA) with 1% maleic anhydride (MA). In the first step, blends were processed using a co-rotating twin-screw extruder; afterwards, pellets were injection moulded. Torque rheometry, impact strength, tensile strength, heat deflection temperature (HDT), differential scanning calorimetry (DSC), contact angle and scanning electron microscopy (SEM) analyses were performed and the main properties were defined. The torque of PA6/EPDM-MA blends increased compared to neat PA6. Only subtle decreases were verified in the tensile strength, elastic modulus and HDT parameters. Nevertheless, significant increases were observed for the elongation at break and the impact strength. EPDM-MA addition to PA6 did not affect its melting and crystallization parameters. The contact angle of all blends increased when compared to PA6, suggesting a more hydrophobic character of the PA6/EPDM-MA blends. The best results were collected for 10% EPDM-MA, reaching increases of 850% and 213% in impact strength and elongation at break, respectively. From the SEM images, particles with diameters ranging from 0.1 to 2 μm were observed, well dispersed and properly distributed in the PA6 matrix; additionally, for the blends with 10%, 12.5% and 15% EPDM-MA, a higher level of plastic deformation was reached, corroborating the significant increase in impact strength and elongation at break.
Introduction
Polymeric blends are physical mixtures of two or more polymers with some specifically improved properties [1,2]. There is great commercial interest in developing blends due to the possibility of low-cost applications, as an alternative to the synthesis of new polymers [3,4]. Generally, blends are produced in order to improve brittle polymers, i.e., to obtain better impact strength [5,6]. Upon toughening, polymers become able to absorb higher deformational energy levels prior to fracture [7]. In this context, blends are a proper alternative to improve the performance of commercial plastics such as PA6 [8,9].
PA6 is a semicrystalline polymer that is chemically, mechanically and thermally resistant, withstands high abrasion, and presents low melt viscosity. Currently it is an engineering polymer widely applied in the automotive, aircraft, electronic and electrotechnical, clothing and healthcare industries. However, PA6 becomes fragile in contact with stress concentrators such as notches and solid fillers. Additionally, it presents high moisture absorption and dimensional instability, along with brittleness at sub-ambient temperatures, forbidding its use where these requirements apply [10][11][12][13][14]. To overcome these weaknesses, academic and technical researchers have implemented toughening procedures between PA and elastomeric materials. Blends were extruded at 200 rpm and a feed rate of 2 kg h−1; the screw profile was configured with distributive and dispersive mixing elements, as shown in figure 1. Blends were granulated and vacuum oven dried for 24 h at 80°C. Neat PA6 was processed and dried under the same conditions for comparison purposes.
Granulated material was injection moulded using an Arburg Allrounder 207 C Golden Edition injection moulding machine to mould impact, tensile and HDT specimens according to ASTM D256, ASTM D638 and ASTM D648, respectively. The injection parameters are presented in table 3.
After injection, specimens were stored in a desiccator until characterization. The tensile test was performed on the injected specimens according to ASTM D638, using an EMIC DL 2000 universal testing machine with an elongation rate of 50 mm min−1 and a 20 kN load cell at room temperature (∼23°C). The presented results are an average of seven specimens.
Heat deflection temperature (HDT) was evaluated according to ASTM D648, in a Ceast model HDT 6 VICAT apparatus, under a stress of 1.82 MPa and a heating rate of 120°C h−1. HDT was determined after the sample deflected 0.25 mm. The presented results are an average of three specimens.
Contact angle analysis was performed through the sessile drop method, using a portable contact angle meter, Phoenixi model from Surface Electro Optics-SEO. The drop was deposited on an impact specimen using a micrometric doser; images were captured and analysed using the equipment software.
Thermogravimetry (TG) analyses were carried out in a TA Instruments SDT Q600 simultaneous TG/DSC device, employing ~5 mg samples heated from room temperature (∼25°C) to 600°C at a heating rate of 10°C min−1 under a nitrogen flow rate of 100 ml min−1.
Differential scanning calorimetry (DSC) scans were recorded using a TA Instruments DSC-Q20. Experiments were run from room temperature (∼25°C) to 260°C at a heating rate of 10°C min−1; afterwards, samples were kept for 3 min at 260°C and then cooled down to room temperature. A nitrogen flow rate of 50 ml min−1 was used, and the tested samples weighed ~5 mg.
Scanning electron microscopy (SEM) images were captured of the fracture surfaces from the impact test. A Shimadzu SSX-550 Superscan scanning electron microscope was used, at an accelerating voltage of 30 kV under high vacuum. The fractured surfaces were gold coated. SEM analyses were conducted after extraction of the dispersed phase (EPDM-MA) in xylene for 24 h; afterwards, samples were vacuum oven dried at 60°C for 24 h.
Results and discussion
Torque rheometry

Torque rheometry was employed to evaluate the blends' reactivity and processability. Increases in torque values can be interpreted as chain extension, crosslinking and interactions between chemical groups of macromolecules, whereas evidence of degradation reactions can be identified as torque decreasing over time [34]. Figure 2 shows torque plots as a function of time for PA6, EPDM-MA and the PA6/EPDM-MA blends. Maleic anhydride grafted ethylene propylene diene (EPDM-g-MA) presented the highest torque compared to the other materials and, therefore, the highest viscosity. However, EPDM-MA presented a distinct behaviour, with torque stabilization after 6 min of processing. On the other hand, after 3 min of processing, the PA6 torque seemed to be practically constant. This behaviour suggests viscosity stability for the applied process parameters, i.e., 60 rpm and 230°C. Stabilized torque has processing significance, indicating the point at which the material should be extruded or injected [35].
Upon EPDM-MA addition to PA6, a slight increase in torque, and consequently in the viscosity of PA6/EPDM-MA, is observed. Within the investigated range of EPDM-MA contents (5; 7.5; 10; 12.5 and 15%), no significant differences in viscosity were verified for the 5 and 7.5% contents compared to neat PA6; only from 10% EPDM-MA on was a pronounced torque increase displayed.
In order to understand the effects of higher EPDM-MA contents, blends with 30 and 40% of the reactive copolymer (EPDM-MA) were produced. Increasing the EPDM-MA content in PA6/EPDM-MA was followed by a significant torque increase, more pronounced for 40% EPDM-MA. The viscosity increase in PA6/EPDM-MA blends may be linked to two factors: the low melt flow rate (MFR) of EPDM-MA, or the development of chemical interactions between the MA from EPDM and PA6 end groups. The torque increase is believed to be proportional to the EPDM-MA amount, as it increases the functional groups that can react with PA6. The literature [35,36] has shown that reactive MA groups from the EPDM-MA copolymer react with PA6 amino terminal groups, forming an imide group and thus resulting in an in situ copolymer located at the interface. Figure 3 illustrates a hypothetical scheme for PA6/EPDM-MA blending. During the reaction between PA6 amino terminal groups and MA from EPDM-MA, water is produced as a by-product, which is undesirable, since water can drive PA6 hydrolytic degradation by chain scission, reducing molecular weight [37]. However, figure 2 shows that the PA6/EPDM-MA torque remained constant, suggesting the absence of degradation.
Impact strength
Impact strength is one of the most important parameters when selecting a particular polymer for engineering applications and is often used as the decisive factor for this purpose [38]. Figure 4 presents the impact strength of PA6 and of the PA6/EPDM-MA blends with 5; 7.5; 10; 12.5 and 15% EPDM-MA, respectively. PA6 displayed the lowest value, due to its brittle character when notched. Addition of 5 and 7.5% EPDM-MA provided a considerable increase in impact relative to neat PA6, with gains of 101.0 and 186.5%, respectively. EPDM-MA added to the PA6 matrix acts as an impact modifier, promoting greater energy dissipation mechanisms. This behaviour is linked to the morphology presented further on in figure 10, in which particles are well dispersed in the PA6 matrix, thereby improving toughening. The gathered results are interesting from the technological point of view, since the PA6/EPDM-MA blends (5 and 7.5%) can be considered toughened at room temperature.
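The quoted gains (101.0%, 186.5%, and later 850%) follow from the usual relative-increase formula; a one-line sketch (the function name is ours):

```python
def percent_gain(blend_value, neat_value):
    """Relative gain (%) of a blend property over the neat PA6 value:
    gain = (blend / neat - 1) * 100."""
    return (blend_value / neat_value - 1.0) * 100.0
```

For example, an impact strength 9.5 times that of neat PA6 corresponds to the 850% gain reported for the 10% EPDM-MA blend.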
Addition of 10% EPDM-MA to PA6 provided a significant increase in impact relative to neat PA6, reaching an 850% higher value, translating into a synergistic state. Barra et al [39] produced PA6/EPDM compatibilized with EPDM-MA in an extruder, with specimens compression moulded. The highest impact they obtained for a PA6/EPDM-MA blend (80/20%) was 167 J m−1, much lower than that found in the present work for PA6/EPDM-MA (10%), indicating that the processing route greatly influences the final mechanical performance. To maximize the PA6/EPDM-MA blend properties, a twin-screw extruder with distributive and dispersive mixing elements, as well as the performed injection moulding parameters, must be used. In the present work the screw configuration favoured impact strength, due to the generated morphology, as will be shown later.
Change and inversion behaviours were verified in the blends with 12.5 and 15% EPDM-MA, where impact was reduced compared to the 10% EPDM-MA blend. Therefore, there is an optimal reactive copolymer content for PA6/EPDM-MA toughening, after which impact starts decreasing. From the collected data, addition of 10% EPDM-MA to PA6 is enough to produce a toughened blend. Indeed, adding higher amounts of EPDM-MA is unnecessary, as it will saturate the PA6/EPDM-MA system with MA, in which case there will not be enough PA6 amine end groups for reaction [10,40]. A similar finding was verified by Kudva et al [41] during the development of PA6/PE blends compatibilized with PE-g-MA; they indicated that high levels of MA are likewise unnecessary to obtain toughened compounds with refined particles.
In general, the mechanical properties of the PA6/EPDM-MA blends are interesting for automotive and electronic applications, since the impact strength of the blends with 10; 12.5 and 15% EPDM-MA approaches that of super-toughened blends, indicating a synergistic state.
The impact behaviour of the PA6/EPDM-MA blends, regardless of EPDM-MA content, suggests the reactive copolymer provided interaction with PA6, rendering a strong interface, which is fundamental for proper stress transfer between phases [42,43]. At the same time, the dispersion degree and domain size of EPDM-MA in the PA6 matrix had a significant influence on impact strength [44]. Figure 5 shows the average diameter of dispersed EPDM-MA in the PA6/EPDM-MA blends.
A distinct behaviour in the average particle size of EPDM-MA is verified in figure 5, in this case influencing the impact strength results. The blends with 5 and 7.5% EPDM-MA showed more refined particles. On the other hand, when a higher content of EPDM-MA (10%) was added, particles tended to get larger. Apparently, 10% EPDM-MA is the ideal content for a particle size ranging from 0.816 to 1.172 μm, which optimizes impact. Indeed, when higher levels of EPDM-MA, i.e., 12.5 and 15%, were used, particles became smaller and, as a consequence, impact strength was reduced, as shown in figure 4. This indicates that, within a certain range, a dispersed phase of reduced size is important for impact strength, as it increases the interfacial area and improves the stress transfer mechanism. The literature [45,46] has shown that super-tough PA6 blends were obtained with rubber particles in sizes ranging from 0.1 to 2 μm; this suggests the size range reached for the PA6/EPDM-MA blends was ideal, providing high impact performance.

Tensile strength

Table 4 shows the data gathered from the tensile tests of PA6 and of the PA6/EPDM-MA blends with 5; 7.5; 10; 12.5 and 15% EPDM-MA, respectively. EPDM-MA addition to the PA6 matrix led, within the tested range of reactive copolymer contents, to a decrease in elastic modulus, suggesting blends with greater flexibility. Adding 5% EPDM-MA provided a 27.5% reduction in elastic modulus compared to neat PA6, while 15% EPDM-MA reduced the elastic modulus by 32.6%. This decrease can be attributed to the elastomeric component, with its low elastic modulus. For EPDM-MA contents ranging from 10 to 15%, the blends were within the experimental margin of error, thus with comparable stiffness.
Regardless of the EPDM-MA content used, there was no drastic reduction in the elastic modulus of the PA6/EPDM-MA blends relative to PA6, which is important from the technological point of view, since the major limitation of PA6 for applications is its low impact upon stress concentrator addition. Nevertheless, PA6/EPDM-MA blends with low EPDM-MA content presenting a proper balance of stiffness and impact strength can be obtained. Table 4 also shows the tensile strength data of the blends, which can be read as the material's response when mechanically loaded, an important parameter to define the blends' specific applications. The PA6/EPDM-MA blends have lower tensile strength than neat PA6 due to the elastomeric component. As the EPDM-MA content increases, tensile strength decreases, giving rise to blends deformable at lower stresses. However, the blends with 5; 7.5 and 10% EPDM-MA are within the experimental error and, in this case, comparable tensile strengths were obtained. The reduced PA6/EPDM-MA tensile strength implies greater energy dissipation. This finding corroborates the impact strength results, i.e., a higher level of energy dissipation for the PA6/EPDM-MA blends relative to PA6.
Regarding the elongation at break of the PA6/EPDM-MA blends, a significant increase compared to PA6 was verified, suggesting highly toughened blends. These data corroborate the assumption that interactions occur between PA6 and EPDM-MA, as proposed in figure 3. The blends with 10 and 15% EPDM-MA showed a substantial increase in elongation at break, as shown in figure 6.
It appears that from 10% EPDM-MA on there is property optimization, with significant gains in elongation at break and, consequently, in tensile strength. It is reasonable to suggest that 10% EPDM-MA is a critical concentration at which there is a sufficient amount of maleic anhydride to react with the PA6 end groups. As a consequence, a synergistic effect was observed in elongation at break upon addition of 10% EPDM-MA to PA6. Indeed, it is believed that the reaction between EPDM-MA and PA6 increases the entanglement likelihood, thus providing greater resistance to molecular disentanglement and increasing the elongation data. At concentrations higher than 10% EPDM-MA, elongation decreases, suggesting there is an excess of maleic anhydride reactive groups in the compositions with 12.5 and 15%, which ends up deteriorating this property. Overall, the PA6/EPDM-MA blend (10%) has the highest level of ductility, an important result for improved applications. Additionally, the blends' behaviour is associated with their morphological characteristics, which will show the proper distribution of EPDM-MA particles in these blends, undoubtedly a determinant factor for the higher elongation and toughness.
Heat deflection temperature (HDT)
The behaviour of polymers at above-ambient application temperatures can be simulated through HDT experiments, which is very relevant for material selection [47]. Figure 7 shows the acquired HDT for PA6 and the PA6/EPDM-MA blends; it can be seen that EPDM-MA subtly reduces the thermo-mechanical strength of PA6, since all PA6/EPDM-MA blends have lower HDT than neat PA6. HDT is influenced by the increase in rubber content, since this property depends significantly on the continuous phase, which is crucial for material stiffness [48]. Therefore, the blends' HDT decrease can be attributed to EPDM-MA, whose rubbery, more flexible character drives the HDT losses. This finding is in agreement with the elastic modulus data presented above. One limitation of PA6 for technological applications is its low impact strength when featuring a stress concentrator. Although EPDM-MA did not provide increased thermo-mechanical strength, its addition did not drastically reduce the PA6 HDT while considerably increasing impact; summing up, the development of PA6/EPDM-MA blends makes an important technological contribution. The HDT values of the PA6/EPDM-MA blends are quite relevant as they relate to the quality of injection moulded products: after injection, a product is only considered safe to be removed from the mould when its temperature is near or below the HDT value, meaning the deformation will be kept within acceptable limits after removal.
Differential scanning calorimetry (DSC)
The DSC parameters acquired during the fusion and melt crystallization events are provided in table 5 and the scans are displayed in figure 8. EPDM-MA showed low crystallinity: since it has two different mers in the main chain, chain packing is difficult and the copolymer is therefore predominantly amorphous. Although EPDM-MA has two semicrystalline mers, its crystallization is complex. The melting and crystallization peaks are associated with the ethylene phase, suggesting it is the major phase in the main chain.
The PA6 scans presented two melting peaks, defined as Tm3 and Tm2, whose measured parameters are presented in table 5; these peaks may be attributed to the two distinct crystalline forms, called α and γ [50], which have Tm at approximately 222.1°C and 214°C, respectively. The blends' melting temperatures were quite similar to that of neat PA6. However, as presented in figure 8(a), all blends have a third melting peak (Tm1), regardless of EPDM-MA content; most likely, addition of the reactive copolymer modified the PA6 macromolecular ordering mechanism.
Another hypothesis may be based on the fusion-recrystallization-fusion phenomenon, where the first melting peak, originally associated with the less stable α1 form, is afterwards transformed into α, thus promoting the development of Tm3, i.e., PA6's main endothermic peak [51]. Therefore, it is suggested that EPDM-MA addition affected the structural organization of the PA6 crystals [52]; in parallel, a reduction in the blends' ΔHm relative to PA6 is verified, indicative of lower crystallinity.
The blends' crystallization temperatures (Tc) (figure 8(b)) practically did not change relative to that of PA6. On the other hand, although not significantly, the blends' degree of crystallinity (Xc) decreased, corroborating the elastic modulus and impact strength results. PA6 crystallization was hindered by EPDM-MA addition. Packing and crystal formation are believed to be hampered by the incorporation of the amorphous phase (EPDM-MA) and, as a consequence, smaller and imperfect crystals are produced [53], as verified by the Tm1 peak. Indeed, the PA6/EPDM-MA blends presented the lowest melting enthalpy (ΔHm), since less energy is consumed to melt the reduced crystalline phase.
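The degree of crystallinity (Xc) discussed above is conventionally obtained by normalizing the DSC melting enthalpy. A sketch of that calculation; the 190 J g−1 enthalpy assumed for 100% crystalline PA6 is a commonly used literature value, not taken from this paper:

```python
def crystallinity_percent(dh_m, pa6_weight_fraction, dh_100=190.0):
    """Xc (%) = dHm / (w_PA6 * dH100) * 100, normalizing the measured melting
    enthalpy (J/g of blend) by the PA6 weight fraction of the blend.
    dh_100 = 190 J/g for 100% crystalline PA6 is an assumed literature value."""
    return dh_m / (pa6_weight_fraction * dh_100) * 100.0
```

Dividing by the PA6 weight fraction prevents the diluting EPDM-MA content from being mistaken for a loss of matrix crystallinity.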
Thermogravimetry (TG)

blends, favouring the retention of PA6 thermal stability, which can be assumed as a synergistic interaction between components.
In the TG plots, from room temperature to approximately 120°C, the weight loss was 1.2% for all tested samples, due to residual moisture. In general, a single decomposition step at approximately 370°C is verified for PA6 and the PA6/EPDM-MA blends, due to degradation of the macromolecular chains. The data acquired from thermogravimetry ensure the proper thermal stability of the tested compounds at the applied processing temperatures.
Contact angle
Through contact angle analysis the surface character, i.e., hydrophilic or hydrophobic, can be assessed by measuring the interaction energy between the surface and the dropped liquid [55]. Specimens with a contact angle θ < 90° are assumed to have liquid affinity and are therefore called hydrophilic. On the other hand, when the contact angle is θ > 90°, specimens are considered to have a hydrophobic surface [56]. Figure 10 presents the surface contact angle results of PA6, EPDM-MA and the PA6/EPDM-MA blends, all of which have a hydrophilic character, since the contact angle (θ) < 90°. PA6 has the most hydrophilic character (47.3°), indicating water affinity. In fact, PA6 has hydrogen bonds between the carbonyls and the hydrogen from the amide group, providing a hygroscopic character. These bonds make water permeation easier, with water diffusing between chains and locating on the hydrogen bonds. On the other hand, EPDM-MA presented the lowest water affinity (75.7°), mostly due to the non-polar character of ethylene propylene diene. The contact angle of the blends was intermediate between those of PA6 and EPDM-MA.
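The 90° threshold used above reduces to a simple rule; a sketch (the function name is ours):

```python
def surface_character(theta_deg):
    """Wettability classification by the 90-degree threshold described in the
    text: theta < 90 -> hydrophilic, theta > 90 -> hydrophobic."""
    return "hydrophilic" if theta_deg < 90.0 else "hydrophobic"
```

With the reported angles, both neat PA6 (47.3°) and EPDM-MA (75.7°) classify as hydrophilic.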
Since the contact angle of the blends was higher than that of PA6, it is assumed that EPDM-MA decreased the hydrophilic character of PA6. Most likely, as EPDM-MA has non-polar ethylene propylene diene (EPDM) groups, once dispersed into the PA6 matrix and along the surface, it led to reduced water interaction, i.e., an increased blend contact angle.
Upon EPDM-MA addition in the range from 5 to 12.5%, no significant differences in contact angle were verified. However, for the higher content, i.e., the 15% EPDM-MA blend, the surface became more hydrophilic; such behaviour can be attributed to the increased maleic anhydride amount leading to greater water interaction.
The contact angle results are of great importance, since high water interaction is a limiting factor for PA6; nevertheless, adding low EPDM-MA contents reduces the hydrophilic character of PA6.
Scanning electron microscopy (SEM)
SEM images of PA6 and the PA6/EPDM-MA blends are presented in figure 11; in figure 11(a) the PA6 surface with the regular appearance of ductile fracture is seen. The SEM images of the PA6/EPDM-MA blends reveal the typical morphology of immiscible blends with a separated phase. It is worth mentioning that complete miscibility between PA6 and EPDM-MA is not desirable, since it would render the toughening mechanisms unfeasible. Increased addition of EPDM-MA produced distinct surfaces, as shown in figures 11(b)-(f). In the blends with 5 and 7.5% EPDM-MA, a more homogeneous and smooth-looking morphology was verified compared to the EPDM-MA-richer compounds (10; 12.5 and 15%). This finding corroborates the impact data, where the blends with 5 and 7.5% EPDM-MA presented the lower impact performance. Figures 11(b), (c) show well-dispersed EPDM-MA in the PA6 matrix, forming very small domains, represented by the solvent-extracted voids. The dispersed EPDM-MA domains range from 0.1 to 2 μm, which is considered ideal to reach good impact properties [45,46]. The PA6/EPDM-MA compounds show an efficient morphology, since well-dispersed EPDM-MA and small particles are obtained due to the decreased interfacial tension between phases. The observed morphology of the blends containing 5 and 7.5% EPDM-MA corroborates the impact strength results, which were higher than those of neat PA6. The blends with 10; 12.5 and 15% EPDM-MA clearly presented evidence of a more ductile fracture mechanism, with intense plastic deformation, as shown in figures 11(d)-(f). Voids from the solvent-extracted EPDM-MA particles are verified, surrounded by a highly deformed structure, suggesting strong interaction between PA6 and EPDM-MA and providing increased mechanical properties, especially impact strength, tensile strength and elongation at break. Apparently, the blend with 10% EPDM-MA showed the morphology with the highest plastic deformation, indicating it to be the most toughened blend, with an 850% gain in impact strength relative to PA6, evidencing the greatest synergism.
At the same time, the morphological aspect of the blend with 10% EPDM-MA reinforces the hypothesis that at this concentration the properties are optimized and the morphology is stabilized; above 10%, a saturated dispersed phase exists. In addition, when adding 10% EPDM-MA, an increase in the compatibility of the PA6/EPDM-MA blend is reached, which is reflected in improved impact strength, ductility and HDT. When 12.5% EPDM-MA was added, a reduction of plastic deformation was observed on the fractured surface relative to the 10% EPDM-MA blend; a recovery of plastic deformation is visualized for 15% compared to 12.5% EPDM-MA. However, this recovery is of lower extent than in the 10% EPDM-MA blend. The plastic deformation level directly influences the impact and elongation at break improvements, since these properties increased proportionally, as shown in figures 4 and 6, respectively. From the morphological analyses, it is suggested that greater interactions take place in PA6/EPDM-MA with 10% EPDM-MA, defining it as the ideal content.
Increased elongation at break and impact strength are related to toughening processes such as microfibrillation and microflow under shear [57]. However, in multiphase systems such as PA6/EPDM-MA, these mechanisms can act simultaneously as complex interactions. Indeed, the high impact performance and generated morphology of PA6/EPDM-MA with 10; 12.5 and 15% EPDM-MA suggest a combination of these mechanisms. In these cases, shear bands represent barriers to microfibrillation propagation and catastrophic crack initiation, with a consequent reduction of the microfissure propagation rate [58], therefore maximizing the synergistic effect and yielding the high impact seen in figure 4.
Conclusions
In this work, the effects of a maleic anhydride grafted ethylene propylene diene reactive copolymer (EPDM-MA) on the torque rheometry, mechanical, thermal and thermo-mechanical properties and the morphology of PA6/EPDM-MA blends were investigated in detail. The PA6/EPDM-MA blends can be considered toughened due to favourable molecular interactions between the phase components. The main results indicated that, although EPDM-MA addition provided a slight reduction in tensile strength and elastic modulus, significant increases in elongation at break and impact strength were reached. At the same time, no significant reduction was verified in thermo-mechanical stability. EPDM-MA addition provided an increased contact angle, minimizing the hygroscopic character of PA6. Although EPDM-MA is an amorphous copolymer, it did not drastically change PA6's degree of crystallinity. The EPDM-MA content had a great influence on the morphology, especially at 10%, since it induced a fracture surface with a high degree of plastic deformation, in this case favouring a high-performance blend under impact, indicating that PA6/EPDM-MA (10%) has great technological potential and commercial viability. | 2020-01-23T09:07:03.091Z | 2020-01-22T00:00:00.000 | {
"year": 2019,
"sha1": "7f068b472bcda3b9898cc5f4a732a1942a35ac1a",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1088/2053-1591/ab6e62",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "8aa4dfa4158a87e8f885c843d04482158c32dfa7",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
} |
262199089 | pes2o/s2orc | v3-fos-license | A Comparison of Machine Learning Algorithms for Wi-Fi Sensing Using CSI Data
In today’s digital era, our lives are deeply intertwined with advancements in digital electronics and Radio Frequency (RF) communications. From cell phones to laptops, and from Wireless Fidelity (Wi-Fi) to Radio Frequency IDentification (RFID) technology, we rely on a range of electronic devices for everyday tasks. As technology continues to evolve, it presents innovative ways to harness existing resources more efficiently. One remarkable example of this adaptability is the utilization of Wi-Fi networks for Wi-Fi sensing. With Wi-Fi sensing, we can repurpose existing networking devices not only for connectivity but also for essential functions like motion detection for security systems, human motion tracking, fall detection, personal identification, and gesture recognition using Machine Learning (ML) techniques. Integrating Wi-Fi signals into sensing applications expands their potential across various domains. At the Gamgee, we are actively researching the utilization of Wi-Fi signals for Wi-Fi sensing, aiming to provide our clients with more valuable services alongside connectivity and control. This paper presents an orchestration of baseline experiments, analyzing a variety of machine learning algorithms to identify the most suitable one for Wi-Fi-based motion detection. We use a publicly available Wi-Fi dataset based on Channel State Information (CSI) for benchmarking and conduct a comprehensive comparison of different machine learning techniques in the classification domain. We evaluate nine distinct ML techniques, encompassing both shallow learning (SL) and deep learning (DL) methods, to determine the most effective approach for motion detection using Wi-Fi router CSI data. Our assessment involves six performance metrics to gauge the effectiveness of each machine learning technique.
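Classifier comparisons of the kind described above are scored with standard metrics derived from confusion counts (treating "motion" as the positive class). A minimal sketch of four common ones; the specific six metrics used in the paper are not restated here, and the function name is ours:

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall and F1 from confusion-matrix counts, with
    'motion detected' taken as the positive class (illustrative only)."""
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}
```

Computing all metrics from the same confusion counts keeps the comparison between SL and DL models consistent regardless of which library produced the predictions.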
Introduction
Wi-Fi is not the only means of motion sensing: motion on any premises can also be detected using Passive Infra-Red (PIR) sensors, vision sensors, ultrasound sensors, and other RF-based sensors such as RFID-based sensors. Alternatively, a lot of research has been carried out on improved and efficient solutions using low-energy communication devices like Bluetooth Low Energy (BLE) devices. The authors in [1] presented a scalable and non-intrusive method to detect occupancy in building zones, utilizing BLE technology in smartphones. Signal strength data collected by BLE beacons were processed through machine learning models to determine occupants' locations within zones. Both supervised ensemble and semi-supervised clustering models were assessed, with the latter showing efficient performance. The Singapore case study showcased up to 86% accuracy in locating occupants. Furthermore, this study identified distinct occupancy profiles based on movement patterns, offering insights for building management. The method's scalability suggested broader practicality. The downside of this approach is its dependence on the occupants carrying cell phones; moreover, the BLE radio of the cell phone needs to be active for the correct operation of this method. Furthermore, in circumstances where a person is equipped with more than one BLE-enabled device, such as a cell phone, smart watch, or earphone, the occupancy estimated by the proposed method can end up with inaccuracies in the results.
As mentioned above, numerous ways of motion detection with high accuracies using various dedicated hardware technologies have been presented by researchers globally, but the goal of the research presented in this paper is to achieve motion detection using only the pre-existing Wi-Fi devices on customers' premises with maximum precision, with minimal to no dependence on other hardware such as cell phones with BLE, and without introducing new hardware overhead for customers, keeping customers' privacy intact, as no image or video data are recorded or processed when applying Wi-Fi sensing. Specialized hardware, such as motion detection sensors with higher sensitivity, can detect motion more precisely, and Bluetooth low-energy devices can detect premises occupancy more efficiently with lower power consumption. Similarly, active and passive RFID tags can be used to identify a certain device and/or the person with whom it has been associated. The actual percentages of precision achieved using different hardware and software strategies are highlighted in the literature review section. On a quick note, the use of BLE can give a precision of up to 86% [1] in premises occupancy estimation; passive processing of Wi-Fi CSI data using AI has given 97% precision for motion detection; and a precision of 95% was achieved using Ultra Wide Band (UWB) technology for Human Activity Recognition (HAR). The use of these specialized hardware technologies for different goals has a higher impact on the overall cost for the service provider as well as for the consumer. Therefore, the research work presented in this paper focuses on achieving Wi-Fi sensing using only the pre-existing Wi-Fi router on consumers' premises, with improvements and add-ons to the router firmware and to customers' mobile apps to support features using Wi-Fi sensing.
The Wi-Fi routers in our daily lives are mainly used for internet and intranet connectivity, mostly for communication, entertainment, and data exchange. Channel state information (CSI) and Received Signal Strength Indicator (RSSI) statistics are utilized mainly to analyze Wi-Fi channel conditions and adjust router configuration as necessary. Because people inside a Wi-Fi router's range also distort the radio waves used for Wi-Fi communication between devices, examination of the distortion parameters allows one to infer information about nearby activity without using vision or PIR sensing devices. The Wi-Fi signal being transferred between Wi-Fi networking devices in the target premises is the main focus of Wi-Fi sensing. To detect distortions in the samples produced by movements in the target premises, we use the CSI of the Wi-Fi signal.
The RSSI is an indicator of signal intensity. Although it has been actively used for active localization based on the Wi-Fi fingerprinting technique, or as a metric for passive tracking of mobile devices, it is in fact quite unstable and varies from vendor to vendor. Additionally, it cannot accurately capture signal changes caused by human movements, especially if a person is not directly on the path between the transmitter and the receiver. The CSI approach offers more precise information on the state of the channel. At each subcarrier frequency, it monitors the amplitude and phase distortions of the wireless signals for each antenna pair of the transmitter and receiver. As a result, CSI variations in the time domain exhibit distinct patterns for different people, activities, etc., which can be used for Wi-Fi sensing in intruder alarm systems, gesture recognition, and healthcare applications, particularly fall detection.
Using orthogonal frequency division multiplexing (OFDM) and Multiple-Input Multiple-Output (MIMO) technology, the CSI records the wireless signal amplitude and phase information for each pair of transmit-receive antennas on each OFDM subcarrier. The 2.4 GHz band can be treated as a narrowband flat-fading channel, as in 802.11n, and represented by the following straightforward equation:

Y = HX + N

Here, X and Y stand for the transmitter's and receiver's respective signal vectors, N for the Gaussian noise vector that is always present in the RF channel, and H for the channel matrix. Two TP-Link Archer C7 routers have been utilized in the experimental Wi-Fi sensing configuration, one as a transmitter and the other as a receiver. There are three antennas on both the CSI transmitter and receiver routers. As a result, our wireless communication system is 3 × 3 MIMO, and the CSI data are divided into nine streams corresponding to the nine pairs of transmitter-receiver links. For a 20 MHz device, each stream's n reported subcarriers are distributed evenly among the channel's 56 usable subcarriers. The CSI data group matrix therefore has nine rows and n columns, which leads to the (9 × n) data groups derived from each received CSI packet.
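As an illustration of the dimensions involved, the narrowband model above can be sketched numerically; the channel values below are random placeholders, not measured CSI:

```python
import numpy as np

# Hypothetical sketch of the narrowband MIMO-OFDM model Y = HX + N for a
# 3x3 link with 56 usable 20 MHz subcarriers (values are illustrative).
rng = np.random.default_rng(0)

n_tx, n_rx, n_sub = 3, 3, 56
H = rng.normal(size=(n_sub, n_rx, n_tx)) + 1j * rng.normal(size=(n_sub, n_rx, n_tx))
X = np.ones((n_sub, n_tx))            # unit pilot symbols on every subcarrier
N = 0.01 * (rng.normal(size=(n_sub, n_rx)) + 1j * rng.normal(size=(n_sub, n_rx)))

# Received symbols per subcarrier: Y_k = H_k @ X_k + N_k
Y = np.einsum("krt,kt->kr", H, X) + N

# One CSI packet then yields 9 spatial streams x 56 subcarriers, as in the text
csi = H.reshape(n_sub, n_rx * n_tx).T   # shape (9, 56)
print(csi.shape)
```

The reshape at the end mirrors the (9 × n) data-group matrix described above, with n = 56 when every subcarrier is reported.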
As CSI reflects noise produced by indoor environments, the CSI data packets received at the receiver are quite noisy. Additionally, internal state transitions in the wireless signal transmitter and receiver devices are brought on by changes in transmission power, transmission rate adaptation, and internal reference level changes. Low-pass filters are therefore used to denoise the CSI data before the first data processing operations, such as data fusion employing correlation to reduce redundancy without losing important information, are applied. Once the data processing is complete, CSI data can be utilized for motion detection by exploiting amplitude and/or phase variance. It can also be used to train machine learning algorithms for motion detection, gesture recognition, and personal identification, among other things.
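As a minimal sketch of the denoising step, a simple moving-average low-pass filter, one of many possible choices, can smooth a single CSI amplitude stream; the window length and the synthetic signal below are illustrative assumptions, not the paper's calibrated settings:

```python
import numpy as np

# Moving-average low-pass filter for one CSI amplitude stream (sketch).
def lowpass_moving_average(amplitudes, window=5):
    """Smooth a 1-D CSI amplitude series; output has the same length."""
    kernel = np.ones(window) / window
    return np.convolve(amplitudes, kernel, mode="same")

rng = np.random.default_rng(1)
clean = np.sin(np.linspace(0, 4 * np.pi, 200))        # slow motion-induced trend
noisy = clean + 0.3 * rng.normal(size=clean.size)     # bursty indoor noise
smoothed = lowpass_moving_average(noisy, window=9)

# The filtered series should track the clean trend more closely than the raw one.
print(np.mean((smoothed - clean) ** 2) < np.mean((noisy - clean) ** 2))
```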
In the orchestration process, we considered several machine learning (ML) algorithms and applied both shallow learning (SL) and deep learning (DL) algorithms to the collected CSI data. We employed classification-based ML algorithms for operations like motion detection and clustering-based ML algorithms for unique personal identification. The SL algorithms considered include SVM, naïve Bayes, decision tree, K-nearest neighbors, and K-means, and the DL algorithms considered include recurrent neural networks (RNN), convolutional neural networks (CNN), and deep neural networks (DNN). These algorithms were trained and validated using a harmonized set of samples from publicly available Wi-Fi CSI datasets from CRAWDAD and CSI datasets from IEEE DataPort. The performance metrics obtained with each ML algorithm were analyzed to compare their performance and efficiency and to select the most suitable algorithm for Wi-Fi sensing using the CSI data captured from TP-Link Archer C7 routers in the target premises.
The novelty and main contributions of this paper, which distinguish this work from pre-existing research, are as follows:
• Selection of Machine Learning Algorithms: Machine learning algorithms are selected for a comparison of performance when Wi-Fi sensing data, i.e., CSI data, is presented to them. The selected set of machine learning algorithms, drawn from both shallow learning (SL) and deep learning (DL), is chosen based on the type of solutions they provide. The research presented in this paper focuses only on those machine learning algorithms that address the classification problem, i.e., classifying the data to decide whether motion has been detected or not.
• Training and Evaluation of ML Models: All nine ML models, i.e., six SL and three DL models, are trained using the Wi-Fi CSI dataset from the IEEE DataPort repository, which contains a labeled dataset for humanoid motion detection. Hence, all training carried out on these models was supervised machine learning. As mentioned earlier, the IEEE dataset for Wi-Fi sensing, i.e., the Widar dataset, was initially used for benchmarking and comparing the performance of the ML models and was then replaced by a locally captured CSI-based Wi-Fi dataset. In the locally captured dataset, two classes have been used: one clean, with no motion at all, and another with a person walking. The Widar dataset provides 35 k training samples and 9 k testing samples; the locally captured CSI data provide 8 k training samples and 2 k testing samples.
• MADM for Ranking ML Algorithms: Finally, after carrying out the performance analysis of each machine learning algorithm, a multi-attribute decision-making (MADM) algorithm is employed to systematically rank the algorithms. MADM is introduced because there are multiple performance metrics for each ML algorithm, which makes it very hard to select the most appropriate one directly. MADM suits this problem well, as there are nine different ML algorithms, each with six different performance attributes.
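To illustrate the MADM step, the following sketch ranks a handful of models with simple additive weighting; the model names, metric values, benefit/cost split, and weights are made-up placeholders, not the measured results reported in this paper:

```python
import numpy as np

# Simple additive weighting (one common MADM scheme) over a decision matrix.
models = ["NB", "SVM", "DT", "KNN", "RNN"]
# columns: accuracy, precision, recall, F1 (benefit); train time, inference time (cost)
scores = np.array([
    [0.90, 0.89, 0.91, 0.90,  1.0, 0.2],
    [0.95, 0.94, 0.95, 0.94,  5.0, 0.3],
    [0.92, 0.91, 0.92, 0.91,  2.0, 0.1],
    [0.93, 0.92, 0.94, 0.93,  0.5, 1.5],
    [0.96, 0.95, 0.96, 0.95, 60.0, 0.8],
])
benefit = np.array([True, True, True, True, False, False])
weights = np.array([0.3, 0.15, 0.15, 0.2, 0.1, 0.1])

# Min-max normalise each attribute, flipping cost attributes so higher is better.
lo, hi = scores.min(axis=0), scores.max(axis=0)
norm = (scores - lo) / (hi - lo)
norm[:, ~benefit] = 1.0 - norm[:, ~benefit]

ranking = np.argsort(norm @ weights)[::-1]   # best model first
print([models[i] for i in ranking])
```

With these placeholder numbers, the accuracy-heavy weighting puts the (slow but accurate) RNN first; changing the weights shifts the ranking, which is exactly the trade-off MADM makes explicit.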
The structure of this paper is as follows: after the brief introduction presented in this section, Section 2 presents the literature review, Section 3 presents the proposed work and methods, Section 4 presents the results, Section 5 the discussion, and Section 6 the conclusion, followed by the references.
Literature Review
A vast number of researchers globally have investigated Wi-Fi sensing using CSI, RSSI, and other methods, mostly focusing on the CSI and RSSI approaches. The following paragraphs shed light on different research projects carried out globally to perform Wi-Fi sensing.
Different sensing technologies are used to examine diverse human actions and gestures to perform human activity recognition efficiently. These technologies include sensors for motion detection [2], sensors for vision-based detection [3], sensors for sound-based sensing [4], and pyroelectric infrared light-based sensors [5]. To measure body motions using motion sensor technology, people typically need to wear specialized devices, which is not always practical. Approaches using cameras and other vision-based devices or sensors function effectively only in specific lighting conditions and can be easily obstructed by smoke, opaque objects, or low illumination. Additionally, because acoustic signals attenuate quickly, acoustic-based techniques are unstable in the presence of background noise and outside sound interference, and their sensing range is constrained. Overall, traditional approaches require more effort due to complex hardware installation and a variety of maintenance requirements. A low-cost, non-intrusive approach to recording human body motions associated with daily activities is desired to overcome these restrictions. Recently, an increasing amount of research has focused on radio frequency (RF)-based approaches for human activity detection, such as Wi-Fi. Nearly every electronic device in homes and offices, including smart speakers (like the Amazon Echo and Apple HomePod), smart TVs, smart thermostats, and home security systems, may now be connected wirelessly thanks to the widespread use of Wi-Fi technology. Indoor spaces typically allow Wi-Fi signals to spread out over tens of meters, and the wireless connections between these smart gadgets create a dense web of reflected signal paths that reaches every corner and narrow place. People's presence and associated body motion have a significant impact on wireless signals, leading to significant variations in the amplitude and phase of received signals. These changes can be used to record human body movements associated with daily activities.
The research work presented in [6] aimed to tackle indoor occupancy estimation challenges using a combination of Bluetooth Low Energy (BLE) technology and machine learning. They developed a prototype system that comprises BLE beacons, a mobile application, and a remote server. By employing three distinct machine learning methods, they classified occupancy based on the data collected from these beacons. Their experimentation demonstrated the effectiveness of this approach in accurately estimating occupancy. The server handles data processing and training, eliminating the need for complex operations on the mobile application.
The authors in [7] presented "Plug-Mate", an IoT-based plug load management system that optimizes energy use and user comfort via intelligent automation, leveraging high-resolution occupancy data, advanced plug load recognition, and personalized controls. In a 5-month university office study, six strategies were evaluated, with the most successful achieving 51.7% energy savings across plug load types, a 7.5% reduction in building energy use, and high user satisfaction.
The paper [8] addressed energy consumption in commercial buildings, focusing on heating, ventilation, and air conditioning (HVAC) systems. It introduced "Sentinel", a system that utilizes existing Wi-Fi infrastructure and occupants' smartphones for precise HVAC control based on occupancy. Unlike traditional sensor-based solutions, Sentinel reduces deployment costs. It achieved 86% accurate occupancy detection within office spaces, with minimal errors attributed to smartphone power management. In a real-world test, Sentinel controlled 23% of HVAC zones, resulting in a prominent 17.8% energy savings compared to static scheduling.
Research in [9] employed diverse sensor data for predicting occupancy in various room types. A new feature selection algorithm was introduced, surpassing the common approach by enhancing model performance with fewer sensors. Outcomes revealed that indoor CO2 levels and Wi-Fi-connected devices are pivotal in predicting occupancy across offices, libraries, and lecture rooms. Optimal model performance was attained using distinct deep learning architectures for each room type. The algorithm's usability was extended to other datasets, providing insights to curtail sensor needs and deployment expenses in building management.
In [10], a robust Wi-Fi-based passive sensing technique named CNN-ABLSTM was introduced, combining CNN and attention-based bi-directional LSTM to address challenges like low sensing accuracy and high computational complexity. By utilizing CSI for Wi-Fi passive sensing, it achieves precise human activity recognition. CNN extracts features, reducing redundancy, while the attention mechanism improves model robustness. Simulation results show that CNN-ABLSTM improves recognition accuracy by up to 4%, reduces computation significantly, and maintains 97% accuracy across different scenarios and objects. Compared to traditional approaches, this DL-based method outperforms them, making it promising for advanced wireless communication systems.
Also, the increasing elderly population and the strain on healthcare services due to the COVID-19 pandemic have led to a demand for technological solutions in elderly homes. Research [11] introduced a real-time, noninvasive sensing system that utilized radio frequency (RF) sensing and channel state information (CSI) reports to monitor activities of daily living (ADLs). Machine learning, specifically the random forest algorithm, was employed to accurately classify ADL categories like "movement", "empty room", and "no activity", achieving 100% accuracy on new testing data. The system detected movement using Wi-Fi signals without the need for wearables, and disruptions in CSI data indicate the presence of a person. This proposed real-time monitoring system enhances elderly care.
Another study [12] focused on ambient computing and used Wi-Fi channel state information (CSI) as a non-contact method for recognizing human activities indoors. LSTM outperformed CNN, and hybrid models achieved 95.3% accuracy in multi-activity classification. The research shows that RF sensing for indoor human activity recognition is feasible and offers privacy-friendly alternatives to vision-based systems. The study also suggested further investigation into the system's resilience in diverse environments and its ability to recognize activities for multiple users. Overall, LSTM-based RF sensing proves effective for indoor activity recognition and holds significant potential in various applications.
A research paper [13] presented a sign language recognition system based on deep learning and Wi-Fi CSI data. The proposed model utilized CNN, LSTM, and ABLSTM with different optimizers and preprocessing methods. It achieved impressive recognition accuracies of 99.855%, 99.674%, 99.735%, and 93.84% in various environments and multi-user scenarios. The study demonstrated the effectiveness of using Wi-Fi signals for gesture recognition, surpassing other deep learning approaches. Additionally, the researchers suggested considering transfer learning with models like ResNet for future improvements.
Another study [14] explored device-free human activity recognition (HAR) using Wi-Fi CSI signals. Two algorithms, SVM and LSTM, are proposed for classification, with SVM employing wavelet analysis for preprocessing and feature extraction, while LSTM processes raw data directly. The research achieved high accuracy in detecting various human activities, including falls and counting individuals in a room.
A similar survey [15] investigated device-free human gesture recognition using Wi-Fi channel state information (CSI). It categorized recognition into device-based and device-free sensing methods and highlighted advancements in Wi-Fi CSI. The study examined model-based and learning-based approaches, discussing their recognition performance and signal processing techniques. Deep learning methods showed promise with large datasets, while model-based approaches performed well with a single participant. Challenges included handling non-Gaussian signal distributions and capturing fine-grained information.
Another article [16] presented EfficientFi, a new wireless sensing framework for large-scale Wi-Fi applications in smart homes. By overcoming existing limitations, EfficientFi used quantized representation learning with joint recognition, enabling efficient compression of Wi-Fi CSI data at the edge and accurate sensing tasks. It achieved remarkable data compression and high accuracy in human activity recognition and identification. Compared to classic methods, EfficientFi outperformed in compressive sensing and deep compression, demonstrating its potential for IoT-cloud-enabled Wi-Fi sensing applications.
The study in [17] also focused on human activity recognition (HAR) using ultra-wideband (UWB) technology and Wi-Fi CSI. Through experiments, the UWB CIR data achieved a remarkable F1-score of 95.53% in activity classification. In comparison, Wi-Fi CSI data achieved F1-scores of 92.24% and 80.89% with denoised amplitude values and spectrograms, respectively, for the same activities. The research highlighted UWB's superiority over Wi-Fi for HAR, offering advantages like a smaller data dimension and lower signal processing requirements. UWB technology proved valuable not only for localization/tracking but also for device-free HAR.
Researchers in [18] focused on a contactless respiration detection system using Wi-Fi CSI. The ResFi system achieved a remarkable 96.05% accuracy in detecting human respiration, outperforming traditional machine learning methods. The study emphasized the potential of learning-based approaches for non-contact vital sign detection.
A similar study [19] concentrated on detecting human presence in rooms without the need for devices, using Wi-Fi CSI data. The proposed approach employed the dynamic time warping (DTW) algorithm to compare empty and occupied rooms, achieving accuracy comparable to existing methods. Experimental results demonstrated a 99.21% accuracy, comparable to the 99.98% accuracy obtained with the RF algorithm.
RSSI and CSI, which are readily available on many commercial network interface cards with modified driver software, allow researchers to measure the physical layer parameters of the wireless channel and carry out motion detection using Wi-Fi signals. Wi-Fi signals can also be modified to transmit wireless signals on a radio platform defined by a universal software radio peripheral (USRP), such as a frequency modulated carrier wave (FMCW), to determine the frequency shift of the signal brought on by human motion in the target premises [20]. Table 1 presents the overall comparison between different strategies in the literature, considering three different attributes: the methodology considered, the application of the methodology, and the key findings of the corresponding strategies.
The research methods reviewed above target different domains, i.e., premises occupancy estimation, smart energy management, HVAC, HAR, respiration detection, and motion detection, using a variety of approaches with different hardware and software assistance. The research work performed so far in the literature has mostly applied analytical, artificial intelligence (AI), or machine learning (ML)-based methods, with some support from theoretical arguments. The lack of comparison between different ML methods, particularly between shallow learning (SL) and deep learning (DL) models for motion detection using a Wi-Fi-CSI-based dataset, has been identified and is explored in the research work presented in this paper.
Furthermore, the selection of the most efficient ML algorithm has been carried out using the systematic approach of a multi-attribute decision-making algorithm, which has not been seen in the literature. The work presented in this paper contributes to the validation of the process for selecting the best ML techniques for motion detection using Wi-Fi sensing. It also explores the behavior of various ML algorithms, i.e., SL and DL, when a CSI-based dataset is presented to them for training and testing.

Table 1. Comparison of Wi-Fi sensing strategies in the literature.

Authors | Methodology | Application | Key Findings
Taylor et al. [11] | Real-Time Activity Sensing | Activity Sensing | Identification of optimal machine learning techniques.
Khan et al. [12] | Flexible SDR | Human Activity Detection | Contactless human activity detection using deep learning.
Bastwesy et al. [13] | Wi-Fi CSI | Sign Language Recognition | Deep learning for sign language recognition.
Damodaran et al. [14] | Wi-Fi CSI | Activity and Fall Recognition | Device-free human activity and fall detection.
Ahmed et al. [15] | Wi-Fi CSI | Gesture Recognition | Survey of device-free gesture recognition.
Yang et al. [16] | Efficient Wi-Fi Sensing | Wi-Fi Sensing | Large-scale lightweight Wi-Fi sensing via CSI compression.

All the research work done so far has focused solely on the methods, tuning, and utilization of machine learning (ML) techniques to achieve the goal of Wi-Fi sensing to detect humanoid motion in the coverage area of the Wi-Fi network. In this article, a broader aspect of Wi-Fi sensing has been addressed, which is to analyze a set of machine learning algorithms to find out which ML methods are more suitable for the problem of Wi-Fi sensing when using CSI data for training and detecting motion in the Wi-Fi coverage area. For this purpose, a number of shallow learning (SL) and deep learning (DL) algorithms were selected based on their characteristics, such as suitability for tabular data and classification capabilities, to suit our requirements for motion detection using Wi-Fi CSI data.
Wi-Fi Sensing Techniques
Various types of techniques have been explored by researchers globally when employing Wi-Fi sensing for motion detection purposes. Here we classify these techniques based on the hardware deployed for Wi-Fi sensing, i.e., using commercial off-the-shelf (COTS) hardware such as the Wi-Fi routers used at home for Wi-Fi access, or using customized hardware such as software-defined radios, e.g., USRP, FPGA boards, etc.

RSSI: RSSI data is available in most Wi-Fi devices. It indicates the path loss of wireless signals with respect to a certain distance and can be derived following the log-normal distance path loss (LDPL) model.
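The LDPL relation behind RSSI can be written as RSSI(d) = RSSI(d0) - 10 n log10(d/d0); the following sketch uses an assumed 1 m reference level and a path loss exponent typical of indoor spaces (both illustrative values, not measurements from this work):

```python
import math

# Log-normal distance path loss (LDPL) model behind RSSI, mean value only
# (the log-normal shadowing term is omitted in this sketch).
def expected_rssi(distance_m, rssi_d0=-40.0, exponent=3.0, d0=1.0):
    """Mean RSSI in dBm at a given distance under the LDPL model."""
    return rssi_d0 - 10.0 * exponent * math.log10(distance_m / d0)

print(expected_rssi(1.0))   # reference distance: -40.0 dBm
print(expected_rssi(10.0))  # one decade farther: -70.0 dBm
```

The 30 dB drop per decade of distance (with exponent 3) is one reason RSSI is coarse: vendor offsets and shadowing easily swamp the small changes caused by human motion.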
CSI: To detect human activity with accuracy and dependability, Wi-Fi signal data are used. In order to accurately reflect the combined effect of, for instance, scattering, fading, and power decay with distance, more fine-grained CSI must be captured. Since wireless signals in an indoor setting can practically reach any corner, the presence or movement of a human body affects wireless signal propagation, leading to minute variations in numerous reflected rays. All of these multi-path rays create the measurable CSI values, which can be utilized to identify and monitor human body movements. In contrast to RSSI, CSI is a set of complex values for several orthogonal frequency-division multiplexing (OFDM) subcarriers that include both amplitude and phase information. The effects of multi-path fading vary for every subcarrier owing to the slight differences in center frequency, and all the subcarriers collectively represent the wireless channel in a fine-grained way. With customized drivers, any device with a commercial Wi-Fi interface can measure CSI, just like RSSI. Researchers now use it often to accomplish tasks including human intrusion detection, walking speed/direction estimation, and human activity recognition [21,22].
Customized Hardware-Based Wi-Fi Sensing Techniques
Similar to the COTS device-based Wi-Fi sensing techniques, two main approaches to customized hardware-based Wi-Fi sensing are described in this article. These are the frequency modulated carrier wave (FMCW) and Doppler shift methods.
FMCW technique: FMCW measures human motion based on radio reflections from the human body, particularly by calculating the time needed for the Wi-Fi signal to travel from the transmitter to the reflecting body and back to the receiver. Given that wireless transmissions move at the speed of light, determining the time of flight of the Wi-Fi signal is not a simple operation. To calculate the radio signal's time of flight, FMCW maps the difference in time to a carrier frequency shift. It is crucial to remember that FMCW technology relies on specialized equipment (such as a USRP) to generate a signal that sweeps the frequency across time, in contrast to conventional Wi-Fi, which employs OFDM. The authors of references [23-26] have shown how to perform motion detection using FMCW for a variety of uses.

Doppler shift technique: Another physical layer characteristic of wireless transmissions that can be utilized to detect human activity is the Doppler shift effect. It specifically monitors the frequency shift in the received Wi-Fi signal as the transmitting and receiving devices change positions in close proximity to one another. If the wireless signal reflected from the human body is regarded as a signal sent out by the body itself, any movement of the body causes a Doppler shift. In particular, moving towards the receiver causes a positive frequency change (a positive Doppler shift), while moving away from the receiver causes a negative frequency change. The authors of the cited publications [27-31] have proposed work utilizing Doppler shift effects with software-defined radio (SDR) for the recognition of human movements such as walking and running.
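For a rough sense of scale, the Doppler shift of a reflection off a body moving directly toward a co-located transmitter/receiver pair is f_d = 2 v f_c / c; the walking speed below is an illustrative assumption:

```python
# Back-of-envelope Doppler shift for a moving reflector, assuming motion
# directly toward a co-located transmitter/receiver (geometry is idealized).
C = 3e8  # speed of light, m/s

def doppler_shift_hz(speed_mps, carrier_hz):
    return 2.0 * speed_mps * carrier_hz / C

# A person walking at ~1.5 m/s through a 2.4 GHz Wi-Fi field
shift = doppler_shift_hz(1.5, 2.4e9)
print(shift)  # 24.0 Hz
```

Shifts of tens of Hz against a multi-GHz carrier explain why frequency-swept or SDR-based hardware is needed to resolve them, rather than stock OFDM Wi-Fi receivers.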
Proposed Work
We harnessed classification-based ML algorithms for motion detection. Our roster of SL algorithms encompassed SVM, naïve Bayes, decision tree, K-nearest neighbors, and K-means. In the realm of DL algorithms, we delved into recurrent neural networks (RNN), convolutional neural networks (CNN), and deep neural networks (DNN). These algorithms underwent rigorous training and validation using a harmonized dataset sourced from publicly available Wi-Fi CSI datasets. To evaluate their effectiveness, we scrutinized performance metrics derived from each ML algorithm's results. This comprehensive analysis allowed us to gauge the efficiency of each ML algorithm and identify the most suitable candidate for Wi-Fi sensing with CSI data sourced from TP-Link Archer C7 routers within the designated premises. This paper's primary contributions and differentiating factors from existing research are as follows:
• ML algorithm selection: We meticulously selected a diverse set of ML algorithms tailored to our specific classification tasks.
• Training and Evaluation of ML Models: Our models underwent rigorous training and evaluation processes to ensure their reliability and effectiveness.
• Systematic Model Ranking: We introduced a systematic approach for ranking the considered ML models based on statistical assessments of performance metrics, thereby enhancing decision-making in selecting the most efficient ML model.
Experimentation Setup
In this work, an indoor motion detection testbed has been configured with two TP-Link Archer C7 Wi-Fi routers, each flashed with a CSI-enabled OpenWRT image. The routers are placed in such a way that any movements between the routers and within their range can be captured with the help of CSI data from the Wi-Fi signals received on the receiver router. One router acts as the access point and the other as the client and CSI receiver, i.e., the recvCSI program runs on the receiver and the sendData program runs on the sender router. Motion is detected with the help of the deviation in the CSI data received at the receiver router. Figure 1 depicts the general context considered for the experiments in Wi-Fi sensing. It shows our experimental setup, where two TP-Link Archer C7 routers are placed in a room with some furniture and a person is moving from one point to another. In the locally generated dataset, samples captured with no occupancy in the room are labeled "no movement", and samples captured when a person is present in the room with continuous movements are labeled "movement". These data have been used in the training, testing, and validation of the ML techniques.
Experimentation Procedure
Motion detection is carried out using the difference in the CSI data whenever the user moves in the target environment. The difference is analyzed from the perspective of the magnitude of the signal variance caused by the direction of human movement. CSI data are captured from the target environment both for training and for testing the efficiency of the machine learning algorithms. The machine learning models were trained using the benchmarking dataset, i.e., the Wi-Fi sensing data from the IEEE DataPort portal [32], plus the locally captured dataset, and then validated using data samples that were never used for training. The Wi-Fi CSI dataset from the IEEE DataPort repository, which contains a labeled dataset for humanoid motion detection, is used to train the machine learning models. Thus, supervised machine learning was used for all of the training done on these models. As previously noted, the Widar dataset from the IEEE for Wi-Fi sensing was initially utilized for benchmarking and comparing the performance of the ML models before being replaced by the locally collected CSI-based Wi-Fi dataset. Two classes were employed in the locally collected dataset: one clean, with no motion at all, and the other with a human walking. The Widar dataset used contains 35 k training samples and 9 k testing samples. Eight thousand training samples and two thousand testing samples were used from the locally collected CSI dataset, which was then used for experimentation. The machine learning models selected for comparison are naive Bayes, support vector machine, decision tree, linear regression, K-nearest neighbor, ensemble, convolutional neural network, recurrent neural network, and deep neural network.
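The variance-based detection idea can be sketched minimally as follows; the window length, threshold, and synthetic amplitude streams are illustrative choices, not the calibrated values used in the experiments:

```python
import numpy as np

# Variance-based motion flagging over non-overlapping CSI amplitude windows.
def detect_motion(amplitudes, window=50, threshold=0.05):
    """Return one motion/no-motion flag per non-overlapping window."""
    n = len(amplitudes) // window
    windows = np.asarray(amplitudes[: n * window]).reshape(n, window)
    return windows.var(axis=1) > threshold

rng = np.random.default_rng(2)
still = 0.01 * rng.normal(size=500)                                  # empty-room noise floor
moving = np.sin(np.linspace(0, 20 * np.pi, 500)) + 0.01 * rng.normal(size=500)
flags = detect_motion(np.concatenate([still, moving]))
print(flags)  # first ten windows False, last ten True
```

In practice the same idea runs per subcarrier stream, and the ML models below replace the fixed threshold with a learned decision boundary.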
Target Machine Learning Algorithms
A short description of each of the considered ML algorithms is given in the following subsections.
Naïve Bayes
Naive Bayes [33-38] is a probabilistic classification algorithm that has been adapted here for Wi-Fi sensing using CSI datasets for motion detection. By treating CSI measurements as features and motion/no-motion as classes, naive Bayes has been utilized to estimate the conditional probabilities of motion given the CSI values. Despite its "naive" assumption of feature independence, naive Bayes can perform well for motion detection, as it works effectively with high-dimensional data like CSI. It is particularly suitable for real-time applications due to its computational efficiency and ability to handle continuous features.
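A from-scratch Gaussian naive Bayes sketch on synthetic two-class features standing in for CSI amplitudes (the data and its class separation are illustrative assumptions, not real CSI):

```python
import numpy as np

# Gaussian naive Bayes: per-class feature means/variances plus class priors.
def fit_gnb(X, y):
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (Xc.mean(axis=0), Xc.var(axis=0) + 1e-9, len(Xc) / len(X))
    return params

def predict_gnb(params, X):
    scores = []
    for c, (mu, var, prior) in params.items():
        # log of the Gaussian likelihood, summed over (assumed independent) features
        log_lik = -0.5 * (np.log(2 * np.pi * var) + (X - mu) ** 2 / var).sum(axis=1)
        scores.append(log_lik + np.log(prior))
    classes = list(params)
    return np.array([classes[i] for i in np.argmax(scores, axis=0)])

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(3, 1, (50, 4))])
y = np.array([0] * 50 + [1] * 50)  # 0 = no motion, 1 = motion
model = fit_gnb(X, y)
print((predict_gnb(model, X) == y).mean())  # near-perfect on separable toy data
```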
Support Vector Machine (SVM)
SVM [39-42] is a powerful classification algorithm that has been employed here for Wi-Fi sensing with the CSI dataset for motion detection. SVM seeks to find a hyperplane that best separates instances of different classes in the feature space. In this context, SVM has been trained to classify instances based on the patterns and variations in CSI data that correspond to motion. By selecting an appropriate kernel function, SVM can effectively capture complex relationships within the dataset, aiding accurate motion detection from CSI information.
Decision Tree
Decision trees [43,44] are versatile machine learning models that classify instances based on a sequence of hierarchical decisions. In the context of Wi-Fi sensing, a decision tree has been trained using CSI data to determine the presence or absence of motion. Each decision node represents a specific feature threshold, such as a change in signal strength or a frequency shift, and the resulting branches lead to the final classification. Decision trees are interpretable and can capture non-linear relationships, making them suitable for motion detection tasks.
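A depth-1 tree (a "stump") conveys the idea of threshold-based splitting; real decision trees recurse on each branch, but this sketch on synthetic features (only the second feature is informative, by construction) shows how a single feature/threshold split is chosen:

```python
import numpy as np

# Exhaustively pick the single (feature, threshold, direction) split that
# minimises training misclassification: the root node of a decision tree.
def fit_stump(X, y):
    best = None
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f]):
            pred = (X[:, f] > t).astype(int)
            for cand in (pred, 1 - pred):          # try both label directions
                err = (cand != y).mean()
                if best is None or err < best[0]:
                    best = (err, f, t)
    return best

rng = np.random.default_rng(4)
X = np.column_stack([rng.normal(0, 1, 100),                    # noise feature
                     np.r_[rng.normal(0, 1, 50), rng.normal(4, 1, 50)]])  # informative
y = np.array([0] * 50 + [1] * 50)
err, feature, threshold = fit_stump(X, y)
print(feature, round(err, 2))  # splits on the informative second feature
```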
Linear Regression
While linear regression [45] is traditionally used for regression tasks, it can also be applied in a binary classification setup for motion detection, which is our target problem in Wi-Fi sensing. By modeling the relationship between CSI features and the likelihood of motion, linear regression provides a continuous output that represents the degree of motion; by setting a threshold on the predicted values, instances have been classified as motion or non-motion. However, linear regression might not capture complex patterns in the CSI data as effectively as the other methods mentioned here.
K-Nearest Neighbor (KNN)
KNN [46,47] is a simple yet effective algorithm for classification tasks. It operates by assigning a class label to an instance based on the majority class of its k-nearest neighbors in the feature space. For our problem of Wi-Fi sensing with CSI data, KNN determines whether a new instance corresponds to motion based on the similarity of its CSI values to those of previously observed instances. KNN can handle non-linear relationships and is robust to noise, making it a viable option for motion detection tasks.
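The majority-vote rule just described is easy to state in code. The sketch below uses two synthetic 2-D clusters standing in for CSI-derived features; the data, k value, and Euclidean metric are illustrative choices, not the paper's configuration:

```python
# Toy k-NN sketch: classify by majority vote among the k closest
# training points (Euclidean distance). Synthetic 2-D features.
from collections import Counter
import math

def knn_predict(train, labels, query, k=3):
    """Majority label among the k nearest neighbours of `query`."""
    dists = sorted((math.dist(p, query), lab) for p, lab in zip(train, labels))
    votes = Counter(lab for _, lab in dists[:k])
    return votes.most_common(1)[0][0]

train = [(0.1, 0.1), (0.2, 0.0), (0.0, 0.2),   # "no motion" cluster
         (4.0, 4.1), (4.2, 3.9), (3.9, 4.0)]   # "motion" cluster
labels = [0, 0, 0, 1, 1, 1]
print(knn_predict(train, labels, (0.15, 0.1)))  # → 0
print(knn_predict(train, labels, (4.05, 4.0)))  # → 1
```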
Ensemble Methods
Ensemble methods [48,49], such as random forest and gradient boosting, combine the strengths of multiple models to improve overall classification accuracy. For Wi-Fi sensing, these methods can integrate information from various CSI features to enhance the motion detection process. Random forest creates multiple decision trees and aggregates their outputs, while gradient boosting builds trees sequentially, focusing on instances that were misclassified by previous trees. These techniques can effectively capture complex patterns and variations in CSI data. However, the complexity of implementation and the high computational requirements make ensemble methods a less attractive option here.
Convolutional Neural Network (CNN)
CNNs [50,51] are a class of deep learning models designed to capture spatial patterns in data, particularly images. In the context of Wi-Fi sensing, the CSI data has been treated as a "sequence" of signal strength values. By using 1D convolutions, CNNs learned to extract relevant features from these sequences for motion detection. This approach is effective when dealing with patterns that evolve over time, allowing the network to identify motion-related changes in the CSI dataset.
Recurrent Neural Network (RNN)
RNNs [52,53] are specialized for sequences and time-series data. The long short-term memory (LSTM) variant of the RNN has been employed for Wi-Fi sensing by treating the CSI dataset as a sequence of values collected over time. RNNs can learn to capture temporal dependencies and patterns in the data, making them well-suited for detecting motion. The LSTM and gated recurrent unit (GRU) variants of RNNs are often used to mitigate the vanishing gradient problem and capture longer-term dependencies, but in our comparison of ML techniques, only the LSTM has been considered due to the complexity and processing overhead of the GRU technique.
Deep Neural Network (DNN)
The fully connected deep neural network (DNN) [54] architecture has been applied to Wi-Fi sensing by directly processing CSI features to classify instances as motion or non-motion. DNNs are capable of learning intricate relationships within the data, especially where a larger amount of labeled data is available for training; a large amount of labeled training data also helps prevent overfitting in the case of the DNN. Using appropriate activation functions, regularization techniques, and optimization algorithms, DNNs can effectively handle motion detection tasks using CSI data.
In summary, each of these machine learning techniques has its strengths and limitations when applied to Wi-Fi sensing with the CSI dataset for motion detection. The choice of technique depends on the complexity of the patterns present in the CSI data, the amount of labeled data available, and the desired trade-off between interpretability and predictive performance. Experimentation and thorough evaluation are crucial to determining the most suitable approach for a specific motion detection application. This is the central goal of our research in this article: to train, validate, and compare the selected machine learning techniques, which are designed primarily to perform classification operations efficiently. Nine different ML techniques were presented with the Wi-Fi CSI dataset, and six performance metrics, i.e., accuracy, precision, F1-score, true positive rate (TPR), true negative rate (TNR), and false positive rate (FPR), were observed. This situation raises another issue: the effective comparison and systematic selection of the most suitable ML technique, considering six different attributes. The multi-attribute decision-making (MADM) technique has been employed to solve this problem. Here, accuracy, precision, F1-score, TPR, and TNR are positive attributes, while FPR is a negative attribute. The weights assigned to these performance parameters are as follows: accuracy is given the highest weight, as it is the most important performance parameter; FPR is the second most important parameter, since more false positive occurrences lead the model to higher inaccuracies; precision comes next, followed by the F1-score and then the TPR; and the TNR, although a positive parameter, is the least important attribute in the list. The following section analyzes the results obtained using each of the considered ML techniques when employed on the same Wi-Fi CSI dataset for motion detection.
Results Analysis
This section presents the results obtained for motion detection using Wi-Fi sensing when the set of different machine learning models was exposed to the dataset. Analyzing the performance of ML models for Wi-Fi sensing typically involves a combination of standard metrics [55,56] and evaluation methods, including the confusion matrix and its derived metrics, the receiver operating characteristic (ROC) curve, cross-validation, etc. For the performance comparison in this paper, the following set of performance metrics, derived from the confusion matrix, has been considered: accuracy, false positive rate, precision, F1-score, true positive rate, and true negative rate. Each of these performance metrics has been compared when the same testing data are applied to the trained machine learning models.
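The six confusion-matrix-derived metrics used throughout the comparison can be computed directly from the four cell counts. The counts in the sketch below are illustrative placeholders, not results from the paper:

```python
# Confusion-matrix-derived metrics: accuracy, precision, F1,
# TPR (recall/sensitivity), TNR (specificity), and FPR.

def metrics(tp, fp, tn, fn):
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    tpr = tp / (tp + fn)   # true positive rate (recall)
    tnr = tn / (tn + fp)   # true negative rate (specificity)
    fpr = fp / (fp + tn)   # false positive rate
    f1 = 2 * precision * tpr / (precision + tpr)
    return {"accuracy": accuracy, "precision": precision, "f1": f1,
            "tpr": tpr, "tnr": tnr, "fpr": fpr}

# Illustrative counts for a binary motion / no-motion classifier.
m = metrics(tp=90, fp=5, tn=95, fn=10)
print({k: round(v, 3) for k, v in m.items()})
```

Note that FPR = 1 − TNR by construction, which is why FPR is treated as the negative (cost) attribute in the MADM step that follows.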
Figure 2, shown below, presents the accuracy values for each of the ML algorithms when they were presented with the testing segment of the dataset. It shows that the deep learning algorithms, i.e., DNN, RNN, and CNN, perform distinctly better than the shallow learning algorithms.
Figure 3, shown below, presents the precision results for each ML algorithm when presented with the testing segment of the dataset for motion detection. The precision values of the deep learning models outperform those of shallow learning, except for the RNN, which shows very low precision values. The true positive rate values for each of the ML algorithms are depicted in Figure 4 below; it shows that the DNN outperforms not only the other deep learning algorithms but also all the shallow learning models. Once all the performance metrics have been recorded for all the target ML algorithms, there comes another challenge: comparing the performance metrics of each ML algorithm to see which is the optimal choice among the considered algorithms for motion detection using Wi-Fi CSI data. This is a multi-dimensional and multi-criteria problem that can be best resolved using a multiple-attribute decision-making (MADM) algorithm. Once all the performance metric data from all the ML algorithms have been recorded, a score is assigned to each ML algorithm using the MADM algorithm to see which one performs best considering all the performance metrics at once.
The MADM [57,58] has been applied to evaluate and rank the different machine learning algorithms based on their performance across various criteria (attributes). In this case, the decision matrix consists of rows representing the machine learning algorithms and columns representing the performance metrics (attributes): accuracy, FPR (false positive rate), precision, F1-score, TPR (true positive rate), and TNR (true negative rate). The goal of the MADM analysis is to rank these machine learning algorithms based on their overall performance across these attributes. The result of the MADM analysis is presented in the "Scores" column, and the algorithms are ranked based on these scores. The general steps for applying MADM to these statistics are as follows:
-Define the Decision Problem: Determine the best-performing machine learning algorithm among the given options based on multiple performance attributes.
-Choose a MADM Method: Common methods include the technique for order of preference by similarity to the ideal solution (TOPSIS), the analytic hierarchy process (AHP), and the weighted sum model, among others. In our case, we selected the weighted sum method and assigned weights to the attributes, with accuracy the highest and true negative rate the lowest.
-Ranking or Scoring: Apply the weighted sum MADM method to the decision matrix to calculate an overall score or ranking for each algorithm. This score reflects the algorithm's performance across all criteria, considering their weights.
-Result in Column 8: The "Scores" column (column 8) contains the results of the MADM analysis. Each algorithm is assigned a score based on its overall performance.
-Ranking: The algorithms are then ranked based on their scores in descending order. The algorithm with the highest score is typically considered the best-performing one.
In Table 2, the algorithms have been ranked based on their scores in the "Scores" column, from the highest score (rank 1) to the lowest score (rank 9). Table 2 clearly shows that the deep learning algorithms have collectively outperformed all the shallow learning algorithms when the MADM algorithm is applied to rank the best-performing ML algorithms.
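The weighted-sum scoring just outlined can be sketched as follows. The weights and the metric values in the decision matrix below are invented placeholders (the paper's actual weights are not stated numerically); FPR is the only cost ("negative") attribute, so its normalized value is inverted before scoring:

```python
# Hedged sketch of weighted-sum MADM ranking. Each attribute is
# min-max normalized per column; negative attributes are inverted;
# the score is the weighted sum of the normalized values.

def weighted_sum_scores(matrix, weights, negative):
    """Return algorithm names ranked by weighted-sum score, best first."""
    cols = list(zip(*matrix.values()))
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]
    scores = {}
    for name, row in matrix.items():
        s = 0.0
        for j, v in enumerate(row):
            norm = (v - lo[j]) / (hi[j] - lo[j]) if hi[j] > lo[j] else 1.0
            if j in negative:          # cost attribute: lower is better
                norm = 1.0 - norm
            s += weights[j] * norm
        scores[name] = s
    return sorted(scores, key=scores.get, reverse=True)

# Attributes in order: accuracy, FPR, precision, F1, TPR, TNR.
# Weights follow the stated importance order; values are made up.
weights = [0.30, 0.25, 0.15, 0.12, 0.10, 0.08]
matrix = {
    "DNN":         [0.998, 0.01, 0.99, 0.99, 0.99, 0.99],
    "SVM":         [0.930, 0.08, 0.92, 0.92, 0.93, 0.92],
    "naive Bayes": [0.880, 0.15, 0.85, 0.86, 0.88, 0.85],
}
print(weighted_sum_scores(matrix, weights, negative={1}))
# → ['DNN', 'SVM', 'naive Bayes']
```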
Discussion
The current study represents a significant advancement in the field of Wi-Fi sensing for motion detection, particularly within the context of Gamgee BV in the Netherlands. The primary objective of this research was to explore the integration of ML techniques into Wi-Fi sensing technology for improved motion detection. This milestone was achieved through the careful consideration and evaluation of a diverse set of ML algorithms, encompassing both SL and DL approaches. The utilization of publicly available datasets, including CSI datasets from the IEEE DataPort and locally captured datasets, was integral to benchmarking the performance of the various ML models. These datasets served as valuable resources for training, validating, and testing the developed ML algorithms. Our focus was specifically directed towards classification-based ML algorithms tailored for motion detection. The array of algorithms assessed in the study encompassed a range of SL and DL models. Among the SL algorithms, the SVM, naïve Bayes, decision tree, K-NN, and K-means algorithms were systematically evaluated. Additionally, we delved into the realm of DL algorithms, considering the RNN, CNN, and DNN. Through meticulous performance analysis, we compared the efficiency of each algorithm, eventually leading to the identification of the most suitable ML algorithms for motion detection via Wi-Fi sensing using CSI data captured from TP-Link Archer C7 routers deployed within the target premises. Our findings underscored the superiority of DL algorithms, specifically the DNN and RNN, in scenarios where larger datasets were utilized for training and validation. These DL models exhibited remarkable performance gains when exposed to extensive datasets, outperforming their SL counterparts by a significant margin. This outcome emphasizes the potential of DL techniques to enhance the accuracy and efficacy of motion detection via Wi-Fi sensing.
While this study represents a substantial leap forward in identifying the most suitable ML technique for motion detection using the Wi-Fi CSI dataset and in the integration of ML with Wi-Fi sensing, certain limitations warrant consideration. First, the effectiveness of the selected ML algorithms might be influenced by variations in environmental conditions, potentially impacting the consistency of motion detection results. Additionally, the generalization of the trained models to different premises and contexts remains an aspect that requires validation. These limitations can, of course, be tackled with countermeasures such as the deployment of a sufficient number of Wi-Fi devices in the target premises, which will eventually also improve the performance of Wi-Fi connectivity at the same time. The dataset selection, although carefully considered, might not encompass the full spectrum of real-world scenarios, leading to potential biases in the developed models. Furthermore, the computational resources required for DL algorithms can be substantial, posing challenges for real-time implementation in resource-constrained environments. The solution to these limitations can be a more comprehensive dataset for training the ML models and the use of networking devices, such as routers with higher specifications, to handle the higher computational requirements, particularly in the case of ML models.
To contextualize our findings and highlight their relevance in the broader research landscape, it is essential to draw parallels with existing studies. Recent research in the field of Wi-Fi sensing and related domains has showcased a similar trend favoring deep learning approaches. Prominent works by Yongsen et al. [59] and Atzeni et al. [60] have reported remarkable success in employing deep neural networks for Wi-Fi-based applications. These studies have emphasized the ability of deep learning models to extract intricate patterns and representations from CSI data, leading to enhanced accuracy and reliability in Wi-Fi sensing tasks. Among the DL algorithms, the DNN excelled with a remarkable accuracy of 0.9976 (99.76%). This performance surpasses the recent work in [61], establishing the DNN as the leading choice for Wi-Fi sensing applications. The results in [61] show maximum accuracies of 99.38% for DL models such as RNN and CNN in different versions, with much lower accuracies for SL algorithms such as naive Bayes, SVM, and KNN, a trend similar to the results shown in this paper for SL techniques. In [62], the authors obtained a maximum accuracy of 98.2% using a DL technique for crowd estimation on CSI data obtained from Wi-Fi. Though the goal of that work was crowd estimation, Wi-Fi CSI data was utilized with ML techniques to achieve it. The accuracy achieved there is close to that presented in our research work, which still surpasses it by a margin of 1.56%.
In comparison to these contemporary research outcomes, our study corroborates the growing consensus that DL, particularly DNN and RNN architectures, represents a potent tool for Wi-Fi sensing applications. The exceptional accuracy and efficiency demonstrated by these DL algorithms in our experimentation underscore their viability in real-world scenarios, where robust Wi-Fi sensing is essential for diverse applications such as indoor localization, occupancy detection, and smart home automation. In conclusion, our study not only contributes valuable insights into the selection of suitable algorithms for Wi-Fi sensing using CSI data but also aligns with and reinforces the findings of recent research in the field. The superior performance of the DL algorithms, as highlighted in our results, positions them as promising candidates for addressing the evolving challenges and opportunities in Wi-Fi sensing applications.
The future trajectory of this research is marked by several compelling avenues. Expanding our focus on localization holds great promise, as the ability to precisely identify the location of detected motion could significantly enhance security and monitoring applications. The automation of model learning within the target premises is a critical step towards achieving seamless and adaptable motion detection systems. The integration of Wi-Fi sensing with home automation and healthcare represents a paradigm shift with immense potential. Exploring the feasibility of leveraging Wi-Fi CSI data and AI for enhanced automation, ambient intelligence, and personalized healthcare interventions is an exciting direction for future investigation. In conclusion, the current work not only sets a foundation for ML-driven Wi-Fi sensing but also opens doors to a plethora of innovative applications. The journey from motion detection to localization, automation, and healthcare integration underscores the dynamic and transformative nature of this research trajectory.
Conclusions
The work performed for this article was the first milestone in introducing ML into Wi-Fi sensing for motion detection at Gamgee BV in the Netherlands. We have considered several ML algorithms, comprising both shallow learning and deep learning algorithms. The publicly available datasets, i.e., CSI datasets from the IEEE DataPort, and locally captured datasets have been utilized for benchmarking the ML models before applying the testing segment of the datasets to find the most suitable ML algorithms for motion detection using Wi-Fi sensing. We employed classification-based ML algorithms for operations like motion detection, which is part of the research work presented in this article, and clustering-based ML algorithms for unique personal identification in other subsequent research work being carried out at Gamgee BV. The list of SL algorithms considered includes the SVM, naïve Bayes, decision tree, K-nearest neighbors, and K-means algorithms, and the list of DL algorithms considered includes the recurrent neural network (RNN), convolutional neural network (CNN), and deep neural network (DNN). The performance metrics were analyzed from a set of results obtained using each of the ML algorithms considered, in order to compare the performance and efficiency of each algorithm and select the most suitable one for Wi-Fi sensing using the CSI data captured from TP-Link Archer C7 routers in the target premises. Our results showed that the DL algorithms, i.e., DNN and RNN, performed much better than the SL algorithms when larger datasets were exposed to the ML models for training and validation purposes. Our research has already been extended to include localization, to identify the exact zone where motion was detected, and automation of model learning in the target premises. The research work will be further extended to include home automation and healthcare applications using Wi-Fi CSI data and artificial intelligence (AI)-augmented Wi-Fi sensing.
2.1.1. The COTS Hardware-Based Wi-Fi Sensing Techniques
Techniques using COTS routers involve the use of the received signal strength indicator (RSSI) and channel state information (CSI).
Figure 2. Rate of accuracies for different ML algorithms.
Figure 3. Precision rate for all ML algorithms.
Figure 4. True positive rate for all ML algorithms.
Table 1. Comparison of approaches in the literature.
Table 2. MADM scoring on ML algorithms' performance scores.
"year": 2023,
"sha1": "d3dc1d0d0b5b39b1d2d490777b1c8512502bbac3",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2079-9292/12/18/3935/pdf?version=1695031911",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "dda2a4c4ccccfa755dcac376249563367196fd44",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
} |
The Nondipole Part of the Main Geomagnetic Field and the Large Scale Topographical Heterogeneities of the Core-Mantle Boundary
Based on satellite data, both large- and small-scale flows were identified on the core surface. At the same time, the observed rapid decrease in the dipole component of the main geomagnetic field does not find a satisfactory explanation. In previous papers we carried out comparative studies of the structure of the lowest mantle against the motion paths of the small current loops approximating the MGF's small-scale anomalies. The hypothesis was stated that heterogeneities in the lowest mantle structure and the topographic irregularities of the core-mantle boundary associated with ancient subduction zones are among the factors primarily responsible for the formation of small-scale vortices and the local MGF variation. In this paper, we carry out the analogous comparison for the large-scale sources approximating the nondipole part of the geomagnetic field.
Introduction
By now, great advances have been achieved in modeling the processes occurring in the Earth's liquid core [1]. Based on satellite data, both large- and small-scale flows were identified on its surface [2,3]. At the same time, the observed rapid decrease in the dipole component of the main geomagnetic field (MGF), which is attended by the development of its nondipole part, does not find a satisfactory explanation. In [4,5] we carried out comparative studies of the structure of the lowest mantle against the motion paths of the small current loops approximating the MGF's small-scale anomalies. The hypothesis was posited in [4,5] that heterogeneities in the lowest mantle structure and the topographic irregularities of the core-mantle boundary associated with ancient subduction zones are among the factors primarily responsible for the small-scale vortex formation and the local MGF variation. In this paper, we carry out the analogous comparison for the large-scale sources approximating the nondipole part of the geomagnetic field. As in previous works, model SAW642AN [7] was used as a model of the structure of the lowest mantle. This model allows us to construct the distribution of enhanced and reduced propagation velocities of seismic waves, relative to the average values for a given depth, over the entire mantle, including the core-mantle boundary (Fig. 2). We used this mantle slice to compare the seismic wave non-uniformities with the positions and orientations of the cones obtained by us from magnetic data. The interpretation of our results is based on the idea, not disputed among experts in seismic tomography, that regions of higher and lower seismic wave velocities correspond to higher and lower densities of the mantle matter.
Method.
The most powerful anomalies of the MGF's nondipole part, marked in Fig. 1, were approximated by the fields of volume current systems (VCS) whose geometry is a hollow, thin-walled truncated cone. The magnetic field generated by such a VCS was considered by us in [8]. The height of the cones was chosen in the range of 500-1000 km in accordance with the results of [9]. The difference between the radii of the upper and lower bases was 100 km. The other parameters (the volumetric current density δV, the base radius, the center location coordinates, and the spatial orientation angles) were determined separately by solving the inverse problem for each VCS. First, the inverse problem was solved for a small region in the vicinity of the maximum of the Z component, and the obtained parameters were then refined during the iterative process. The method is described in detail in [9]. According to the results of [9], to correctly separate the systematic (dipole) and nondipole parts of the MGF, a seventh VCS has to be included in the model. The height of the corresponding truncated cone was 4200 km. The remaining parameters were obtained in the same way as for all the others. The location and orientation of this VCS is not discussed in this paper, since the analysis of the systematic component is beyond the scope of the task.
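The paper does not reproduce the field formula for the truncated-cone VCS. As a much simpler, hedged building block, the sketch below numerically sums Biot-Savart contributions for a single circular current loop on its axis and checks the result against the textbook on-axis expression B_z = μ0 I R² / (2 (R² + z²)^(3/2)); the current, radius, and distance values are purely illustrative:

```python
# Illustrative cross-check (not the authors' inverse-problem code):
# the on-axis field of one circular current loop by segment summation,
# compared with the analytic result.
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, T*m/A

def loop_bz_numeric(I, R, z, n=2000):
    """z-component of B on the loop axis, summing Biot-Savart segments."""
    bz = 0.0
    dtheta = 2 * math.pi / n
    r3 = (R * R + z * z) ** 1.5
    for _ in range(n):
        # On the axis, (dl x r)_z = (R * dtheta) * R for every segment.
        bz += MU0 * I / (4 * math.pi) * (R * dtheta * R) / r3
    return bz

def loop_bz_analytic(I, R, z):
    return MU0 * I * R * R / (2 * (R * R + z * z) ** 1.5)

num = loop_bz_numeric(1.0, 1.0, 0.5)
ana = loop_bz_analytic(1.0, 1.0, 0.5)
print(abs(num - ana) < 1e-12)  # → True
```

A forward model of this kind, evaluated off-axis and integrated over the cone wall, is the sort of ingredient an inverse-problem fit of the VCS parameters would repeatedly call.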
Results.
The positions and orientations in space of the cone-shaped VCSs were determined in the course of solving the inverse problem. The resultant magnetic field of the seven VCSs governs the spatial structure of the MGF at a distance of 10000 km from the Earth's center. It should be noted that, despite the significant distance from the region of possible sources, seven VCSs turned out to be insufficient for a complete description of the MGF at that distance. The remainder of the approximation is also not discussed here. In the framework of this work, we limited ourselves to considering only the world anomalies. The location and orientation of the obtained VCSs relative to the heterogeneities of the structure of the lowest mantle (core-mantle boundary) are shown in Figs. 3-8. For all figures, the point of view is assumed to be located inside the sphere corresponding to the core-mantle boundary. The spatial distribution of the seismic wave velocity heterogeneities δV is calculated according to the SAW642AN model at the core-mantle boundary and represented by the color on the spherical segment. The rotational displacement of this segment was chosen for the best visual representation of the relative position of the VCS and the heterogeneities of the core-mantle boundary. The XYZ axes correspond to the central coordinate system. For a better understanding of how the spherical segment is oriented in space, arrows are added to the figures showing the direction of the westward drift of the liquid outer core. One can readily see that the location of every VCS is characterized by the presence of a high-speed mantle anomaly to its west. These anomalies can shape the core-mantle topography and obstruct the free westward drift at the edge of the liquid core, and this brings about the inception and buildup of the nondipole part. The VCSs shown in Figs. 3 and 4 are practically "surrounded" by mantle anomalies of different intensities.
These anomalies, forming topographic hills, can restrict the inflow of liquid-core material to the VCS, so that its magnetic moment decreases, as is observed.
Fig. 3
Location and orientation of the cone obtained for the first anomaly (Fig. 1) relative to the mantle structure at the core-mantle boundary. The VCS is shown in green; δV (m/s) is shown by the color map.
Fig. 4
Location and orientation of the cone obtained for the fifth anomaly (Fig. 1) relative to the mantle structure at the core-mantle boundary. The legend is the same as in Fig. 3.
As for the 2nd anomaly, shown in Fig. 5, one can see that the mantle heterogeneities are mainly located west of the VCS. This VCS is characterized by a growing but the smallest magnetic moment (MM). The corresponding MGF anomaly was actually described only at the very end of the 20th century [9]. Its growing MM is connected with its position relative to the mantle heterogeneities: as shown in Fig. 5, these heterogeneities are placed west of the VCS, which allows an additional inflow of liquid core material.
The 3rd and the 6th VCS were found to have the most powerful MMs. Their locations are shown in Figs. 6 and 7, respectively. The mantle heterogeneities are also located west of these VCSs. Thus, the westward-drifting core material can result in the growth of the VCS.
Fig. 5
Location and orientation of the cone obtained for the second anomaly (Fig. 1) relative to the mantle structure at the core-mantle boundary. The legend is the same as in Fig. 3.
Fig. 6
Location and orientation of the cone obtained for the third anomaly (Fig. 1) relative to the mantle structure at the core-mantle boundary. The legend is the same as in Fig. 3.
Fig. 7 Location and orientation of the cone obtained for the sixth anomaly (Fig. 1) relative to the mantle structure at the core-mantle boundary. The legend is the same as in Fig. 3.
The VCS corresponding to the fourth anomaly (Fig. 8) deserves separate consideration. In [10], where the approximation was carried out with point sources, it was found that the magnetic moment vector of the source obtained for this anomaly turned south-westward over a continuous period of 100 years. This is conceivably due to its displacement by the topographic mantle heterogeneities.
Fig. 8
Location and orientation of the cone obtained for the fourth anomaly (Fig. 1) relative to the mantle structure at the core-mantle boundary. The legend is the same as in Fig. 3.
Discussion.
The main result obtained in this paper concerns the association between the structural heterogeneities of the lowest mantle and the origination of the world anomalies of the geomagnetic field and their secular variation. In fact, both problems are the subject of discussion. The presence of high-speed anomalies in the lowest mantle is confirmed by all modern global mantle models [11] and can be traced from the subduction zones on the Earth's surface to the core-mantle boundary (CMB). The subducted part of a lithospheric plate (slab) is colder and hence heavier relative to the surrounding mantle, which results in the so-called slab pull. Finally, the slab remains can penetrate into the lowest mantle. At larger scales, seismic tomography has become the primary tool for imaging subducting slabs in the lowest mantle. The geometry and behavior of slabs vary not only among different subduction zones but also within a subduction zone [12]. The ancient dense lithospheric slabs are concentrated into large agglomerations at the CMB. The CMB topography is poorly determined through seismology. Garcia & Souriau [14] found CMB topography of less than 4 km for waves with wavelengths longer than 300 km. Sze & van der Hilst [15] found topography amplitudes of up to 5 km, reaching up to 13 km. But there is an overall trend towards assumed maximum amplitudes of around 1.5 km at long wavelengths [13], so the authors of [15] decided that this topography is unusually high and reduced the amplitudes to 3 km. The authors of [16] have derived a model of CMB topography from mantle dynamics. According to the authors, this model should be useful at least at long wavelengths of several thousand kilometers, with an 8 km amplitude. On the other hand, the authors expect that the thermal boundary layer at the base of the mantle is on average around 300 km thick.
In [17] it was obtained that the seismic wave anomalies in the uppermost 300 km of the outer core cannot be of thermal origin and should primarily reflect compositional heterogeneity.
Based on the above results, we assume that the denser slabs can partially penetrate below the CMB, generating topographical "hills" on the CMB. This geometry impedes the free differential rotation of the liquid core substance, separating some part of it from the main generation cylinder and forming additional current structures. The comparative study carried out in this paper does not allow us to estimate the geometric dimensions of the "hills"; here we are only speaking of their effect on the secular variations.
The thermal core-mantle coupling also affects the dynamics of the outer core. Cold slabs at the base of the mantle are expected to increase the local heat flow. Several studies have sought to interpret core flows obtained from the secular variation of the MGF in terms of regional variations in heat flow [18,19,20]. Numerical simulations reveal a tendency to lock the pattern of convection to the pattern of time-dependent heat flow at the CMB. Similar conclusions are drawn from numerical geodynamo models [10,21,22]. Non-homogeneous boundary conditions in geodynamo models yield persistent structure in the time-averaged flow [21,22,23], although there can be substantial variation about the average. A solution to the full dynamo equations, with lateral variations in heat flux on the outer boundary defined by the shear wave velocity of the lowermost mantle, is presented in [24]. The assumption was that cold regions in the lower mantle could cause preferential cooling of the core, downwelling, and concentration of radial magnetic flux at the core surface. As a result, four main equatorially symmetric flux lobes were obtained in the magnetic field at the CMB. The authors strongly suggest that the geomagnetic field morphology is dominated not only by geometry related to the inner core but also by the seismically fast structure in the bottom few hundred kilometers. Let us note that the authors took into account only thermal variations as having a dominant influence and did not consider compositional variations. In addition, a very averaged model of the velocity heterogeneity near the CMB was used in the simulations. Therefore, we are not looking for absolute matches between our results and those obtained in geodynamo models. Nevertheless, they do not contradict each other.
Conclusion
The results obtained in this paper implicitly support our hypothesis that the topography of the CMB has a more complex relief than is commonly assumed. Cold regions of the lowermost mantle associated with higher-density slabs can penetrate into the liquid core, forming topographic "mountains". These heterogeneities prevent the free differential rotation of the liquid core relative to the mantle, which leads to a partial outflow of liquid core material from the main generation cylinder and to the formation and development of new eddies of different scales. These two processes lead, on the one hand, to a decrease in the systematic component of the MGF and, on the other hand, to an increase in its non-dipole part. The formation of such "mountains" at the core-mantle boundary is possible due to processes that took place many millions of years ago, when significant volumes of the Earth's crust were absorbed in ancient subduction zones [10]. Due to the different speeds of tectonic processes in
Differential Responses to Wnt and PCP Disruption Predict Expression and Developmental Function of Conserved and Novel Genes in a Cnidarian
We have used Digital Gene Expression analysis to identify, without bilaterian bias, regulators of cnidarian embryonic patterning. Transcriptome comparison between unmanipulated Clytia early gastrula embryos and ones in which the key polarity regulator Wnt3 was inhibited using morpholino antisense oligonucleotides (Wnt3-MO) identified a set of significantly over- and under-expressed transcripts. These code for candidate Wnt signaling modulators, orthologs of other transcription factors, secreted and transmembrane proteins known as developmental regulators in bilaterian models or previously uncharacterized, and also many cnidarian-restricted proteins. Comparisons between embryos injected with morpholinos targeting Wnt3 and its receptor Fz1 defined four transcript classes showing remarkable correlation with spatiotemporal expression profiles. Class 1 and 3 transcripts tended to show sustained expression at the "oral" and "aboral" poles, respectively, of the developing planula larva, and class 2 transcripts in cells ingressing into the endodermal region during gastrulation, while class 4 gene expression was repressed at the early gastrula stage. The preferential effect of Fz1-MO on expression of class 2 and 4 transcripts can be attributed to Planar Cell Polarity (PCP) disruption, since it was closely matched by morpholino knockdown of the specific PCP protein Strabismus. We conclude that endoderm and post-gastrula gene expression is particularly sensitive to PCP disruption, while Wnt/β-catenin signaling dominates gene regulation along the oral-aboral axis. Phenotype analysis using morpholinos targeting a subset of transcripts indicated developmental roles consistent with expression profiles for both conserved and cnidarian-restricted genes.
Overall our unbiased screen allowed systematic identification of regionally expressed genes and provided functional support for a shared eumetazoan developmental regulatory gene set with both predicted and previously unexplored members, but also demonstrated that fundamental developmental processes including axial patterning and endoderm formation in cnidarians can involve newly evolved (or highly diverged) genes.
Introduction
A major challenge in biology is to understand how the current extraordinary diversity of animal forms has been generated during evolution. Specific goals are to determine which genes were employed to regulate developmental processes in the earliest multicellular animals, and how this set of regulators was expanded during the evolution of different animal branches by diversification of existing gene families or by the acquisition of new genes. To address these questions requires identification and functional analysis of developmental regulatory genes in species from right across the animal kingdom, covering not only the ''bilaterian'' (protostome plus deuterostome) branch including the classic experimental models such as mouse, zebrafish, Drosophila and Caenorhabditis, but also non-bilaterian phyla such as cnidarians, ctenophores and sponges, which have evolved many distinct forms and body plans.
To gain a fresh perspective on the gene repertoires that regulate metazoan development, we employed a systematic unbiased comparative transcriptomics approach to identify potential regulators of embryonic patterning at gastrula stage in the cnidarian experimental model Clytia hemisphaerica [32]. Clytia is a typical hydrozoan species that includes a jellyfish form as well as a polyp form in its life cycle, unlike anthozoan cnidarians such as the popular sea anemone model Nematostella vectensis. After gastrulation, a torpedo-shaped ''planula'' larva is formed, whose organization shows the characteristic cnidarian body plan: a single ''oral-aboral'' axis and two germ layers. The outer ectoderm of the Clytia planula features ciliated epitheliomuscular cells for motility, and an internal endodermal (or ''entodermal'') region including a population of interstitial stem cells (i-cells) specific to hydrozoans, which generate a variety of cell types for each germ layer [33][34][35][36]. Gastrulation proceeds by unipolar cell ingression to fill the blastocoel prior to endoderm cell epithelialization [37]. The gastrulation site derives from the egg animal pole and corresponds to the pointed oral pole of the larva, giving rise after metamorphosis to the mouth region of the polyp form [38].
Establishment of the oral pole in Clytia critically depends on Wnt/Fz signaling activity through the Wnt/β-catenin pathway. Maternally-provided transcripts for the ligand Wnt3 and the receptors Fz1 (activatory) and Fz3 (inhibitory) are pre-localized along the egg animal-vegetal axis to drive activation of this pathway on the future gastrulation site/oral side during cleavage and blastula stages [39,40]. This activation establishes distinct regional identities characterized by specific sets of transcribed genes at the oral and aboral poles of the developing embryo, including those required for cell ingression at gastrulation. Fz-PCP signaling, dependent on the conserved transmembrane protein Strabismus (Stbm), is activated in parallel along the same axis to coordinate cell polarity in the ectoderm and to guide embryo elongation [41]. Since multi-member Wnt families with early polarized embryonic expression have also been uncovered in other cnidarians [42,43], ctenophores and sponges [44][45][46][47] as well as in a range of bilaterian models [48,49], it seems highly probable that Wnt/Fz signaling regulated embryonic patterning in ancestral metazoans, specifying the primary body axes and/or presumptive germ layer regions.
To identify genes potentially involved in Clytia embryogenesis without favoring gene families identified as developmental regulators from bilaterians, we compared transcriptomes at the onset of gastrulation between normal embryos and ones strongly ''aboralized'' by Wnt3 morpholino (Wnt3-MO) injection prior to fertilization [40]. In many animals gastrulation coincides with, or closely follows, a significant stepping up of transcription from the zygotic genome, taking over from an initial phase of development predominantly dependent on maternally supplied mRNAs and proteins. By comparing transcriptomes from undisturbed and Wnt3-MO early gastrulae by Digital Gene Expression (DGE) we compiled lists of significantly over- and under-expressed genes. These included orthologs of known conserved developmental regulators but also members of unexplored metazoan conserved gene families, and in addition many sequences restricted to cnidarians. Expression profiling for an unbiased subset of these transcripts systematically revealed spatially or temporally restricted expression profiles of four types. Further transcriptome and in situ hybridization comparisons with Fz1-MO and Stbm-MO embryos revealed expression-pattern-related differences in the responses of genes to disruption of Wnt/β-catenin versus PCP. Finally, roles in developmental processes for the identified genes, both conserved and cnidarian-restricted, were supported both by their characteristic expression patterns and by correlated phenotypes obtained following morpholino injection for a subset of 8 genes.
Overall our unbiased screen allowed systematic identification of developmental genes regulated by the Wnt/β-catenin pathway and by Fz-PCP. It provided functional support for a shared eumetazoan developmental regulatory gene set with both predicted and previously unexplored members, while also showing that axial patterning and endoderm formation in cnidarians can involve taxon-restricted genes.
A systematic approach to identify cnidarian developmental genes
To identify genes regulated transcriptionally in relation to Wnt-dependent embryo patterning we compared transcriptomes from unmanipulated early gastrula stage embryos and from embryos injected prior to fertilization with a morpholino antisense oligonucleotide targeting Wnt3 [40]. Digital Gene Expression analysis (DGE) was performed using an Illumina HiSeq sequencing platform. The number of reads mapped onto a reference transcriptome data set was taken as a measure of transcript level, and the statistical significance of differences in these levels between samples was assessed using the DEGseq package ( Figure 1; see Materials and Methods for technical details).

Author Summary

The recent wave of genome sequencing from many species has revealed that most of the gene families known to regulate animal development are shared not only between humans and laboratory favorites such as mice, flies and worms, but also by evolutionarily more distant animals such as jellyfish and sponges. It is often assumed that genes inherited from a common ancestor remain largely responsible for regulating embryogenesis across these animal species, rather than more recently evolved genes. To address this issue we made an unbiased, systematic search for developmental genes in embryos of the jellyfish Clytia, selecting genes whose expression altered upon manipulation of the key regulator Wnt3, and comparing their expression in embryos specifically disrupted for Planar Cell Polarity. Identification of evolutionarily conserved and novel genes as developmental regulators was confirmed by demonstrating characteristic expression profiles for a sub-set of genes, and by gene knockdown studies. Conserved genes coded for members of many known signaling pathway and transcription factor families, as well as previously unstudied proteins. Nearly 30% of the identified genes were restricted to cnidarians (the jellyfish-sea anemone-coral group), supporting the idea that the appearance of new genes during evolution contributed significantly to generating animal diversity.
Plotting for each transcript the expression ratio between two samples against the global average expression ( Figure 1A,C) allowed visualization of sets of transcripts that showed significant differential expression, defined as ones whose levels cannot be accounted for by sampling variation according to the Random Sampling Model. We used the MATR method [50], justified by the Normal distribution of the data ( Figure 1B), to adjust the cutoff to take into account experimental noise, based on comparison of replicate samples (blue line in Figure 1A; compare with the red line delimiting the theoretical random distribution). For subsequent analyses we routinely used a corresponding ''z-score'' value as an index of significant differences between samples (see Methods).
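The MA-plot/z-score logic can be illustrated with a minimal sketch (this is not the actual DEGseq implementation, and the counts used are hypothetical): under the random-sampling null model a transcript's read proportion is the same in both libraries, so a pooled two-proportion z-statistic approximates the standardized difference, while M and A are the usual log2 fold change and mean log2 abundance.

```python
import math

def dge_z_score(k1, n1, k2, n2):
    """Two-proportion z-statistic for one transcript's read counts.

    k1, k2: reads mapped to the transcript in samples 1 and 2;
    n1, n2: total mapped reads per sample. Under the random-sampling
    null model the transcript's sampling proportion is equal in both
    libraries, so the standardized difference is approximately N(0, 1).
    """
    p1, p2 = k1 / n1, k2 / n2
    p = (k1 + k2) / (n1 + n2)          # pooled proportion under the null
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

def ma_values(k1, k2):
    """M (log2 fold change) and A (mean log2 abundance) for an MA plot."""
    m = math.log2(k1) - math.log2(k2)
    a = 0.5 * (math.log2(k1) + math.log2(k2))
    return m, a
```

A transcript would then be flagged as differentially expressed when the absolute z-statistic exceeds the chosen cutoff, with the MATR comparison of replicates used to widen that cutoff beyond the purely theoretical value.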
Comparisons between Wnt3-MO and uninjected embryo samples ( Figure 1C) identified 375 assembled transcript sequences as differentially expressed according to the ±3.3 z-score cutoff, which corresponds to a probability threshold (p-value) of 0.01 (colored dots in Figure 1C). Detailed analyses were performed for a more restricted set of 179 sequences with z-scores of less than −5 or greater than +5 (see insert in Figure 1C; list of transcripts and their characteristics in File S1). We could eliminate transcripts whose expression levels were affected non-specifically by the morpholino injection procedure by comparing the Wnt3-MO embryo differentially regulated transcripts with those identified in embryo populations generated using morpholinos targeting two other genes, Fz1 and Fz3, which activate and repress Wnt/β-catenin signaling, leading to aboralized and oralized phenotypes respectively [39]. Genes non-specifically affected by the morpholino injection procedure are expected to respond in the same way in all three experimental groups, whereas genes regulated specifically downstream of Wnt3 are expected to respond distinctly following Fz1-MO compared to Fz3-MO injection. Comparison between these groups allowed us to identify 4 sequences with high z-scores (>5) in Fz1-MO and Fz3-MO (opposite phenotypes) as well as Wnt3-MO samples (purple dots in Figure 1D,E; DGE class 5 in File S1). Two of these code for ubiquitin ligases, implicated in protein degradation, and one for a secreted cyclase, suggesting a possible association with lysis of damaged cells in injected embryos. In addition, the Fz3 transcript was itself detected at high levels in Fz3-MO embryos, probably due to the stabilizing effect of the morpholino.
An additional set of 10 transcripts was eliminated as coming from likely bacterial contaminants, because they clearly stood apart as strongly underrepresented (z-scores < −5) in both Fz3-MO and Wnt3-MO samples (and also in Fz1-MO in 9 cases) compared with uninjected controls (blue dots in Figure 1D,E). The sequences of these transcripts had no similarity with any known eukaryotic genes but instead matched bacterial genes. Contamination from bacteria may be higher in uninjected embryos due to reduced manipulation of the egg and thus more frequent retention of the jelly coat and associated contaminants.
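The two filtering rules above (injection-damage transcripts responding alike in all three MO groups, and bacterial contaminants depleted in all injected samples) amount to a simple decision rule on the three z-scores. A hedged sketch follows; the function name and the exact comparison operators are our choices, not code from the paper:

```python
def classify_artifact(z_wnt3, z_fz1, z_fz3):
    """Flag transcripts whose response is shared across morpholino groups.

    'injection'  : z > 5 in Wnt3-MO, Fz1-MO and Fz3-MO (damage-related,
                   since Fz1-MO and Fz3-MO give opposite phenotypes);
    'contaminant': z < -5 in both Wnt3-MO and Fz3-MO (bacterial reads
                   enriched in the less-manipulated uninjected controls);
    'specific'   : everything else is kept for downstream analysis.
    """
    if z_wnt3 > 5 and z_fz1 > 5 and z_fz3 > 5:
        return "injection"
    if z_wnt3 < -5 and z_fz3 < -5:
        return "contaminant"
    return "specific"
```

Applied to the 179-sequence set, this rule would remove the 4 injection-related and 10 contaminant transcripts, leaving the 166 validated sequences analyzed below.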
Conserved and novel transcripts are differentially expressed in Wnt3-MO embryos
After elimination of the 13 non-specifically affected sequences, our final validated transcriptome comprised 166 differentially expressed transcripts. 153 of these 166 had clear predicted full or partial ORFs, comprising 40 over-expressed in Wnt3-MO embryos and 114 under-expressed. Detailed analysis of these sequences (File S1) revealed conserved and novel genes.
Conserved developmental regulators. Clytia developmental regulatory genes already known to be expressed in a polarized manner were present as expected. These included the orally expressed Brachyury (Bra), Frizzled-1 (Fz1) and WntX1A in the Wnt3-MO under-expressed list, and the aborally expressed FoxQ2a, Frizzled-3 (Fz3), Hox9/14B and Sox15 in the over-expressed list [15][16][17]39,40]. Many additional Clytia orthologs of bilaterian developmental regulators were also identified (phylogenetic analyses in File S2). Amongst the transcription factors were an ortholog of the hydrozoan-duplicated T-box Brachyury gene Bra2 [11], two forkhead family proteins frequently associated with endoderm formation (FoxA, FoxC), a previously uncharacterized FoxQ2 paralog (FoxQ2c), a T-box transcription factor (Tbx: no clear orthology to vertebrate T-box genes), a member of the Poxneuro branch of the Pax family (PaxA) [10], orthologs of Six4/5 and Nematostella DMRT-E [51], the Ets transcription factor Erg and the ANTP family non-hox/parahox homeodomain protein HD02 [16]. We also identified a Myb transcription factor belonging to the HTH class and several zinc finger domain transcription factors whose metazoan orthologs have not been characterized. Signaling pathway mediators notably included not only Wnt ligands and receptors but also many potential modulators of Wnt signaling, including members of three families of secreted antagonists: Dkk (Dickkopf family), sFRP-A (a secreted frizzled-related protein) and Dan1 (Cerberus/Dan family of Wnt and TGFβ antagonists) [52]. We also identified secreted proteins that modify the extracellular environment, potentially capable of modulating ligand-receptor interactions through many pathways including Wnt as well as BMP and Hedgehog.
Of particular note in relation to Wnt signaling were three heparan sulfate proteoglycan modifiers: two lipases implicated in glypican cleavage closely related to Notum, and an endosulfatase related to vertebrate Sulf1/Sulf2 [53,54]. The Clytia Notum sequences, derived from a hydrozoan-specific gene duplication, were named NotumA and NotumO because of their aboral and oral expression territories (see below). The second main signaling pathway that emerged in this analysis was the Notch pathway, with transcripts in the Wnt3-MO embryo-upregulated list coding for the Notch ligand and for two proteins related to Botch, whose Drosophila and mouse counterparts inhibit Notch protein processing [55].
Further signaling pathway components and transcription factors with likely conserved developmental roles featured in an extended list of transcripts differentially expressed in Wnt3-MO embryos with significance at the p = 0.01 level (±3.3 z-score cutoff; list of additional sequences in File S3). These included yet more potential Wnt regulators: another Wnt ligand, WntX1A [40], the transcription factor TCF, and Naked cuticle [56,57] were under-expressed, while another sFRP in the same orthology group as sFRP-A (named sFRP-B) and MESD (which interacts with the co-receptor LRP5/6 [58,59]) were over-expressed. Components of other signaling pathways also figured in this extended list, including a TGFβ pathway ligand and cytoplasmic inhibitor (SMAD 6/7), a putative FGF receptor and a VEGF-related ligand. Clytia orthologs of the developmentally important transcription factors Goosecoïd (Gsc), Iroquois (Irx), Hox9/14B [16] and Rfx were also identified.
Other conserved metazoan genes. The developmental regulator gene orthologs listed above were known through functional studies in classic experimental model species such as Drosophila, mouse and zebrafish. Additional ancient metazoan genes conserved during evolution may also have roles in regulating development that have yet to come to light. As well as genes probably associated with the differentiation of larval cell types such as myophilin, calmodulin and innexin, our analysis provided a number of candidate conserved developmental regulators, falling into three categories: 1) known genes with little or no previously known involvement in development, for instance coding for the amino acid transporter Aat or other solute carriers; 2) members of large gene families associated with developmental regulation but lacking clear bilaterian orthologs, including putative transcription factors containing helix-loop-helix or zinc finger domains as well as 7-pass transmembrane (7tm) proteins (a large and diverse family of receptors for cytokines, hormones, peptides and other ligands with the potential to evolve developmental cell-cell signaling roles, as has occurred in the Frizzled family); 3) as yet uncharacterized genes that have homologs in bilaterians. This third category includes proteins containing conserved domains identified in the PFAM database, such as the Domains of Unknown Function DUF4323 and DUF3504.
Cnidarian restricted genes. A considerable proportion of the sequences with complete predicted ORFs (37/126 = 29%) did not have identifiable orthologs among known bilaterian genes or any other non-cnidarian sequences in the NCBI databases (see Methods for details), as defined by a lack of significant similarity by reciprocal BLAST along the length of the sequence. These ''cnidarian restricted'' sequences included the transcript most strongly under-expressed (highest z-score) in Wnt3-MO embryos, WegO1. In some cases, despite the absence of any identifiable ortholog, recognizable conserved motifs such as SAM or PH domains could be detected within the sequence of these transcripts using domain prediction software, suggesting involvement in mediating protein-protein or protein-membrane interactions. These domains are common to many diverse bilaterian proteins and cannot be taken as indicating homology. Such sequences could have originated through domain recombination or through extreme divergence of surrounding sequences during cnidarian evolution. Six of the novel cnidarian-restricted sequences identified in our study had clear counterparts in the fully sequenced genomes of Nematostella and/or Hydra, but the others (31/37) are unique to Clytia amongst available sequences. Genome sequences for a larger range of cnidarian species will be required to assess the degree of taxonomic restriction of these genes. Thirteen possessed predicted 5′ signal peptide sequences indicating that they code for secreted proteins varying in length from 77-526 predicted amino acids (average about 260). These characteristics are compatible with roles as novel signaling ligands, antagonists or extracellular regulators.
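The ''no identifiable ortholog'' criterion rests on reciprocal BLAST: a Clytia sequence counts as a candidate cnidarian-restricted gene when no non-cnidarian sequence returns it as a reciprocal best hit. A schematic version of that bookkeeping, taking precomputed best-hit tables as plain dictionaries (the IDs below are illustrative, not real accessions):

```python
def reciprocal_best_hits(clytia_hits, other_hits):
    """Identify putative orthologs as reciprocal best BLAST hits.

    clytia_hits: {clytia_id: best non-cnidarian hit id, or None}
    other_hits:  {other_id: best Clytia hit id, or None}
    Clytia sequences with no reciprocal partner are returned as
    candidate restricted genes (pending wider taxon sampling).
    """
    orthologs, restricted = {}, []
    for cid, hit in clytia_hits.items():
        if hit is not None and other_hits.get(hit) == cid:
            orthologs[cid] = hit          # reciprocal best hit found
        else:
            restricted.append(cid)        # one-way or absent hit
    return orthologs, restricted
```

In practice the best-hit tables would be built from BLAST results filtered by an E-value cutoff and alignment coverage along the length of the sequence, as the text describes.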
In situ hybridization analysis reveals four spatiotemporal expression profile types

We undertook detailed characterization of spatial expression (together with sequence analysis; Table 1) by in situ hybridization at three stages: early gastrula, 24 hpf planula (gastrulation just completed, endoderm still undifferentiated), and 48 hpf planula (cell differentiation ongoing in both endodermal and ectodermal regions). We found that almost all the in situ hybridization profiles could be assigned to one of four types, which we termed Oral (O), Aboral (A), Ingressing/Endodermal (IE) and Delayed expression (D) types, as described in more detail below and summarized in Figure 4. Briefly, O and A type profiles are characterized by polarized expression with respect to the developing oral-aboral axis at all stages, suggesting ongoing patterning roles during embryonic and larval development. The IE type profile corresponds to cells destined to contribute to the complex endodermal region, including the i-cell stem cells and their derivatives. The D type profile transcripts were barely detectable in early gastrulae but at larval stages showed expression in diverse patterns in the ectoderm and/or, later, in the endoderm. Overall, our approach to identify new candidates for roles in cnidarian embryonic development was completely validated by these analyses. Without any selection based on sequence identity, all the transcripts we tested showed expression restricted in space and/or time during gastrulation and planula development.
Names were assigned to the analyzed transcripts on the basis of orthology and/or membership of known gene families (all phylogenetic analyses in File S2). Multiple members of known gene families were distinguished by suffixes designating the 4 main expression profile types: O, A, IE or D. Cnidarian-specific transcripts lacking any recognizable orthologs from non-cnidarian species in NCBI databases, and those with non-cnidarian orthologs that had not previously been characterized, were assigned novel names using the same suffixes, prefixed by ''Weg'' to denote differential expression in Wnt3-MO early gastrulae, or given names based on recognizable repeats when present.
Wnt3-MO embryo under-expressed transcripts show oral and endodermal profiles. Consistent with the aboralized phenotype of the Wnt3-MO embryos, the twenty top under-expressed transcripts were all strongly localized during normal development to cells at the future oral pole (site of gastrulation) at the early gastrula stage ( Figure 2). Their expression profiles could all be designated unambiguously as either O or IE type. The eight O type profile transcripts ( Figure 2B-I) were detected strongly in the oral pole ectoderm at both gastrula and planula larva stages. These included the Wnt ligand WntX1A [40], three transcription factors (Clytia Bra2, Myb and ZnfO), and two novel cnidarian genes designated WegO1 and WegO2. None of these transcripts were significantly detected in cells ingressing during gastrulation, indicating either expression in exclusively ectodermal cells or down-regulation in ingressing cells upon their separation from the oral ectoderm. Two additional O-type profile genes showed later additional expression in cells of the endodermal region at the planula stage (Aat and Akr; panels H and I). Expression domains for these two genes and for the two Wnt ligands extended across the oral third of the larva [40]. In contrast, expression of the other five genes was predominantly detected at the oral tip, resembling the previously-described expression of Bra1 (initially named Bra) [39]. O type expression patterns were also obtained for three additional transcripts selected from the Wnt3-MO under-expressed list: Gsc, NotumO and an evolutionarily conserved but previously uncharacterized sequence designated WegO3 (in situ hybridization images in File S4). Gsc and WegO3 both showed oral tip expression, supplemented with additional endodermal expression in 48 h planula larvae for WegO3. NotumO expression extended further along the oral-aboral axis, matching that of the two Wnt ligands. IE type patterns were observed for twelve transcripts, and were characterized by expression mainly in ingressing or ingressed cells during gastrulation, and later in different cell populations within the endodermal region ( Figure 2J-U).

[Figure 1 caption, continued: …counts of the mapped reads from one of the non-injected embryo samples by a QQ plot, a necessary condition to use the Random Sampling Model assumption in DGE analysis. C) MA plot of read data from Wnt3-MO versus non-injected embryo samples. Applying the 1% cut-off p-value for statistical significance, corresponding to the threshold z-score of ±3.3, identifies 148 transcript sequences as over-expressed (green dots) and 232 as under-expressed (orange dots) in Wnt3-MO embryos. The more stringent ±5.0 threshold for z-score values eliminates a cluster of genes with expression characteristics very close to the overall population of non-differentially expressed genes, as demonstrated in the histogram (insert), and reduces the number of transcripts to 44 and 135 for Wnt3-MO embryo over- and under-expressed transcripts respectively. D, E) Z-scores for the Wnt3-MO embryo over- (green dots) and under- (orange dots) expressed transcripts plotted against those for two other morpholino-injected embryo groups harvested at the same developmental stage. Z-scores were calculated for experimental versus non-injected values in each case. D: Wnt3-MO versus Fz1-MO; E: Wnt3-MO versus Fz3-MO. Transcripts significantly under-expressed in all three MO groups (z-scores less than −5: probably from bacterial contaminants) are represented as blue dots and over-expressed (z-scores greater than 5; probably injection damage-related) as purple dots. doi:10.1371/journal.pgen.1004590.g001]
At the gastrula stage, these transcripts were detected in subpopulations of cells ingressing into the endoderm at the oral pole, as well as in some cases putative pre-ingressing populations in the ectoderm. At planula stages, they were detected predominantly in the endodermal region, in different sub-populations of cells. For the transcription factors Znf845, FoxA and also for the kinase Mos3, the distribution of expressing cells, notably their position in non-polar regions between the endoderm and ectoderm layers in 48 hpf larvae, was reminiscent of previously described germ line/stem cell genes expressed in i-cells such as Nanos1, Vasa and Piwi [60]. Expression in scattered ingressing cells was also observed for HlhIE1 ( Figure 2M; fewer cells detected), the Ets family transcription factor Erg ( Figure 2N; additional expression in oral ectoderm cells in 48 h planulae), Clytia Sulf ( Figure 2O; very weak in 48 hpf larvae) and Sox15 ( Figure 2P; additional expression in various endodermal and ectodermal cells as previously described [17]), and also for an additional FoxQ2 paralog identified in the Wnt3-MO under-expressed transcript set, designated FoxQ2c (File S2). The hypothesis that Znf845 and FoxQ2c were expressed in i-cells or their primary derivatives is consistent with data from a recent study in Hydra which compared transcriptomes of sorted endodermal, ectodermal or Nanos-expressing (i-cell lineage) cells from adult polyps [25] (see File S5). HlhIE2, DMRT-E and FoxC were also expressed in early stages of ingression in the early gastrula but adopted a more widespread distribution through the endodermal region in the planula. Correspondingly, Hydra FoxC transcripts are highly enriched in endodermal cells [25] (File S5).
Two novel cnidarian transcripts (WegIE1 and WegIE2) were detected predominantly in an extensive population of presumptive endoderm cells, expression only becoming detectable once they had entirely separated from the oral ectoderm ( Figure 2T,U). Transcripts over-expressed in Wnt3-MO embryos showed Aboral or Delayed ( Figure 3L-R) profile types, or in four cases showed characteristics of both profile types ( Figure 3H-K). The seven transcripts assigned as having clear A-type profiles, on the basis of sustained expression at the aboral pole, notably included the Wnt regulators Dkk1/2/4, Dan1 and NotumA, the transcription factors FoxQ2a and ZnfA, the novel cnidarian gene WegA2, and WegA1, which has no clear non-cnidarian orthologs but contains a conserved 135 aa domain (DUF3504). Six of these seven transcripts were detectable from the gastrula stage, while Dkk1/2/4 was only detectable in planulae. The extent of the aboral expression territory varied between genes, with WegA1 expression extending along about half the oral-aboral axis at all stages, while the others were expressed in the aboral third at the early gastrula stage before becoming more tightly restricted to different ectodermal and/or endodermal cell populations at the aboral pole of the planula larva. Consistent with the ectodermal localization of this transcript, a WegA1 ortholog identifiable in published transcriptome data from Hydra polyps [25] is also preferentially expressed in ectoderm (File S5). Clear A-type expression profiles were also observed for two additional transcripts selected on the basis of sequence identity from lower down the Wnt3-MO over-expressed list: Tbx and sFRP-B (File S4).
Four more transcripts from the ''top 18'' showed aborally enhanced expression in at least two of the three stages tested, but also additional expression at other sites. Given their barely-detectable expression at the early gastrula stage, they had partial expression features of both the A and D type profiles (see below). The extracellular glycoprotein ZpdA showed enhanced aboral expression at the gastrula and 24 h planula stages, but was detected across the whole larva by 48 hpf ( Figure 3H). Conversely, expression of Botch2 was mainly confined to cells in the aboral half of the ectoderm at 48 hpf, but concentrated in the oral endoderm at 24 hpf ( Figure 3I), while bZip-expressing cells were concentrated at the aboral pole at planula stages but also detected in more central locations ( Figure 3J). Notch expression was not detectable in the gastrula and was predominantly endodermal at planula stages, but with low additional signal detected in the aboral ectoderm ( Figure 3K). The seven transcripts with D type profiles exhibited a heterogeneous array of expression sites ( Figure 3L-R). Their main common characteristic was ubiquitous low or undetectable expression at the gastrula stage, later being detected in a variety of distributions. WegD1 transcripts were detected only in the endodermal region at 48 hpf ( Figure 3L); Botch1 transcripts were detected initially in cells of the aboral ectoderm in 24 hpf planulae but at 48 h in patches of cells scattered irregularly along the oral-aboral ectoderm as well as through the endodermal region ( Figure 3M).

[Table 1 caption: Details of transcripts for which in situ hybridization analyses were performed previously or in this study, with corresponding expression pattern type (see Figure 4) and DGE class as defined according to z-scores in Wnt3-MO and Fz1-MO transcriptomes compared to non-injected embryos (see Figure 7).]

[Figure 2 and Figure 3 caption fragments: …Table 1). Scale bar 50 μm. doi:10.1371/journal.pgen.1004590.g002; doi:10.1371/journal.pgen.1004590.g003]
Asparaginase (Asp; Figure 3N), Ammonium transporter (Amt; Figure 3O) and mitochondrial uncoupling protein (UCP; Figure 3R) transcripts were detected in cells distributed widely across the larva at both 24 and 48hpf.
Unlike the regionalized A, O and IE type patterns, the diverse expression profiles of this D group of transcripts were not anticipated from the aboralized Wnt3-MO phenotype. To check that they did not represent false positives from the DGE screen, we verified expression for representatives of each of the four expression profile types in Wnt3-MO embryos by quantitative PCR (Q-PCR; Figure 5) and in situ hybridization ( Figure 6). The Q-PCR analysis confirmed the DGE response for all 10 of the transcripts tested. By in situ hybridization, expression of O and IE profile transcripts was, as expected, undetectable in Wnt3-MO early gastrulae. D-type pattern transcripts showed strongly elevated expression in Wnt3-MO embryos compared to control embryos processed in parallel; the expressing cells lined the blastocoel across the embryo for Botch1, bZip and Amt ( Figure 6J-L), as was also found for Asp, ZpdA and Botch2 (File S6). In contrast, A type profile transcripts ( Figure 6G-I) showed expression territories extended spatially through the ectoderm from the aboral side, but without significantly higher expression than in the aboral domain of control embryos. These results indicate that IE, A and O type profile genes are all regulated by regional differences in Wnt signaling activity at the early gastrula stage, whereas D type profile gene transcription is activated temporally between the early gastrula and planula stages following down-regulation of Wnt3 signaling.
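The Q-PCR validation compares each transcript's level in Wnt3-MO versus non-injected embryos after normalization to the EF-1a reference (Figure 5). A minimal sketch of this kind of relative quantification using the standard ΔΔCt method; the function and the Ct values shown are hypothetical, since the study reports only the resulting ratios:

```python
def ddct_ratio(ct_target_mo, ct_ref_mo, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of a target transcript in morphant vs control embryos,
    normalized to a reference gene such as EF-1a (assumes ~100% primer
    efficiency, i.e. a doubling of product per amplification cycle)."""
    d_ct_mo = ct_target_mo - ct_ref_mo        # normalize morphant sample
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl  # normalize control sample
    return 2.0 ** -(d_ct_mo - d_ct_ctrl)

# Hypothetical Ct values: the target needs 2 extra cycles (relative to
# the reference) in the morphant, i.e. ~4-fold under-expression.
# ddct_ratio(22.0, 18.0, 20.0, 18.0) -> 0.25
```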
PCP disruption preferentially affects transcripts with non-axial expression profiles
The overall outcome of our in situ hybridization analyses was that transcripts identified as Wnt3-MO-underexpressed consistently showed Oral and Ingressing/Endodermal type expression profiles, while the overexpressed ones all showed Aboral and Delayed type profiles. The significance level of the response did not, however, correlate with expression patterns (O versus IE or A versus D, respectively; see z-scores in Table 1). Remarkably, we were able in both cases to uncover a strong correlation when we included in the analysis the z-scores obtained for the Fz1-MO sample ( Figure 7A). This could be demonstrated by plotting the z-scores calculated for the two experimental conditions (each against non-injected) against each other and determining the positions of all the transcripts analyzed in Figures 2 and 3, of genes with expression patterns characterized previously (Bra, Fz3) and of five additional examples selected from our primary list (FoxQ2c, Tbx, NotumO, sFRP-A, Gsc, WegO3; File S4; all patterns summarized in Table 1 and Figure 4). Amongst the Wnt3-MO-underexpressed transcripts (orange dots in Figure 7A), those with Fz1-MO z-values higher than -5.0, i.e. not significantly affected or only relatively weakly underexpressed in Fz1-MO embryos, tended to show the O type expression pattern (eleven of the thirteen examined transcripts in the dark orange ''Class 1'' zone). The others (pale orange ''Class 2'' zone) showed IE type expression profiles in eleven of the twelve cases. A similarly strong correlation was found for the Wnt3-MO-overexpressed transcripts (green dots in Figure 7A). In this case, applying a Fz1-MO z-score threshold of +5.0, we found that transcripts with higher z-values (grey ''Class 4'' zone) tended to show D or mixed D/A-type patterns (seven and three respectively of the eleven analyzed transcripts), while nine transcripts with z-scores less than +5.0 (green ''Class 3'' zone) showed A-type patterns and the tenth (Notch) a mixed A/D pattern.
In this Class 3 zone, responses to Fz1-MO were quite variable, including moderate over-expression, unchanged expression and, in a few cases, under-expression (notably FoxQ2a and WegA1).
From these analyses we defined four ''DGE classes'' on the basis of z-score values in Wnt3-MO and Fz1-MO embryos, as indicated in Figure 7A. Although these classes strongly correlate with the four types of expression profiles (Figure 7; Table 1), there are exceptions: for instance, ZnfO is categorized as Class 2 on the basis of z-scores but shows an oral type expression profile, while Sulf is categorized as Class 1 but shows endodermal expression.
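The class assignment described above amounts to a simple decision rule on the two z-scores. A sketch under stated assumptions: the z-score values are hypothetical, and the handling of the exact threshold boundaries is an assumption not spelled out in the text.

```python
def dge_class(z_wnt3, z_fz1, cutoff=5.0):
    """Assign a DGE class from z-scores in Wnt3-MO and Fz1-MO embryos
    (each computed against non-injected controls), following the zones
    of Figure 7A. Boundary handling is an assumption."""
    if z_wnt3 < 0:
        # Wnt3-MO-underexpressed: Class 1 if only weakly affected in
        # Fz1-MO (z > -cutoff; mostly O-type), else Class 2 (mostly IE).
        return 1 if z_fz1 > -cutoff else 2
    # Wnt3-MO-overexpressed: Class 4 if strongly overexpressed in
    # Fz1-MO too (z > +cutoff; mostly D-type), else Class 3 (mostly A).
    return 4 if z_fz1 > cutoff else 3

# Hypothetical examples:
# dge_class(-8.0, -1.0) -> 1    dge_class(-8.0, -7.0) -> 2
# dge_class(+9.0, +2.0) -> 3    dge_class(+9.0, +8.0) -> 4
```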
Fz1 acts as a receptor for Wnt3 to activate Wnt/β-catenin signaling [39,40], but is also thought to interact with the Clytia Strabismus protein to mediate planar cell polarity (PCP), necessary for cell alignment in the ectoderm as well as for axial elongation during larval development and endoderm formation [41]. We thus hypothesized that the differences in expression responses in Fz1-MO versus Wnt3-MO could be due to the specific involvement of Fz1 in PCP. To test this hypothesis we made additional comparisons using a transcriptome derived from early gastrula embryos in which PCP was specifically disrupted by a morpholino targeting Strabismus (Stbm-MO). Plotting the z-scores (in relation to uninjected embryos) of the Fz1-MO and Stbm-MO transcriptomes against each other revealed a striking similarity ( Figure 7B). The linear positive correlation was especially clear between Fz1-MO and Stbm-MO z-scores for the Wnt3-MO over-expressed transcripts (i.e. DGE Classes 3 and 4; green and grey dots respectively in Figure 7B; Pearson correlation coefficient 0.93). The separation between Class 1 and Class 2 transcripts on the basis of Stbm-MO responses was less strict, with Class 1 transcripts showing moderately increased or decreased levels in these embryos, compared to unaffected or reduced levels in Fz1-MO embryos (compare the distribution of orange dots in Figure 7A and 7C).

[Figure 5 caption (partial): ...(Table 1, Figure 4), transcript levels at the early gastrula stage were determined by Q-PCR in Wnt3-MO and non-injected early gastrula embryos. The ratio of expression levels of selected genes between injected and control embryos, normalized with respect to EF-1a, is compared to the DGE data represented in the same way by using the counts of reads mapped rather than the number of cycles of Q-PCR amplification. Transcript identities are shown beside each pair of bars. doi:10.1371/journal.pgen.1004590.g005]
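The reported linear correlation between Fz1-MO and Stbm-MO z-scores (Pearson coefficient 0.93 for Classes 3 and 4) is the standard sample statistic. A self-contained sketch, applied here to hypothetical z-score vectors rather than the study's data:

```python
from math import sqrt

def pearson(xs, ys):
    """Sample Pearson correlation coefficient between two vectors,
    e.g. per-transcript z-scores from two morphant transcriptomes."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical z-scores for a handful of Class 3/4 transcripts in the
# two morphant conditions; strongly correlated values give r near 1.
fz1 = [6.1, 8.4, 3.0, 9.9, 5.2]
stbm = [5.8, 9.1, 2.4, 10.3, 4.7]
```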
This can be explained by the requirement for Fz1 but not Stbm in Wnt/β-catenin signaling in the presumptive oral territory. We validated the transcriptome comparison analyses by in situ hybridization on Fz1-MO and Stbm-MO early gastrula embryos ( Figure 8), using a subset of the probes used to examine Wnt3-MO embryos ( Figure 6). For each gene the expression patterns in the two morpholino conditions were strikingly similar: the Class 1/O-type pattern transcript Myb, and the Class 3/A-type pattern transcripts ZnfA and sFRP-A, showed little change compared with non-injected controls ( Figure 8A, E, F). ZnfO, assigned to DGE Class 2 despite its O-type expression profile, showed undetectable expression at the early gastrula stage in both Fz1-MO and Stbm-MO embryos ( Figure 8B) and thus indeed represents an axially-expressed gene atypically sensitive to PCP perturbation.
The weak change in levels of most axially-expressed genes, along with the significant under-expression of FoxQ2a and WegA1 in both Fz1-MO and Stbm-MO early gastrula embryos ( Figure 7A, C; File S1) revealed in this study, may at first seem difficult to reconcile with the previous description of an ''aboralized'' phenotype, including a slight expansion of the FoxQ2a expression domain, in Fz1-MO embryos [39]. This can be explained by a difference in the timing of the two studies, since the PCP effect is only transient. Thus, analysis of Stbm-MO embryos revealed that while aboral FoxQ2a expression is undetectable by in situ hybridization at the early gastrula stage, it subsequently becomes restored; conversely, oral expression of Bra1 is transiently expanded but then becomes re-restricted to the oral pole of the planula [41].
The in situ analyses performed for Class 2/IE-type and Class 4/D-type pattern transcripts also validated the DGE analyses. FoxA and Znf845 were barely detectable by in situ hybridization at the early gastrula stage ( Figure 8C, D), while Botch1 and bZip were detected strongly across the embryo ( Figure 8G, H). As in Wnt3-MO embryos ( Figure 6), the signal in these latter cases was mainly located in basal regions of the ectoderm. We conclude that the relatively strong under-expression (Class 2) or over-expression (Class 4) of certain genes in Fz1-MO embryos is due in whole or in part to disruption of PCP. This effect could reflect regulation of gene transcription by specific signaling pathways activated by PCP, or could be indirect, resulting from disturbed morphogenesis following failure of the ectodermal cells to align, to develop cell polarity and to undergo ciliogenesis [41].
Knockdown of conserved and cnidarian-restricted genes generates developmental defects
To test whether the newly identified genes in Clytia were indeed involved in developmental processes, as predicted by their expression patterns, we injected antisense morpholino oligonucleotides targeting a selection of identified genes. We included in this analysis transcripts representing each of the four expression profile types, including cnidarian-restricted genes (WegO1, WegIE2, WegD1), candidate conserved developmental regulators (Bra1, Bra2, FoxQ2c, FoxQ2a, HD02) and the partly conserved transcript WegA1. For each morpholino tested, developmental defects observed at morphological ( Figure 9) and cellular (File S8) levels were coherent with the corresponding expression patterns (Figures 2 and 3), confirming the usefulness of our approach to identify developmental regulators. Wherever possible (6/8 cases; see File S7 for details), morpholinos targeting two different sites in the transcript were used, and in each case similar phenotypes were observed.

[Figure 7 caption (partial): ...( Figure 4) at early gastrula (left) and planula (right) stages, showing their mapping onto the differential responses of the transcripts to Wnt3-MO and Fz1-MO indicated on the z-score plot in the center. Four DGE classes were defined on the basis of z-scores in Wnt3-MO and Fz1-MO embryos, applying cutoffs of -5 for classes 1 and 2 and +5 for classes 3 and 4, as indicated respectively by the dark and light orange, green and gray zones on the graph. The numbers indicate how many of the transcripts for which expression patterns were determined for each class showed the corresponding expression profile. These transcripts are all listed in Table 1.]

Morpholinos targeting the three O-type expression pattern transcripts all produced defects in endoderm formation, consistent with endoderm fate specification in the oral territory [61,62]. Morpholinos targeting the two Clytia paralogs Bra1 and Bra2 both significantly inhibited endoderm formation.
Initial signs of cell ingression at the oral pole occurred with only a slight delay with respect to non-injected controls, but subsequent filling of the blastocoel was strongly retarded, such that by 24hpf Bra1-MO and Bra2-MO embryos ( Figure 9C, D) resembled uninjected embryos at the onset of gastrulation (about 11hpf). Bra1-MO and Bra2-MO embryos then elongated somewhat, and disorganized cells accumulated in the blastocoel to a variable degree, although often with a significant reduction in the amount of endoderm observed. Confocal microscopy confirmed that the residual ectodermal cells of both Bra1-MO1 and Bra2-MOe/i embryos accumulated in aboral regions and showed signs of epithelialization (File S8 C, D). A similar but much less severe delay in gastrulation was obtained following injection of morpholinos targeting the cnidarian-restricted gene WegO1, whose expression profile is very similar to that of Bra1 and Bra2 (Figure 2). Planulae showed a characteristic tapering of the oral half ( Figure 9H), and confocal microscopy revealed that endoderm was reduced in this region (File S8 B).
Strikingly, morpholinos targeting the A-type profile transcript WegA1 generated a phenotype opposite to that of the O-type pattern morpholinos. At the onset of gastrulation, massive cell ingression initiated widely across the embryo ( Figure 9F), reminiscent of the phenotype previously described for Fz3-MO [39]. During subsequent development, cells from the internal regions were expelled in most embryos, so that by the planula stage embryos were commonly smaller and consisted of accumulations of endodermal-type cells surrounded in some cases by a very thin ectoderm layer, in which the cells were stretched over the inner cell mass (Figure 9F; File S8 G, H, I).
Morpholinos targeting the two IE type pattern genes WegIE2 and FoxQ2c both caused only minor disruption of development prior to the end of gastrulation, but subsequent formation of the endodermal cell layer was affected: in both cases a thin and uneven layer of endodermal cells was observed at 48hpf, surrounding a distended cavity containing cell debris ( Figure 9R, S). WegIE2-MO embryos showed additional disorganization of the oral ectoderm. Confocal microscopy confirmed that the endodermal cell layers were severely disorganized (File S8 E, F).
Finally, morpholinos targeting the two D-type profile genes, which are strongly up-regulated at the early gastrula stage upon Wnt3, Fz1 or Stbm disruption, did not markedly disrupt gastrulation but resulted in highly aberrant morphology of the planulae ( Figure 9T, U). WegD1-MO embryos showed a distended aboral end, with the ectoderm then becoming highly folded, this effect extending along the length of the embryo in the most extreme cases. Injection of morpholinos targeting the ANTP family gene HD02 also resulted in elongated and irregularly shaped planulae. In both cases the interface between the ectoderm and endoderm layers was very irregular, with confocal microscopy revealing mixing of cells from the two layers and an absent or highly disrupted basal lamina between them (File S8 P, T). In HD02-MO embryos, anti-tubulin staining revealed an abundance of neurite-like projections irregularly traversing this interface, contrasting with the well-defined epithelial basal lamina and regular distribution of orthogonally extending neural projections in undisturbed planulae (File S8; compare K and O).
Preferential association of cnidarian-restricted genes with embryo patterning
We used the strong correlation between DGE classes and expression patterns to assess the relationship between transcript identity and localization, using the 128 transcripts for which complete ORFs were present ( Figure 10). The proportions of transcription factors and probable signaling pathway regulators were similar between DGE classes (12-21%; values not significantly different by Fisher's Exact Test). In contrast, there was a significantly higher proportion of cnidarian-restricted sequences in DGE classes 1, 2 and 3 than in DGE class 4, which tends to show D-type expression profiles (around 30% vs 6%; Fisher's Exact Test p-value for this comparison = 0.04). This analysis suggests that while cnidarian-restricted developmental regulators contribute significantly to patterning at the early gastrula stage, expression of evolutionarily ancient genes predominates during development of the larva following gastrulation.
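The group comparison above rests on Fisher's Exact Test applied to a 2x2 contingency table (cnidarian-restricted vs conserved, classes 1-3 vs class 4). A self-contained sketch of the two-sided test; the study's actual per-class tallies are not given in this excerpt, so any counts supplied to this function would be hypothetical.

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact test p-value for the 2x2 table
    [[a, b], [c, d]]: sum the hypergeometric probabilities of all
    tables with the same margins that are no more probable than the
    observed table."""
    n = a + b + c + d
    row1, col1 = a + b, a + c

    def p_table(x):  # probability of a table with top-left cell = x
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    p_obs = p_table(a)
    lo = max(0, row1 - (n - col1))   # smallest feasible top-left cell
    hi = min(row1, col1)             # largest feasible top-left cell
    return sum(p for p in (p_table(x) for x in range(lo, hi + 1))
               if p <= p_obs * (1 + 1e-9))
```

For a perfectly balanced table the observed configuration is the most probable one, so every table is counted and the p-value is 1; a maximally skewed table yields a vanishingly small p-value.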
Discussion
This study successfully identified many potential developmental regulators from the cnidarian experimental model Clytia hemisphaerica by analyzing the transcriptome of early gastrula stage embryos aboralized by Wnt3 knockdown, providing a number of new insights into the evolution of developmental patterning mechanisms. Firstly, the key role of Wnt signaling in embryo patterning was confirmed, since the identified genes all displayed one of four basic expression profiles: three associated with embryo patterning (through localized expression in the oral, aboral and presumptive endoderm regions) and one with planula formation. Expression profile types could be related to differential expression sensitivity to Wnt3-MO versus Fz1-MO or Stbm-MO, allowing us to separate genes expressed along the oral-aboral axis predominantly under Wnt/β-catenin signaling regulation from genes whose expression at the early gastrula stage is affected by Fz-PCP. Secondly, the identified genes included not only members of known conserved metazoan developmental gene families, but also previously uncharacterized or understudied conserved metazoan genes, providing novel candidates for evolutionarily ancient roles in directing developmental processes. Finally, a number of cnidarian-restricted genes emerged as potential developmental regulators. Roles in larval patterning and morphogenesis were confirmed by morpholino analysis for three such genes, as well as for one that shares a domain of unknown function with bilaterians. Overall, our study illustrates the power of systematic transcriptomics-based screens, coupled with functional studies, to identify developmental genes in non-bilaterians and thus to help understand metazoan evolution and diversification.
Wnt signaling and PCP direct gene expression programs in the early gastrula
Our findings confirmed the central importance of Wnt signaling in embryo patterning. The transcripts under-represented in the spherical, aboralized Wnt3-MO embryos were systematically found, during normal development, to be expressed either in the oral ectoderm or in cells that contribute to the endodermal region (defining the O and IE type profiles respectively), while those from the over-represented set were either detected in the aboral ectoderm (A type profile) or generally repressed throughout the embryo at the early gastrula stage before being expressed in different patterns during planula larva formation (D type profile). The O and A type profile genes displayed sustained localized expression at the poles through gastrulation and larval development and are thus good candidates for roles in patterning along the oral-aboral axis, but may also include precociously expressed markers of larval cell types enriched at one pole.
We were intrigued to find that the four types of expression profile for Wnt3-MO-differentially expressed transcripts strongly correlated with four ''DGE classes'', distinguished by the strength of the effect of Fz1-MO on the expression of the same genes. More specifically, the axially expressed transcripts tended to show less extreme changes in expression in Fz1-MO early gastrulae than did IE and D-type profile transcripts ( Figure 7A). We have shown previously that Wnt/β-catenin signaling activated by Wnt3 and Fz1 is a key regulator of gene expression along the oral-aboral axis [39,40]. The relatively weak difference in expression of the axial genes in Fz1-MO relative to Wnt3-MO early gastrulae documented here could be explained, at least in part, by incomplete inhibition of this pathway by Fz1-MO compared with total extinction by Wnt3-MO, as revealed by β-catenin nuclear localization (compare Figures 3 in [39] and [40]). It is also conceivable that Wnt receptors other than Frizzleds, such as RYK or ROR2 [63], could be partly responsible for mediating the Wnt3 responses in oral regions. Our Stbm-MO analyses demonstrate, however, that the main explanation for the less marked changes in expression of ''axial'' versus ''non-axial'' genes in Fz1-MO embryos relates to the involvement of Fz1 in PCP. One aspect of this is that the transient up-regulation of some oral genes and down-regulation of some aboral genes caused by PCP disruption, as shown in Stbm-MO embryos [41], could in Fz1-MO embryos counterbalance and dampen the effects of Wnt/β-catenin signaling. Concerning the non-axial genes, the strong effects of PCP disruption could reflect direct signaling through ''non-canonical'' intracellular pathways acting downstream of Fz/Dsh [64]. Given the transient nature of the effect, however, we favor the possibility that the effect is indirect, resulting from the developmental programs of the corresponding cell lineages being delayed or accelerated by a changed morphological environment.
For cells of the presumptive endodermal region (IE type pattern), lack of detection at the early gastrula stage in Fz1-MO and Stbm-MO embryos could result from disruption of ingression behavior due to loss of polarity of oral ectoderm cells. Conversely the strong over-expression of the D-type profile genes at the early gastrula stage in Fz1-MO and Stbm-MO embryos suggests that epithelial PCP may have a significant effect in delaying the development of certain planula cell types. One attractive possibility is that Fz-PCP disruption affects apical-basal polarity of epithelial cells and thus the generation of new cell types through oriented asymmetric divisions, as has been recently demonstrated in Xenopus embryos [65]. Consistent with this hypothesis, cells expressing Botch1, bZip and Amt became prominent in basal regions of the epithelial ectoderm of early gastrulae when PCP was disrupted directly using Stbm-MO or Fz1-MO (Figure 8), or disturbed indirectly in the Wnt3-MO context [41] (Figure 6). Furthermore several other D type profile transcripts (HD02, UNC, WegD2 and possibly also Notch and Botch2) also tended to be expressed in basal regions of the ectodermal and/or endodermal epithelia during planula development (Figure 3).
Conserved metazoan developmental regulators in cnidarian embryogenesis
Our study provides further support for the well-known idea that a common set of transcription factor families, diversified from a common cnidarian-bilaterian ancestor, has retained roles in regulating development in individual evolutionary lineages, with some families diversifying functions following lineage-specific gene duplications [4-6,9]. Clytia orthologs of many known developmental regulator genes were identified in our unbiased screen based on sensitivity to Wnt/Fz signaling. All those tested showed characteristic spatiotemporally restricted expression profiles, and for four examples from well-known transcription factor families, roles in developmental regulation were supported by functional studies based on morpholino injection. Analysis of the morphant phenotypes suggested that the two Clytia Brachyury paralogs Bra1 and Bra2, expressed at the oral pole throughout larval development, both play important roles in controlling the progression of gastrulation. Expression around the blastopore has been proposed to be an ancestral metazoan characteristic of Brachyury, which during bilaterian evolution became involved in the specification of various mesoderm and endoderm fates from these tissues [66], although the ancestral role is likely to have been in regulating morphogenetic movements [67]. In Clytia, although there is no blastopore, the relationship with the gastrulation initiation site is conserved, and our morpholino results suggest that morphogenetic movements upstream of endoderm specification are affected. The Hydra Bra1 and Bra2 orthologs have been shown to have subtly distinct roles in the endoderm and ectoderm layers of the budding polyp [11], suggesting that while the embryogenesis roles of these genes overlap, their functions at other life cycle stages have diverged. A morpholino targeting FoxQ2c, expressed in the developing endodermal region during planula formation, caused severe defects in the organization of the endodermal layer.
As with Brachyury, gene duplications have expanded the FoxQ2 gene family in Cnidaria, and in this case the paralogs have adopted clearly distinct expression profiles: FoxQ2a has conserved the likely ancestral aboral (anti-blastoporal) expression [68], while FoxQ2b is only expressed in oocytes [15]. The final member of a known developmental transcriptional regulator gene family that we tested functionally was HD02, a non-Hox member of the Antp homeodomain family [16], expressed particularly strongly in cells at the base of the ectoderm and endoderm layers during larval development ( Figure 2P). The phenotypes following morpholino injection suggest that HD02 is involved directly or indirectly in regulating development of the neural network that develops at this site [69], perhaps dependent on the correct organization of the basal lamina. Further in-depth studies will be required to explore this possibility, as well as to confirm and fully understand all the other morpholino phenotypes documented here.
On the basis of expression patterns it is likely that several other transcription factor genes identified in this study have developmental functions conserved through metazoan evolution. For example FoxA and FoxC are associated with distinct cell populations contributing to the endoderm region during gastrulation, as has also been reported for their Nematostella orthologs expressed in distinct regions of the developing pharynx [12,70]. In bilaterian species orthologs of these Fox genes are associated with development of endoderm/axial mesoderm and mesoderm respectively [71][72][73][74].
As well as transcription factors from families such as T-box, Fox and Antp, our transcriptome comparison identified likely regulators of a variety of intercellular signaling pathways, including Notch, FGF, TGFβ and Ras-MAP kinase. These included core components (ligands, receptors and secreted antagonists), but also less well-known regulators acting in ligand or receptor processing and/or extracellular interactions, such as the Botch, Sulf and Notum proteins. Most strikingly, we identified Clytia orthologs of known Wnt pathway regulators acting at all levels: Wnt ligands (WntX1A), receptors (Fz3, Fz2), members of three of the five families of secreted antagonists known from bilaterian models (Dkk1/2/4; Dan1; two sFRPs) [52,75], MESD, which specifically interferes with the ligand co-receptor LRP5/6 [58,59], an ortholog of the intracellular negative regulator Naked Cuticle [56], and also the two Notum family lipases and Sulf. Sulf enzymes act on cell surface heparan sulphate proteoglycans and have been reported to modulate Wnt as well as Hedgehog, TGFβ and FGF signaling, while Notum releases the GPI anchor of glypicans such as Dally [76-79]. The oral expression profile of all the positive Wnt pathway regulators from this and our previous study (five Wnt ligands, Axin and TCF) reinforces the notion that an active Wnt signaling source is maintained at the cnidarian embryo and larval oral pole [40,42,80], as it is at the equivalent ''head organizer'' site in the Hydra polyp [81-83]. Co-expression of orally expressed putative pathway inhibitors such as Clytia NotumO is consistent with a role in limiting the extent of Wnt activity, equivalent to its action in Drosophila imaginal discs [76] or during planarian head regeneration [84].
Most of the putative Wnt antagonists we identified, however, were expressed aborally in the gastrula and in aboral pole subdomains in the planula (demonstrated by in situ hybridization for Dkk1/2/4, Dan1, sFRP-A and NotumA, and implied by DGE responses for sFRP-B and MESD), suggesting that Wnt signaling is actively inhibited in the aboral pole region of the larva. Future functional studies will be required to examine the function of each Wnt regulator during Clytia development, and to unravel the interactions between them.
New candidate developmental regulators of potentially wide interest
Our study uncovered many potential developmental regulators amongst gene families with orthologs and/or shared domains identifiable from the mass of available genomic and transcriptomic data across bilaterian species, but for which nothing is known about function or expression. These include zinc finger and helix-loop-helix domain transcription factors as well as putative novel signaling pathway components. The prominence in our screen of cell surface protein modifiers with known impact on one or several signaling pathways raises the possibility that some of the other uncharacterized conserved or cell surface proteins may function similarly. In this context it would be interesting, for example, to test the function of the ZpdA and Aat genes, which code for a likely cell surface glycoprotein and a membrane transport protein respectively. Uncovering developmental roles for such proteins in Clytia would open the way to exploring the involvement of potential novel regulators of key embryonic and cellular processes in bilaterians, and the associated evolutionary and medical implications. WegA1 offers an interesting illustration of this possibility. WegA1-MO injection results in a spectacular developmental defect involving premature cell ingression (a process of epithelial-mesenchymal transition) at gastrulation, and a massive shift in the balance of ectoderm to endoderm formation. This finding implies that this previously unknown protein functions during normal development, under the control of Wnt/β-catenin and PCP signaling, to inhibit cell ingression in aboral territories. As well as the 135-amino acid C-terminal DUF3504 domain, the WegA1 sequence contains a putative nuclear localization signal. Whether it has true orthologs in bilaterians remains to be established.
Cnidarian restricted genes
Amongst the potential developmental regulators identified in our study, 29% were defined as cnidarian-restricted on the basis that they had no identifiable orthologs in any other metazoans. Previous surveys of available cnidarian genomic and transcriptomic data revealed about 25% such genes in Clytia and 15% in the ''polyp only'' cnidarian models Nematostella and Hydra [4,14,18,23]. A few of these match genes previously known only outside Metazoa, and so represent ancient genes lost in bilaterian branches or gained by lateral gene transfer, while the others probably represent cnidarian innovations. Although more in-depth studies of each gene are required, the characteristic phenotypes observed in our morpholino experiments support the stereotypical expression pattern data in suggesting roles in regulating developmental processes for these cnidarian-restricted genes: larval oral pole organization for WegO1, endoderm formation for WegIE2 and epithelial organization for WegD1.
More than half of the cnidarian-restricted transcripts identified in our study contained secretion signal sequences. These are prime candidates for roles in cell-cell signaling, either as ligands or as modulators of ligand/cell surface/receptor interactions during axis establishment and gastrulation. Candidate receptors for such signaling molecules include the many unclassified 7tm receptors identified particularly amongst IE profile/DGE class 2 transcripts. With notable exceptions such as Frizzled and Patched, members of the 7tm superfamily, including the G-protein coupled receptors (GPCRs), have not been strongly implicated in developmental regulation in bilaterians. This family has expanded independently in cnidarians [14], so its exploitation for developmental signaling might represent a cnidarian specialty, a fascinating possibility to explore in future studies.
Intriguingly, almost all (35/37) of the cnidarian-restricted genes we identified belonged to the three DGE classes associated with regional expression, and thus embryo patterning, at the gastrula stage ( Figure 10). Conversely, the DGE Class 4 transcript set contained a higher proportion of broadly conserved ''ancient'' genes. Recent studies have demonstrated that the extensive variation in modes of early embryogenesis between species correlates with expression of evolutionarily ''newer'' genes, while subsequent ''phylotypic stages'' (corresponding to the neurula and somitogenesis stages in vertebrates and the germ-band segmentation stage in insects) are strongly conserved at the phylum level and tend to express more ancient genes [85,86]. With the caveat that our analysis concerns only a small fraction of the transcriptome and provides only limited coverage of developmental stages, the observation that most (28/35) of the DGE class 1-3 (putative patterning) genes lacked counterparts in Nematostella or Hydra may reflect the widely divergent modes of early embryo patterning and gastrulation amongst cnidarian species [87]. In contrast, several of the DGE class 4 genes, mostly ''ancient'', appeared to be associated with epithelial development and in particular with formation of the basal lamina, a structure considered to be a major innovation in the animal lineage [88,89] and highly conserved in all Eumetazoa. A temporal shift in expression from ''new'' to ''old'' genes between gastrula and larva in cnidarian species is consistent with the idea that the epithelialized, ciliated ''torpedo'' planula larva represents the cnidarian phylotypic stage [90].
To conclude, from a methodological standpoint, our study demonstrates the power of rigorous, unbiased transcriptomic approaches for obtaining a fresh view of gene conservation and innovation in the evolution of animal diversity. It also illustrates how transcriptome comparisons can allow prediction of expression characteristics without large-scale in situ hybridization screens; the differential transcriptional responses in Fz1-MO and Stbm-MO embryos will be very useful for selecting candidate genes for future studies targeting particular developmental processes. From a theoretical standpoint, our findings provide strong support for the notion that many evolutionarily conserved genes are deployed across eumetazoans to regulate development, but also good evidence that developmental regulation in cnidarians may involve a significant number of taxon-restricted genes. Functional studies in Clytia of the genes identified here should provide a fruitful entry point for exploring both these possibilities.
Embryo manipulation, culture and harvesting
Eggs obtained by light-induced spawning of laboratory-raised medusae were microinjected with morpholino oligonucleotides prior to fertilization as described [39]. Previously unpublished morpholino sequences are provided in File S7. Use of genetically identical female medusae derived from a single laboratory polyp colony (Z4B) and males from a closely related colony [32] restricted the problems of sequence polymorphism. After culture at 18°C to the four-cell stage, any unfertilized or abnormally dividing embryos were removed. Early gastrula stage embryos, used for RNA extraction or fixed for in situ hybridization or confocal microscopy, were obtained after overnight culture at 16°C (17 hours). Planulae were fixed for in situ hybridization after 24 or 48 hours of culture at 18°C. Particular care was taken to use identical timing and temperature regimes for all experiments.
DGE analysis
For each experimental condition, total RNA was extracted from batches of 900-1400 early gastrula stage embryos using the RNAqueous kit (Life Technologies/Ambion, CA). RNA integrity was confirmed by formaldehyde gel electrophoresis. Two independent biological replicates were performed for the uninjected and Wnt3-MO conditions, and single samples for the other morpholino conditions. Estimated final embryo numbers in each sample, after removal of any showing arrested cleavage or irregular development, were as follows: uninjected: 1900 each; Wnt3-MO: 2300 each; Fz1-MO: 900; Fz3-MO: 1600; Stbm-MO: 1400. Library construction and Illumina short-read (51 bp) sequencing were performed by GATC (Konstanz, Germany).
To quantify gene expression, the number of reads mapped onto a reference transcriptome data set was taken as a measure of transcript level. The reference transcriptome, comprising 24,893 distinct (non-overlapping) assembled sequences, was built by combining, using CAP3 software, previous EST data [15,32] and Illumina sequences from one of the untreated early gastrula samples generated in this study. Redundant sequence entries were eliminated by USEARCH (ver. 5.2.32_i86linux32). The longest predicted ORF from each sequence was used as the reference for read mapping. To reduce polymorphism, adapter sequences and probable 5′ UTR sequences upstream of the first ATG in each cDNA contig were removed. For each experimental condition approximately 80 million 51 bp Illumina reads were mapped to the reference transcriptome using Bowtie, with a tolerance of two mismatches. Reads that matched more than one reference sequence were not taken into account. Around 35% of the reads obtained for each condition could be mapped using this method. Statistical analysis was performed using the DEGseq R package [50] to determine for each transcript whether the observed ratio of transcript levels (M) between two samples is significant given the global average expression (A). The Random Sampling Model employed assumes a normal distribution for log2(C), where C is the number of counts, as confirmed for our data by a Q-Q plot (Figure 1B). M = log2(C_sample1) − log2(C_sample2) estimates the difference of expression between the conditions; A = (log2(C_sample1) + log2(C_sample2))/2 measures the average expression in the two conditions. A p-value was generated for each gene to determine whether the expression difference between samples was significant. A z-score was generated for each transcript as a measure of the deviation from the random model: z-score = (M_observed − M_expected under the random sampling model)/Var(M_expected under the random model).
The MATR method used an estimate of the variation between duplicate embryo samples (calculated using the CTR method) to generate a second MA plot and to adjust the z-score accordingly.
Gene sequence analysis
An R script was devised to automatically analyze the six possible reading frames of each unique assembled transcript sequence and to predict the best ORF ("find.ORF" script downloadable at http://octopus.obs-vlfr.fr/R_scripts). Sequence comparisons were performed using both BLASTx with the whole sequence and BLASTp with the predicted ORF translation against the "non-redundant" (nr) NCBI database. Domain analyses (Files S1 and S3) were performed using InterProScan, SignalP for the detection of secretion signal peptides and TMHMM for the prediction of transmembrane domains. Gene identities (column 3 of File S1) were based on BLAST and domain analyses. Gene accession numbers are provided in File S1 and File S3.
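As an illustration of the six-frame ORF prediction step, here is a minimal Python sketch. It is a stand-in for the cited "find.ORF" R script, whose exact rules (e.g. handling of ORFs lacking a stop codon) we have not reproduced:

```python
def revcomp(seq):
    """Reverse complement of a DNA sequence (uppercase ACGT only)."""
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(seq))

def longest_orf(seq):
    """Scan all six reading frames of a nucleotide sequence and return
    the longest ATG-to-stop ORF found (nucleotides, stop codon included).
    Returns an empty string if no complete ORF exists."""
    stops = {"TAA", "TAG", "TGA"}
    best = ""
    for strand in (seq.upper(), revcomp(seq.upper())):
        for frame in range(3):
            codons = [strand[i:i + 3] for i in range(frame, len(strand) - 2, 3)]
            start = None
            for idx, codon in enumerate(codons):
                if codon == "ATG" and start is None:
                    start = idx                       # open a candidate ORF
                elif codon in stops and start is not None:
                    orf = "".join(codons[start:idx + 1])
                    if len(orf) > len(best):
                        best = orf
                    start = None                      # close and keep scanning
    return best
```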
Orthology analysis
To determine orthology of the transcript sequences studied in detail (Table 1) we searched for homologs by reciprocal BLASTp. When reciprocal BLAST and domain analysis (see above) gave unambiguous identities (non-multigene families), gene names were attributed directly (Sulf, Aat, Asparaginase, Amt, UCP). For certain multigenic developmental regulator families, we added our candidate sequence and the retrieved cnidarian sequences to alignments from previously published studies kindly provided by the authors (see File S2 and acknowledgements). Where no appropriate existing alignments were available, sequences from a range of eumetazoan genomes (Drosophila melanogaster, Lottia gigantea, Strongylocentrotus purpuratus, Xenopus laevis or Homo sapiens, H. magnipapillata and N. vectensis) were aligned using MUSCLE, the best-fitting model of evolution was determined using ProtTest 2.4, and phylogenetic analysis was performed using PhyML 3.0. The trees are available in File S2.
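The reciprocal BLASTp criterion can be sketched as follows (Python; the input dictionaries are placeholders standing in for parsed best-hit tables, and the gene names used in the test are illustrative):

```python
def reciprocal_best_hits(hits_ab, hits_ba):
    """Given best-hit mappings from BLASTing proteome A against B and
    B against A (query -> best subject), return the pairs that are each
    other's best hit: the usual reciprocal-best-hit criterion for
    provisional orthology. Real pipelines would build these mappings
    from BLASTp tabular output."""
    return sorted(
        (a, b) for a, b in hits_ab.items() if hits_ba.get(b) == a
    )
```

Only pairs confirmed in both directions survive; a one-way best hit (e.g. a paralog outcompeting the true ortholog in one search) is discarded.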
In cases where clear Hydra magnipapillata orthologs were identified, further analysis was performed using the Hydra vulgaris transcript dataset (HAEP; http://compagen.zoologie.uni-kiel.de/blast.html) [91]. The number of matching reads recorded in each separated cell population (endoderm, ectoderm and nanos-positive cells) was normalized with respect to total read number (File S5).
Gene cloning and probe synthesis
In situ hybridization probes were synthesized from cDNA clones corresponding to our EST collection when available. For the remaining sequences, cDNAs were cloned by PCR using the TOPO-TA cloning kit (Invitrogen). All sequences were verified before probe synthesis. DIG-labeled antisense RNA probes for in situ hybridization were synthesized using Promega T3/T7/Sp6 RNA polymerases, purified using ProbeQuant G-50 Micro Columns (GE Healthcare) and taken up in 100 µl of 50% formamide.
Gene function analysis and microscopy
For the selected candidate genes we addressed phenotype specificity by designing and testing several morpholinos targeting different parts of the sequence, discarding any that proved toxic to cell division during pre-gastrula development. We could identify only one non-toxic morpholino each targeting FoxQ2c and WegA1, and none for FoxQ2a. For Bra2 one morpholino targeted the predicted AUG translation initiation codon and the other an exon-intron junction (all details in File S7). For each morpholino we first injected a range of concentrations into eggs prior to fertilization, and then assessed planula morphology for the lowest non-toxic dose at 24 h and 48 h. The cellular basis of the observed phenotypes was then further assessed by confocal microscopy. Images of in situ hybridization profiles and DIC images of live embryos were acquired on an Olympus BX51 microscope. For confocal imaging of cell boundaries using fluorescent phalloidins and of nuclei using Hoechst 33358 or TO-PRO-3 dyes, embryos were fixed, processed and imaged on a Leica SP5 microscope as described previously [39]. Microtubules were stained by immunofluorescence using the anti-α-tubulin rat monoclonal antibody YL1/2 (Sigma) followed by rhodamine-conjugated anti-rat Ig antibodies (Jackson ImmunoResearch).
Quantitative RT-PCR
Total RNA from 60 Wnt3-MO injected and 60 non-injected early gastrulae was extracted using the RNAqueous-Micro kit according to the manufacturer's instructions (Ambion, Warrington, UK). Genomic DNA was removed by a DNase I treatment (Ambion) and this step was verified for each RNA extract. First-strand cDNA was synthesized using 500 ng of total RNA, Random Hexamer Primers and Transcriptor Reverse Transcriptase (Roche Applied Science, Indianapolis, USA). Quantitative PCRs were run in quadruplicate with EF-1alpha used as the reference control gene. Each PCR contained 5 µl of 1/400-diluted cDNA, 10 µl SYBR Green I Master Mix (Roche Applied Science), and 200 nM of each gene-specific primer, in a 20 µl final volume. PCR reactions were run in 96-well plates in a LightCycler 480 (Roche Applied Science). For each gene studied an expression level N was calculated as 2^(−ΔCt), where Ct (cycle threshold) represents the number of cycles required for the fluorescent signal to cross the threshold.
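A minimal sketch of the quantification step, assuming the standard ΔCt normalization of each target gene to the EF-1alpha reference (Python; the Ct values in the example are illustrative only):

```python
def relative_expression(ct_gene, ct_reference):
    """Relative expression by the delta-Ct method: normalize the target
    gene's Ct to the reference gene (EF-1alpha in the text) and convert
    to linear scale, N = 2**(-delta_ct). Each extra cycle needed to
    cross the threshold halves the inferred starting amount."""
    delta_ct = ct_gene - ct_reference
    return 2.0 ** (-delta_ct)
```

A gene whose Ct is two cycles above the reference is therefore expressed at one quarter of the reference level.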
Supporting Information
File S1 Primary list of transcripts differentially expressed in Wnt3-MO compared to non-injected embryos identified by DGE (z-score cutoff of ±5.0), with putative identities, domain analysis, z-scores calculated for three MO-injected embryo conditions, accession numbers and DGE classes assigned in relation to Wnt3-MO and Fz1-MO responses (see text).
(XLS)
File S4 Expression patterns of additional selected transcripts. In situ hybridization profiles for FoxQ2c (A); NotumO (B); WegO3 (C); Goosecoïd (D); sFRP-A (E) and Tbx (F). FoxQ2c shows an IE-type pattern; NotumO, WegO3 and Goosecoïd show O-type patterns; sFRP-A and Tbx show A-type patterns. The correlation with DGE responses holds in all cases (see Table 1). Scale bars and panel organization as in Figures 2 and 3.
(TIF)
File S5 Hydra orthologs of the Clytia genes analyzed, from genomic and transcriptomic data, along with information where available on differential expression between Hydra tissues [25]. Orthology was confirmed by phylogenetic (PhyML) analysis of sets of closely related sequences from cnidarians. GFP expression [92] provided a marker of the planula ectoderm (green in P, T, U). Oral poles are at the top in all images. Uninjected planula larvae (A) showed well-defined, epithelialized ectodermal and endodermal layers. WegO1 (B), Bra1 (C) and Bra2 (D) showed deficits of endoderm with regions of empty blastocoel still found (asterisks) as well as residual ectodermal cells accumulating towards the aboral pole (white arrowheads). This defect was much less severe in WegO1-MO embryos. FoxQ2c-MO (E) and WegIE2-MO (F) embryos showed severely disrupted endodermal layers (black arrows). WegA1-MO injection generated aggregates of endodermal-like cells of variable sizes covered in some (G, I) but not all (H) cases by a thin layer of ectodermal cells. HD02-MO (L-P) and WegD1-MO (Q-T) embryos had severely disrupted morphology characterized by a disruption of the basal lamina between the endoderm and GFP-expressing ectoderm layers (white arrows in P and T; compare with the uninjected planula in U). The WegD1-MO embryos were filled with disorganized cell sheets and lacked the central stripe of cell destruction characteristic of normal endodermal cavity formation. Anti-tubulin staining revealed disorganized bundles of neurite-like processes at the ectoderm-endoderm interface in HD02-MO embryos, contrasting with the more orderly organization in uninjected embryos (pink arrows in confocal images in O and K respectively, acquired at this level) and the lack of this layer in WegD1-MO embryos (S). Scale bar 50 µm.
(TIF)
Whole genome sequencing enables the characterization of BurI, a LuxI homologue of Burkholderia cepacia strain GG4
Quorum sensing is a mechanism for regulating proteobacterial gene expression in response to changes in cell population. In proteobacteria, N-acyl homoserine lactone (AHL) appears to be the most widely used signalling molecule, mediating, among other processes, the production of extracellular virulence factors for survival. In this work, the genome of B. cepacia strain GG4, a plasmid-free strain capable of AHL synthesis, was explored. In silico analysis of the 6.6 Mb complete genome revealed the presence of a LuxI homologue corresponding to a Type I quorum sensing system. Here, we report the molecular cloning and characterization of this LuxI homologue, designated BurI. This 609 bp gene was cloned and overexpressed in Escherichia coli BL21(DE3). The purified protein was approximately 25 kDa and is highly similar to several autoinducer proteins of the LuxI family among Burkholderia species. To verify the AHL synthesis activity of this protein, high resolution liquid chromatography-mass spectrometry analysis revealed the production of 3-oxo-hexanoyl-homoserine lactone, N-octanoyl-homoserine lactone and 3-hydroxy-octanoyl-homoserine lactone from induced E. coli BL21 harboring recombinant BurI. Our data show, for the first time, the cloning and characterization of the LuxI homologue from B. cepacia strain GG4 and confirm its AHL synthesis activity.
INTRODUCTION
It has been widely accepted that single-celled bacteria communicate with each other using small, hormone-like chemical molecules known as autoinducers. Such cell-cell communication mechanism or quorum sensing (QS) regulates various physiological activities among bacterial communities, ranging from bioluminescence to swarming motility (Dong & Zhang, 2005;Miller & Bassler, 2001;Williams, 2007). The QS bacteria release autoinducers in response to environmental stimuli or at specific stages of growth. At a threshold level, these signalling molecules will then bind to their cognate receptor to form an autoinducer-receptor complex that regulate the expression of target genes (Chan et al., 2011b;Hong et al., 2012b;Williams et al., 2007). By far, the most extensively studied QS molecules in the last two decades is N-acyl homoserine lactone (AHL) (Eberhard et al., 1981). Other well-known bacterial cell-cell communication signals include cyclic thiolactone (Ji, Beavis & Novick, 1995), hydroxyl-palmitic acid methyl ester (PAME) (Flavier et al., 1997), furanosylborate (Chen et al., 2002), and methyl dodecenoic acid (Wang et al., 2004).
Among Gram-negative bacteria, a myriad of AHL derivatives, which differ in the length or structure of the acyl side chain, have been identified. The acyl side chains consist of fatty acids of different chain length and degree of saturation, with or without a substituent at the C3 position (Swift et al., 1997). The two principal components of AHL-driven QS systems are the LuxI and LuxR proteins, which act as the AHL synthase and signal receptor, respectively (Fuqua, Parsek & Greenberg, 2001). The secreted AHL binds to the LuxR protein to form a LuxR/AHL complex which then regulates the expression of target genes and thus the physiological functions of the cell (Chong et al., 2012; Parsek & Greenberg, 2000). Studies have shown that the luxI gene is one of the main targets of the LuxR/AHL complex, thus increasing the production of AHL (Hong et al., 2012b). LuxI/LuxR QS systems have been well studied in numerous bacterial species. In addition, whole genome sequencing projects have unravelled more bacterial species with putative luxI/luxR homologues. Multiple QS systems have also been found within single genomes (Hao et al., 2010).
In the past few decades, members of the Burkholderia genus are among groups of Proteobacteria which have been extensively studied in QS system. These Gram-negative bacteria are versatile microorganisms and may cause a number of diseases in many host organisms. They have been isolated from water, soil, industrial areas and hospital environments (Stoyanova et al., 2007). In recent years, the genus Burkholderia has been phylogenetically well defined. It comprises more than 60 species which are functionally remarkably diverse. Of all Burkholderia species, B. cepacia is of greatest importance. Previously known as a phytopathogen and the etiological agent of soft rot of onions, B. cepacia (previously Pseudomonas cepacia) is also an important causative agent in patients with cystic fibrosis (Govan, Hughes & Vandamme, 1996). QS in B. cepacia was found to play critical roles in regulation and expression of extracellular proteins and regulation of swarming and biofilm formation (Aguilar et al., 2003).
Chan and co-workers have been exploring novel rhizosphere environments for bacterial communities in the Malaysian rainforest, and the genus Burkholderia was recently found associated with the roots of Zingiber officinale (ginger) (Chan et al., 2011a; Chan et al., 2011b). One of the ginger rhizosphere strains was identified as B. cepacia strain GG4 (hereafter referred to as strain GG4). This soil isolate was found to secrete four AHLs, namely 3-oxo-hexanoyl-homoserine lactone (3-oxo-C6-HSL), N-octanoyl-L-homoserine lactone (C8-HSL), 3-hydroxy-octanoyl-homoserine lactone (3-hydroxy-C8-HSL) and N-nonanoyl-L-homoserine lactone (C9-HSL). While most Burkholderia spp. have been reported to produce C6-HSL, C8-HSL and 3-hydroxy-C8-HSL, strain GG4 was the first Burkholderia strain found to synthesize long-chain C9-HSL. The production of C9-HSL may regulate unknown genetic traits which could play a vital role in the adaptation of strain GG4 as an endophytic bacterium in the ginger rhizosphere, as compared to other Burkholderia species. Hence, it is of high interest to elucidate the role of the AHLs as global regulators of QS activity in the physiological functions of this soil-dwelling bacterium.
The whole-genome sequencing of B. cepacia strain GG4 was performed recently using Roche 454 GS FLX technology. The assembly of the genomic data produced an approximate genome size of 6.6 Mb with 72 contigs (Hong et al., 2012a). This plasmid-free bacterium was found to consist of two chromosomes with G + C content of 66% and 2,716 predicted coding sequences. The genome sequences corresponding to chromosomes 1 and 2 have been deposited in GenBank, with the accession numbers CP003774 and CP003775, respectively.
The objectives of the present study were to decipher the genomic architecture of strain GG4 for autoinducer protein and subsequently the molecular characterization of this single putative luxI homologue, burI. The burI gene was amplified from genomic DNA of strain GG4 and the gene was overexpressed in E. coli. The recombinant BurI protein was purified and the production of AHLs was characterized using mass spectrometry.
Bacterial strains and culturing conditions
All bacterial strains and plasmids used in this study are listed in Table S1. B. cepacia strain GG4 was grown aerobically in Lysogeny Broth (LB) medium or on LB agar (Merck, Germany) at 25 °C with shaking (220 rpm). E. coli strains were grown routinely in LB medium supplemented with 100 µg/ml ampicillin (Sigma, St. Louis, MO) alone, or 30 µg/ml kanamycin (Sigma, St. Louis, MO) and 34 µg/ml chloramphenicol (Sigma, St. Louis, MO), and incubated at 37 °C aerobically with shaking (250 rpm). All bacterial strains were stored frozen at −70 °C in LB supplemented with 50% glycerol.
Isolation of genomic DNA
An overnight culture of strain GG4 was harvested and lysed with DNAzol reagent (Invitrogen, USA) followed by addition of Proteinase K (NEB, USA). Absolute ethanol was added to the lysate to precipitate the DNA. The resulting DNA pellet was washed twice with 75% (v/v) ethanol, air-dried, dissolved in TE buffer (pH 8.0) and stored at 4 °C. Plasmid DNA for use in subcloning was isolated using the QIAprep Spin Miniprep Kit (Qiagen, Germany) according to the manufacturer's instructions. The quality of extracted DNA was analyzed by agarose gel electrophoresis followed by ethidium bromide (Sigma, St. Louis, MO) staining. The purity of the DNA was estimated by NanoDrop spectrophotometer (Thermo Scientific) and the yield was estimated using a Qubit 2.0 Fluorometer (Life Technologies, USA).
Construction of recombinant burI expression plasmids
The burI gene was amplified from the extracted genomic DNA of B. cepacia GG4 using the polymerase chain reaction (PCR). The primers used were burI-F (5′-CCATGGGCATGCGGACCTTCGTTCAC-3′) and burI-R (5′-CTCGAGTATGGCGGCGATGGCTT-3′). The primers were designed based on the sequence of burI identified from whole genome analysis. Two restriction sites (underlined), NcoI and XhoI, were added to the forward and reverse primers, respectively. The PCR cycles consisted of an initial denaturation at 95 °C for 5 min, followed by 30 cycles of 95 °C for 30 s, annealing at 55 °C for 40 s and extension at 72 °C for 40 s, with a final extension at 72 °C for 5 min. Sterile deionised water was used as the negative control in all PCR reactions. The PCR product was verified using agarose gel electrophoresis followed by ethidium bromide (Sigma, St. Louis, MO) staining. The amplicon with the desired band size was purified using the QIAquick Gel Extraction kit (Qiagen, Germany) and ligated into pGEM-T (Promega, USA), as per the manufacturer's instructions. The resulting recombinant plasmid (designated pGEMT-burI) was transformed into E. coli JM109 (Sambrook & Russel, 2001). A DNA fragment was excised from this recombinant plasmid by digestion with NcoI and XhoI followed by gel purification, and ligated into pET28a (Novagen, Germany) digested with the same enzymes, to produce pET28a-burI. The correct insert in the pGEM-T and pET28a plasmids was verified by automated Sanger DNA sequencing.
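The added restriction sites can be checked directly against the primer sequences quoted above; a small Python sketch follows (the recognition sequences are the standard NcoI/XhoI sites, an assumption not restated in the text):

```python
# Recognition sequences (assumed standard definitions for these enzymes).
SITES = {"NcoI": "CCATGG", "XhoI": "CTCGAG"}

def added_sites(primer):
    """Return which of the cloning-relevant restriction sites occur in a
    primer sequence. The primers below are copied from the text."""
    return [name for name, site in SITES.items() if site in primer.upper()]

burI_F = "CCATGGGCATGCGGACCTTCGTTCAC"  # forward primer, NcoI site at 5' end
burI_R = "CTCGAGTATGGCGGCGATGGCTT"     # reverse primer, XhoI site at 5' end
```

Such a check also guards against a site occurring accidentally inside the amplicon, which would be fragmented by the double digest.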
Nucleotide sequence and bioinformatics analysis of burI
The nucleotide sequences of burI and other luxI homologues were verified using the BLASTX program available from the NCBI website (http://www.ncbi.nlm.nih.gov/). Searches for open reading frames (ORFs) were performed using the ORF Finder tool (http://www.ncbi.nlm.nih.gov/gorf/gorf.html) while the fundamental properties of the proteins were predicted using ExPASy (http://www.expasy.org/). Multiple sequence alignments of the amino acid sequences were performed using the Sequence Manipulation Suite (http://www.bioinformatics.org). A phylogenetic tree of the BurI protein was then constructed with Molecular Evolutionary Genetics Analysis (MEGA) version 5.0 using the Neighbour-Joining method (Chan et al., 2010; Tamura et al., 2011). Bootstrap analyses of up to 1,000 replicates were used to provide confidence estimates for the constructed tree.
Heterologous expression of BurI protein in E. coli
To express the His-tagged fusion protein, pET28a-burI was transformed into E. coli BL21(DE3)pLysS cells (Sambrook & Russel, 2001). Then, 1 ml of an overnight culture was inoculated into 50 ml of fresh LB medium containing both kanamycin and chloramphenicol and cells were grown to an OD600 of 0.5. Overexpression of BurI was optimized in terms of isopropyl-β-D-thiogalactopyranoside (IPTG; Sigma, St. Louis, MO) concentration (0.2-1.0 mM) and temperature of induction (15 °C and 37 °C). Once IPTG was added, growth of the culture was continued for 8 h with shaking at the desired temperature. The cells were harvested by centrifugation at 10,000 rpm and lysed with BugBuster™ Protein Extraction Reagent supplemented with protease inhibitors and Benzonase nuclease (Novagen, Germany). The recombinant proteins were purified from the cell lysate using Ni-IDA agarose affinity chromatography (Applied Biological Materials Inc., USA) according to the manufacturer's protocol.
Sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) analysis
Samples of the cell lysates taken before and after IPTG induction were suspended and boiled in 5× Laemmli sample buffer (Bio-Rad, USA), and examined by polyacrylamide gel electrophoresis (PAGE; Bio-Rad, USA) in the presence of SDS on 12.5% (w/v) gels (Laemmli, 1970). Following this, Coomassie brilliant blue R-250 (CBB; Bio-Rad, USA) was used to stain the gel before viewing.
AHL extraction
E. coli BL21 cells harboring pET28a-burI were cultured in LB medium buffered to pH 6.5 with 50 mM 3-[N-morpholino]propanesulfonic acid (MOPS) to prevent hydrolysis of AHL in alkaline medium (Yates et al., 2002). The culture was then induced with IPTG as described earlier. The spent culture supernatant was extracted three times with an equal volume of acidified ethyl acetate (0.1% v/v glacial acetic acid; Merck, Germany) and the organic solvent was evaporated to dryness. The dried extracts were then resuspended in 1 mL of acidified HPLC-grade ethyl acetate and again allowed to dry. Finally, 100 µL of acetonitrile (HPLC grade; Merck, Germany) was added to dissolve the extracted AHLs. The mixture was then filtered through a 0.22 µm syringe filter and an aliquot (20 µL) of the extract was placed in a sample vial for analysis by liquid chromatography-mass spectrometry (LC-MS) (How et al., 2015).
Identification of AHL profile by liquid chromatography mass spectrometry (LC-MS/MS)
An Agilent 1290 Infinity LC system (Agilent Technologies Inc., USA) was used as the LC delivery system, coupled with an Agilent ZORBAX Rapid Resolution HT column (2.1 mm × 50 mm, 1.8 µm particle size). The column temperature was maintained at 37 °C and the injection volume was set to 2 µL. The analysis was performed in 15 min at a constant flow rate of 0.3 mL/min. The following mobile phases were used: (A) 0.1% v/v formic acid in HPLC-grade water and (B) 0.1% v/v formic acid in acetonitrile (ACN). A gradient profile with the following settings (time, %A:%B) was applied: 0 min, 80:20; 7 min, 50:50; 12 min, 20:80; and 14 min, 80:20. High-resolution electrospray ionization mass spectrometry (ESI-MS) was performed with an Agilent 6490 Triple-Quad LC-MS/MS system (Agilent Technologies Inc., USA). ESI in positive mode was employed and the m/z range for precursor ions was set from 150 to 400. The probe capillary voltage was set at 3 kV, sheath gas at 11 mL/h, nebulizer pressure at 20 psi, and desolvation temperature at 200 °C. The collision energy was optimized at 5 eV and the fragmentor voltage was set at 380 V. For detection of AHLs, precursor ion scan mode was used. The presence of a product ion at m/z 102 corresponds to the [M + H]+ ion of the core lactone ring moiety. Agilent MassHunter software was used for MS data analysis by comparison of extracted ion (EI) mass spectra and retention indices with data obtained from synthetic AHL compounds (Chong et al., 2012; Yin et al., 2012). ACN and the AHL extract from culture supernatant of E. coli harboring pET28a alone were used as the blank and negative controls, respectively.
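For clarity, the gradient table can be expressed as a simple interpolation. The Python sketch below assumes linear ramping between set points, which is a common pump default but an assumption on our part:

```python
# Gradient table from the text: (time_min, %A, %B).
GRADIENT = [(0.0, 80, 20), (7.0, 50, 50), (12.0, 20, 80), (14.0, 80, 20)]

def mobile_phase_b(t):
    """Linearly interpolate the %B (0.1% formic acid in acetonitrile)
    delivered at time t (minutes) between the programmed set points."""
    if t <= GRADIENT[0][0]:
        return GRADIENT[0][2]
    for (t0, _, b0), (t1, _, b1) in zip(GRADIENT, GRADIENT[1:]):
        if t0 <= t <= t1:
            return b0 + (b1 - b0) * (t - t0) / (t1 - t0)
    return GRADIENT[-1][2]  # hold final composition after the last set point
```

For example, halfway through the first ramp (3.5 min) the mobile phase is 35% B.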
RESULTS
From in silico analysis, an open reading frame (ORF) coding for a putative LuxI homologue, designated burI, was found on chromosome 2 of the complete genome sequence of B. cepacia strain GG4 (Hong et al., 2012a). This autoinducer synthesis protein has been deposited in the GenBank database (accession number YP_006617833.1). Analysis of the luxI gene cluster shows little variation between strain GG4 and other close relatives among Burkholderia species (Fig. 1). All the Burkholderia strains studied possess luxI homologues that are divergently oriented with respect to the upstream transcriptional regulator, the luxR homologue.
Web-based similarity searches against the GenBank database indicated that the BurI protein sequence is highly homologous to other AHL synthases of Burkholderia species, mostly from B. cepacia complex (Bcc) strains. All Bcc strains have been isolated from both environmental and human clinical sources (Coenye & Vandamme, 2003). The multiple sequence alignments in Fig. 2 illustrate that the BurI protein shares similarities and conserved amino acids with other reported AHL synthases of Burkholderia species. It was found that BurI and all the LuxI family members contain the 10 conserved amino acid residues of LuxI homologues. On the other hand, the phylogenetic tree constructed from the amino acid alignment (Fig. 3) illustrates that BurI clusters closely with the autoinducer synthesis protein from B. vietnamiensis G4, a nitrogen-fixing bacterium colonizing the rhizosphere of rice (Suárez-Moreno, Caballero-Mellado & Venturi, 2008). Figure 4 shows the upstream and downstream sequences of the burI gene. The burI gene codes for a putative AHL synthase of 202 amino acids. The sequence TGTAAT, 34 nucleotides upstream from the start codon, and the sequence TTACCA, located 65 nucleotides upstream, could correspond to the putative −10 and −35 transcriptional elements, respectively. There are 25 nucleotides separating the two consensus regions, in agreement with the optimum spacing suggested by Hawley & McClure (1983) from E. coli promoter analysis. A potential Shine-Dalgarno site (GAGG) is located 6 bp upstream from the start codon. Apart from that, a putative lux box (TGTAAGAGTTACCAGTT) was found to partially overlap the putative −35 region. The 609 bp ORF of burI was then amplified by PCR and cloned into the pET28a overexpression vector, producing pET28a-burI, with expression of a 6×His-tag fusion driven by a T7 promoter. This recombinant plasmid was transformed into E. coli BL21 and the recombinant burI gene was overexpressed upon IPTG induction.
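The promoter-element spacing described above reduces to simple string arithmetic. The following Python sketch uses a synthetic placeholder region (not the real GG4 upstream sequence) laid out with the reported 25 nt spacer:

```python
def spacer_length(region, minus35="TTACCA", minus10="TGTAAT"):
    """Count the nucleotides separating the 3' end of the -35 hexamer
    from the 5' end of the -10 hexamer in an upstream region. The
    hexamers are those reported in the text."""
    i35 = region.index(minus35)
    i10 = region.index(minus10, i35 + len(minus35))
    return i10 - (i35 + len(minus35))

# Synthetic example mirroring the reported layout: -35 hexamer, a 25 nt
# spacer, the -10 hexamer, then filler down to the start codon.
example = "TTACCA" + "A" * 25 + "TGTAAT" + "G" * 28
```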
The optimum induction was found to be at 1 mM IPTG (data not shown). Figure 5 shows the presence of the overexpressed protein from the harvested cell lysate on SDS-PAGE, corresponding to the recombinant BurI protein. However, the protein was mostly present in the insoluble fraction. Hence, purification from the precipitate dissolved in 8 M urea was performed using Ni-IDA agarose affinity chromatography (Fig. 6). The purity of the recombinant protein was fairly good, with a molecular weight of approximately 25 kDa, inclusive of the His-tag peptide.
The spent culture supernatants of the IPTG-induced E. coli BL21 harboring pET28a-burI were analyzed using the Agilent 6490 Triple-Quad LC-MS/MS system. High resolution mass spectrometry analysis demonstrated the presence of 3-oxo-hexanoyl-homoserine lactone (3-oxo-C6-HSL), N-octanoyl-L-homoserine lactone (C8-HSL) and 3-hydroxy-octanoyl-homoserine lactone (3-hydroxy-C8-HSL) with m/z values of 214.0000, 228.3000 and 244.0000, respectively (Figs. 7-9). The mass spectra of the extracted AHLs were similar to those of the corresponding synthetic compounds at their respective retention times (Fig. S1). For each detected AHL, a fragment ion at m/z 102 was observed, which corresponds to the lactone moiety. AHLs were not found in E. coli BL21 harboring pET28a alone, and appeared only in trace amounts in non-induced E. coli harboring pET28a-burI. The mass spectra also revealed quantitatively that C8-HSL was produced more abundantly than the other two AHLs after 8 h of induction. Nevertheless, C9-HSL was absent from the culture supernatant of E. coli BL21 harboring pET28a-burI despite several optimizations of the induction of protein expression (Fig. S1).
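The assignment logic described here (precursor m/z match plus the diagnostic m/z 102 lactone fragment) can be sketched in a few lines of Python; the mass tolerance is illustrative, not taken from the instrument method:

```python
# [M+H]+ m/z values as reported in the text for the three detected AHLs.
AHL_MZ = {
    "3-oxo-C6-HSL": 214.0,
    "C8-HSL": 228.3,
    "3-hydroxy-C8-HSL": 244.0,
}
LACTONE_FRAGMENT = 102.0  # product ion diagnostic of the homoserine lactone ring

def assign_ahl(precursor_mz, product_mzs, tol=0.5):
    """Assign a precursor ion to an AHL if its m/z matches a known [M+H]+
    value within `tol` AND the characteristic m/z 102 lactone fragment is
    present among its product ions; otherwise return None."""
    if not any(abs(p - LACTONE_FRAGMENT) <= tol for p in product_mzs):
        return None  # no lactone-ring fragment: not an AHL candidate
    for name, mz in AHL_MZ.items():
        if abs(precursor_mz - mz) <= tol:
            return name
    return None
```

Requiring both conditions is what makes the precursor ion scan selective: a matching precursor mass alone could be any co-eluting compound.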
DISCUSSION
Members of the Burkholderia genus of Proteobacteria are nutritionally versatile and ubiquitous in the environment. Their unusually large genomes, which often comprise several (typically two or three) large replicons, and their ability to use different kinds of compounds as energy sources (Parke & Gurian-Sherman, 2001) are believed to be the main reasons for the ecological versatility of these bacteria. The ability to survive in a diverse array of environments is partly attributed to QS activity. QS via AHL signalling molecules is present in almost all Burkholderia species and plays a crucial role in governing virulence and other phenotypic traits such as colonization and niche invasion. This cell density-dependent communication enables rapid adaptation of the organisms to environmental changes (Choudhary et al., 2013). Hartmann & Schikora (2012) further demonstrated that QS signals may induce various types of plant responses.
QS-associated genes in Burkholderia sp. are located on chromosome 2, where most genes related to virulence and secretion systems have previously been found (Whitlock, Mark Estes & Torres, 2007). Thus far, two major AHL QS systems are known in the genus Burkholderia. The first is the CepI/R system, found in members of the Bcc, which produces and responds to C8-HSL (Eberl, 2006; Venturi et al., 2004). The other is the BraI/R system, found in many diazotrophic and plant-associated Burkholderia species, which produces and responds to 3-oxo-C12-HSL (Caballero-Mellado et al., 2004). According to Suarez-Moreno et al. (2010), the CepI/R system is a global regulatory system in Burkholderia sp., whereas the BraI/R system is stringently regulated by RsaL and is believed to control only a small set of genes.
Besides the conserved CepI/R system, additional QS systems have been found in other Bcc strains. For instance, Malott & Sokol (2007) reported the presence of the BviI/R system in B. vietnamiensis, while some B. cenocepacia strains harbor the CciIR system (Malott et al., 2005). Apart from that, a number of B. cenocepacia strains have been shown to produce two additional types of QS signalling molecules, 2-heptyl-4-quinolone (HHQ) and cis-2-dodecenoic acid (BDSF) (Deng et al., 2009; Diggle et al., 2006). One study showed that the absence of HHQ from B. pseudomallei altered bacterial colony morphology and increased the synthesis of elastase (Diggle et al., 2006). This shows that a single type of QS signal can be secreted by different kinds of bacteria and that a bacterial strain can harbor more than one type of signaling system. It is also interesting that Bcc strains can recognize and respond to P. aeruginosa QS molecules, indicating possible inter-species communication among the etiological agents that contribute to disease in cystic fibrosis patients (Riedel et al., 2001). Hence, the QS activity exhibited by strain GG4 in our study is likely a means of communication with other members of the microbial communities in the soil rhizosphere.
In addition, the CepI/R system of B. cepacia strain H111, a cystic fibrosis respiratory isolate, has been found to be involved in controlling biofilm formation and swarming motility. This strain likewise produces the two AHL molecules C8-HSL and C6-HSL, but in a ratio of 10:1. It was also shown that a cepI mutant defective in AHL secretion was significantly impaired in biofilm formation (Huber et al., 2001); this defect was restored to the wild-type phenotype when synthetic C8-HSL was added to the growth medium. A QS system secreting C8-HSL and C6-HSL has also been reported in the onion pathogen B. cepacia strain ATCC 25416, in which the cep locus is implicated in protease production and onion pathogenicity via the expression of polygalacturonase, an extracellular enzyme responsible for onion maceration. Proteolytic activity was significantly lower in the cepI mutant, attenuating onion pathogenicity, whereas the complemented mutant harboring the cepI locus in trans showed higher polygalacturonase activity and onion maceration (Aguilar, Bertani & Venturi, 2003).
In the current study, burI, which encodes the putative AHL synthase, was successfully cloned and characterized. The estimated size of the purified protein was in agreement with the SDS-PAGE profile (Fig. 6). The deduced protein sequence has a high degree of homology with several AHL synthases from other Bcc strains, strongly indicating a conserved QS system and a low rate of random mutation for this autoinducer gene among the heterogeneous Bcc. It appears likely that these proteobacteria share a similar basic QS mechanism and gene regulation in AHL synthesis, even though they act on different target genes. Analysis of the completed genome sequence revealed that BurI is probably the only member of the LuxI family (Hong et al., 2012a). The genetic organization of GG4 and other Burkholderia species clearly shows that the majority of the luxI/R gene clusters are conserved (Fig. 1).
A detailed analysis of the sequences upstream and downstream of the burI gene identified a putative Shine-Dalgarno region as well as −10 and −35 promoter elements (Fig. 4). Although neither promoter region is strongly conserved, the sequences meet the requirements of typical E. coli RNA polymerase σ70 consensus promoter sequences (Harley & Reynolds, 1987). The palindromic lux box sequence upstream of the gene strongly suggests that the putative transcriptional activator BurR binds to the burI promoter to activate burI expression. This hypothesis, although yet to be validated, is in agreement with findings on the CepI/R system of B. cepacia (Lewenza et al., 1999), in which expression of cepI is activated by the CepR/AHL complex. In fact, in many Bcc members, transcription of the luxI homologue activated by the LuxR/AHL complex provides signal amplification via a positive feedback mechanism (Choudhary et al., 2013); the resulting increase in AHL production is important for responding to cell density and for expression of target genes. Lewenza and co-workers (1999) postulated that expression of pvdA, a gene involved in ornibactin biosynthesis in B. cepacia, is regulated by CepR, as the pvdA promoter region contains a possible lux box-like sequence. Protease activity was also found to be influenced by CepR, as mutation of cepR results in a protease-negative phenotype. Hence, the regulation of ornibactin biosynthesis and protease activity by the QS system in strain GG4 opens another avenue for future study.
While most Burkholderia strains synthesize C6-HSL and C8-HSL (Suarez-Moreno et al., 2010), strain GG4 also produces other AHLs such as C9-HSL. Such variation among Burkholderia species strongly suggests that the AHL synthase is not well conserved. When E. coli harboring the recombinant burI was induced with IPTG for 8 h and its spent supernatant assayed by LC-MS/MS, the presence of 3-oxo-C6-HSL, C8-HSL and 3-hydroxy-C8-HSL was confirmed, suggesting that BurI is indeed the AHL synthase of B. cepacia strain GG4. These findings are consistent with a recent study by Chan et al. (2011a), which obtained the same AHL profile. Nevertheless, in this work, C9-HSL, which is secreted by the parent strain, could not be detected in the spent culture supernatant of E. coli harboring the recombinant burI. Most likely, this discrepancy reflects gene expression in a different host species: E. coli may produce only low amounts of C9-HSL, the growth conditions employed may not have been optimal for synthesis of this autoinducer molecule, or E. coli cells may lack the biosynthetic machinery needed for C9-HSL production. Notably, C8-HSL was the AHL synthesized in the highest amount by E. coli harboring burI, in agreement with numerous studies showing that most Bcc isolates produce C8-HSL in greatest abundance.
Phylogenetic analysis (Fig. 3) demonstrated that BurI is closely related to the autoinducer protein of B. vietnamiensis G4, an environmental isolate that plays an important role in nitrogen fixation (Malott & Sokol, 2007). In addition to C8-HSL and C6-HSL, B. vietnamiensis G4 produces long-chain AHL molecules such as N-decanoyl-homoserine lactone (C10-HSL), N-dodecanoyl-homoserine lactone (C12-HSL), and 3-oxo-decanoyl-homoserine lactone (3-oxo-C10-HSL) (Malott & Sokol, 2007). Interestingly, strain GG4 is the only soil isolate among Bcc strains known to synthesize C9-HSL. Such differences in AHL profile are believed to reflect distinct QS networks that regulate diverse physiological processes and facilitate intercellular communication among bacterial communities in the rhizosphere environment. Chan et al. (2011a) reported that quorum quenching co-exists with AHL-dependent QS in B. cepacia strain GG4: this isolate is able to reduce 3-oxo-AHLs to the corresponding 3-hydroxy compounds. The LC-MS/MS analysis in this study demonstrated that production of 3-hydroxy-C8-HSL was directed by the LuxI homologue and not by reduction of 3-oxo-C8-HSL. As there could be strain variation in AHL production by B. cepacia, it would be of great interest to examine the relationship between different Bcc isolates and their autoinducer synthesis.
To date, characterization of Bcc species in the environment has been more limited than investigation of their clinical epidemiology. A study by Stoyanova et al. (2007) reported that grasses and maize of the Gramineae are essential rhizospheric hosts and niches for Bcc bacteria. We therefore believe that environmental isolates such as strain GG4 are likely to have a major impact on the properties of polymicrobial communities in the rhizosphere. Indeed, many Bcc isolates have been exploited for various purposes, including plant growth promotion, biological control of plant pathogens, and bioremediation of recalcitrant xenobiotics (Stoyanova et al., 2007). It is plausible that more Bcc strains associated with different host plants will be isolated from different habitats in the future.
Despite significant progress on the taxonomy of the Bcc, knowledge of the virulence determinants and molecular mechanisms used by Bcc bacteria, particularly the clinical strains, remains scarce. Hence, as B. cepacia is ubiquitous in nature, it is an attractive organism in which to study the role of QS as a global regulatory system controlling virulence, and thereby to develop interventions designed to combat infection or to enable beneficial applications in agriculture.
Psychological Functioning of Slovene Adults during the COVID-19 Pandemic: Does Resilience Matter?
As a public health emergency, a pandemic increases susceptibility to unfavourable psychological outcomes. The aim of the present study was to investigate the buffering role of personal resilience in two aspects of psychological functioning, mental health and stress, among Slovene adults at the beginning of the COVID-19 outbreak. Within five days after Slovenia declared an epidemic, 2722 participants (75% female) completed an on-line survey measuring mental health and perceived stress as outcome variables and demographics, health-related variables, and personal resilience as predictor variables. Hierarchical logistic regression analyses demonstrated that women and younger and less educated participants had higher odds of less favourable psychological functioning during the COVID-19 outbreak. In addition, poorer health indicators and COVID-19 infection concerns predicted diminished psychological functioning. The crucial factor promoting good psychological functioning during the COVID-19 pandemic was resilience, which additionally buffered against the detrimental effects of demographic and health-related variables on mental health and perceived stress. While previous research suggests that mental health problems increase during pandemics, one way to prevent these problems and bolster psychological functioning is to build individuals' resilience. Interventions should be targeted particularly at younger adults, women, less educated people, and individuals who subjectively perceive their health to be rather poor.
Introduction
A pandemic, as a public health emergency, in itself increases people's proneness to various mental health problems, which may be further aggravated by the social distancing approach disrupting daily routines, restraining interpersonal communication, and limiting the availability of social support [1,2]. While the modern world has faced other epidemics and pandemics before, none has had such worldwide and drastic effects on most individuals and their everyday lives as the current COVID-19 pandemic. The present study aimed to elucidate people's psychological functioning at the beginning of the COVID-19 outbreak. Besides investigating the role of demographic characteristics and health-related variables in two aspects of individuals' psychological functioning, stress and mental health, special research interest was focused on examining the buffering role of personal resilience.
On 11 March 2020, the World Health Organization [3] recognized COVID-19 as a pandemic. Many countries, including Slovenia, took increasingly strict measures directed towards flattening the curve, i.e., slowing the infection rate of the virus across the population. These measures were primarily focused on social distancing and were to continue for an unpredictable time. The present study was carried out during the first days of lockdown, characterized by significant changes in all aspects of people's daily lives and high overall worry about infection, inflated by the exponentially increasing infection and death rates in the neighbouring regions of Italy.
Among personal factors affecting psychological functioning during adversity, resilience has been suggested to buffer pandemic-related stress [4]. The present study investigated resilience at the individual level as a personal quality that helps individuals thrive in the face of adversity [11]. The positive role of resilience in various stressful situations and life outcomes has been well documented [12], but its effects on psychological functioning during virus outbreaks remain understudied, with a few exceptions [13].
The aim of the present study was to examine psychological functioning during the first days after the declaration of the COVID-19 pandemic. We aspired to broaden existing knowledge on psychological functioning during such public health crises by focusing not only on mental health problems (i.e., stress levels) but also on positive mental health, thus adopting the modern view of mental illness and mental health as separate though related entities [14,15]. Moreover, we investigated the role of potential predictors of psychological functioning. In addition to the more commonly explored demographic and health-related variables, including people's concern about COVID-19 infection, this study also explored the incremental predictive value of individuals' personal resilience in the context of the COVID-19 pandemic. More precisely, resilience was expected to have a two-fold buffering effect: it could (i) inoculate individuals against elevated stress levels and decreased mental health, and (ii) weaken the negative impact of potential risk factors (e.g., pre-existing health conditions) on stress and mental health.
Participants and Procedure
The total sample consisted of 2722 participants with a mean age of 36.4 years (SD = 13.1). Among them, 32.2% were emerging adults (18-27 years), 40.9% were early adults (28-44 years), 20.7% were middle adults (45-59 years), and 6.1% were late adults (60-82 years). A quarter of the participants (25.1%) were male and three quarters (74.9%) were female. Regarding their education, 32.2% had a high school or lower education and 67.8% attained a post-secondary education or graduate degree.
The data were collected within five days after Slovenia declared an epidemic. During these five days, the government closed all sales and service facilities (with the exception of food stores and pharmacies), schools and kindergartens, stopped public transportation, and prohibited public gatherings. Furthermore, COVID-19 claimed its first victim in Slovenia. The data collection took place via an on-line survey platform. The link was distributed via social networks and advertised on the National radio and television's website. On the first page of the survey, the participants were informed about the aims of the study and asked to confirm their informed consent to participate.
Measures
Demographic data collected included information on sex, age, and educational level.
The general health indicators included the presence of at least one chronic health condition (yes/no answer) and subjective reports of health, assessed along a continuous scale ranging from 0 (very bad) to 100 (very good). Two contextualized health-related variables tapped the degree of worry regarding one's own and one's significant others' possible COVID-19 infection, assessed on a continuous scale ranging from 0 (not at all) to 100 (very worried). All continuous scale scores were dichotomized, with scores up to and including 50 regarded as poor health/not worried and scores above 50 as good health/worried. The 10-item Connor-Davidson Resilience Scale (CD-RISC-10) [16] is a self-report scale that measures how well one is equipped to bounce back after adversity. Each item is rated on a 5-point scale (0 = not true, 4 = true nearly all of the time). In the present study, the participants reported on their resilience for the past week. Previous studies have shown good reliability and validity [16] and measurement invariance across age and sex [17] for the CD-RISC-10. The alpha reliability coefficient in our sample was .94. The resilience score was dichotomized based on a median split (< 27 vs. ≥ 27).
The Perceived Stress Scale (PSS) [18] is a 10-item self-report scale designed to measure the degree to which situations in one's life are appraised as stressful. Using a 5-point rating scale (0 = never, 4 = very often), participants specify how often they felt or thought in a certain way during the last week. The reliability and validity of the PSS have been established as satisfactory [19]. In our study, the alpha reliability coefficient was .89. The perceived stress score was divided into the categories of low vs. high perceived stress based on a median split (< 17 vs. ≥ 17).
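The median-split dichotomization used for the CD-RISC-10 and PSS totals can be sketched as a short function. The cutoffs (27 and 17) are the sample medians reported in the text; item-level scoring details (e.g., any reverse-keyed items) are not shown here.

```python
# Sketch of the median-split dichotomization described in the text:
# total scores at or above the sample cutoff count as the "high" category.

CUTOFFS = {"resilience": 27, "stress": 17}  # sample medians from the text

def dichotomize(total, scale):
    """Return 'high' if the total meets the median-split cutoff, else 'low'."""
    return "high" if total >= CUTOFFS[scale] else "low"

print(dichotomize(26, "resilience"), dichotomize(27, "resilience"))  # -> low high
print(dichotomize(17, "stress"))                                     # -> high
```

Note that the cutoff value itself falls in the "high" group (≥ 27 and ≥ 17), matching the category boundaries given above.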
The short form of the Mental Health Continuum (MHC-SF) [20] consists of 14 items that measure positive mental health. The overall score reflects emotional, psychological, and social well-being. Respondents rate the items on a 6-point scale (0 = never, 5 = every day) with reference to the past week. The MHC-SF has shown good internal consistency and sound validity [14]. The alpha coefficient obtained with our sample was .91. Flourishing mental health is indicated when a person feels at least one of the three hedonic well-being symptoms "every day" or "almost every day" and at least six of the eleven psychological and social well-being symptoms "every day" or "almost every day" in the past week. The absence of flourishing mental health reflects moderate to poor well-being.
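The flourishing criterion stated above is essentially a counting rule and can be sketched as a short function. One assumption: on the 0-5 response scale, ratings of 4 ("almost every day") or 5 ("every day") are taken to satisfy an item's threshold.

```python
# Sketch of the MHC-SF flourishing rule described in the text.
# Assumption: ratings >= 4 correspond to "almost every day"/"every day".

def is_flourishing(items):
    """items: 14 ratings (0-5); first 3 = hedonic, last 11 = psychosocial."""
    assert len(items) == 14
    hedonic_hits = sum(r >= 4 for r in items[:3])
    psychosocial_hits = sum(r >= 4 for r in items[3:])
    return hedonic_hits >= 1 and psychosocial_hits >= 6

print(is_flourishing([5, 3, 3] + [4] * 6 + [2] * 5))  # -> True
print(is_flourishing([3, 3, 3] + [5] * 11))           # -> False (no hedonic hit)
```

The second example shows that the rule is conjunctive: high psychosocial well-being alone does not qualify as flourishing without at least one near-daily hedonic symptom.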
Results
Demographic characteristics and descriptive statistics were examined for the entire sample and separately for those with flourishing vs. non-flourishing mental health and low vs. high perceived stress in the past week. Overall, 40.7% (n = 1109) of the participants were classified as having flourishing mental health in the past week and 54.4% (n = 1242) perceived high levels of stress. More precisely, 28.4% of the sample had favourable scores on both indicators of mental health and 42.0% had unfavourable scores on both, while 17.2% reported low stress but non-flourishing mental health, and 12.3% high stress but flourishing mental health.
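The reported percentages can be checked arithmetically: the four cross-classified cells should reproduce the marginal rates of flourishing (40.7%) and high stress (54.4%, up to rounding). A minimal sketch:

```python
# Arithmetic check of the cross-classification reported above
# (percentages from the text; small discrepancies reflect rounding).

cells = {
    ("low stress", "flourishing"): 28.4,
    ("high stress", "not flourishing"): 42.0,
    ("low stress", "not flourishing"): 17.2,
    ("high stress", "flourishing"): 12.3,
}

total = sum(cells.values())
flourishing = sum(v for (s, f), v in cells.items() if f == "flourishing")
high_stress = sum(v for (s, f), v in cells.items() if s == "high stress")

print(round(total, 1), round(flourishing, 1), round(high_stress, 1))
# -> 99.9 40.7 54.3  (reported marginals: 40.7% flourishing, 54.4% high stress)
```

The flourishing marginal reproduces exactly (28.4 + 12.3 = 40.7), while the high-stress marginal (54.3 vs. 54.4) and the grand total (99.9) differ only by rounding of the published percentages.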
Next, chi-square tests were performed to examine the association of the independent variables with flourishing mental health and high perceived stress. Generally, flourishing mental health was more common among men, older participants, and highly educated participants (Table 1). Flourishing was also more common among participants who reported having good health, had no chronic health conditions, and were less worried about their own and others' potential infection with COVID-19. High stress was associated with female sex, younger age, lower educational level, lower subjective health, and worrying about one's own and others' potential infection with the new coronavirus. Finally, the strongest association was observed between high resilience and both indicators of good psychological functioning.
Hierarchical logistic regression modelling was employed to examine the independent effects of demographic characteristics, health-related variables, and resilience on flourishing mental health and high perceived stress. In the first step, age, sex, and education were entered as covariates in the models. In the second step, self-rated health, chronic health conditions, and worry about one's own and others' potential COVID-19 infection were added to the models. Finally, resilience was entered in the models. Except for age, all predictors were treated as categorical.
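The fit statistic reported for these models in Table 2, Nagelkerke R², can be computed from the null- and full-model log-likelihoods with the standard formula (a rescaling of the Cox-Snell R² to a 0-1 range). The log-likelihood values below are invented for illustration, not taken from the paper.

```python
import math

# Standard Nagelkerke pseudo-R^2 from null- and full-model log-likelihoods.
# Example log-likelihoods are hypothetical; n matches the sample size above.

def nagelkerke_r2(ll_null, ll_full, n):
    """Nagelkerke R^2 = Cox-Snell R^2 rescaled by its attainable maximum."""
    cox_snell = 1 - math.exp(2 * (ll_null - ll_full) / n)
    max_r2 = 1 - math.exp(2 * ll_null / n)
    return cox_snell / max_r2

r2 = nagelkerke_r2(ll_null=-1800.0, ll_full=-1500.0, n=2722)
print(round(r2, 3))
```

Because the Cox-Snell R² cannot reach 1 even for a perfect model, the Nagelkerke rescaling makes the statistic comparable across models, which is why it is a common choice for reporting hierarchical logistic regression fit.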
The results of the first step of the hierarchical logistic regression models (Table 2) revealed that men, older, and more educated participants were more likely to have flourishing mental health during the previous week compared to women, younger, and less educated participants, who were instead more likely to report being highly stressed. Both regression models were significant, but explained rather low shares of variance in the dependent variables (see Nagelkerke R 2 values in Table 2).
Adding the health-related variables to the models as covariates revealed that participants who rated their health as poor, reported having chronic health condition(s), and were worried about their own and others' potential COVID-19 infection were less likely to have flourishing mental health in the previous week, but more likely to report high perceived stress (with one exception: the presence of chronic health conditions was not a significant predictor of high stress). The associations with sex, age, and education remained stable. Again, both models were significant and some additional variance was explained in the two dependent variables.
Lastly, participants who were more resilient during the previous week had almost 7 times higher odds of flourishing mental health and 9.3 times lower odds of high stress levels compared to those who were less resilient. This was by far the strongest predictor in both models. Moreover, resilience attenuated the negative effects of female sex, lower education, and health-related variables on flourishing mental health. Apart from poor self-rated health, these covariates no longer had significant negative effects. The attenuation effect of resilience was also observed when predicting high levels of stress, although it was weaker, with most of the predictors from previous steps remaining significant. The two final models were significant, with 28% and 40% of variance explained in flourishing mental health and high perceived stress, respectively.
Discussion
The present study investigated the buffering role of personal resilience in two aspects of psychological functioning, stress and mental health, during the outbreak of COVID-19 and the subsequent social lockdown, while taking into account individuals' demographic and health-related characteristics.
The results obtained showed that demographic characteristics and health-related variables contribute significantly to favourable psychological functioning during the COVID-19 pandemic, but their predictive value is rather weak and diminishes further once personal resilience is accounted for. Nevertheless, younger age seems to represent a risk factor for poor psychological functioning during the pandemic, which is consistent with findings in China [9]. This result could be seen as counterintuitive, as the symptoms and consequences of the new coronavirus are worse for older than for younger adults [21]. However, there is some evidence that flourishing is more common in middle and late adulthood than in early and emerging adulthood [22], and the present study suggests that this holds true even in the face of such an adversity as the COVID-19 pandemic. In addition, the present results suggest that women may be at higher risk of non-flourishing mental health and high stress. While the statistics show somewhat higher COVID-19 mortality rates for men than women [23], our results are in line with the notion that other consequences of the pandemic and lockdown, such as financial challenges, increased informal care of children and their schooling as well as of sick family members, and decreased employment opportunities, could be more detrimental for women than men [24]. This finding is also consistent with previous research showing somewhat higher susceptibility of women than men to elevated levels of stress and mental health problems [25]. Finally, in line with previous findings [26,27], our results indicated a protective role of higher education in good psychological functioning, although this association was weak and diminished to the level of insignificance after controlling for personal resilience.
Our results further suggest that the subjective perception of one's health is more important for perceived stress and mental health during a pandemic than objective health indicators, such as the presence of chronic health conditions. The latter variable was included because COVID-19 mortality rates are higher for people with other medical conditions than for those without [28]. However, according to our results, psychological functioning outcomes seem to be more contingent on subjective assessments than on objective measures of health functioning. Furthermore, high concern about possible COVID-19 infection also proved a significant predictor of high perceived stress and lower levels of mental health. Nevertheless, the predictive value of the subjective and objective health indicators and infection concerns diminished substantially once resilience was taken into account.
As our results show, the crucial factor in psychological functioning during the COVID-19 pandemic seems to be individual-level resilience. Even after taking into account demographic characteristics and health-related variables, presumed to be associated with the risk of COVID-19 infection and mortality, the probability of experiencing high stress and flourishing mental health during the current pandemic and lockdown depends mostly on the level of personal capability to cope with adversity and achieve good adjustment. The results thus support the hypothesized buffering role of resilience against diminished psychological functioning due to the COVID-19 pandemic and the associated preventive measures, which may have concurrent and long-lasting negative effects on diverse aspects of people's everyday lives. Furthermore, resilience was found to buffer against the detrimental effects of various demographic and health-related variables on mental health, as it noticeably attenuated their role in stress and particularly in mental health. These findings corroborate the conceptualization of resilience as a trait that protects individuals against the impact of adversity or traumatic events [11,29], and extend it to the context of the COVID-19 pandemic with its unprecedented scope and widespread corollaries.
The good news concerning our findings is that resilience can be effectively enhanced and thereby the risk of poor psychological functioning due to the pandemic and its consequences can be reduced. Two evidence-based intervention programs may be especially suitable in the pandemic context [4]: (1) Folkman and Greer's approach [30] promotes problem-focused coping for controllable events, emotion-based coping for boosting support and reducing isolation, and meaning-based coping for persistent events; (2) the psychological first aid approach [31] facilitates resilience immediately after trauma. In addition, previous studies provided evidence on effectiveness of several psychological interventions for boosting resilience, for example mindfulness [32], resilience regimen [33], self-efficacy training [34], and cognitive behavioural therapy [35]. The American Psychological Association [36] advises that individuals themselves can advance their resilience by building their social relationships (e.g., by keeping in touch with friends, accepting and offering support), fostering physical and mental wellness (e.g., practicing mindfulness, taking care of one's body), finding purpose (e.g., by helping others, being proactive, setting and moving towards realistic goals), embracing healthy thoughts (e.g., keeping things in perspective, accepting change, staying optimistic) and seeking professional help when feeling unable to function well.
Certain limitations of the study should be highlighted. First, the study relied on self-reported questionnaire data, which are susceptible to various biases [37]. However, stress and well-being are inherently subjective phenomena and thus may be best assessed by self-reports. Second, the data collection took place on an on-line survey platform. Even though 83% of Slovenians aged 16 to 74 years regularly use the Internet [38], the method of data collection and study advertising may have led to self-selection of participants, especially among late adults, as half of Slovenian adults aged over 65 years never use the Internet at all [38]. The older adults who did participate in our study are (compared to non-participating older adults) probably more familiar with modern digital technology, which could be associated with better cognitive and social functioning [39], leading to better mental health and confounding the possible age effects investigated in our study. Also, our sample was not representative in terms of sex structure, with more females than males participating. Third, the study had a correlational cross-sectional design, precluding any causal interpretations. To overcome this drawback, we asked the participants to continue taking part in the study, and the follow-up data collection is under way.
The main message for policy makers, the media, educators, etc. is that while mental health problems increase during pandemics, one way to prevent these problems and promote good psychological functioning is to build individuals' resilience by educating the general public and healthcare workers on evidence-based effective strategies, organizing and promoting intervention programs, and taking measures in work (especially healthcare) organizations aimed at fostering resilience. The results of the present study suggest that intervention providers should pay special attention to younger adults, women, less educated people, and individuals who subjectively perceive their health to be rather poor. In addition, our results support the importance of considering indicators of both good and poor psychological functioning, as low stress does not necessarily imply flourishing mental health and vice versa [14,15], and the predictive associations were not the same for stress and mental health.
"year": 2020,
"sha1": "db3b14134ee137cfad958cac6cf0d5aaecde9230",
"oa_license": null,
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11126-020-09789-4.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "db3b14134ee137cfad958cac6cf0d5aaecde9230",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
Economically viable co-production of methanol and sulfuric acid via direct methane oxidation
The direct oxidation of methane to methanol has attracted research attention for decades but has never been commercialized. This study introduces a cost-effective process for co-producing methanol and sulfuric acid through direct oxidation of methane. In the initial phase, methane oxidation forms methyl bisulfate (CH3OSO3H), which is then transformed into methyl trifluoroacetate (CF3CO2CH3) via esterification and hydrolyzed into methanol. This approach eliminates the need for energy-intensive separation of methyl bisulfate from sulfuric acid by replacing the former with methyl trifluoroacetate. Through superstructure optimization, our sequential process reduces the levelized cost of methanol to nearly half the current market price. Importantly, the process is adaptable to smaller gas fields, assuring economical operation across a broad range of gas fields. Broader application of this process could substantially mitigate global warming by utilizing methane, leading to a significantly more sustainable and economically beneficial methanol industry.
Methane is the most abundant energy source and an important hydrocarbon feedstock for producing fuels and chemicals. It is also considered a transitional fuel that alleviates the current reliance on finite crude oil reserves for energy and chemical synthesis 1. Recent technical advances in shale gas exploitation and drilling have significantly increased methane production 2. Consequently, the utilization of methane for fuel and chemical production has received considerable attention.
Until now, the only available route for commercial methane utilization producing value-added liquid products has been an energy-intensive indirect conversion that includes syngas production followed by a series of refinement processes 3. Although indirect conversion of methane is mature enough to be widely applied in chemical production 4, such technologies are not adequate for local and small-scale facilities, such as remote oil fields. Consequently, 143 billion m³ of natural gas has been flared over the past fifteen years, wasting potential feedstock and causing greenhouse gas emissions 5,6.
In the context of converting methane into methanol derivatives, the use of methyl bisulfate (MBS, CH3OSO3H) as an intermediate offers several advantages. First, its synthesis through the oxidation of methane with SO3 in H2SO4 media is cost-effective. Additionally, MBS exhibits a high product yield, making it an attractive option for further processing into methanol 1. Periana et al. reported the synthesis of MBS using a bipyrimidyl-ligated platinum catalyst, (bpym)PtCl2, which showed a methane conversion of 72% and MBS selectivity of 81% 33. Although the activity of the Periana catalyst was surprising compared with those of other catalyst systems reported at the time, its performance was still insufficient for industrial applications; the maximum turnover number (TON) of the catalyst was 500. Recently, Pt-black, K2PtCl4, and (DMSO)2PtCl2 catalyst systems have shed light on the industrial potential of methane activation to MBS by attaining high reaction performance under relatively mild conditions 31,35,41,42. T. Zimmermann et al. reported that K2PtCl4 could convert methane to MBS with a TOF over 25,000 h−1, which is sufficient for a commercial process 35. Dang et al. developed a (DMSO)2PtCl2 catalyst having >94% selectivity with >84% MBS yield 41, which was followed by a deactivation-free Pt-black catalyst with similar activity 42. Nevertheless, the practical application of these catalysts to industrial methanol production has remained challenging because of the separation of MBS from the sulfuric acid solution 35.
The separation of MBS from sulfuric acid requires distillation at high temperatures or depressurization to 100 mbar, which decomposes MBS into SO3, dimethyl ether, and dimethyl sulfate 42. Michalkiewicz et al. suggested the use of membranes for product separation 43; however, the functionality of a membrane under such strongly acidic conditions is still questionable. Furthermore, adding water to the mixture of MBS and sulfuric acid to hydrolyze MBS to methanol wastes a large amount of diluted sulfuric acid. According to Ahlquist et al., the methanol concentration cannot exceed 10 µM in sulfuric acid because methanol undergoes further oxidation 44. Accordingly, the produced MBS should be separated from sulfuric acid before it is converted into methanol 45-47.
The direct methanol synthesis method presented here is fundamentally different from the above-mentioned approaches. The proposed reaction pathway directly converts methane to methanol through oxidation, esterification, and hydrolysis. The benefits of this reaction sequence are significant, as the second step, esterification, alleviates the burden of separating methyl bisulfate (MBS) from the sulfuric acid solution. This is due to the relatively low boiling point of methyl trifluoroacetate (Me-TFA), 43.5 °C, compared with that of MBS, which exceeds 170 °C. Subsequently, Me-TFA is hydrolyzed to methanol and trifluoroacetic acid (TFA), with the latter recycled for the synthesis of Me-TFA. The economic feasibility and carbon footprint of the proposed sequential reaction were evaluated through process design and optimization. The numerous process alternatives inherently included in the proposed reaction are evaluated using a superstructure and a machine learning-based optimization method. The results of this study reveal that the methanol price can be reduced to $203 ton−1, roughly half the current price, when the sulfuric acid price is maintained at its market level. Additionally, the proposed process can be applied to small gas fields, producing 16,000 tons of methanol per year while remaining economically viable.
Results
Experimental investigation of the sequential reaction. The sequential reaction starts with methane oxidation with SO3 to synthesize MBS in sulfuric acid media, followed by transfer of the methyl group of MBS to Me-TFA (Fig. 1a). It is worth noting that previous research has investigated direct Me-TFA synthesis from methane by conducting the oxidation in TFA media 37-40,48,49. However, due to low methane conversion yields and significant solvent decomposition during oxidation, further catalyst development is required to advance this method beyond academic interest 40. The net reaction of Fig. 1a can be expressed by Eq. (1), whereby one mole of methane reacts with two moles of SO3 and one mole of water to produce methanol and sulfuric acid. It is noteworthy that the use of the SO3 oxidant enables the co-production of both methanol and sulfuric acid as valuable economic products.
Methane oxidation is the most crucial step because the yields of the subsequent esterification and hydrolysis reactions depend mainly on the methane conversion to MBS. The reaction equations for methane oxidation to MBS and the side-product CO2 formation are shown in Eqs. (2) and (3), respectively 41.
To maximize the MBS yield, we carried out methane oxidation experiments using Gaussian process Bayesian optimization (GPBO) 50,51. GPBO captures the correlation between the input (i.e., experimental conditions) and output (i.e., reaction yield) and builds a surrogate model from the given data. GPBO then suggests the next experimental condition by optimizing the surrogate model, and in turn, the acquired experimental data are used to update the surrogate model. In this way, GPBO focuses extensively on the region of interest, and the optimum operating condition can be obtained economically.
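The GPBO loop described above can be sketched numerically. The snippet below is a minimal illustration, not the authors' implementation: it fits a numpy-only Gaussian process (squared-exponential kernel) to a hypothetical one-dimensional "yield" landscape and picks the next experiment with an upper-confidence-bound acquisition; the kernel length scale, acquisition rule, and toy objective are all assumptions.

```python
import numpy as np

# Toy "oxidation yield" landscape with a single optimum at x = 0.62.
# Everything here is illustrative; the paper's inputs were temperature
# and oleum concentration, and its outputs were measured yields.
def yield_fn(x):
    return np.exp(-((x - 0.62) / 0.2) ** 2)

def rbf(a, b, ls=0.15):
    # Squared-exponential kernel, unit signal variance.
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def gp_posterior(X, y, Xs, jitter=1e-6):
    # Standard GP regression via a Cholesky solve.
    K = rbf(X, X) + jitter * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = rbf(X, Xs)
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = 1.0 - np.sum(v ** 2, axis=0)   # k(x, x) = 1 for this kernel
    return mu, np.maximum(var, 1e-12)

grid = np.linspace(0.0, 1.0, 201)
X = np.array([0.0, 0.33, 0.66, 1.0])     # initial design points
y = yield_fn(X)

for _ in range(10):                       # GPBO loop
    mu, var = gp_posterior(X, y, grid)
    ucb = mu + 2.0 * np.sqrt(var)         # upper-confidence-bound acquisition
    x_next = grid[np.argmax(ucb)]         # suggested next "experiment"
    X = np.append(X, x_next)
    y = np.append(y, yield_fn(x_next))    # run it and update the surrogate

best_x, best_y = X[np.argmax(y)], y.max()
```

After a handful of iterations the sampled points concentrate around the peak of the landscape, which is the behavior the text describes as "focusing on the region of interest."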
Methane oxidation was conducted between 180 and 235 °C for 3 h as described in the "Methods" section. Among the various Pt catalysts reported, Pt-black was chosen for its stability and reusability 42. Figure 1b shows the impact of reaction time on the methane oxidation efficiency at 180 °C and 20 wt% oleum concentration. The catalytic activity of Pt-black gradually increased with reaction time. At 30 min of reaction, 19.6% of CH4 was converted into the oxidation products (MBS and CO2), and the yield of MBS was 19.1%. As the reaction time increased to 3 h, the conversion of CH4 reached 81.3%, which in turn increased to 93.8% after 6 h of reaction. CO2 formation gradually increased during the reaction; at 3 h, the CO2 selectivity was 2.9%, which increased to 5.1% at 6 h. Supplementary Fig. 2 shows the impacts of catalyst concentration and reaction time on the methane oxidation efficiency. As shown in Supplementary Fig. 2, when the catalyst concentration was low (0.31 mM), 26.4% of MBS was formed, gradually increasing to 81.3% at a catalyst concentration of 0.94 mM. Using catalyst above 0.94 mM did not significantly affect the oxidation results because Pt-black dissolution in oleum (20 wt%) becomes saturated.
The optimum reaction temperature and oleum concentration obtained through GPBO are presented in Fig. 1c and Supplementary Table 1. A nonconvex response of TON to the experimental conditions was found within the search domain (see Supplementary Fig. 3a). Nevertheless, a Gaussian nonlinear regression model was successfully applied to optimize the operating conditions. The predicted optimum reaction temperature and oleum concentration are 200 °C and 33 wt%, respectively. It is noteworthy that the TON of the oxidation reaction responds differently to the reaction conditions on either side of the optimum point. In Supplementary Fig. 3b, which illustrates the regression of TON as a function of reaction temperature and oleum concentration, TON appears to be highly correlated with reaction temperature when the temperature is below the optimal condition. Conversely, when the temperature exceeds the optimal condition, both reaction conditions affect TON, and the reaction temperature in particular is negatively correlated with TON.
As shown in Fig. 1d, experiments to determine the equilibrium constants of the esterification reaction (Eq. (4)) were carried out at 25, 40, and 60 °C, and the equilibrium constants were calculated as 6.71, 6.43, and 6.11, respectively. This trend in the equilibrium constants indicates that the esterification of MBS with TFA is exothermic.
The esterification undertaken through batch reaction yielded a maximum MBS conversion of approximately 73% (see Supplementary Table 3). The optimal conversion was attained at a temperature of 40 °C. This outcome can be attributed to the fact that the equilibrium concentration of Me-TFA decreases with increasing temperature. While it may be possible to achieve higher MBS conversion at lower temperature, this would come at a cost to the economic viability of the process, as it would require a larger reactor volume due to the slow reaction kinetics. In fact, a lower MBS conversion was observed even with twice the reaction time (48 h) at a lower temperature of 25 °C. In contrast, the esterification reaction requires less than 2 h to converge to its equilibrium state when conducted above 60 °C. Considering these results, the esterification reaction was designed to operate above 60 °C. The conversion of MBS can be further enhanced by employing a reactive distillation column (see Supplementary Note 2). Notably, in a reactive distillation setup, removal of Me-TFA from the feed stream was observed to result in an 86% conversion of MBS (see Supplementary Table 4).
From the regression analysis, the correlation between the equilibrium constant, K_eq,est, and the reaction temperature, T, was obtained as Eq. (6).
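The three measured constants are enough to regress a van't Hoff-type correlation of the kind used for Eq. (6). The sketch below (numpy least squares on ln K versus 1/T) is our reconstruction under a van't Hoff assumption, not the paper's actual regression; the negative enthalpy it returns is consistent with the exothermicity noted above.

```python
import numpy as np

T = np.array([25.0, 40.0, 60.0]) + 273.15   # reaction temperatures, K
K = np.array([6.71, 6.43, 6.11])            # measured K_eq,est (from the text)

# Least-squares fit of ln K = a + b / T (van't Hoff form).
b, a = np.polyfit(1.0 / T, np.log(K), 1)

R = 8.314                                    # J mol^-1 K^-1
dH = -b * R                                  # from d(ln K)/d(1/T) = -dH/R

K_pred_40 = np.exp(a + b / (40.0 + 273.15))  # back-check at 40 degC
```

Because K decreases with temperature, the fitted slope b is positive and dH comes out negative (on the order of a few kJ mol−1), i.e., mildly exothermic.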
The equation for K_eq,est was adopted to design the reactive distillation column producing Me-TFA, and the accompanying decomposition of sulfuric acid (Eq. (5)) was assumed to be in Gibbs equilibrium.
To determine the correlation between the equilibrium constant of the hydrolysis reaction, K_eq,hyd, and the reaction temperature, the hydrolysis products were measured between 20 °C and 150 °C (Eq. (7)). The experimental results are presented in Fig. 1e. The hydrolysis reaction is strongly affected by the reaction temperature. At 25 °C, the efficiency of the hydrolysis reaction was quite low, producing negligible amounts of methanol (1.5%) and TFA (1.3%). As the temperature increases to 100 °C, the proportions of methanol and TFA grow to 32% and 35%, respectively. Further increases in reaction temperature above 100 °C do not significantly affect the reaction performance. According to the hydrolysis reaction equation, the amounts of methanol and TFA obtained should be equivalent; however, as can be seen in Fig. 1e, a deviation between the product amounts is observed above 100 °C. This might be caused by dehydration of methanol to dimethyl ether, which can occur in acidic solution at high reaction temperatures 52. Although we did not isolate and analyze dimethyl ether, it was assumed that dimethyl ether formed in an amount equal to the methanol deficit in Fig. 1e in order to close the mass balance of the experiment. Based on the experimental data, we regressed the equilibrium constant in terms of temperature as Eq. (8), where T is the reaction temperature.
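For a quick feel of how the hydrolysis equilibrium constant limits conversion, consider the idealized case of an equimolar Me-TFA/water feed in an ideal mixture, where K = x²/(1 − x)². The K values below are arbitrary illustrations, not values from the regressed Eq. (8).

```python
import math

def equilibrium_conversion(K):
    """Conversion x for A + B <-> C + D with an equimolar feed in an
    ideal mixture: K = x**2 / (1 - x)**2, so x = sqrt(K) / (1 + sqrt(K))."""
    s = math.sqrt(K)
    return s / (1.0 + s)

# Illustrative K values (assumed):
for K in (0.05, 0.3, 1.0, 5.0):
    print(f"K = {K:4.2f} -> equilibrium conversion x = {equilibrium_conversion(K):.3f}")
```

The closed form makes the temperature trend in Fig. 1e easy to interpret: once K stops growing with temperature, the attainable single-pass conversion plateaus as well.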
Methanol co-production process design. The proposed reaction yielded promising results for the direct conversion of methane to methanol. However, to evaluate the economic viability, a process-level assessment is necessary, taking into account factors such as product separation, raw material recycling, and auxiliary operations. To do this, we used a superstructure-based process design and optimization methodology, as shown in Supplementary Fig. 8a. Superstructures encompass various process design options, and proper optimization can identify an optimal design. This method also allows a detailed analysis of the proposed reaction system by quantifying uncertainties originating from different process alternatives.
Our superstructure comprises 730 potential process configurations, involving 10 binary integer variables that determine the process configuration and 9 continuous variables that set the operating conditions. Although superstructure optimization can identify the optimal process configuration and operating conditions simultaneously, this approach is computationally challenging for process design due to its large search space and nonconvex domain. To address this, we implemented a hybrid variable decomposition method integrated with Gaussian process Bayesian optimization, a machine learning-based optimization solver, as shown in Supplementary Fig. 8b. All calculations for the optimization procedure were automated through an Aspen Plus-MATLAB interface that allows MATLAB to access the simulation data of Aspen Plus. Based on the flowsheet model simulation in Aspen Plus, an economic analysis was carried out in MATLAB.
The proposed optimization method first determines the optimal process configuration using a hybrid of a genetic algorithm 53 and Bayesian optimization. Although the process design obtained at this stage is not yet optimal and requires additional fine-tuning of the continuous variables, the configuration is fixed as the optimal one in this step. The process operating variables were then divided into subgroups based on the binary interactions between pairs of variables. The binary interactions were calculated using a two-level factorial design, and the variables were then classified into two clusters using a hierarchical clustering method with a dendrogram 54. As the variables contained in each cluster are grouped by the proximity of their impact on the objective function value, variables in different clusters can be considered irrelevant to each other. Accordingly, each variable cluster is optimized sequentially, so that a smaller number of optimization variables is considered at a time and the optimum process design is obtained within an affordable computation time. In addition, the Gaussian process Bayesian optimization method was adopted to attain the optimal solution efficiently 51,55,56.
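The factorial-interaction grouping can be illustrated on a toy objective. The function, threshold, and greedy grouping below are stand-ins for the paper's dendrogram-based hierarchical clustering; only the two-level factorial interaction formula itself is standard.

```python
import numpy as np
from itertools import combinations

# Toy objective in which (x0, x1) and (x2, x3) interact; illustrative only.
def f(x):
    return 2.0 * x[0] * x[1] + 1.5 * x[2] * x[3] + x[0] + 0.5 * x[2]

def interaction(i, j, base):
    """Two-level factorial interaction effect between variables i and j."""
    eff = 0.0
    for si in (-1, 1):
        for sj in (-1, 1):
            x = base.copy()
            x[i], x[j] = si, sj
            eff += si * sj * f(x)
    return eff / 4.0

n = 4
base = np.zeros(n)
I = np.zeros((n, n))
for i, j in combinations(range(n), 2):
    I[i, j] = I[j, i] = abs(interaction(i, j, base))

# Greedy grouping by interaction strength (a simple stand-in for the
# paper's dendrogram-based hierarchical clustering):
groups, seen = [], set()
for i in range(n):
    if i in seen:
        continue
    g = {i} | {j for j in range(n) if I[i, j] > 0.1}
    seen |= g
    groups.append(sorted(g))
```

For this toy function the procedure recovers the two interacting pairs as separate clusters, which can then be optimized one at a time, mirroring the sequential cluster optimization described above.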
The optimal process configuration identified through superstructure optimization is shown in Fig. 2a. In the oxidation section, indicated by yellow lines, methane and SO3 react to produce MBS and CO2. A constant-temperature gas-induction stirred-tank reactor was used for the oxidation reaction, and the conversion and selectivity were taken directly from the lab-scale experiments (see Supplementary Fig. 2). The optimum process recovers unreacted raw materials using a three-stage flash column rather than a distillation column and recycles them to the oxidation reactor through a CO2 removal unit. The by-product of the oxidation reaction (i.e., SO2) is recycled to the oxidation reactor through the SO2 oxidation unit to reduce the raw material cost. The produced MBS is then introduced to a reactive distillation column to produce Me-TFA and sulfuric acid, where the esterification reaction and sulfuric acid separation take place simultaneously. Me-TFA is the lightest substance in the esterification product mixture; thus, it can easily be separated from sulfuric acid, thereby avoiding the MBS purification issues raised in previous studies 42-44. The dissolved Pt in the bottom sulfuric acid stream is concentrated in a platinum recovery unit and recycled to the oxidation reactor. Finally, Me-TFA is converted to methanol in a hydrolysis reactor.
In the hydrolysis section, indicated by purple lines, Me-TFA is converted to methanol and TFA with the aid of water in a hydrolysis reactor. The recovered TFA is then recycled to the esterification reaction after water separation to close the TFA loop. It is worth mentioning that both a continuous stirred-tank reactor and a reactive distillation system were considered as options for the esterification and hydrolysis reactions. As the performances of the esterification and hydrolysis reactions are more sensitive to chemical concentrations and temperature than that of oxidation, the adoption of reactive distillation can help increase the efficiency of esterification and hydrolysis.
In the optimum process, the conversion of the overall system was maximized by using a reactive distillation column for the esterification reaction. This is because the Me-TFA product is continuously removed from the reactant mixture in the reactive distillation column, and thus the equilibrium of the esterification reaction shifts forward to produce more Me-TFA. Interestingly, the optimum strategy for the hydrolysis reaction is not reactive distillation but a sequential CSTR and separation unit. In the esterification reaction, as the boiling point of the product, Me-TFA, is significantly lower than those of the other chemicals, the product can be selectively separated in the distillation column. In contrast, in the hydrolysis reaction, separation of the feed, Me-TFA, from the product, methanol, requires intensive energy input owing to their close boiling points. Thus, adopting a reactive distillation column has little impact on shifting the hydrolysis equilibrium forward; rather, it increases the size required for methanol separation, resulting in increased cost. To recover unreacted methane, the superstructure optimization selected flash separation instead of a distillation column, as flash separation does not require a thermal separation unit, which consumes substantially more energy than flash units 7.
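The equilibrium-shifting effect of product removal can be made concrete with a toy calculation. Assuming an ideal, mole-conserving A + B ⇌ C + D mixture and the measured K_eq ≈ 6.4 near 40 °C, a single equilibrium stage reproduces roughly the ~73% batch conversion reported earlier, while repeatedly withdrawing the Me-TFA product drives cumulative conversion well above 95%. The eight-stage withdrawal scheme is an idealization for illustration, not a column model.

```python
import math

K_EST = 6.4  # esterification K_eq near 40 degC, from the measurements above

def equilibrium_extent(a, b, c, d, K=K_EST):
    """Extent x for A + B <-> C + D at equilibrium (mole-conserving,
    ideal mixture): K (a-x)(b-x) = (c+x)(d+x). Assumes K > 1."""
    A2 = K - 1.0
    A1 = -(K * (a + b) + (c + d))
    A0 = K * a * b - c * d
    # The physically meaningful root is the smaller one (0 <= x <= min(a, b)).
    return (-A1 - math.sqrt(A1 * A1 - 4.0 * A2 * A0)) / (2.0 * A2)

# Single equilibrium stage (plain CSTR), equimolar MBS/TFA feed:
x1 = equilibrium_extent(1.0, 1.0, 0.0, 0.0)

# Staged reaction with Me-TFA withdrawn after each stage, mimicking the
# continuous overhead removal in a reactive distillation column:
a = b = 1.0
c = d = 0.0
removed = 0.0
for _ in range(8):
    x = equilibrium_extent(a, b, c, d)
    a -= x; b -= x; c += x; d += x
    removed += c
    c = 0.0                     # Me-TFA taken overhead
total_conv = removed            # feed basis was 1 mol MBS
```

The single-stage result lands near 0.72, while the staged-removal result climbs toward complete conversion, illustrating why the optimization favored reactive distillation for esterification.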
The results of the economic analysis show that OPEX dominates CAPEX. Over 90% of the production cost originates from process operation because of expensive raw materials and extensive steam consumption. As shown in Fig. 2b, the SO3 feed cost accounts for approximately 40% of the OPEX. The excessive cost of the SO3 feed arises primarily because we assume it is supplied from the commercial market; thus, the economic feasibility of the proposed system would be further improved if SO3 could be supplied from a cheap source, such as power plant waste. Figure 2c shows the cash flow diagram of the optimum process design. As indicated in the figure, the proposed design can achieve a positive NPV of $144 million, and its payout time is calculated as one year. The proposed process is particularly competitive for methanol production because it co-produces sulfuric acid, compensating for 93% of the operating cost. The levelized cost of methanol can be as low as $203 ton−1, which is low relative to the current methanol market price ($270-$450 ton−1).
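The cash-flow reasoning above follows standard discounted-cash-flow bookkeeping, which can be sketched in a few lines. The capex, annual cash flow, discount rate, and horizon below are placeholders for illustration, not the paper's cost model.

```python
# Discounted-cash-flow helpers; the dollar figures below are placeholders,
# not the paper's cost model.
def npv(capex, annual_cash, years, rate):
    """Net present value of a constant annual cash flow after an upfront capex."""
    return -capex + sum(annual_cash / (1.0 + rate) ** t for t in range(1, years + 1))

def payout_time(capex, annual_cash):
    """Undiscounted payback period in years."""
    return capex / annual_cash

capex, cash, rate = 30e6, 25e6, 0.08         # assumed illustrative values
project_npv = npv(capex, cash, years=15, rate=rate)
payback = payout_time(capex, cash)
```

With an annual cash flow close to the capex, the payback period naturally lands near one year, which is the shape of the result reported for the optimum design.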
Analysis of suboptimal process configurations. In addition to the analysis of the optimal structure, all alternative cases obtained during the optimization were analyzed to ensure the reliability of the optimal structure. Supplementary Fig. 6 illustrates the t-distributed stochastic neighbor embedding (t-SNE) results for the data collected during optimization. t-SNE is a nonlinear dimension-reduction algorithm capable of visualizing high-dimensional vectors 57. A large distance between the centers of two clusters indicates that their NPV results differ substantially. Supplementary Fig. 6a shows that all optimization data can be categorized into seven clusters. As the integer variables decide the process configuration, which has the most dominant impact on economic feasibility, each cluster contains a distinguishable integer variable set, except for clusters #5 and #7. As shown in Supplementary Fig. 6c, which presents the simplified process configurations corresponding to each cluster, clusters #5 and #7 use a distillation column to recycle unreacted methane. In this case, the economic feasibility can change substantially with the amount of energy input to separate methane; therefore, clusters #5 and #7 are separated even though they share the same configuration. Each group has a different combination of three configuration variable sets: the use of a distillation column for separation of CO2 and methane, the type of unit operation used in esterification, and the separation column sequence for Me-TFA and methanol.
Among the available process alternatives, the product separation column sequencing used to purify methanol yielded the longest distance in t-SNE. When both methanol and Me-TFA were recovered as light products in the first column (Supplementary Fig. 6c, #7), total steam consumption increased by 15-20% compared with cluster #2, and the purity of the methanol product deteriorated, resulting in the loss of expensive TFA. As TFA cannot be cleanly separated as a light product under feasible operating conditions, the #7 column sequencing limits methanol purity in the next column. Alternative #7 imposes an excessive energetic burden on the first column, and thereby its NPV is significantly lower than that of cluster #2, in which the column sequencing separates Me-TFA first.
The use of reactive distillation (Supplementary Fig. 6c, #6) versus a CSTR (Supplementary Fig. 6c, #7) is another feature that clearly distinguishes the corresponding clusters in t-SNE. As mentioned earlier, the conversion of the esterification reaction is limited to approximately 28% in the CSTR, whereas that of reactive distillation can reach 99% owing to the simultaneous occurrence of reaction and product separation. In the consecutive CSTR and distillation processes, the unreacted feed, mainly MBS and TFA, inevitably flows to the next unit operations, increasing the overall process flow rate and heat duty.
As shown in the rightmost column of Supplementary Fig. 6c, cluster #6 contains two alternatives that differ in whether a distillation column is used for methane recycling and CO2 removal. The recycle stream from the oxidation reactor contains about 10.7% methane, so purging this stream leads to a large economic loss. Therefore, it is preferable to include a separation unit in the optimal process in terms of economic feasibility. However, the recycle stream contains only a small amount of CO2; thus, a similarly small energy input is required to separate it, regardless of whether a distillation column or a flash unit is used.
Among the clusters in Supplementary Fig. 6a, only #1, #4, and #5 met economic viability, supporting the conclusion that the optimization procedure presented in Fig. 2a works efficiently by screening for economically viable process configurations beforehand. Clusters #1 and #4 share the same optimal process configuration (see Supplementary Fig. 6a), and the optimal configuration contains all the advantageous characteristics that reduce operational cost (the use of reactive distillation and column sequencing that separates Me-TFA first). Cluster #4 is a collection of the originally selected optimization variables from the pre-screening procedure. The variables in cluster #1 can be obtained via additional continuous variable optimization while fixing the integer variables in the same manner as in cluster #4.
Sensitivity analysis. Local and global sensitivity analyses (GSA) were carried out to assess the influence of uncertainties in the operating variables and TEA parameters. Fourier amplitude sensitivity testing (FAST) was used to calculate the sensitivity indices and obtain the required samples. FAST is one of the most widely used techniques for quantifying uncertainty and calculating variance-based sensitivity indices, which indicate the impact of uncertainties in the parameters 58,59. Prior to analyzing the economic parameters and operating variables, the sensitivity to varying reaction performance was analyzed. The GSA results indicate that the oxidation parameters exert the dominant impact.
Figure 3a shows the change in NPV depending on the selectivity and conversion of the oxidation reaction when the conversions of the esterification and hydrolysis reactions are fixed at the optimum point obtained from the simulation. The oxidation selectivity has less impact on the NPV than the conversion. This is because, as shown in Eqs. (1) and (2), the side reaction of oxidation produces commercially viable sulfuric acid, which can defend the economic feasibility against deterioration caused by reduced methanol production. Figure 3b shows the NPV sensitivity obtained by varying the conversions of the esterification and hydrolysis reactions when the oxidation yield is fixed based on the above-mentioned experimental results. In contrast to hydrolysis, esterification exerts a considerable impact on NPV, but the suggested process can earn a profit over most of the tested conversion range.
Figure 4a shows the NPV distribution for varying operating variables. For the eight operating conditions, the sampling bound was set to 80-120% of the optimal value to ensure operational feasibility and obtain accurate sensitivity indices. It turns out that the selected process configuration guarantees an NPV of $1.4 × 10^8 even under the worst operating conditions. Among the operating variables, the temperatures of the inter-stage coolers (E110, E111, and E112) had the highest influence on the NPV. Temperature changes in these units significantly affect light gas (methane and SO2) recovery, which in turn changes material consumption. When the cooler temperatures are lowered by 20% from their optimally selected values, the amount of light gases decreases by 20% while requiring even larger amounts of cooling utilities. In contrast, when the oxidation product is separated at a temperature higher than the optimum, the loss of the oxidation product, MBS, increases, resulting in a poor NPV. Thus, the temperatures of all coolers converged to approximately 190 °C, which was identified as optimal for the given feed composition.
Among the other variables, the reflux ratio of the reactive distillation column had a significant impact on the NPV. The reflux ratio has a conflicting effect on NPV: increasing it can improve the esterification conversion by extending the contact between the reactants, MBS and TFA, but it simultaneously increases the heat duty in the reboiler. To operate the reactive distillation optimally, the reflux ratio should be as low as possible while the column can still consume most of the MBS input. As a result of the optimization, the reflux ratio satisfying this condition was determined to be 5.1.
The effects of the economic parameters (feed and product prices, utility costs, and interest rate) obtained from the GSA are presented in Fig. 4b. The most influential parameters were the prices of methanol and sulfuric acid. Methane and TFA, which are designed to be recycled in the process, have little impact on process profit, implying that the optimization minimizes their losses. Among the utilities, the sensitivity index of the steam cost is the highest; however, the overall influence of utilities on the NPV is incomparable to that of material prices. Although the suggested process requires intensive energy input for product separation and feed recycling, the heat network selected by superstructure optimization efficiently reduces heat wastage. This means that the methanol-sulfuric acid co-production process can remain economically feasible despite unexpected increases in energy use owing to uncertainty in the suggested design.
In addition to the sensitivity of the operating conditions and economic parameters, the sensitivity of the process economics to process scale was analyzed. To utilize methane from small-scale facilities, the conversion process should overcome the issue of economies of scale, as production costs increase with diminishing scale 60. Supplementary Fig. 7 shows a linear relationship between production capacity and NPV, with the maximum profit when the process reaches commercial scale (generally 12,500 kg hr−1 = 100,000 tons yr−1). The process can be economically viable even at a production capacity of 2,000 kg hr−1, which implies that this process can be applied to small-scale gas fields. A detailed graphical result is provided in Supplementary Fig. 7.
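Given the reported linear capacity-NPV relationship, the minimum viable plant size is simply the zero crossing of that line. In the sketch below, the commercial-scale point ($144 M NPV at 12,500 kg hr−1) is taken from the text, but the small-scale loss point is an assumption for illustration only.

```python
def breakeven_capacity(p1, p2):
    """Capacity at which NPV crosses zero, from a straight line through
    two (capacity, NPV) points."""
    (c1, n1), (c2, n2) = p1, p2
    slope = (n2 - n1) / (c2 - c1)
    return c1 - n1 / slope

# Commercial-scale point from the text ($144 M NPV at 12,500 kg/hr);
# the small-scale loss point (500 kg/hr, -$10 M) is assumed.
cap0 = breakeven_capacity((12500.0, 144e6), (500.0, -10e6))
```

Under these assumed numbers the break-even capacity falls below 2,000 kg hr−1, consistent with the viability claim for small gas fields.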
Mitigation of carbon emissions.
To analyze the global warming impact of the suggested process, the carbon footprint (CFP) was estimated and compared with that of conventional processes. As the suggested process produces methanol and sulfuric acid simultaneously, the CFP is allocated to both products according to their production rates. The CFP of each product was compared with that of the conventional processes, steam methane reforming and sulfur oxidation, as presented in Fig. 5a.
According to the Ecoinvent 3.71 database 61, the CFPs of methanol and sulfuric acid are 0.98 kgCO2eq kg−1 and 0.14 kgCO2eq kg−1, respectively. The CFP allocation results for the proposed process are shown in Fig. 5b. With respect to methanol, the suggested co-production process can significantly reduce carbon emissions, cutting CO2 emissions by 0.6 kgCO2eq per kg of methanol produced. However, a relatively large carbon emission is estimated for sulfuric acid production, mainly due to energy-intensive oxidation catalyst recycling, which requires the evaporation of sulfuric acid. Furthermore, considering the global warming potential of methane (25 kgCO2eq kg−1), practical application of the suggested process to gas fields whose scale is not accessible to conventional processes would contribute further to mitigating carbon emissions. When the same amounts of both products are produced, the co-production process emits only 68% of the CO2 of the conventional processes. More favorable scenarios adopting renewable energy are presented in Supplementary Figs. 9 and 10.
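The production-rate allocation mentioned above amounts to splitting total emissions in proportion to product mass. The helper below is a generic mass-allocation sketch with made-up numbers, not the paper's LCA model (economic allocation is an equally common alternative).

```python
def allocate_by_mass(total_co2, rates):
    """Split a process's total CO2-eq emissions among co-products in
    proportion to their production rates (mass allocation)."""
    total = sum(rates.values())
    return {product: total_co2 * rate / total for product, rate in rates.items()}

# Assumed illustrative production rates (kg/hr) and total emissions:
shares = allocate_by_mass(100.0, {"methanol": 100_000, "sulfuric_acid": 300_000})
```

Because sulfuric acid is produced in much larger quantities than methanol, most of the emission burden is allocated to it under this scheme, which is one reason the per-kg CFP of methanol in Fig. 5b comes out low.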
Figure 5c shows the shares of the CO 2 emission factors of the raw materials and utilities consumed by the process and of the by-products emitted. Each emission factor is calculated as the product of the inherent emission factor of the material, utility, or by-product and the amount consumed or emitted to produce an equivalent amount of methanol. As shown in Fig. 5c, the main source of carbon emissions is the use of electricity to generate steam and to operate the distillation and recycling units, accounting for 56% of CO 2 emissions. This energy use is unavoidable because the suggested process requires distillation of sulfuric acid, whereas in the conventional route this energy is supplied by the exothermic oxidation of sulfur in air.
Other than methane oxidation, CO 2 hydrogenation can be viewed as a competitive alternative for methanol production when considering carbon emissions. In a comprehensive analysis by Rumayor et al., the CFP of methanol produced by thermochemical CO 2 hydrogenation (with water electrolysis assumed as the hydrogen source) can be reduced to 0.23 kgCO 2 eq kg −1 owing to the carbon-negative impact of CO 2 utilization 62 . However, given that CO 2 hydrogenation requires 3 moles of hydrogen for every mole of methanol, the CFP could exceed 1 kgCO 2 eq kg −1 depending on the hydrogen source 45 . If hydrogen derived from methane reforming is used, the CFP of methanol would be approximately 2 kgCO 2 eq kg −1 63 . For methanol production via the electroreduction of CO 2 , the CFP with current technology is even higher because of the low CO 2 conversion, which results in high energy consumption for methanol purification 62 . This suggests that CO 2 utilization effectively reduces CO 2 emissions in methanol production only when green hydrogen is available and reaction performance is significantly improved. When natural gas is utilized, the proposed process is environmentally competitive.
Discussion
A process co-producing methanol and sulfuric acid through direct conversion of methane was designed and optimized in this study. Experiments were conducted for the methane oxidation, MBS conversion, and Me-TFA hydrolysis reactions, and using the experimental data in modeling the process improved the reliability of the optimization results. The proposed hybrid optimization procedure, which marginalizes the integer variables in the subsequent continuous-variable optimization step, enables efficient identification of the optimal process design. Superstructure optimization selected a reactive distillation column for the esterification reaction of MBS and TFA. This improves the esterification conversion of the MBS feed from 30% to 99% by simultaneously separating the reaction product Me-TFA and thereby shifting the reaction equilibrium forward. Together with the adoption of the reactive distillation column, the optimization results yield an economically feasible process design that recovers the CAPEX within three project years when the process is designed to produce 100,000 tons yr −1 of methanol. According to the analysis, the proposed process can reduce the levelized production cost of methanol to $203 ton −1 through co-production with sulfuric acid, a significant reduction relative to the current cost of methanol production. The proposed process is also attractive because its production capacity can be as small as 2000 kg hr −1 of methanol while remaining economically viable, meaning it could be deployed to generate additional profit and reduce carbon emissions by converting methane currently flared at small gas fields into methanol.
Methods
Materials. All chemicals were of analytical reagent grade and used without further purification. Oleum (20 wt% SO 3 ), methanesulfonic acid, methyl trifluoroacetate, and trifluoroacetic acid-d (99.5 atom% D) were purchased from Sigma Aldrich Co., and trifluoroacetic acid was purchased from Sejinci Co. Pt-black (surface area: 27 m 2 g −1 ) was obtained from Alfa Aesar Co. High-purity methane gas containing 1% argon was supplied by Shinyang Gas Co.
Pt black-catalyzed methane oxidation reaction. The partial methane oxidation reaction using Pt-black as the catalyst was carried out in a stainless-steel reactor (SS 316) equipped with a glass liner, thermocouple, pressure gauge, and thermal jacket. The Pt-black catalyst and 30 g of 20 wt% oleum were introduced into the reactor, which was then pressurized with 25 bar of CH 4 at room temperature. The reactor was subsequently heated to 180 °C and stirred (800 rpm) for 3 h. When the oxidation reaction was finished, the reactor was removed from the heating jacket and cooled in a water bath (Supplementary Fig. 1).
After the reaction, the liquid product was analyzed using 1H NMR (400 MHz, Varian) to estimate the amount of MBS obtained. To detect CO 2 produced in the reactor, the gas product was collected in a plastic gas bag and analyzed using GC-MS (HP 6890 GC with a 5973 MSD) equipped with a capillary column (Poraplot Q, 30 m × 25 µm). The argon included in the methane feed (1%) was used as a reference to measure the concentration of CO 2 in the gas mixture, from which the yield and selectivity of MBS were determined.

Esterification reaction of MBS and TFA. In preparation for the esterification experiment, 0.5 g of the liquid product from the oxidation reaction containing MBS (0.54 mmol) was mixed with 0.3 g of a reference solution (5% methanesulfonic acid as an external standard in CF 3 COOD). The concentrations of MBS and Me-TFA were measured using 1H NMR spectroscopy (Supplementary Fig. 4). Based on the measured amounts of MBS and Me-TFA, the H 2 SO 4 and TFA concentrations could be determined.
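As a sketch of the internal-standard logic used for the gas analysis, the CO 2 mole fraction in the gas product can be estimated from the ratio of the CO 2 and Ar peak areas, scaled by the known Ar content of the feed. The peak areas and the relative response factor below are hypothetical; only the 1% Ar content comes from the text.

```python
# Internal-standard estimate of the CO2 content in the gas product using
# the 1% Ar tracer in the methane feed. Peak areas and the relative
# response factor (RRF) are hypothetical placeholders.

def co2_mole_fraction(area_co2, area_ar, rrf_co2_vs_ar, x_ar=0.01):
    """x_CO2 = (A_CO2 / A_Ar) / RRF * x_Ar."""
    return (area_co2 / area_ar) / rrf_co2_vs_ar * x_ar

x = co2_mole_fraction(area_co2=2400.0, area_ar=1200.0, rrf_co2_vs_ar=1.0)
print(x)   # 2 mol% CO2 in this made-up example
```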
Hydrolysis reaction of Me-TFA and water. 3 g of Me-TFA (0.0234 mol) was hydrolyzed with 0.84 g of water (0.0469 mol) for 1 h under vigorous stirring in a glass pressure tube (Ace Glass, max. 150 psig) placed and heated in an oil bath. The amounts of the hydrolysis products (methanol, TFA, dimethyl ether) and of the starting reagent (Me-TFA) were measured at 20 °C-150 °C using gas chromatography (7890A, FID, Agilent Technologies).
Economic analysis. The net present value (NPV) is adopted as the objective function of the optimization, as it provides an insightful evaluation of the economic feasibility of the suggested process by analyzing cash flows over the project years. The NPV is formulated in Eq. (9) as a function NPV(x, y) to be minimized, where x and y represent the operating and design variables of the process, respectively. The NPV considers a project period of 15 years plus two years of construction, and depreciation at interest rate r is included in the calculation. The process design and NPV calculation were based on a methanol production scale of 100,000 tons yr −1 . Operating expenditure (OPEX), capital expenditure (CAPEX), and revenue were calculated based on the simulation results from Aspen Plus. The detailed procedure for the NPV calculation is provided in the Supplementary Methods.
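A minimal discounted-cash-flow sketch of the NPV structure described above (15 project years including two years of construction, discounting at rate r) might look like the following. The even CAPEX split over the construction years, the constant annual cash flow, and the 10% rate are assumptions for illustration, not the paper's actual cash-flow model.

```python
# Discounted-cash-flow NPV with the project structure described in the
# text: 15 project years, the first two of which are construction. Even
# CAPEX split, constant cash flow, and the 10% rate are assumptions.

def npv(capex, annual_cash_flow, r=0.10, project_years=15, build_years=2):
    total = 0.0
    for t in range(1, build_years + 1):                  # construction years
        total -= (capex / build_years) / (1 + r) ** t
    for t in range(build_years + 1, project_years + 1):  # operating years
        total += annual_cash_flow / (1 + r) ** t
    return total

print(round(npv(capex=100.0, annual_cash_flow=25.0), 2))  # ~60 (million $)
```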
Fig. 1
Fig. 1 Experimental results of the direct methanol synthesis method. a Reaction scheme for the proposed direct methanol synthesis method. b Effect of reaction time on the Pt-black-catalyzed methane oxidation to methyl bisulfate (MBS). Conditions: 3 mg of Pt catalyst (0.015 mmol), 30 g of oleum (20 wt% SO 3 ), 25 bar of CH 4 , 180 °C. c Bayesian optimization. Conditions: 5 mg of Pt catalyst (0.025 mmol), 35 bar of CH 4 , 3 h. d Time dependence of the reaction between methyl bisulfate (MBS) and trifluoroacetic acid (TFA) at different temperatures on the methyl trifluoroacetate (Me-TFA) formation equilibrium. Conditions: 0.5 g of liquid product (0.54 mmol of MBS in H 2 SO 4 ), 0.3 g (2.24 mmol) of CF 3 CO 2 D. e Temperature scanning experiments on the methyl trifluoroacetate (Me-TFA) hydrolysis reaction.
Fig. 2
Fig. 2 Optimization result and economic analysis of the methanol synthesis process. a Optimal process configuration resulting from superstructure optimization. The dotted square indicates the three consecutive flash units used to recycle methane. b Capital expenditure (CAPEX) and operating expenditure (OPEX) breakdown of the optimal process design. c Discounted cash flow (DCF), present value (PV), and net present value (NPV) over 15 years for the optimal process design.
Fig. 3
Fig. 3 Sensitivity analysis varying reaction performance parameters. a Sensitivity results varying the oxidation reaction selectivity and conversion and b the conversions of esterification and hydrolysis. The dashed line indicates the conditions where the NPV becomes zero.
Fig. 4
Fig. 4 Sensitivity analysis results for operating conditions and economic parameters. a NPV distribution for varying operating conditions. Whiskers were obtained by multiplying the interquartile range (IQR) by 1.5. b Sensitivity indices of economic parameters.
Fig. 5
Fig. 5 Carbon emission analysis result. a CO 2 emission source breakdown. b Carbon footprint result. c Source of carbon emissions.
Clinical efficacy of two vaccination strategies against Mycoplasma hyopneumoniae in a pig herd suffering from respiratory disease
Background A randomised field trial was conducted on an Austrian farrow-to-finish farm for one year to compare the efficacy of two commercial Mycoplasma hyopneumoniae vaccines. 585 piglets received either a one-shot formulation in group 1 (Hyogen®, at 23.9 days of age) or a two-shot vaccine in group 2 (Stellamune® Mycoplasma, at 4.3 and 24.0 days of age). Vaccine efficacy was assessed by regression analyses of cough monitoring from nursery to slaughter, average daily weight gain from inclusion to slaughter, antibiotic treatment rate (ATR), mortality rate, and lung lesion scoring at slaughter. Results In general, coughing was more frequent during late nursery and finishing. No significant differences were found in the coughing index (0.02 vs 0.03) or mean average daily weight gain (560 vs 550 g) between the two groups. The ATR was higher in group 2 (3.8 vs 9.6%). At the slaughterhouse check, a significant difference in the prevalence of bronchopneumonia (62.9 vs 71.2%) was found. The extension of lung lesions was also significantly lower in group 1 in terms of enzootic pneumonia (EP) values (p = 0.000, z = − 4.269). There were no significant differences in the rate of scarred lungs (20.0 vs 24.0%) or of lungs affected by dorsocaudal pleurisy (36.8 vs 34.3%). Conclusions This trial demonstrated that Hyogen® was superior to Stellamune® Mycoplasma in reducing (I) the prevalence of bronchopneumonic lungs and those affected by cranioventral pleurisy, (II) the extension and severity of EP-like lung lesions, and (III) the rate of animals treated with antibiotics against respiratory disease.
Background
Mycoplasma hyopneumoniae (M. hyopneumoniae) is considered a primary pathogen of the porcine respiratory system, playing an important role in the porcine respiratory disease complex. The first stage of pathogenesis is the adhesion of M. hyopneumoniae to the ciliated epithelial cells of the respiratory mucosa by means of the adhesins P97, P102, and P159 [1][2][3]. In addition, M. hyopneumoniae is able to produce hydrogen peroxide, thus leading to inflammatory lesions at the respective sites [4]. Thus ciliostasis, clumping and loss of the cilia, and direct toxic harm to the respiratory epithelium are induced, which eventually leads to a decreased clearance of bacteria and opens the gate to secondary respiratory infections [5]. Genetic analyses showed that there is a strong heterogeneity in M. hyopneumoniae isolates originating from different herds [6]. A recent study reported that different M. hyopneumoniae strains can also be isolated from different batches of slaughter pigs of the same herd, with the severity of pneumonia at slaughter being significantly higher in those batches where multiple strains co-existed [7].
Possible methods to prevent and control M. hyopneumoniae are optimization of management practices such as all-in/all-out production and multisite-operations, the use of antimicrobials, and vaccination. Although national eradication programs have been carried out in some countries, reinfection of herds frequently occurs, as documented in Switzerland [8,9]. The M. hyopneumoniae-free state of herds is difficult to maintain especially in pig-dense areas, since airborne spread of the pathogen may occur over several kilometers [10]. Tetracyclines and macrolides are used most frequently to control and treat respiratory disease induced by M. hyopneumoniae [8]. Other potentially active antimicrobials include lincosamides, pleuromutilins, fluoroquinolones, florfenicol, aminoglycosides, and aminocyclitols [11]. Nevertheless, antibiotics are neither able to eliminate M. hyopneumoniae from the respiratory tract nor restore already developed lung lesions [5]. Additionally, the massive and often not justified use of antibiotics has led to a rise in antibiotic resistances, which has important drawbacks for animal and human health.
Commercial vaccines are extensively used in controlling M. hyopneumoniae. Several vaccination schemes exist: traditional two-shot formulations, which are still favoured in some European countries like Austria, one-shot formulations, and bivalent one-shot formulations containing both M. hyopneumoniae and porcine circovirus type 2 (PCV2) antigens. In general, vaccination reduces the occurrence of clinical signs and lung lesions and improves performance, but does not prevent colonization of the respiratory tract epithelia by mycoplasma organisms [12,13]; moreover, variable results can be observed under field conditions. Vaccine storage, administration, and compliance play an important role in the efficacy of the products [14]. Furthermore, according to a field study comparing two different one-shot vaccines and a two-shot vaccine, vaccine efficacy is more likely to depend on the composition of the vaccines used and to a lesser degree on the number of vaccinations [15]. The aim of this study was to compare the efficacy of a single-shot vaccine against M. hyopneumoniae, based on a novel bacterin using the 2940 strain and Imuvant™ (a combination of light liquid paraffin O/W and Escherichia coli J5 lipopolysaccharide (ECJ5L)) as adjuvant, with a two-shot product based on the strain P-5722-3 (NL 1042) adjuvanted with a mixture of Amphigen base and Drakeol 5, by assessing clinical signs, performance, and macroscopic lung lesions at slaughter.
Animals and trial setting
The study was performed on a closed, combined, family-owned single-site farm in Lower Austria housing 84 Large White sows managed in a 3-week rhythm. 600 fattening places were distributed over 10 pens in one stable and therefore one shared air space. Every four months, all sows were vaccinated with a modified-live porcine reproductive and respiratory syndrome virus (PRRSV) vaccine as well as with a combined vaccine against Erysipelas and Parvovirosis. The PRRS-MLV vaccine was also administered to the piglets in their fourth week of life, immediately after weaning. Other piglet vaccinations included a live, attenuated vaccine against Lawsonia intracellularis (week 3) and an inactivated PCV2 vaccine (week 4).
After anamnestic reporting of dry recurrent coughing beginning in the nursery, PCR testing for M. hyopneumoniae of lung samples at a local Animal Health Service Lab in May 2015 gave positive results, and therefore a two-shot vaccination program using a commercial vaccine (Stellamune® Mycoplasma, Elanco Animal Health) was introduced. However, coughing persisted and the pathogen was isolated again in 2016 before the start of the study. At that time, additional PCRs for M. hyorhinis, Haemophilus parasuis (HPS), and Actinobacillus pleuropneumoniae (APP), as well as serological analyses for APP and HPS antibodies, gave no positive results. An additional serological survey for PRRSV antibodies showed homogeneous titers, with higher levels in sows due to vaccination and negative results in fatteners. Two slaughter lung checks in March and April 2016 revealed high rates of bronchopneumonia (BP) lesions, with prevalences of 84 and 92%, respectively, and extended cranioventral consolidations. The combined occurrence of clinical signs, enzootic pneumonia (EP)-like lesions at slaughter, and detection of M. hyopneumoniae by PCR was indicative of a still ongoing infection with this pathogen. The veterinary practitioner and the farm owner then decided to perform a comparative study between the current two-shot vaccine and Hyogen® (Ceva Santé Animale), a novel single-shot bacterin. In random microbiological analyses of the lungs of four euthanized animals in the course of the study, one animal was found to be completely free of lung pathogens in PCR and bacteriology, another exhibited infection with only M. hyopneumoniae but was negative in bacteriology, the third animal was positive for M. hyopneumoniae, M. hyorhinis, HPS, and APP, and the fourth animal was positive for M. hyopneumoniae, M. hyorhinis, and Pasteurella spp.
The field trial began in May 2016 and ended a year later in May 2017. In summary, 585 healthy, on average 4-day-old piglets from six consecutive farrowings were individually weighed and sexed. Then, starting with the heaviest piglet and ending with the smallest one, piglets were alternately assigned to the two groups and ear-tagged at the same time within each farrowing group, so that in the end there was an approximately 50:50 proportion of the two vaccination groups within each litter. On average, sow parity was 3.3 in group 1, with 62 sows included, and 3.1 in group 2, with 63 sows included. Both vaccines were administered intramuscularly according to the manufacturers' instructions: group 1 piglets were injected in the neck once with 2 ml of the one-shot vaccine at 23.9 days of age on average. Group 2 piglets were injected in the neck twice, on average at days 4.3 and 24.0, with the two-shot product. Male piglets were castrated in their first week of life. Animals of each group were raised in different pens in the nursery and fattening unit but shared the same air space. Cough monitoring was performed by a single veterinarian once weekly in each group, starting from weaning until the end of the fattening period. Pigs in each pen were solicited to get up and the number of coughs was counted during a period of two minutes. The coughing index (CI) was obtained by dividing the number of coughs by the number of observed animals and examination days. Weights were measured at the end of nursery and before slaughter, in addition to the time point of inclusion, when piglets were 4 days old. Average daily weight gain (ADG) from inclusion to slaughter, overall mortality rate, and the antibiotic treatment rate (ATR) against respiratory disease with amoxicillin, fluoroquinolones, and florfenicol were also documented. Animals were only treated with injectables by the farmer, who was blinded. No oral-route antibiotics were used.
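The coughing index defined above can be computed directly; the counts in the example below are illustrative, chosen only to land near the order of magnitude reported for the trial.

```python
# Coughing index (CI) as defined in the text: number of coughs divided by
# the number of observed animals and examination days. Counts illustrative.

def coughing_index(total_coughs, n_animals, n_days):
    return total_coughs / (n_animals * n_days)

ci = coughing_index(total_coughs=290, n_animals=290, n_days=50)
print(round(ci, 2))   # 0.02, the order of magnitude reported for group 1
```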
Assessment of lung lesions
Lungs were blindly scored at the slaughterhouse according to a methodology combining the detection of four different types of lesions [16]. Due to the high speed of the line process, the two investigators, who were always the same, shared the work. One person did the lung check and the second one was responsible for the documentation by using the software tool Ceva Lung Pro-gram®, meaning, they stood side by side at the site of the line, where lung and heart were prepared from the carcass. First, each lung lobe was individually evaluated according to a scoring system for EP-like lesions based on the Madec and Kobisch score [17,18]. Scores 0-4 are attributed to lesions according to the percentage of surface affected per lobe with score 0 representing 0% affected surface, score 1 representing 1-25%, score 2 representing 26-50%, score 3 representing 51-75%, and score 4 representing 76-100%. Consequently, each lung can achieve an EP value between 0 and 28, with values > 0 being considered a bronchopneumonic lung.
Second, for each lung, pleuritic lesions exclusively affecting the dorsocaudal lobes were evaluated according to a modified Slaughterhouse Pleurisy Evaluation System (SPES), with no lesion being score 0, score 2 resembling a dorsocaudal monolateral focal lesion, score 3 resembling a dorsocaudal bilateral focal lesion or extended monolateral lesion (at least 1/3 of one diaphragmatic lobe), and score 4 resembling a severely extended bilateral lesion (at least 1/3 of both diaphragmatic lobes) [19].
Third, each lung was inspected for the presence of cranioventral pleurisy (CP) without describing the extension of the lesion. Finally, each lung was also visually inspected for the presence of scars or fissures.
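Of the four lesion types recorded, the first (EP-like scoring) can be sketched as follows: each of the 7 lung lobes implied by the 0-28 range receives a score of 0-4 from its affected-surface percentage, and the per-lung EP value is the sum. The lobe percentages in the example are made up.

```python
# Sketch of the Madec and Kobisch EP-like scoring described above: each of
# the 7 lobes gets a score 0-4 from the affected-surface percentage, and
# the per-lung EP value (0-28) is the sum. Percentages are illustrative.

def lobe_score(pct_affected):
    if pct_affected == 0:
        return 0
    if pct_affected <= 25:
        return 1
    if pct_affected <= 50:
        return 2
    if pct_affected <= 75:
        return 3
    return 4

def ep_value(lobe_percentages):
    assert len(lobe_percentages) == 7, "one score per lobe, 7 lobes"
    return sum(lobe_score(p) for p in lobe_percentages)

lung = [0, 10, 30, 0, 0, 60, 0]      # % affected per lobe (made up)
print(ep_value(lung))                # 6; any EP value > 0 counts as BP
```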
Statistical analysis
During the analyses the following three regression models were used.
Mixed-effect ANOVA: y = μ + Σ β_i x_i + γ + δ + ε

Mixed-effect logistic regression: logit(p) = μ + Σ β_i x_i + γ + δ

Mixed-effect Poisson regression: log(E[y]) = μ + Σ β_i x_i + γ + δ

In these models the sums run over the i = 1, …, n fixed factors; y represents the observed result, p the probability of occurrence of the observed event, μ the constant term, β_i the fixed factor effects (treatment group, sex, sow parity and, in the case of ADG, the time between first and last weighing), n the number of fixed factors used in the model, x_i the factor configurations, γ the random intercept of the farrowing group factor, δ the random intercept of the mother sow number factor, and ε the residual error.
ADG was compared with mixed-effect analysis of variance (ANOVA) models with vaccination group, gender, time between first and last weighing as well as sow parity as fixed factors and farrowing group and sow number as random factors. Indicator variable data (0 and 1) were compared with mixed-effect logistic regression models with vaccination group, gender, and sow parity as fixed factors and farrowing group and sow number were used as random factors. Ordinal data were compared with generalized Wilcoxon-Mann-Whitney ranksum test (van Elteren's test) using farrowing groups as strata. Here the z score is the measure of the deviation of central tendency from the hypothetical perfect equivalence of the two groups. A negative z score means that the examined population is stochastically smaller than the other population. The results of the ordinal data evaluations were supplemented with mixed-effect logistic regression models using the indicator value categorization of the ordinal data where vaccination group, gender, and sow parity were used as fixed factors and farrowing group and sow number were used as random factors. Score zero was coded as 0; score values higher than zero were coded as 1. Mortality data were compared with mixed-effect logistic regression models, where the vaccination group was used as fixed factor and farrowing groups and sow numbers were used as random factors. As the time of the events (deaths) was not available, Kaplan-Meier estimation was not possible. ATR was compared with mixed-effect logistic regression models where the vaccination group was used as fixed factor and the farrowing group was used as random factor. Cough monitoring data were compared with mixed-effect Poisson regression models. Here, again the vaccination group was used as the fixed factor and the farrowing group as the random factor. 
If the estimate of a random effect was negligible (less than 10 −4 ), the effect was omitted (with the exception of cough monitoring) and the regression model was refitted to the data. Model outcomes were described using the 95% confidence interval; effect sizes (ES) and odds ratios (OR) represent the one-shot vs two-shot vaccination comparison in that order. All statistical computations were performed using Stata 15 software (StataCorp. 2017. Stata Statistical Software: Release 15. College Station, TX: StataCorp LLC). The type I error for all statistical tests was set to 5% (p < 0.05).
Furthermore, a significant difference in the presence of cranioventral pleurisy between the vaccination groups was found (p = 0.038, ES = 0.368 (0.020, 0.715), OR = 1.444 (1.020, 2.045)). It can be estimated from the regression model that 50.1% (38.7, 61.5%) of the animals in the one-shot group will suffer from CP, whereas this figure is 59.2% in the two-shot group. Prevalences and descriptive statistics of all data sets are presented in Tables 1 and 2.
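As a consistency check on the reported model outputs, the odds ratio can be recovered from the two estimated prevalences quoted above (the small residual difference from the reported 1.444 reflects rounding of the prevalences):

```python
# Recovering the reported odds ratio for cranioventral pleurisy from the
# model-estimated prevalences quoted in the text (50.1% one-shot vs 59.2%
# two-shot); the OR is taken as two-shot odds over one-shot odds.

def odds(p):
    return p / (1.0 - p)

def odds_ratio(p_group, p_reference):
    return odds(p_group) / odds(p_reference)

print(round(odds_ratio(0.592, 0.501), 3))   # ~1.445, vs reported OR 1.444
```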
Discussion
Although vaccination against M. hyopneumoniae is applied worldwide, variable results are observed [14]. Most current vaccines are still based on the J-strain, isolated in 1963 from a pig herd in the United Kingdom [20]. The one-shot formulation used in this study is based on the M. hyopneumoniae strain 2940, isolated in 1999 from a farm facing a severe outbreak of enzootic pneumonia, which might be beneficial for vaccine efficacy, as low-virulent strains might not be the best choice [21]. Furthermore, adjuvants also play a key role in the efficacy of vaccines [22]. Apart from the light liquid paraffin O/W formulation, the vaccine tested in this study is also adjuvanted with inactivated Escherichia coli J5 non-toxic LPS (ECJ5L), which was shown to exert a significantly stronger cell-mediated immune response in terms of specific interferon-γ-producing T cells when compared to solely paraffin-adjuvanted or non-adjuvanted test vaccines [23]. Furthermore, Hyogen® has been shown to be efficacious against experimental challenge with both low and highly virulent M. hyopneumoniae strains [24].
Although EP-like lesions are generally not considered pathognomonic for M. hyopneumoniae, they are considered suggestive of previous EP due to mixed infections with M. hyopneumoniae and other pathogens [25]. M. hyopneumoniae was demonstrated to be a key factor for respiratory disease and EP-like lung lesions at slaughter in the herd under investigation, although other respiratory pathogens had been isolated there as well; therefore, the decision was made to introduce a new vaccination program against M. hyopneumoniae, as the two-shot vaccination regimen and additional management optimizations had not yielded any improvement. The present study was therefore conducted to compare the efficacy of a novel one-shot vaccine against the two-shot vaccine already in use. To the authors' knowledge, this is the first randomised field trial comparing six consecutive batches of differently M. hyopneumoniae-vaccinated groups for an entire year. We also had the opportunity to verify whether gender had a significant impact on the development of gross lung lesions, as previously described in the literature [26,27].
In terms of clinical observations, coughing generally became more prominent in late nursery and finishing, but the CI did not differ between the two groups, which contrasts with the results of the slaughter lung lesions. This is in accordance with a study suggesting that weekly assessment of coughing is not a predictive indicator of lung lesions at slaughter [28]. ADG did not differ between the groups, which is also in accordance with other field studies [29,30]. However, a recent study investigating the impact of lung lesions on production performance showed that each categorial increase in EP-like lesion severity, according to a 5-step scoring system different from the one used in this study, resulted in a reduction of 0.37 kg in post-trimming carcass weight [31]. Mortality accounted for 13/293 of the animals in group 1 and 21/292 in group 2, without reaching significance, which is in accordance with most field studies comparing M. hyopneumoniae vaccines [29,30]. Mortality rates in our study can be explained to some extent by piglets being crushed to death by the sows. The remaining animals died of fibrinous bronchopneumonia or septicemic heart disease, reflecting the additional problems caused by HPS and APP in this herd. Individual treatment against respiratory disease was recorded by the farmer and later evaluated. The Hyogen® group had a significantly lower ATR than the two-shot group over the whole observation period; in this way, the ATR captured group differences that the CI alone had failed to show. This finding is important, as a low ATR against respiratory disease can serve as an indicator of lung health on the one hand and support the rationale of using effective vaccines to avoid otherwise indicated antibiotic treatment regimens on the other. However, our results are in contrast to those of another field study, in which no reduction in antibiotic treatment between differently vaccinated groups and the control group could be found [29].
Over the study period, the proportion of lungs affected by bronchopneumonia was significantly lower in the Hyogen® group. The severity of lung lesions in terms of EP values was also significantly lower in this group. However, gender and sow parity had no influence on lung lesion prevalences; the same applied to CP values. In a comparable field study, three M. hyopneumoniae vaccines (two one-shot vaccines and a two-shot vaccine) were compared in terms of lung lesions, lung histopathology, and M. hyopneumoniae load [15]. One one-shot vaccine showed a significantly higher median Madec and Kobisch lung lesion score (3) than the other one-shot vaccine and the two-shot product (both 0). Although mean lesions between the latter two vaccines did not differ significantly, the two-shot vaccine had a higher prevalence of lungs with score 0 (64.2% vs. 55.6%) and a lower prevalence of lungs with scores 5-9 (5.3% vs. 14.9%) and 10-20 (1.6% vs. 2.3%). Thus, in that study the two-shot formulation proved to be more protective in terms of lung health than the two one-shot formulations. This contrasts with our study and demonstrates that the continuous development of vaccines can lead to unexpected results.
The study presented has two major limitations. First, only one farm has been included. This farm represents a typical Austrian farm, although production units in other countries house much higher numbers of sows. Second, no continuous monitoring of the M. hyopneumoniae load was performed. However, our primary aim was to demonstrate clinical non-inferiority of Hyogen® in comparison to an established two-shot regimen, which could be clearly shown.
Conclusions
Under the conditions of the present study, pigs vaccinated with the one-shot vaccine Hyogen® did not differ from the two-shot group in terms of coughing index, ADG, or mortality rate, but exhibited a significantly better lung health status at slaughter in terms of a lower proportion of bronchopneumonic lungs, lower Madec and Kobisch score values, and a lower incidence of cranioventral pleurisy. Furthermore, a significantly higher proportion of pigs needed antibiotic treatment against respiratory infections in the two-shot group.

Funding
This study was funded by Ceva Tiergesundheit GmbH.
Availability of data and materials
Data that support the results of this study are available upon reasonable request from Vojislav Cvjetković.
Authors' contributions
WS designed and supervised the study, participated in data collection, and corrected the manuscript. IS performed the statistics. SS chose the farm for the study and participated in data collection. VC participated in data collection and wrote the manuscript. All authors read and approved the final manuscript.
Ethics approval and consent to participate
The present trial did not include any invasive procedures or treatments of the pigs; therefore, animal ethics committee approval was not required. Owner consent was provided by the farmer prior to starting the study.
Consent for publication
All authors agreed to the publication of the present manuscript.
Competing interests
The first and third authors of this research article are employees of the company sponsor.
Fabrication and Characterization of Non-Conventional Starch Based Biofilms Employing Different Plasticizers
Increasing environmental awareness has promoted a shift from common petrochemical-based plastics to environmentally friendly biofilms and packaging materials. Food by-products and underutilized indigenous plant sources are rich in biopolymers which decompose naturally. The present study focuses on the preparation of bioplastics from the unconventional starch sources litchi seed and churkha tuber as a novel initiative to combat environmental pollution along with valorization of undervalued plant sources and agro-based wastes. Ternary blend films of polyvinyl alcohol, starch, plasticizers (glycerol, sorbitol, mannitol, propylene glycol, polyethylene glycol), and citric acid prepared by the casting technique produced the best results. A comparative study of the test samples (TH11, TH12, TH13, TH14, etc.) with standardized thin films (TH1, TH2, TH3, etc.) showed variation in appearance, water activity, and thickness, since the starches differ in composition, gelatinization, and gelation time and temperature. High-amylose litchi seed starch forms firmer and more flexible films than low-amylose churkha tuber starch. Maximum hygroscopicity was recorded for TH1 (77.4%), while thickness was maximum for TH12 (0.18 mm). Microstructural analysis clearly showed the nature of crosslinking between starch molecules and the different plasticizers. Sensory analysis of food packed with the best of the formulated thin films showed a mild change in taste, while the other parameters remained normal. Acceptability was better for TH12-wrapped food than for TH1. Thus, fabrication of biodegradable packaging is considered a most sustainable alternative for food preservation.
Introduction
In the present century the food packaging industry is dominated by petroleum-based packaging materials, including polyethylene, polystyrene, polyamide, polyethylene terephthalate, etc., owing to their excellent structural properties, performance, cost, barrier properties, and aesthetic quality. However, these fossil fuel-based packaging materials are considered a major cause of anthropogenic environmental pollution in every phase of their life cycle, from monomer synthesis to disposal in landfills. The contribution of synthetic plastic materials alone is about 400 million tons of total waste generation every year (Maraveas et al., 2020). The recent global trend is to maintain
environmental sustainability by shifting from petrochemistry to a bioeconomy in order to prevent the environmental pollution caused by toxic chemicals released by fossil fuel-based packages (Silva et al., 2019). Thus, the production and development of novel biobased polymeric films is considered one of the most innovative, promising, and emerging matrices in food packaging. Every year the food processing industry releases a huge quantity of wastes such as fruit seeds, fruit shells, peels, husks, protein isolates, and oil seed cakes, which, if not properly disposed of, can cause environmental pollution (Maraveas et al., 2020). These agro-based wastes are huge sources of biomaterials such as proteins, polysaccharides, and minerals that can be utilized in product formulation and development (Thakur et al., 2021). Thus, a new environmental approach has been undertaken for the development of green materials such as biopolymers, biodegradable polymers, and biobased plastics from agro-based waste materials in order to substitute non-ecofriendly synthetic packaging materials (Ramesh et al., 2021). Among the biopolymers, starch is the most abundant polysaccharide from plant sources and is widely utilized in the preparation of ecofriendly biomaterial-based wrappers and packages (Zhang et al., 2018). To date, starch from various conventional and unconventional sources has been utilized in the preparation of biobased packaging films, but this is the first time litchi seed starch and native churkha tuber starch have been utilized in biofilm preparation (Food Packaging, 2022). Starch is the most abundant polysaccharide available in seeds, roots, and fibres as a food reserve (A Text Book of Organic Chemistry, 2011). Starch, or amylum, is a polymeric carbohydrate comprising glucose units joined by glycosidic linkages. Starch comprises two glucose chains, amylose (linear) and amylopectin (branched), with different structural and functional properties (Biochemistry, 2013). The gelatinization and gelation properties of starch are utilized in the preparation of biobased
thin films by the casting technique along with plasticizers (Talja et al., 2007). Litchi (Litchi chinensis) seed kernels are an abundant source of highly soluble, amylose-rich starch of unconventional origin. The starch granules of litchi seed are elliptical in shape, with gelatinization starting at 40°C (Litchi, 2022). Churkha (Dioscorea pentaphylla), belonging to the family Dioscoreaceae, is an unexplored wild tuber of excellent nutritional and economic importance. This starchy edible tuber is widely distributed in the tropical and subtropical regions of the world (Kumar et al., 2017).
The main aim of this study is to utilize the socioeconomically less important unconventional starch sources litchi seed and churkha tuber in the fabrication of ecofriendly biofilms, which could effectively substitute for unsustainable petroleum-based food packaging materials and serve as an innovative strategy for reutilizing agro-based waste materials.
Extraction of Starch
The technique of hydro milling was employed for starch extraction. 50 g of dried litchi seed (Litchi chinensis) and churkha (Dioscorea pentaphylla) powder were soaked separately in 1000 ml of distilled water with continuous agitation for 6-8 hours. The slurry was filtered through cotton mesh (150 mm), washed in deionized water, and left to settle at 4°C for 24 hours. The supernatant was drained off, and the crude starch at the bottom was washed and then oven dried at 65°C (Palacios-Fonseca et al., 2013).
Estimation of Amylose Content
The amylose content was estimated using the iodine-binding procedure described by Juliano (1981) with certain modifications (Juliano et al., 1981). 250 mg of sample was taken in a 100 ml beaker, to which 1 ml of ethanol and 10 ml of 1 N NaOH were added. The solution was boiled for 10 minutes and then cooled to room temperature. 2.5 ml of this solution was poured into a 50 ml test tube, to which 20 ml of distilled water was added. Then 1-2 drops of 1% phenolphthalein indicator were added, followed by dropwise addition of 0.1 N HCl until the pink colour disappeared. 1 ml of freshly prepared iodine reagent was then mixed in, and the optical density was read at 590 nm in a spectrophotometer after making up the volume to 50 ml in a volumetric flask (Thilakarathna et al., 2017).
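Since amylose content is later determined from a standard curve (see the Results), the conversion from absorbance to amylose percentage can be sketched as a simple linear calibration. The calibration points below are illustrative, not the study's actual data.

```python
# Hypothetical sketch: converting absorbance at 590 nm to amylose content
# via a linear standard curve. All calibration numbers are invented.

def fit_line(xs, ys):
    """Ordinary least-squares fit y = m*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    m = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - m * mx
    return m, b

# Illustrative standard curve: absorbance readings for known amylose (%)
standards_amylose = [0.0, 10.0, 20.0, 30.0]
standards_abs = [0.02, 0.21, 0.40, 0.59]

m, b = fit_line(standards_amylose, standards_abs)

def amylose_from_absorbance(a590):
    """Invert the standard curve to estimate amylose content (%)."""
    return (a590 - b) / m

print(round(amylose_from_absorbance(0.40), 1))  # → 20.0
```

The same fitted curve would then be applied to the sample readings for litchi seed and churkha tuber starch.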
Gelation Property
Powdered sample was taken separately at concentrations ranging from 5% to 25% w/v and mixed properly. The test tubes were heated in a water bath at 85°C for 30 minutes and then rapidly cooled under running cold tap water. The test tubes were further cooled at 4°C for 2 hours. The least gelation concentration was determined as the lowest concentration at which the sample did not drop or slip from the inverted test tube (Chowdhury et al., 2012).
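The decision rule above reduces to finding the lowest tested concentration at which the cooled gel holds in the inverted tube. A minimal sketch, using invented observations:

```python
# Sketch of determining the least gelation concentration (LGC): the lowest
# tested concentration (% w/v) at which the cooled sample no longer slips
# from the inverted test tube. The observations below are illustrative.

def least_gelation_concentration(results):
    """results: dict mapping concentration (% w/v) -> True if the gel held."""
    gelled = sorted(c for c, held in results.items() if held)
    return gelled[0] if gelled else None

# Illustrative observations for a starch that gels from 15% upward
litchi_like = {5: False, 10: False, 15: True, 20: True, 25: True}
print(least_gelation_concentration(litchi_like))  # → 15
```

A starch that never holds at any tested concentration would return `None`, corresponding to the "gluey tendency" without firm gelation reported for churkha tuber starch.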
Fabrication of Biofilms
Two different methods employing the solvent casting technique were used for biofilm preparation (Table 1). In the first method, 1 g of starch in 17 ml of water was gelatinized along with 2 ml of plasticizer (glycerol, mannitol, sorbitol, polyethylene glycol, or propylene glycol) at 70-80°C, cast on a glass plate, and oven dried at 40°C. The dried films were carefully peeled off and stored in a desiccator (Pareta et al., 2006). In the second method, the film composition was prepared by mixing two gelatinized solutions, one containing 1.4 g of polyvinyl alcohol in 15 ml of boiling water and the other containing 1.4 g of starch in 15 ml of water along with 0.5 g of citric acid and 1 g of plasticizer, with continuous swirling over a hot plate at 80-85°C. The homogenized mixture was cast on a glass plate (25 cm × 25 cm) and dried at room temperature. The dried films were peeled off the plate and stored in a desiccator for further characterization (Wu et al., 2017).
Table 1: Various thin film combinations with different starch sources

Characterization of Biofilms
Moisture Content
The moisture content was evaluated using a digital weighing scale. All samples were weighed to record the initial weight Wi (g) and then oven dried until a constant final weight Wf (g) was achieved (Tarique et al., 2021).
Percentage of Moisture = (Wi − Wf)/Wi × 100

where Wi = initial weight of the sample and Wf = final weight of the sample.

Thickness

The screw gauge measurement technique was followed for measuring the thickness of the films. The biofilm thickness was measured with 0.001 mm sensitivity using an advanced micrometer. Measurements were taken at five distinct portions of each sample biofilm, and the mean value was used to determine the thickness (Tarique et al., 2021).
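The two measurements just described amount to one weight-loss formula and one five-point average; a minimal sketch with illustrative values:

```python
# Minimal sketch of the two measurements described above: percentage
# moisture from initial/final weights, and film thickness as the mean
# of five micrometer readings. All numbers are illustrative.

def moisture_percent(wi, wf):
    """Moisture (%) = (Wi - Wf) / Wi * 100."""
    return (wi - wf) / wi * 100

def mean_thickness(readings_mm):
    """Average of readings taken at five distinct spots on the film."""
    return sum(readings_mm) / len(readings_mm)

print(round(moisture_percent(2.0, 1.5), 1))                       # → 25.0
print(round(mean_thickness([0.17, 0.18, 0.19, 0.18, 0.18]), 3))   # → 0.18
```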
Microscopic Studies
Microscopic observation of the biofilms was carried out under a high-magnification compound microscope (Leica ICC 50) in order to observe the superficial morphological crosslinking of the starch films (Wu et al., 2017).
Packaging of Food Product
In this study the biofilms were used as a packaging material. The food product (peanut chikki) was packed with the prepared biofilms and kept in an airtight container for 21 days. The shelf life and sensory characteristics of the food were evaluated at intervals of 7 days (Garcia et al., 2012).
Sensory Analysis of Packaged Food
The food product (peanut chikki) packed with the biomaterial-based thin films was supplied to the panel members for observation of its sensory characteristics. A 9-point hedonic scale was used for the sensory evaluation (Garcia et al., 2012).
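Hedonic-scale data of this kind are typically summarized as mean scores per attribute across panelists; a hedged sketch with invented panelist scores (five panelists, scores 1-9):

```python
# Sketch of summarizing 9-point hedonic scores across panelists for each
# sensory attribute. The panelist scores below are invented.

def mean_scores(panel):
    """panel: dict attribute -> list of panelist scores (1-9)."""
    return {attr: sum(scores) / len(scores) for attr, scores in panel.items()}

day7 = {
    "taste": [7, 6, 7, 8, 6],
    "odour": [8, 8, 7, 8, 8],
    "hardness": [8, 7, 8, 8, 7],
}
for attr, avg in mean_scores(day7).items():
    print(f"{attr}: {avg:.1f}")
```

Repeating this at days 7, 14, and 21 would give the shelf-life trend described below.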
Amylose Content of Starch
The amylose content of litchi seed starch was higher (25 g/100 g) than that of churkha tuber starch (15 g/100 g).
Table 2 shows the amylose and amylopectin content of litchi seed starch and churkha tuber starch in comparison to other unconventional starch sources (Romero-Bastida et al., 2005). The amylose content of litchi seed starch and churkha tuber starch was determined using a standard curve.
The physicochemical properties, and the way in which the amylose and amylopectin chains are dispersed, are entirely dependent on the botanical origin of the starch (Bajaj et al., 2018). Recent studies on the effect of starch on the structure, plasticization, and properties of films prepared by the solvent casting technique revealed that the structure of amylopectin films is sensitive to and dependent on preparation conditions, while amylose films are crystalline, mechanically stable, and less affected by plasticization with polyols (Myllarinen et al., 2002). Moreover, high-amylose starch generally forms strong and stiff films by linking linear chains through hydrogen bonding. Thus, high-amylose litchi seed starch possesses better film-forming properties than churkha tuber starch. This is similar to the findings of Tarique et al. and Bertuzzi et al., who observed that high amylose content formed stronger and more crystalline films in arrowroot and corn starch, respectively (Bertuzzi et al., 2007).
Gelation property of Starch
Gelation of litchi seed starch starts at 15% concentration, and complete gelation takes place at 25%. Churkha tuber starch possesses the weakest gelation property, showing only a gluey tendency above 20% concentration (Table 3).
The variation in gelation property was observed probably because of the difference in the amylose and amylopectin content of the starches. The structural features have a complex effect on gelation. In the present study, better gelation capacity was observed for litchi seed starch than for churkha tuber starch. The causal reason for this is the high amylose content of litchi seed starch, which is associated with inhibiting swelling in the initial stage, increasing pasting temperature, lowering peak viscosity, and improving onset viscosity, all of which determine the ability to form a firm gel network. The high amylopectin content of churkha tuber starch, by contrast, promotes swelling but lowers the gelation property by causing thinning of the starch paste during heating. A similar feature was recorded by Jane et al., who observed that waxy starch with no amylose content reflects a lower ability to form a gel network (Jane et al., 1999). Furthermore, according to the findings of Lim et al., the absence of phospholipid and lipid in root and tuber starches is responsible for lower resistance to shear thinning and lower setback viscosities compared with cereal starches (Lim et al., 1994). In addition to its low amylose content, this property might be responsible for the low gelation capacity of churkha tuber starch in comparison to litchi seed starch.
Again, according to a previous study, a minimum amylose-to-amylopectin ratio of 0.43 is required for the formation of a gel network. At lower amylose content the starch gel is easily disrupted by heating, which might be responsible for the poor gelation capacity of churkha tuber starch (Sasaki et al., 2005).
Study on Prepared Thin Films
Variation in the physicochemical properties of the films was obtained, probably because different methods were used in film preparation. In the solvent casting technique, using only plasticizers along with starch produced mostly brittle, inflexible, unplasticized films that tore while being removed from the glass plate. This finding is similar to the observations of Talja et al. and Tarique et al., who prepared starch films with potato starch and arrowroot starch, respectively, when the plasticizer content was less than 20%. Ternary blend films of PVA, starch, and citric acid along with plasticizers provided the best results (Pareta et al., 2006). Among all the biofilms prepared, TH12 was the most promising in overall appearance. The influence of the different film-forming techniques on the nature of the biofilms prepared with various plasticizers is summarized in Tables 4 and 5.
The structural differences of the prepared films might be due to variation in the nature of crosslinking between the starch molecules and the plasticizers used (Hanani et al., 2014). The first method resulted in the formation of brittle, rigid, wavy films that tore into bits during removal from the glass plate. This might be due to insufficient molecular interaction between starch and the different plasticizers, resulting in inadequate separation between the macromolecular chains of starch in the presence of strong intramolecular hydrogen bonds (Tarique et al., 2021). Phase separation between the starch and plasticizer matrix might also be responsible for the stickiness of the film-forming solution, resulting in unplasticized film formation (Talja et al., 2007). In the second method of film preparation, low-molecular-size plasticizers might have slid into the spaces between the molecules of the starch polymeric chains, boosting molecular mobility by decreasing the strength of hydrogen bonds and resulting in flexible, plasticized films (Tarique et al., 2021). The addition of PVA in the second method might have improved the nature of crosslinking between starch molecules and plasticizer, while the addition of citric acid increased the compactness and stability of the starch films (Pareta et al., 2006). The rate of gelatinization and the nature of the starch films produced depend on the amylose and amylopectin content. Starch with high amylose content forms strong and flexible films, while starch with high amylopectin forms weak and brittle ones (Tarique et al., 2021). Thus, the biofilms from litchi seed starch, with its high amylose content (25 g/100 g), were more flexible than the churkha tuber starch biofilms with low amylose content (15 g/100 g). The film-forming ability was more appreciable for litchi seed starch than for churkha tuber starch, which might be due to the variation in amylose content.
Moisture Content
The study of the water activity of the prepared films reveals that the maximum moisture content was recorded for TH1 (77.4%), while the minimum was observed for TH12 (3.18%). A comparative study of thin films from litchi (Litchi chinensis) seed starch and churkha (Dioscorea pentaphylla) tuber starch reveals that the moisture content was higher for churkha tuber starch. In the present study, the moisture content of biofilms from litchi seed starch ranged between 3.18% and 20.34% with different plasticizers, while that of the churkha tuber starch films varied from 15.23% to 30.45%. This is contrary to the findings of Hazrati et al., who observed a moisture content of 10.46% for a D. hispida starch-based biofilm with sorbitol as plasticizer (Hazrati et al., 2021).
Differences in starch composition and the wide variety of plasticizers used might be responsible for the variation in moisture content, which determines the effectiveness of the starch films as packaging material. The moisture content of the biofilms likely depends on the hydrophilic and hydrophobic components in the film-forming solution; an increase in hydrophilic constituents increases the moisture content of the starch films. The highly hydrophilic nature of food-grade corn starch and glycerol might be responsible for the high moisture content of the film obtained from them. The hydroxyl groups in glycerol might form strong attractions with water, enabling the film to hold water through hydrogen bonding within its structure (Tarique et al., 2021). Figs. 2 and 3 present the moisture content of the various combinations with different plasticizers.
Thickness
Variation in the thickness of the films was noticed when different film-forming solutions and methods were used in the preparation of the biofilms. The plasticizer plays a significant role in disrupting and restructuring the macromolecular polymeric arrangement of starch, converting free space into thickness. Thus, the different plasticizers used are associated with variation in the thickness of the biofilms. The film-forming process involves the formation of new strong bonds by destroying the original intermolecular bonds of the starch chains, which reduces the free space (Hazrati et al., 2021).
The thickness generally increases with increasing solid content; thus, the addition of citric acid along with PVA might be responsible for the increase in film thickness in the second method. A maximum thickness of 0.18 mm was recorded for TH12, while a minimum of 0.13 mm was recorded for TH5. Fig. 4 presents the comparative study of the thickness of the best combinations. The biofilms plasticized with glycerol and sorbitol were mostly the best in appearance; the thickness study therefore compares glycerol- and sorbitol-plasticized biofilms from the different starch sources. For the conventional starch sources, the thickness of the glycerol-plasticized biofilms was higher than that of the sorbitol-plasticized biofilms, while this was reversed for the unconventional starch sources. The thickness might be due to physical interactions between the polymeric matrix of starch, PVA, and citric acid, and the carboxylation of citric acid with the alcoholic hydroxyl groups of PVA. An increase in molecular adhesion between litchi seed starch, PVA, and sorbitol might have resulted in the increased thickness of the film (Pareta et al., 2006). The difference in starch composition and the variation in starch granule size might be responsible for the wide variation in film thickness. The high amylose content and large molecular size of litchi seed starch might be responsible for the increase in thickness. This is similar to the findings of Romero-Bastida et al., who observed that mango and banana starch produced thicker films than okenia starch using the same film-forming solution, owing to the difference in amylose content.
Microscopic Observation
Microstructural analysis and the study of structural morphology determine the nature of crosslinking between the starch granules and the plasticizers used. The structural topography of the ternary blended films revealed that the surface was homogeneous, smooth, and continuous with no pores, indicating that the plasticizers and citric acid might have improved the binding between starch and PVA. Similar findings were also reported by Wu et al. and Parvin et al., who prepared starch-PVA blended films from rice starch and corn starch, respectively (Parvin et al., 2010). The change in the starch microdomain from a dispersed to a continuous phase reveals the miscibility of starch in PVA. The microscopic view of the cross section also
revealed that the composite film surface was free from a phase-separation interface between starch and PVA, as well as from projections and wrinkles (Pareta et al., 2006). Variation in colour was observed depending on the starch source and the natural interaction between iodine and the starch granules (Fig. 5).
Food Packaging Material Study
The apparently best films among the control (TH1) and test (TH12) categories were selected and utilized for packaging peanut chikki (Fig. 6).
Sensory analysis of the food (peanut chikki) at intervals of 7, 14, and 21 days using biobased thin films as packaging materials revealed a slight sourness of taste, which may be due to the presence of citric acid in the film-forming solution, while the colour, hardness, odour, and brittleness of the film remained normal (Fig. 7). No visible growth of microorganisms was noticed on the surface of the food. In terms of acceptability, food wrapped in TH12 created a better response than TH1.
Conclusion
This study is based on the novel approach of using biomaterial-based thin films from unconventional starch sources and their application as food packaging materials. Among the methods utilized for preparing the starch films, ternary blend films combining PVA and citric acid produced the best results. High-amylose litchi seed starch possesses better film-forming ability than low-amylose churkha tuber starch. Wide variation in moisture content and thickness was noticed owing to differences in the composition of the film-forming solutions. Microscopic studies revealed the nature of crosslinking of starch with the plasticizers.
The sensory analysis of food packaged with the biobased thin films showed significant quality maintenance in comparison with standard packaged food. From this study it can be concluded that biopolymer-based thin films can effectively substitute non-biodegradable packaging materials.
Fig. 1: Litchi seeds and churkha tuber samples used in the study
Table 2: Comparative study on the amylose and amylopectin of the samples with other unconventional starch sources
Table 3: Gelation property of litchi seed starch and churkha tuber starch
Table 4: Characteristics of the thin films obtained by different film forming techniques
Table 5: Different thin films from unconventional and conventional starch sources
The Effect of Learning on the Function of Monkey Extrastriate Visual Cortex
One of the most remarkable capabilities of the adult brain is its ability to learn and continuously adapt to an ever-changing environment. While many studies have documented how learning improves the perception and identification of visual stimuli, relatively little is known about how it modifies the underlying neural mechanisms. We trained monkeys to identify natural images that were degraded by interpolation with visual noise. We found that learning led to an improvement in monkeys' ability to identify these indeterminate visual stimuli. We link this behavioral improvement to a learning-dependent increase in the amount of information communicated by V4 neurons. This increase was mediated by a specific enhancement in neural activity. Our results reveal a mechanism by which learning increases the amount of information that V4 neurons are able to extract from the visual environment. This suggests that V4 plays a key role in resolving indeterminate visual inputs by coordinated interaction between bottom-up and top-down processing streams.
Introduction
It is well established that learning can have a strong impact on neural responses to visual stimuli in high-level association cortices such as inferior temporal (IT) or prefrontal (PF) cortex, where the activity of single neurons reflects learning in pair association, object identification, or categorization tasks (Sakai and Miyashita 1991; Logothetis et al. 1995; Booth and Rolls 1998; Kobatake et al. 1998; Erickson and Desimone 1999; Rainer and Miller 2000; Freedman et al. 2002; Sigala and Logothetis 2002). In these studies, learning is thought to modify neural activity to represent task-relevant attributes, such as trained views of three dimensional objects (Logothetis et al. 1995) or associations between paired visual stimuli (Sakai and Miyashita 1991; Erickson and Desimone 1999). The learned representations often exhibit invariance for stimulus features such as size (Logothetis et al. 1995), rotation (Booth and Rolls 1998), or stimulus degradation (Rainer and Miller 2000). Similar neural activity to within-category stimuli during categorization (Freedman et al. 2002) can also be thought of as a learning-dependent form of invariance. Several lines of evidence suggest that these learning effects involve synaptic plasticity and thus represent long-lasting modifications to visual association cortices.
Recent evidence suggests that neurons in early visual sensory areas can also modify their response properties with learning. In particular, several studies have revealed learning-related changes in primary visual cortex (V1) (Schoups et al. 2001; Ghose et al. 2002), although the extent and functional significance of these learning effects remain somewhat controversial (Schoups et al. 2001; Ghose et al. 2002). Available evidence suggests that classical V1 response properties such as receptive field size or orientation tuning parameters are affected relatively little by learning, while learning does appear to cause a general reduction in activity for trained stimuli as well as a task-dependent increase in the influence of nonclassical surround stimulation on the neuron's response.
Learning thus appears to affect both low and high level areas of the ventral visual stream. The results obtained by studies in these two areas are, however, difficult to compare directly, owing to substantial differences in experimental design. Studies of IT or PF cortex typically employ 'complex' visual stimuli such as Fourier descriptors (Sakai and Miyashita 1991), computer-rendered animals (Freedman et al. 2002), or colored photographs and artwork (Erickson and Desimone 1999). These stimuli are generally presented at the center of gaze and can be from 1° up to 10° of visual angle in size. Many studies also include a selection process that determines which of the neurons encountered in a given penetration are chosen for further quantitative study. By contrast, available learning studies in early visual areas follow well-established rules for investigation of primary and extrastriate visual areas. These studies employ 'simple' visual stimuli such as oriented bars or gratings (Schoups et al. 2001; Ghose et al. 2002). These stimuli are generally presented at eccentric locations, with stimulation parameters adjusted to the receptive field and orientation selectivity of the single neuron currently under investigation. Thus, both stimulus type and experimental procedure generally differ substantially, depending on whether a study investigates low-level sensory or high-level associative visual cortex.
For a comprehensive account of how learning affects visual processing, the same stimuli and experimental procedure must be used to study different levels of the visual processing hierarchy. What kind of stimuli might be suitable to study visual areas as different as early sensory visual and PF cortex? We decided to use natural images for several reasons: The primate visual system evolved in the natural environment under conditions of 'natural' stimulation; much is known about their statistical properties and they can therefore be well-controlled; they contain structure at all spatial scales and thus can be expected to activate a large fraction of visually responsive neurons. We avoid subjectively biasing our sample of recorded neurons by always recording from the first neurons whose waveforms we are able to reliably isolate. This ensures that our population of recorded neurons represents an unbiased sample in each brain region under study, and this in turn allows us to compare data obtained from different brain regions. We obtain a sensitive measure of behavioral performance and associated neural activity by employing a stimulus degradation procedure that makes stimuli harder to discriminate by adding various amounts of noise (see Figure 1A). With degradation, stimuli become increasingly indeterminate because all stimuli in a given session are combined with the same noise pattern. Noise is newly generated for every session so that monkeys cannot rely on the specific individual characteristics of a particular noise pattern. Instead, they need to extract task-relevant information from degraded displays, whose particular details vary from day to day. Similarly, outside the laboratory we are rarely presented with familiar stimuli in canonical views and conditions of standard lighting, but instead need to extract this information from complex scenes in which it is embedded. 
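One simple reading of the degradation scheme described above is a pixel-wise linear interpolation between the image and a freshly generated noise pattern, weighted by the coherence level. This is a hypothetical sketch; the actual procedure used in the study may differ (e.g., it could mix image and noise in the Fourier domain).

```python
# Hypothetical sketch of stimulus degradation by interpolation with noise:
# coherence 1.0 yields the original image, 0.0 pure noise.

import random

def degrade(image, noise, coherence):
    """Pixel-wise blend: coherence * image + (1 - coherence) * noise."""
    return [coherence * p + (1.0 - coherence) * n
            for p, n in zip(image, noise)]

random.seed(0)
image = [0.2, 0.8, 0.5, 1.0]              # toy grayscale pixel values
noise = [random.random() for _ in image]  # noise regenerated each session

assert degrade(image, noise, 1.0) == image   # undegraded
assert degrade(image, noise, 0.0) == noise   # pure noise
sample_45 = degrade(image, noise, 0.45)      # a 45% coherence sample
```

Regenerating `noise` for every session mirrors the design point made above: monkeys cannot memorize a particular noise pattern and must instead extract the task-relevant signal.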
Previously, these kinds of stimuli were used to study neural activity in the PF cortex (Rainer and Miller 2000), where learning made neural activity more robust to stimulus degradation. After learning, PF neurons tended to fire in a similar manner to undegraded and moderately degraded versions of the same stimulus. Learning thus resulted in a form of neural response invariance, because degradation no longer had an impact on PF neural activity.
Here our aim is to use similar stimuli and behavioral procedures to characterize how learning modifies neural activity in extrastriate visual cortical area V4. Area V4 was chosen because it is considered to be a sensory visual area at an intermediary processing stage in the ventral stream and because it is directly connected to parts of the PF cortex (Petrides and Pandya 1999). Our task was a modified version of delayed-matching-to-sample (DMS) (see Figure 1B). After grasping a metal lever and subsequently attaining central fixation, monkeys viewed a sample stimulus presented at one of six coherence levels ranging from undegraded (100% coherence) to fully degraded (0% coherence). After a brief delay, monkeys were presented with a probe stimulus (always at 100% coherence) and had to release a lever if the probe matched the sample (i.e., if the sample was identical to or was a degraded version of the probe stimulus). During each session, we employed four highly familiar stimuli and four 'novel' stimuli that monkeys had not seen previously. Great care was taken to ensure that novel and familiar images differed only in terms of their familiarity to the animal (see Materials and Methods). Using novel and familiar stimuli allowed us to ask whether learning had any effect on monkeys' ability to identify degraded and undegraded versions of natural images. Intermixing novel and familiar images in the same session had the additional advantage of allowing us to estimate for each single neuron in our population, whether there were any learning-related changes in the amount of stimulus-specific information these neurons communicated.
Results
We found that learning resulted in significant and robust improvements in monkeys' ability to identify degraded stimuli. Behavioral performance varied systematically with coherence (Figure 2A). Monkeys performed at chance level (50% correct) when stimuli were presented at 0% coherence and thus contained no task-relevant information. For degraded stimuli (35%-65% coherence), monkeys performed significantly better with familiar than with novel stimuli (t-test, p < 0.01). For undegraded stimuli at 100% coherence, the monkeys' performance was near ceiling for both novel and familiar stimuli (92% and 95% respectively; t-test, p = 0.12). Learning-dependent performance improvements for degraded stimuli were highly consistent across stimuli and monkeys. There were in fact no significant differences in the monkeys' performance for each of the familiar stimuli across sessions at all coherence levels (one-way ANOVAs, p > 0.1), and this was also true for novel stimuli. In addition, performance for novel and familiar stimuli did not differ significantly between the two monkeys at any coherence level (t-tests, p > 0.1). Note that the monkeys' excellent performance with undegraded novel objects reflects the fact that they have acquired the rule of the DMS task and are thus able to perform it near ceiling with novel stimuli.

Figure 1. Stimuli and Behavioral Task. (A) An example natural image is shown at three coherence levels, corresponding to 100% (undegraded), 45% (degraded), and 0% (pure visual noise). (B) The sequence of trial events for the DMS task used in this study. After a fixation period, a sample stimulus (S) is briefly presented, followed by a delay period and the presentation of a probe stimulus (P). While sample stimuli were presented at different coherence levels, probe stimuli were always presented in undegraded form (100% coherence). The monkeys were required to release a lever if the probe matched the sample. DOI: 10.1371/journal.pbio.0020044.g001
The time course of this learning-dependent difference in performance is shown in Figure 2B. Session 1 represents a session in which a set of four initially novel stimuli is arbitrarily chosen and kept constant in subsequent sessions, thus becoming more and more familiar. Comparing performance for these stimuli with performance for novel stimuli that are randomly chosen in each session reveals that it takes several sessions for the learning effect to appear. Performance averaged across the first five sessions was similar for novel and familiar stimuli (t-test, p = 0.43). Furthermore, the learning-dependent difference in performance appeared to asymptote after around ten sessions. In summary, learning led to robust improvements in the monkeys' ability to identify degraded natural images, while the monkeys performed near ceiling for both novel and familiar undegraded images.
We now examine whether there were any learning-dependent changes in the activity of V4 neurons. Results described in this report are based on a population of 83 V4 neurons. We first asked whether there was any general difference in mean activity elicited by novel and familiar undegraded stimuli. We found that the response of V4 neurons to novel (⟨FR_nov⟩ = 36.7 ± 2.8 Hz) and familiar stimuli (⟨FR_fam⟩ = 34.2 ± 2.7 Hz) was similar (t-test, p = 0.14; see also Table 1). Of the 14 neurons that individually showed a significant difference in activity between novel and familiar stimuli (t-test, p < 0.05), similar fractions preferred familiar or novel stimuli (6/14 or 43% and 8/14 or 57% respectively; χ² test, p = 0.45). We thus found that learning did not lead to a change in the average activity of V4 neurons for undegraded stimuli. Next, we examined whether learning resulted in any change in the amount of stimulus-specific information that V4 neurons communicated. To do this, we computed the mutual information between the set of four familiar or novel stimuli and the associated neural responses (see Materials and Methods). We found that V4 neurons on average communicated similar amounts of information about novel and familiar undegraded stimuli (Figure 3A). The average information communicated by each neuron in the entire population of 83 V4 neurons was similar for novel stimuli (⟨I_nov⟩ = 0.48 bits) and for familiar stimuli (⟨I_fam⟩ = 0.45 bits; t-test, p = 0.16). We selected the 25% of the population (21 out of 83 neurons) that communicated most information about novel or familiar objects (see Materials and Methods). For this population of most informative neurons (white circles in Figure 3A), we also found no difference between novel and familiar stimuli (⟨I_nov⟩ = 0.67 bits, ⟨I_fam⟩ = 0.65 bits; t-test, p = 0.48).
Thus, for natural images (undegraded stimuli) we saw no significant learning-dependent difference in performance and also no learning-dependent changes in the average activity or in the amount of stimulus-specific information communicated by V4 neurons.
At intermediate coherence levels, the monkeys' ability to correctly identify degraded stimuli was improved by learning, and we asked whether this behavioral improvement was associated with any changes in the activity of V4 neurons. We found that V4 neurons indeed communicated significantly more information about degraded familiar than about degraded novel stimuli (Figure 3B). Considering the entire population, learning led to a significant increase in information about degraded stimuli from ⟨I_nov⟩ = 0.34 bits to ⟨I_fam⟩ = 0.40 bits (t-test, p < 0.05). For the 25% most informative neurons (white circles in Figure 3B), we observed an even larger change from ⟨I_nov⟩ = 0.47 bits to ⟨I_fam⟩ = 0.67 bits (t-test, p < 0.001), corresponding to a 40% increase in information with learning. We further characterized this effect by examining how degradation affected the amount of information separately for novel and familiar stimuli.

Figure 2. Learning Improved Monkeys' Ability to Identify Degraded Stimuli. (A) Behavioral performance for the sessions during which neural data was collected (n = 11) is shown as a function of the coherence of the sample stimulus for novel and familiar stimuli. Asterisks denote significant differences in performance for novel and familiar stimuli. (B) The performance at 45% coherence (%Correct_45) is shown for a set of novel stimuli that is introduced in the first session and then used during all subsequent sessions and thus becomes more and more familiar during subsequent sessions (circles). For comparison, performance with novel stimuli that are new and unique to each session is shown (diamonds). Sessions 1-20 represent purely behavioral training sessions (TRAIN), and sessions 21-26 represent combined behavioral and single-unit recording sessions (REC). DOI: 10.1371/journal.pbio.0020044.g002
For both novel (Figure 4A) and familiar (Figure 4B) stimuli, V4 neurons communicated on average more information about undegraded (I_100) than degraded (I_degrad) stimuli (paired t-tests, p < 0.001), reflecting the fact that behavioral performance was better for undegraded than degraded stimuli (see Figure 2A). The ΔI distributions (I_100 − I_degrad) for familiar and novel stimuli shown in the insets (Figure 4A and 4B), however, differed significantly (paired t-test, p < 0.001), and learning was associated with a rightward shift in this distribution (⟨ΔI_fam⟩ = 0.06, ⟨ΔI_nov⟩ = 0.13). Interestingly, the kurtosis or skewness of the ΔI distribution changed by an order of magnitude, from 0.13 for novel stimuli to 5.5 for familiar stimuli, similar to experience-dependent effects that have been observed on hippocampal place cell activity (Mehta et al. 2000; Mehta 2001). As a consequence of these learning-dependent changes, many V4 neurons actually communicated more information about degraded than undegraded familiar stimuli (25/83 or 30%), whereas only a small minority did so for novel stimuli (6/83 or 7%). This difference in proportions was significant (χ² test, p < 0.001). Taken together, learning accordingly resulted in an increase in the amount of information communicated by V4 neurons about degraded stimuli, and many neurons actually communicated more information about degraded than undegraded familiar stimuli.
How did single V4 neurons mediate this learning-dependent increase in information about degraded stimuli? The activity of an example neuron is shown in Figure 5 in histogram and raster format for its preferred and nonpreferred familiar stimulus. This neuron showed little or no response to pure visual noise (0% coherence) or to its nonpreferred stimulus at any coherence level (Figure 5B). It was activated to a peak firing rate of about 20 Hz by its preferred stimulus (red curve in Figure 5A). Degradation of the preferred stimulus resulted in brisk activity of this neuron, and activity was greater to the preferred stimulus at all intermediate coherence levels (35%-65%) than to the undegraded preferred stimulus (paired t-tests, p < 0.01). For this neuron (see star in Figure 3), degradation resulted in a large increase in information about familiar stimuli, from I_100 = 0.18 bits to I_degrad = 0.74 bits. This example neuron thus displayed a nonmonotonic, inverted U-shaped response as a function of degradation. The responses of this neuron for the preferred and nonpreferred familiar stimuli and also for the corresponding novel stimuli are summarized in Figure 5C. While the preferred novel undegraded stimulus also activated the neuron, degradation of this stimulus was not associated with significant response enhancement. To examine whether the inverted U-shaped response was in fact characteristic of the V4 neurons that communicated most information about degraded stimuli, we plotted the activity of the neurons that were highly selective for degraded stimuli (see white circles in Figure 3B) as a function of coherence, using the preferred stimulus for each neuron (Figure 6). We found that across this population, neural activity was indeed significantly enhanced for familiar stimuli at intermediate coherence levels of 55% and 65% relative to activity to undegraded familiar stimuli (paired t-tests, p < 0.05).
By contrast, activity to novel stimuli systematically decreased with degradation and was significantly below activity to undegraded stimuli at coherence levels of 35% and 45% (paired t-tests, p < 0.05). As expected, V4 neurons generally showed greater activity to novel and familiar stimuli than to pure noise at 0% coherence (paired t-tests, p < 0.05). As detailed in Table 1, mean activity was similar for undegraded familiar and novel stimuli, but significantly greater for degraded familiar than degraded novel stimuli (paired t-test, p < 0.05). Taken together, learning resulted in an increase in information communicated by V4 neurons about degraded or indeterminate stimuli. This increase in information was mediated by neurons that showed an enhancement in neural activity to degraded compared to undegraded familiar stimuli.
We performed additional behavioral experiments to assess whether learning led to any changes in fixational eye movements, because such changes might shed light on what mediates the monkeys' behavioral advantage for familiar degraded stimuli. In these studies, we allowed the monkeys to freely view sample stimuli during task performance and then estimated a fixation probability map (FPM) for each familiar and novel stimulus presented at 45% and 100% coherence (see Materials and Methods). We applied a threshold to this map to identify regions where monkeys tended to fixate with high probability. The thresholded FPMs for 45% and 100% coherence versions of an example familiar and novel stimulus, along with the overlap between these regions, are shown in Figure 7. As can be seen, there was substantially more overlap between the regions of focused eye position at 45% and at 100% after learning. This effect was significant across sessions and stimuli: on average, the overlap region increased by a factor of 2.8, from 0.54 ± 0.14 dva² (degrees of visual angle squared) for novel stimuli to 1.47 ± 0.16 dva² for familiar stimuli (unpaired t-test, p < 0.0001). There were also significant learning-dependent increases in the high-probability FPM areas at 45% and 100% coherence (at 45%, from 1.04 ± 0.25 dva² to 1.88 ± 0.19 dva², unpaired t-test, p < 0.01; at 100%, from 0.84 ± 0.21 dva² to 1.74 ± 0.21 dva², unpaired t-test, p < 0.01). This learning-dependent increase in the high-probability FPM regions and their overlap was highly consistent across sessions and monkeys, and we observed it during all six sessions in both monkeys. Note that the lower FPM values for novel stimuli indicate that eye position was less focused and therefore more distributed before learning, whereas for familiar stimuli robust regions of focused eye position developed.
Discussion
V4 neurons are generally conceptualized as detectors of visual features of intermediate complexity, such as non-Cartesian gratings (Gallant et al. 1996) or contour features (Pasupathy and Connor 1999). We have found that learning does not affect how V4 neurons respond to undegraded natural images, both in terms of mean firing rate and information communicated about these stimuli. This absence of learning-dependent differences suggests that this V4 selectivity for features of intermediate complexity is not modified by learning, at least during the several weeks of training in the adult monkey during our task. Basic response properties of V4 neurons thus appear not to be altered by learning, similar to findings in V1 showing that parameters such as receptive field size or orientation tuning width remain unchanged even after extensive training.

Figure 3. Here we summarize how much information V4 neurons communicated about novel (I_nov) and familiar (I_fam) stimuli for undegraded (A) and degraded (B) stimuli. Each symbol in the scatter plot represents a single neuron and shows how much information this neuron communicated about familiar (x-axis) and novel (y-axis) stimuli. In each scatter plot, white-shaded symbols represent the 25% most informative neurons, i.e., the one-quarter of the population communicating most information about either familiar or novel stimuli. The remaining three-quarters of the population are shown in gray shading. The single-neuron example in Figure 5 is represented by the star. The black 'x' represents the population mean for the 25% most informative neurons. DOI: 10.1371/journal.pbio.0020044.g003
Learning does, however, lead to robust changes in how V4 neurons respond in the presence of degradation. For novel stimuli, V4 neurons tend to act as simple passive feature detectors for which the addition of increasing amounts of noise to the display results in a successive reduction in neural activity. Consistent with this finding, we observed a systematic decrease of blood-oxygen-level-dependent (BOLD) signals with decreasing stimulus coherence in area V4 of anesthetized monkeys using novel stimuli (Rainer et al. 2001). After learning, many V4 neurons showed increased activity with the degradation of familiar stimuli, suggesting that they were specifically recruited for difficult discriminations involving the processing of these indeterminate visual inputs. The extraction and amplification of task-relevant elements from visual scenes is a key problem of intermediate-level vision. Our results suggest that V4 neurons play a crucial part in resolving indeterminate visual stimuli and signaling the presence of salient stimulus features.

Figure 5 (continued). (C) The average firing rate during stimulus presentation as a function of coherence is summarized for this neuron for its preferred (+) and nonpreferred (−) familiar (fam) and novel (nov) stimuli. DOI: 10.1371/journal.pbio.0020044.g005

Figure 6. Learning-Dependent Enhancement for Degraded Stimuli (Population Activity). These panels show the activity of neurons that communicated most information about degraded stimuli (i.e., white-shaded symbols in Figure 3B) as a function of degradation for familiar (A and B) and novel (C and D) stimuli. The preferred stimulus was used for each neuron. The left column shows activity in PSTH format and the right column shows the mean stimulus-evoked activity at each coherence level; asterisks denote significant differences between activity at each coherence level and activity to undegraded stimuli at 100% coherence (paired t-tests, p < 0.05). DOI: 10.1371/journal.pbio.0020044.g006
Consistent with this interpretation, studies have found that deactivation or ablation of V4 in monkeys has little impact on basic visual functions, but severely affects shape discrimination (Girard et al. 2002), the identification of images that are occluded or have incomplete contour information (Schiller 1995) or the visual selection in the presence of salient distracters (De Weerd et al. 1999). A recent study found severe deficits after V4 ablation in tasks that required making judgments about oriented line segments embedded in distracter arrays (Merigan 2000), a task that has many similarities to the extraction of task-relevant features from degraded displays in our study. We suggest that lesion-induced deficits are a result of disrupting the operation of V4 neurons which are engaged in selective amplification of task-relevant elements of the visual scene. This idea is consistent with our analysis of eye movements, because monkeys focused more reliably on particular stimulus features for familiar than for novel stimuli. This raises the possibility that allocation of focused attention during task performance under central fixation might have contributed to our results, since attention can greatly enhance the response of V4 neurons to visual stimulation (Moran and Desimone 1985;Connor et al. 1997). Indeed, we suggest that the enhancement in activity and information about degraded familiar stimuli can be conceptualized as a learning-dependent form of attention.
Our findings in V4 are in stark contrast to data obtained in the PF cortex using a similar task and stimuli (Rainer and Miller 2000). In the PF cortex, learning resulted in qualitatively different changes in neural activity. Learning resulted in a robust reduction in average neural activity to undegraded stimuli in PF cortex, whereas we found no general differences in activity in V4. This implies that while PF cortex may play a particularly important role in processing novel stimuli (Ranganath and Rainer 2003), extrastriate visual areas communicate feature-specific information largely in the absence of learning-related changes for easy-to-discriminate stimuli. Learning led to neural response invariance across degradation in the PF cortex: neurons that responded differentially to two stimuli maintained this response difference for degraded stimuli after learning, whereas the difference in neural response collapsed with degradation for novel stimuli. Response invariance across degradation implies that the PF cortex does not differentiate between degraded and undegraded versions of a stimulus. Learning thus builds response invariance in the PF cortex. In V4, we found that learning led to a selective enhancement of activity for degraded stimuli over and above the response for undegraded stimuli. While PF neurons showed invariant activity, V4 neurons showed inverted U-shaped noise tuning and were thus most active during difficult discriminations, showing responses consistent with selective amplification of feature-specific activity. Our results suggest that the enhancement observed in V4 may be instrumental in establishing invariance in PF cortex and that interaction between these areas may be required to maintain it. Further experiments using simultaneous recordings from both regions are needed to directly test such a hypothesis. Several studies have identified learning-dependent increases in BOLD signals in extrastriate and temporal visual areas (Dolan et al. 1997; Grill-Spector et al. 2000). Because BOLD measures aggregate activation across many neurons, these studies cannot dissociate whether learning-dependent increases are due to the building of invariance or to selective enhancement of a subpopulation of neurons. This kind of question is certainly important for characterizing functional properties of brain regions and can be answered definitively only by detailed comparison of neural population activity with simultaneously acquired BOLD signal (Logothetis et al. 1999).
The task dependence of learning effects in V1 (Gilbert 1998; Gilbert et al. 2001) has been taken as evidence that top-down modulation plays an important role in the learning-dependent modifications seen in V1 neurons and that, accordingly, these changes are reflections of plasticity in higher areas of the visual system. Our findings are certainly consistent with this view and suggest that vision is an active process involving recurrent interaction of different brain regions rather than a purely feed-forward process (Thorpe et al. 1996), although our data are consistent with largely feed-forward processing for familiar undegraded stimuli. A possible biophysical mechanism for this interaction was identified by a recent study, which demonstrated that subthreshold activation of the distal apical dendrite of layer V pyramidal neurons can greatly enhance their response to more proximal inputs (Larkum et al. 1999). Because feedback projections from higher cortical areas tend to arrive in upper cortical layers, this represents a mechanism by which feedback could exert control over activity in sensory cortices (Siegel et al. 2000) and thus contribute to the inverted U-shaped responses observed in the present study.
Several computational models have investigated how brain regions might interact during stimulus identification. A key feature of such models is the interaction between bottom-up and top-down processing (Carpenter and Grossberg 1987; Ullman 1995). Consider a neuron in an intermediate visual area such as V4, receiving bottom-up feature-tuned visual input from visual areas lower in the hierarchy and top-down feedback from higher areas representing possible interpretations of the stimulus. It has been hypothesized that a match between top-down and bottom-up inputs could result in elevated activity or nonlinear response enhancement. We have observed such enhancement for familiar but not for novel stimuli, indicating that learning plays a critical role in facilitating interaction between top-down and bottom-up processing streams. Another type of model has suggested that top-down feedback may represent a predictive code, where top-down signals effectively cancel predictable responses in the bottom-up signal (Mumford 1992; Rao and Ballard 1999). In this scheme, activity would be reduced for undegraded stimuli because it can be accurately 'predicted away' by higher level areas. Degraded stimuli containing noise might not be accurately predicted, leaving more remaining activity compared to undegraded images. However, based on this model, one would predict lower activity for familiar than for novel degraded stimuli, because more of the familiar stimuli can be predicted away, which is exactly the opposite of what we have found.

Figure 7. Eye Movement Analysis during Free Viewing. Regions of high fixation probability during free viewing of an example familiar and novel stimulus are shown. Monkeys viewed stimuli at 100% coherence (red-shaded regions) and at 45% coherence (yellow-shaded regions). The green-shaded regions represent regions with high fixation probability at both 45% and 100% coherence. DOI: 10.1371/journal.pbio.0020044.g007
Thus, our results are more consistent with theories that conceptualize top-down feedback as high-level stimulus interpretations rather than as error signals.
Materials and Methods
Behavioral and electrophysiological methods. Two adult male rhesus monkeys (Macaca mulatta) participated in these experiments. All studies were approved by local authorities and were in full compliance with applicable guidelines (EUVD 86/609/EEC) for the care and use of laboratory animals. Stimuli were 10° × 10° in size, 24-bit color depth, and presented at the center of gaze on a gamma-corrected 21-inch monitor with linear display characteristics placed at a distance of 97 cm from the monkeys. Stimuli were generated using Fourier techniques that have been described in detail elsewhere (Rainer et al. 2001). In brief, a large set of natural images was first normalized to have identical Fourier amplitude spectra. Degraded versions of natural images were generated by mixing the Fourier phase spectra of natural images with a random phase spectrum corresponding to visual noise, independently for each of the RGB color channels. A different random phase spectrum was used during each session, and it was mixed with all images used during that session.
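The phase-mixing idea can be illustrated with a small one-dimensional sketch. This is a simplification of the 2-D, per-RGB-channel procedure described above: the `dft`/`idft` helpers and the linear phase interpolation are illustrative assumptions, not the authors' implementation.

```python
import cmath
import math
import random

def dft(x):
    # Naive discrete Fourier transform (adequate for a short demo signal).
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(coeffs):
    # Inverse DFT; the real part is kept (see the note in degrade()).
    n = len(coeffs)
    return [sum(coeffs[k] * cmath.exp(2j * math.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

def degrade(signal, coherence, rng):
    """Keep the amplitude spectrum fixed and mix the phase spectrum with
    random phases. coherence = 1.0 reproduces the signal, 0.0 gives pure
    phase noise. For intermediate coherences the mixed spectrum is not
    conjugate-symmetric, so taking the real part in idft() is only an
    approximation of the full 2-D method."""
    out = []
    for c in dft(signal):
        amp, phase = abs(c), cmath.phase(c)
        noise = rng.uniform(-math.pi, math.pi)
        out.append(cmath.rect(amp, coherence * phase + (1 - coherence) * noise))
    return idft(out)
```

At 100% coherence the original signal is recovered exactly, while the amplitude spectrum (and hence overall spatial frequency content) is preserved at every coherence level, matching the normalization described above.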
Each trial began when the monkey grasped a lever and then acquired fixation on a central fixation point. After 1000 ms, a sample stimulus was presented for 320 ms, which could be any one of eight different images at six coherence levels (0%, 35%, 45%, 55%, 65%, and 100%). After a delay of 1000 ms, a probe stimulus was presented for 600 ms, which could be any one of the eight undegraded images (100% coherence). The monkeys were required to release the lever if the probe matched the sample (i.e., if the sample had been identical to or a degraded version of the probe). In case of a nonmatch, a second brief delay (200 ms) followed the probe, and this delay was always terminated by the presentation of the correct matching stimulus, ensuring that monkeys had to make a behavioral response on every trial. The monkeys were rewarded with apple juice for making correct responses and were rewarded randomly at 0% coherence where the sample contained no task-relevant information. During each session, the monkeys performed the task with a set of four familiar stimuli, with which they had many weeks of practice, as well as with a set of four novel stimuli that they had never seen before. Matches occurred on 50% of trials; the other 50% were non-matches selected randomly from the remaining stimuli.
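The trial structure and match rule just described can be summarized in a small sketch (durations are taken from the text; the function names are hypothetical, and the duration of the final corrective stimulus on non-match trials is not specified in the text):

```python
def is_match(sample_image, probe_image):
    # The sample matches the probe if it is the same base image,
    # whether it was shown degraded or undegraded.
    return sample_image == probe_image

def trial_events(match_trial):
    """Event timeline in ms: fixation (1000), sample (320), delay (1000),
    probe (600). On non-match trials a 200 ms delay is followed by the
    correct matching stimulus, so a response is required on every trial."""
    events = [("fixation", 1000), ("sample", 320),
              ("delay", 1000), ("probe", 600)]
    if not match_trial:
        # Duration of the corrective matching stimulus is not given.
        events += [("delay2", 200), ("match_stimulus", None)]
    return events
```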
Owing to the normalization procedure, familiar and novel stimuli did not differ in terms of low-level characteristics of spatial frequency content and image intensity. Familiar stimuli from four categories were used (faces, flowers, birds, and landscapes), and one of the four novel stimuli also came from each of these categories. Fixation was monitored with a scleral search coil and sampled at 200 Hz (CNC Engineering, Enfield, Connecticut, United States), and the monkeys were required to maintain fixation within a ±1.25° window at all times during the trial. The monkeys completed at least ten trials per condition during each session.
Recordings were made from V4 using standard electrophysiological techniques. We employed a grid system (CRIST, Damascus, Maryland, United States) with eight tungsten microelectrodes (FHC Inc., Bowdoinham, Maine, United States). Preoperative magnetic resonance imaging (MRI) was used to identify the stereotaxic coordinates of V4, which was then covered by a recording chamber. To ensure an unbiased estimate of neural activity, we made no attempt to select neurons based on task selectivity. Instead, we advanced each electrode until the activity of one or more neurons was well isolated and then began collecting data. Comparison of the monkeys' performance during the last six training sessions to performance during recording sessions revealed that performance was unchanged for novel objects (t-test, p = 0.87), but significantly lower during recording sessions for familiar stimuli (t-test, p < 0.01), likely owing to nonspecific factors such as additional wait periods during these sessions.
Eye movement analysis. To determine whether there were any learning-related changes in the monkeys' fixational eye movements, we performed separate behavioral experiments in which we allowed the monkeys to freely view the sample stimulus for a period of about 2 s. As before, we presented four familiar and four novel stimuli during each session, but we used only two coherence levels, 100% and 45%, to allow us to assess whether learning led to changes for both degraded and undegraded stimuli. Monkeys performed around 20 trials for each stimulus at each degradation level during each session, and we report here the results from a total of six sessions. We identified periods of fixation during free viewing as periods of at least 100 ms duration during which eye position did not change by more than 0.3°. We then marked off a region of 0.3° × 0.3° around this position and superimposed these regions for all fixations during all relevant trials. By normalizing the volume under this function to a value of 1, we created an FPM for each stimulus. We then applied a single threshold to the FPM for all stimuli and degradation levels. The threshold γ was chosen to be an order of magnitude greater than the FPM value corresponding to randomly distributed eye position, i.e., γ = 10 × 1/256², and the suprathreshold areas were converted to degrees squared of visual angle (dva²). The thresholded FPMs shown in Figure 7 depict the regions of the FPM that passed threshold for each of the two stimuli during an example session and thus represent the foci of eye position, or regions of high fixation probability, for that stimulus. Because FPMs are all normalized, a small or absent thresholded FPM region indicates that eye position was distributed on the stimulus without a clear focus. Note that for familiar stimuli, thresholded FPMs were highly consistent across sessions, confirming the robustness of this measure.
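A rough sketch of this FPM analysis is given below. The 256 × 256 grid follows from the 1/256² threshold stated above, while the fixation criterion (position stable relative to the first sample of a run) and the helper names are assumptions for illustration.

```python
def fixations(samples, dt_ms=5.0, min_dur_ms=100.0, tol_deg=0.3):
    """Detect fixations: runs of >= min_dur_ms during which eye position
    stays within tol_deg of the run's first sample. `samples` is a list
    of (x, y) positions in degrees at 200 Hz (dt_ms = 5 ms)."""
    fixes, start = [], 0
    for i in range(1, len(samples) + 1):
        moved = i < len(samples) and (
            abs(samples[i][0] - samples[start][0]) > tol_deg
            or abs(samples[i][1] - samples[start][1]) > tol_deg)
        if i == len(samples) or moved:
            if (i - start) * dt_ms >= min_dur_ms:
                fixes.append(samples[start])
            start = i
    return fixes

def fixation_probability_map(fixes, grid=256, extent_deg=10.0, box_deg=0.3):
    """Superimpose a box_deg x box_deg region around each fixation on a
    grid x grid map covering the stimulus, then normalize to sum to 1."""
    fpm = [[0.0] * grid for _ in range(grid)]
    cell = extent_deg / grid
    half = int(round((box_deg / 2) / cell))
    for (x, y) in fixes:
        cx, cy = int(x / cell), int(y / cell)
        for i in range(max(0, cx - half), min(grid, cx + half + 1)):
            for j in range(max(0, cy - half), min(grid, cy + half + 1)):
                fpm[i][j] += 1.0
    total = sum(map(sum, fpm))
    if total:
        fpm = [[v / total for v in row] for row in fpm]
    return fpm

def thresholded_area(fpm, grid=256, extent_deg=10.0):
    # Threshold an order of magnitude above a uniform map (1/grid^2),
    # and report the suprathreshold area in dva^2.
    thr = 10.0 / grid ** 2
    n_cells = sum(v > thr for row in fpm for v in row)
    return n_cells * (extent_deg / grid) ** 2
```

Because each FPM sums to 1, tightly clustered fixations concentrate probability mass and produce a large suprathreshold area, while widely scattered fixations can leave no cell above threshold, matching the interpretation given above.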
Data analysis. Neural activity was assessed during a fixed period of 310 ms duration, beginning 50 ms after the onset of the visual stimulus to take response latency into account. Such a period roughly corresponds to the time between saccades during natural viewing conditions. Out of a total population of 116 neurons, 83 task-related neurons were identified as showing significant differences in activity between any of the eight stimuli at any coherence level using a Bonferroni-corrected t-test evaluated at p < 0.05. Mean firing rates, reported in Table 1, were computed using the preferred stimulus for each neuron.
To assess whether learning had any systematic effect on the amount of stimulus-specific information communicated by V4 neurons, we quantified how much information was contained in the pattern of neural firing rates about novel and familiar stimuli separately. This quantity is given by the mutual information between the set of four familiar or novel stimuli and the set of associated firing rates (Shannon 1948). We thus computed the mutual information (I) between the set of stimuli (s) and the neural responses (r):

I = Σ_s P(s) Σ_r P(r|s) log₂[P(r|s) / P(r)],

where P(s) is the probability of showing stimulus s, P(r|s) is the probability of observing a response r when stimulus s is presented, and P(r) is the probability of observing response r. Because calculation of information requires many trials, we computed information for two conditions: degraded and undegraded stimuli. For degraded stimuli, we pooled the coherence levels from 35% to 65%. For undegraded stimuli, we estimated the mutual information for 100% coherence stimuli during the sample period as well as during the probe period on nonmatch trials (to exclude possible movement-related activity). We report here estimates from the probe period because they are based on more trials, but results were similar for the sample period. This ensured that information measures for degraded and undegraded stimuli were based on a similar number of trials.
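The information estimate above can be sketched as a plug-in estimator over discrete (stimulus, response) pairs. How responses were binned into discrete values is not detailed in this excerpt, so the discretization is left to the caller here.

```python
import math
from collections import Counter

def mutual_information(trials):
    """Plug-in estimate of I = sum_s P(s) sum_r P(r|s) log2[P(r|s)/P(r)]
    from a list of (stimulus, response) pairs; responses must already be
    discrete (e.g. binned firing rates)."""
    n = len(trials)
    count_s = Counter(s for s, _ in trials)
    count_r = Counter(r for _, r in trials)
    count_sr = Counter(trials)
    info = 0.0
    for (s, r), c in count_sr.items():
        p_sr = c / n                  # joint P(s, r) = P(s) * P(r|s)
        p_r_given_s = c / count_s[s]  # conditional P(r | s)
        p_r = count_r[r] / n          # marginal P(r)
        info += p_sr * math.log2(p_r_given_s / p_r)
    return info
```

With four equiprobable stimuli, a neuron whose response perfectly identifies the stimulus yields 2 bits (the maximum for four stimuli), and a response independent of the stimulus yields 0 bits, consistent with the per-neuron values of a few tenths of a bit reported above.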
For each neuron we estimated four different information values, describing how much stimulus-specific information was contained in its firing rate distributions about undegraded and degraded familiar (I_fam,100, I_fam,degrad) and novel (I_nov,100, I_nov,degrad) stimuli. Note that although across all sessions we employed many more novel than familiar stimuli, each individual neuron from which we recorded during a given session 'saw' exactly the same number of four familiar and four novel stimuli. We identified highly selective neurons in each population by selecting the 25% of neurons that communicated most stimulus information about either novel or familiar stimuli (n = 21 out of 83 neurons total); i.e., we chose the top 25% of the distribution max(I_fam, I_nov). We did this because, owing to our unbiased procedure, our sample contains neurons that did not communicate large amounts of information, and we thus wanted to establish that our conclusions also applied to the neurons that communicated most information. These neurons are shown as white filled circles in Figure 3A and 3B, whereas the remaining 75% of neurons (n = 62) are shown as gray filled circles. There was significant overlap (13/21, 62%) between the populations of informative neurons for degraded and undegraded stimuli (χ² test, p < 0.05), indicating that the majority of neurons that were informative for undegraded stimuli were also informative for degraded stimuli. There were no significant differences between informative neurons and the entire population in terms of mean firing rate. Unless otherwise noted, we used paired t-tests to compare information measures obtained for novel and familiar stimuli.
"year": 2004,
"sha1": "fbf8c81b3ffa5500613a5de691a621fb19ff1bb2",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosbiology/article/file?id=10.1371/journal.pbio.0020044&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5f556b4f5ca367510e89d4eb268394c466100051",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
125233597 | pes2o/s2orc | v3-fos-license | Hybrid model for forecasting time series with trend, seasonal and calendar variation patterns
Most monthly time series data in economics and business in Indonesia and other Moslem countries not only contain trend and seasonal patterns but are also affected by two types of calendar variation effects, i.e., the effect of the number of working or trading days, and holiday effects. The purpose of this research is to develop a hybrid model, i.e., a combination of several forecasting models, to predict time series that contain trend, seasonal, and calendar variation patterns. This hybrid model combines classical models (namely time series regression and the ARIMA model) and/or modern methods (an artificial intelligence method, i.e., Artificial Neural Networks). A simulation study is used to show that the proposed procedure for building the hybrid model works well for forecasting time series with trend, seasonal, and calendar variation patterns. Furthermore, the proposed hybrid model is applied to real data, i.e., monthly data on the inflow and outflow of currency at Bank Indonesia. The results show that the hybrid model tends to provide more accurate forecasts than the individual forecasting models. Moreover, this result is in line with the third result of the M3 competition, i.e., the hybrid model on average provides a more accurate forecast than the individual models.
Introduction
Since the publication of Makridakis and Hibon [1] on the "M3 competition: results, conclusions, and implications", particularly its third result stating that "the accuracy when various methods are being combined outperforms, on average, the individual methods being combined and does very well in comparison to other methods", research on hybrid forecasting methods has grown rapidly. Zhang [2] was among the first to propose a hybrid model, combining ARIMA as a linear model and Neural Networks as a nonlinear model for time series forecasting. Zhang's work has influenced many forecasting researchers to develop hybrid methods for solving forecasting problems.
Most monthly time series data in economics and business in Indonesia and other Moslem countries not only contain trend and seasonal patterns but are also affected by calendar variation. There are two types of calendar variation effects, i.e., the effect of the number of working or trading days, and holiday effects. Monthly data on fashion sales in a retail company [3], inflation both nationally and in certain cities [4], and the inflow-outflow of currency at the central bank [5] are examples of economics and business data that exhibit trend, seasonal, and calendar variation patterns.
Several methods are commonly used for forecasting time series with trend, seasonal, and calendar variation patterns. Liu [6] and Cleveland and Devlin [7] were among the first researchers to study the calendar variation effect on time series. More recently, Suhartono et al. [3] proposed two-level ARIMAX and regression models for modeling time series data with calendar variation effects due to Eid ul-Fitr, trend, and seasonal patterns. In general, the models developed for forecasting trend, seasonal, and calendar variation patterns have focused on individual models, such as time series regression models, ARIMAX models, and models based on artificial intelligence methods such as Neural Networks. This paper focuses on developing a hybrid model by combining classical linear time series models (such as time series regression and ARIMAX models) and modern nonlinear models based on artificial intelligence methods, particularly Neural Networks, for forecasting time series with trend, seasonal, and calendar variation patterns. The proposed models are two-level models built from a classical linear time series model in the first level and neural networks in the second level. The models are applied to two datasets, i.e., simulated data and real data on currency inflow and outflow at Bank Indonesia.
Method
In this section, three methods for handling trend, seasonal, and calendar variation patterns are presented, i.e., ARIMAX as a classical linear model, Neural Networks as a modern nonlinear model, and a hybrid model combining the classical linear and modern nonlinear models.
ARIMAX
Cryer and Chan [8] stated that the ARIMAX model is an ARIMA model with additional exogenous variables. The general ARIMAX model for forecasting data with trend, seasonal, and calendar variation patterns is as follows [3]: Y_t = Σ_{s=1}^{S} β_s M_{s,t} + Σ_j γ_j V_{j,t} + [θ_q(B) Θ_Q(B^S) / (φ_p(B) Φ_P(B^S) (1 − B)^d (1 − B^S)^D)] a_t, (1) where M_{s,t}, s = 1, 2, …, S, are dummy variables for the seasonal pattern, V_{j,t} is the dummy variable for the j-th calendar variation effect, S is the seasonal period, B is the backshift operator, and a_t is a sequence of white noise with zero mean and constant variance.
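As a minimal sketch of the deterministic part of such a model (a trend plus seasonal and calendar-variation dummies), the regression component can be fitted by ordinary least squares. All names and the simulated coefficients below are hypothetical, and the ARMA error structure is omitted for brevity:

```python
import numpy as np

def seasonal_calendar_design(n, period=12, cv_idx=()):
    """Build a design matrix with a linear trend, `period` seasonal dummy
    columns, and one calendar-variation dummy marking the (0-based) time
    indices in `cv_idx` where the moving holiday falls.
    All names here are illustrative, not taken from the paper."""
    t = np.arange(1, n + 1, dtype=float)
    season = np.zeros((n, period))
    season[np.arange(n), np.arange(n) % period] = 1.0
    cv = np.zeros((n, 1))
    for i in cv_idx:
        cv[i, 0] = 1.0
    return np.column_stack([t, season, cv])

# Simulate data from the deterministic part and fit the first level by OLS
rng = np.random.default_rng(0)
n = 120
X = seasonal_calendar_design(n, cv_idx=(5, 17, 29))
beta_true = np.concatenate([[0.5], rng.normal(10.0, 2.0, 12), [8.0]])
y = X @ beta_true + rng.normal(0.0, 1.0, n)
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ beta_hat  # passed on to a second-level (nonlinear) model
```

The residuals of this first-level fit are what a second-level nonlinear model would then be trained on.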
Neural Networks
Neural networks (NN) are one of the machine learning techniques developed as a generalization of mathematical models of the biological nervous system. The neural network model most used in time series forecasting is the feed forward neural network (FFNN), or multilayer perceptron (MLP) [9]. The accuracy of a neural network model is determined by three components, i.e., the network architecture, the training method or algorithm, and the activation functions. An FFNN with p inputs and one hidden layer consisting of m neurons can be illustrated as in Figure 1.
The FFNN model in Figure 1 can be written as follows: ŷ_t = g₂( Σ_{j=1}^{m} v_j g₁( Σ_{i=1}^{p} w_{ji} x_{i,t} ) ), (2) where w_{ji} are the weights that connect the input layer to the hidden layer, v_j are the weights that connect the hidden layer to the output layer, and g₁(·) and g₂(·) are the activation functions. Widely used activation functions are the logistic sigmoid and the hyperbolic tangent.
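A direct transcription of this forward pass, assuming the logistic sigmoid as g₁ and the identity as g₂; the bias terms are an addition for completeness, not taken from the text:

```python
import numpy as np

def ffnn_forecast(x, w, b_w, v, b_v):
    """One-hidden-layer FFNN forward pass: logistic sigmoid in the hidden
    layer (g1), identity at the output (g2). Bias terms b_w and b_v are
    an assumption for completeness."""
    h = 1.0 / (1.0 + np.exp(-(w @ x + b_w)))  # hidden activations g1(w.x)
    return float(v @ h + b_v)                  # linear output g2
```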
Hybrid Model
A hybrid model is a combination of linear and nonlinear models that is usually used to increase forecast accuracy. In general, the mathematical form of the combination of linear and nonlinear models is as follows [2]: Y_t = L_t + N_t, (3) where L_t is the linear component and N_t is the nonlinear component of the model. In this paper, an NN is used for modeling the nonlinear component, as proposed by Zhang [2].
Estimation of this hybrid model is done in two steps. The first is modeling the linear component to get the residuals; a nonlinear model is then applied to these residuals to handle the nonlinear component. In this paper, the ARIMAX model is used to handle the linear component. Let e_t be the residual at period t from the first-level linear (ARIMAX) model, i.e.,
e_t = Y_t − L̂_t, (4) where L̂_t is the forecast of the linear model at period t. Then, an NN is applied to model the residuals as follows: e_t = f(e_{t−1}, e_{t−2}, …, e_{t−k}) + ε_t, (5) where f(·) is a nonlinear function represented by the NN model and ε_t is the residual of this NN model. Hence, the forecast value of the hybrid ARIMAX-NN model is Ŷ_t = L̂_t + N̂_t. (6)
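The two-step estimation can be sketched end to end: form the residuals of the fitted linear model, then train a small network f(·) on lagged residuals. The training loop below is a generic full-batch gradient-descent stand-in, not the authors' implementation, and all names are illustrative:

```python
import numpy as np

def lagged(e, lags):
    """Stack lagged residuals e[t-l] (l in `lags`) as inputs X with target
    y = e[t]; a generic helper, not the authors' code."""
    m = max(lags)
    X = np.column_stack([e[m - l:len(e) - l] for l in lags])
    return X, e[m:]

def train_residual_nn(X, y, hidden=3, lr=0.05, epochs=3000, seed=0):
    """Tiny one-hidden-layer network (tanh hidden units, linear output)
    trained by full-batch gradient descent on squared error, a simple
    stand-in for the second-level NN. Returns a predict function."""
    rng = np.random.default_rng(seed)
    w = rng.normal(0.0, 0.5, (hidden, X.shape[1]))
    bw = np.zeros(hidden)
    v = rng.normal(0.0, 0.5, hidden)
    bv = 0.0
    for _ in range(epochs):
        H = np.tanh(X @ w.T + bw)             # hidden activations, (n, hidden)
        pred = H @ v + bv
        g = 2.0 * (pred - y) / len(y)         # d(MSE)/d(pred)
        gH = np.outer(g, v) * (1.0 - H ** 2)  # backprop through tanh
        v -= lr * (H.T @ g)
        bv -= lr * g.sum()
        w -= lr * (gH.T @ X)
        bw -= lr * gH.sum(axis=0)
    return lambda Z: np.tanh(Z @ w.T + bw) @ v + bv
```

The hybrid forecast is then the linear model's forecast plus the trained network's prediction on the current lagged residuals.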
Results
In this section, the results from both the simulation study and the real case studies are presented. The simulation study is conducted to evaluate the performance of each method for forecasting data with trend, seasonal, and calendar variation patterns. Then, the proposed hybrid method is applied to real monthly data on currency outflow and inflow at Bank Indonesia.
The results of simulation study
The simulation data containing trend, seasonal, and calendar variation patterns are generated from a model combining these components with either linear or nonlinear noise. The three forecasting methods are applied to the four scenarios of the simulation study and their forecast accuracy is compared. The comparison of forecast accuracy between the ARIMAX, NN, and hybrid models on the testing data, based on RMSE, is shown in Table 1. The results show that the hybrid model yields more accurate forecasts than ARIMAX and NN in every scenario. Hence, the hybrid model is the best model for forecasting these four scenarios, i.e., data with trend, homogeneous or heterogeneous seasonality, calendar variation, and either linear or nonlinear noise patterns. This result is in line with the third result of the M3 competition, i.e., the hybrid model on average provides a more accurate forecast than the individual models.
The results of real data
In this paper, two real datasets are used as case studies, i.e., monthly currency inflow and outflow at Bank Indonesia in West Java Province for the period 2004:M1 until 2016:M12. The last 12 observations, in year 2016, are used as testing data. Figure 2 illustrates the training data in a time series plot. This plot shows the presence of calendar variation due to the celebration of Eid every year. Moreover, Figure 2a illustrates that the largest inflow in every year occurs one month before or during the month of the Eid holidays, depending on the week in which Eid occurred. Similarly, Figure 2b shows the corresponding pattern for the outflow data. The first-level model is fitted to Y*_t, the data transformed by a Box-Cox transformation with λ = 0.2, t = 1, 2, …, n, where t₁ and t₂ are the second and third trend components respectively, together with dummy variables for the seasonal pattern, dummy variables for the calendar variation effects, and a white noise process.
The results of Neural Networks model
The inputs are determined using the PACF (Partial Autocorrelation Function) of the data, as proposed by Crone and Kourentzes [10]. As an illustration, the best NN model for the outflow data is the NN architecture shown in Figure 3. This graph shows that the best inputs for forecasting the outflow data are lags 1, 2, 3, 5, 12, 13, and 14 of the outflow, with 2 neurons in the hidden layer.
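The PACF-based input selection can be illustrated with a small Yule-Walker estimator; the ±1.96/√n cutoff is the usual large-sample significance band, and both function names are illustrative:

```python
import numpy as np

def pacf(x, nlags):
    """Sample partial autocorrelations via the Yule-Walker equations;
    pacf(x, nlags)[k] is the lag-k partial autocorrelation."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    acf = np.array([x[:n - k] @ x[k:] / (x @ x) for k in range(nlags + 1)])
    out = [1.0]
    for k in range(1, nlags + 1):
        R = np.array([[acf[abs(i - j)] for j in range(k)] for i in range(k)])
        phi = np.linalg.solve(R, acf[1:k + 1])
        out.append(phi[-1])
    return np.array(out)

def significant_lags(x, nlags, z=1.96):
    """Lags whose PACF exceeds the approximate +/- z/sqrt(n) band,
    candidates for NN input lags."""
    band = z / np.sqrt(len(x))
    p = pacf(x, nlags)
    return [k for k in range(1, nlags + 1) if abs(p[k]) > band]
```

For an AR(1) series with coefficient 0.7, lag 1 is flagged and its PACF value estimates the coefficient.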
The results of Hybrid model
The hybrid ARIMAX-NN model combines ARIMAX as the linear model and an NN as the nonlinear model. First, the ARIMAX model is fitted to the original data, and the residuals are then fitted by an NN. Finally, the forecast values are calculated by summing the forecast values of the ARIMAX and NN models.
As an illustration, the inputs of the NN in this hybrid model for the outflow data are the lagged residuals of the ARIMAX model, i.e., lags 1, 2, 12, 13, 14, 25, and 26. Trying 1 to 15 neurons in the hidden layer, the results show that 3 neurons yield the best model for forecasting the outflow data. Thus, an NN with 7 input lags and 3 neurons in the hidden layer is the best NN in this hybrid model. Finally, the forecast values of the hybrid model are calculated by summing the forecasts of the linear model in equation (8), L̂_t, and the nonlinear model in equation (9), N̂_t, i.e., Ŷ_t = L̂_t + N̂_t.
The evaluation of forecast accuracy
The root mean squared error (RMSE) on the testing data is used as the evaluation index for the performance of the three forecasting models applied to the inflow and outflow data. The RMSE values obtained on the testing data are listed in Table 2. Moreover, the forecast and actual values of each method on the testing data are illustrated in Figure 4. The results in Table 2 and Figure 4 show that the hybrid model generates more accurate forecasts than ARIMAX and NN for the outflow data. This supports the third result of the M3 competition, i.e., the hybrid model on average provides a more accurate forecast than the individual models. In contrast, ARIMAX yields more accurate forecasts than NN and the hybrid model for the inflow data. This result is in line with the first result of the M3 competition, i.e., complex methods do not necessarily yield better forecasts than simpler ones.
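The evaluation index itself is straightforward; a minimal version (names illustrative):

```python
import numpy as np

def rmse(actual, forecast):
    """Root mean squared error over the testing data."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return float(np.sqrt(np.mean((actual - forecast) ** 2)))
```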
Conclusion and future work
In general, forecasting time series data that have trend, seasonal, and calendar variation patterns needs special treatment. This paper showed that ARIMAX, NN, and hybrid models can be used for forecasting these kinds of time series. Moreover, the most important part of applying these models is determining the appropriate inputs for each model, particularly the inputs for tackling the calendar variation effects. The results of the simulation study showed that the hybrid model yields better forecasts than the ARIMAX and NN models. However, the results on real data showed that the hybrid model yielded better forecasts in only one of the two case studies, i.e., for forecasting the outflow data. The results for the simulation and outflow data are in line with the conclusions of many previous studies, particularly the third result of the M3 competition [1] and Prayoga et al. [5], which stated that the hybrid model on average gives better forecasts than individual models. In addition, the results of the second case study, i.e., the inflow data, showed that a simpler model yielded better forecasts than the complex models. This result is in line with the first result of the M3 competition [1] and Suhartono et al. [3], which concluded that complex models do not necessarily give better forecasts than simpler ones. Further research is needed to validate these results and to compare the forecast accuracy with other, more intelligent methodologies.
"year": 2017,
"sha1": "999ed92222f433f55ccd601acb8c87a8e02c54c4",
"oa_license": "CCBY",
"oa_url": "https://iopscience.iop.org/article/10.1088/1742-6596/890/1/012160/pdf",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "58be60b32652af43fde0e8699081997a58e12c2b",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Physics",
"Mathematics"
]
} |
219061452 | pes2o/s2orc | v3-fos-license | EFFICACY OF INSECTICIDES AND BOTANICALS AGAINST BREVICORYNE BRASSICAE POPULATION ON CABBAGE
The investigation on the "Efficacy of insecticides and botanicals against Brevicoryne brassicae population on cabbage" was conducted in the field of the Vegetable Research and Demonstration Block, College of Horticulture, VCSG UUHF, Bharsar during 2018. Among the insecticides, Imidacloprid 17.8% SL (0.0356%), Thiamethoxam 25% WG (0.0025%), Spinosad 45% SC (0.0144%) and Acetamiprid 20% SP (0.002%) were found effective against B. brassicae, whereas Novaluron 10% EC (0.0075%) was found least effective against the pest (i.e., 57.15 and 19.12 per cent change over control at the end of the 1st and 2nd spray, respectively). Among the botanicals, Neem leaf extract + Cow urine (5%) was observed to be highly effective (i.e., 32.29 and 8.59 per cent change over control at the end of the 1st and 2nd spray, respectively) and Lantana leaf extract + Cow urine (5%) was observed to be least effective against B. brassicae (i.e., 22.08 and 6.02 per cent change over control at 15 days after each spray, respectively).
Cabbage (Brassica oleracea var. capitata) is the most popular vegetable grown throughout the world; it belongs to the family Cruciferae and the genus Brassica. Cabbages are a highly nutritious food source and contain a high amount of minerals and vitamins such as A, B1, B2 and C (Hasan and Solaiman, 2012). Cabbage covers about 4.3% of the total area under vegetables in India (Vanitha et al., 2013), and in cabbage production India is second only to China. In India, a total of 37 insect pests have been reported to feed on cabbage (Lal, 1975). The extent of damage due to these pests in India is known to range from 7 to 90 per cent, with a consequent reduction in yield of 20 to 80 per cent on cabbage (Prasad, 1963). Among these pests, the cabbage aphid (Brevicoryne brassicae) is one of the most important. Aphids feed by sucking sap from their host plants, and continued feeding by aphids causes yellowing, wilting and stunting of plants (Opfer and Mcgrath, 2013). Aphid-infested plants show slow growth, which results in 35-75% yield losses (Khan et al., 2015).
To reduce insect pest infestation, various insecticides are applied, and those insecticides can lead to problems of insect resistance, environmental and food contamination, and reduced populations of natural enemies, which may result in secondary pest outbreaks or pest resurgence (Garratt and Kennedy, 2006). Therefore, the use of alternatives including botanicals, biopesticides and new insecticides is essential to reduce pest resurgence. Among the new insecticides, Thiamethoxam, Imidacloprid and Acetamiprid are important neonicotinoid insecticides (Maienfisch et al., 1999). Iwasa et al. (2004) reported that Imidacloprid is the fastest acting neonicotinoid insecticide for controlling sucking insects. On the other hand, several workers have reported on the use of plant extracts and cow urine for the control of insect pests of field crops (Dubey et al., 2004 and Sharma et al., 2009). Among botanical plant extracts, Azadirachta indica, Lantana camara and Eupatorium spp. have been found promising for managing the cabbage aphid (Sood et al., 2000). Cow urine in combination with neem extract has been found highly effective against aphids (Gupta, 2005).
Methodology
The experiment was conducted in the Vegetable Research and Demonstration Block, Department of Vegetable Science, College of Horticulture, VCSG Uttarakhand University of Horticulture and Forestry, Bharsar, Pauri Garhwal, with ten treatments and three replications in an RCBD (Randomized Complete Block Design).
The botanicals and insecticides of the required concentrations were prepared in water just before spraying in the evening. All botanical leaf extracts were prepared by soaking 100 g of chopped leaves in 1 litre of cow urine for 15 days before application, and the insecticidal solution was prepared using the following formula: V = (C × A) / % a.i., where V = volume of the insecticide, C = concentration required, A = amount of spray solution needed, and % a.i. = per cent active ingredient of the insecticide. The spraying was done using a hand sprayer fitted with a hollow cone nozzle. The first spray was given on 3 June 2018 and the second spray on 19 June 2018. Numbers of insects in the cabbage field were recorded 1 day before each spray and 3, 7, 11 and 15 days after each spray. Per cent reduction of insects over control (PROC) was calculated according to the following formula:
PROC = (1 − (Ta × Cb) / (Tb × Ca)) × 100
where Ta = population of insects after treatment application, Tb = population of insects before treatment application, Ca = population of insects in control after treatment application, and Cb = population of insects in control before treatment application.
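Under the reading of the variables defined above, the corrected per cent reduction over control takes the Henderson-Tilton form and can be implemented in one line (function name illustrative):

```python
def percent_reduction_over_control(Ta, Tb, Ca, Cb):
    """Corrected per cent reduction over control (Henderson-Tilton form):
    Ta/Tb = treated after/before, Ca/Cb = control after/before."""
    return (1.0 - (Ta * Cb) / (Tb * Ca)) * 100.0
```

If the treated plot declines at the same rate as the control, the result is 0; complete elimination in the treated plot gives 100.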
Treatments Details:
Nine treatments along with a control were evaluated against the cabbage aphid (B. brassicae) on cabbage. The details of the treatments are given in Table 1.
Impact of various treatments on cabbage aphid after first spray
The data obtained after first spray are presented in Table No 2 and Table No
Impact of various treatments on cabbage aphid after second spray
The data of second spray are presented in Table No 4 and Table No
DAS = days after spray; values in parentheses are angular transformed values
After both sprays, the aphid population was recorded to be minimum in plots treated with Imidacloprid 17.8% SL, followed by Thiamethoxam 25% WG, Acetamiprid 20% SP and Spinosad 45% SC. The present findings are in agreement with the observations of Ghosal et al. (2013), who reported Imidacloprid to be the most effective insecticidal treatment against aphids, with 84.54 per cent protection over control, followed by Thiamethoxam and Acetamiprid. The present investigation of neonicotinoid molecules against aphids is also similar to the findings of Mishra (2002) and Jafarpour et al. (2011), who reported that neonicotinoids have good ingestion toxicity in comparison with others. Jadhav et al. (2016) also observed that Imidacloprid was the best insecticide compared to Thiamethoxam and Acetamiprid. Varghese and Mathew (2012) concluded that Imidacloprid, Thiamethoxam and Acetamiprid were highly effective against aphids when compared with Spinosad.
Among the botanicals, after both sprays, Neem leaf extract + Cow urine caused the maximum per cent reduction in aphid population, followed by Parthenium leaf extract + Cow urine, Eupatorium leaf | 2020-05-07T09:11:59.396Z | 2020-05-01T00:00:00.000 | {
"year": 2020,
"sha1": "01d9d31717a8b22447b44ed9133d2410dd8f63a7",
"oa_license": null,
"oa_url": "https://doi.org/10.46344/jbino.2020.v09i03.12",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "2681446b26a244b6df9d921d7e1e4951fe34b49b",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
247368537 | pes2o/s2orc | v3-fos-license | IIoT platforms’ architectural features – a taxonomy and five prevalent archetypes
In the industrial Internet of Things (IIoT), digital platforms have recently received significant attention. Although IIoT platforms revolve around similar business objectives, they address various use cases and, thus, differ considerably in their architectural setup. While research has already investigated IIoT platforms from a business or design perspective, little is known about their underlying technology stack and its implications. To unveil different IIoT platform configurations and better understand their architectural design, we systematically develop and validate a taxonomy of IIoT platforms’ architectural features based on related literature, real-world cases, and expert interviews. On this foundation, we identify and discuss five IIoT platform archetypes (Allrounder, Device Controller, Data Hub, Service Enabler, Connector). Our findings contribute to the descriptive knowledge in this ambiguous research field while also elucidating the interplay of IIoT platforms’ architectural setup and their purpose. From a managerial viewpoint, our results may guide practitioners in comparing and selecting a suitable IIoT platform.
Introduction
In recent years, a large number of digital platforms have emerged across industries. Digital platforms and their surrounding ecosystems form complex socio-technical systems that build on developing and managing an appropriate IT architecture and governance regime (Bazarhanova et al., 2020; Hein et al., 2020; Tiwana, 2018). In the uprising industrial Internet of Things (IIoT), the concept of digital platforms has received significant attention, leading to the emergence of more than 620 IoT and IIoT platforms to date (Lueth, 2019) and building a market that is growing by more than 26% a year until 2024 (Industry ARC 2019). Such IIoT platforms provide a digital infrastructure to connect industrial devices into digital networks to collect and process the generated data and consequently facilitate data-driven services (Pauli et al., 2020; Petrik & Herzwurm, 2020). Thus, Pauli et al. (2021) define IIoT platforms as middleware systems to support and integrate heterogeneous hardware, on top of which third parties can develop complementary applications. Such applications cover manifold solutions, such as production optimization through asset monitoring and advising, machine health monitoring through anomaly detection, or customer transparency through better traceability.
Addressing a variety of use cases, IIoT platforms differ considerably in terms of their underlying technology stack and architectural setup (Guth et al., 2018; Mineraud et al., 2016). This is partly due to the technical complexity in business-to-business (B2B) environments and the lack of established standards in the IIoT, leading to rather siloed development (Khan et al., 2020). Consequently, the IIoT platform landscape, while revolving around similar business objectives, is scattered. This creates several issues: first, it creates issues for companies that must understand the IIoT platform market to select a vendor that successfully integrates into their existing IT infrastructure. Companies lack a comprehensive scale to organize and guide decisions in the scattered IIoT platform landscape (Hoffmann et al. 2019). Second, it creates issues for complementors that need to understand the internal architecture of platforms when they are synthesizing their code (i.e., applications) with platform resources to create new offerings that fare competitively (Tiwana, 2018). Last, it creates issues for researchers and strategists that seek to understand the interplay of IIoT platforms' architecture and business models, which are strongly interwoven in the context of digital business, in enabling a competitive advantage (cf. Cennamo, 2021; Zhu & Iansiti, 2012). Research has already put effort into investigating IIoT platforms, focusing on their business model (Hodapp et al., 2019; Petrik & Herzwurm, 2018), analytics framework (Moura et al., 2018), or design criteria (Werner & Petrik, 2019). However, we still miss a unified classification of IIoT platforms' fundamental building blocks, which we subsume as architectural design options, to enable a transparent evaluation and comparison of existing IIoT platforms. Thus, we ask: How can IIoT platforms be classified by their architectural features?
To answer this research question, we develop a taxonomy of IIoT platforms' architectural features following the guidelines of Nickerson et al. (2013). Taxonomies are well suited to lay the groundwork for emergent research fields and serve as a first step toward systematizing fundamental design decisions (Williams et al., 2008). For taxonomy development, we use both the literature and empirical knowledge from 22 IIoT platforms as well as seven semi-structured expert interviews. For taxonomy evaluation, we classify 78 IIoT platforms and, on this basis, identify and conceptualize five archetypes of IIoT platforms.
Our taxonomy contributes to the descriptive knowledge in this ambiguous research field by explaining the architectural dimensions and prevalent manifestations of digital platforms in the IIoT. Further, we contribute to the prescriptive knowledge by elucidating the interplay between IIoT platforms' architectural setup and their purpose. Lastly, our results provide a comprehensive overview of architectural dimensions that may guide managers in comparing and selecting a suitable IIoT platform as well as developers in understanding a platform's architecture when developing applications.
Digital platforms
Originally viewed as multi-sided markets that enable interactions between different actors, the digital platform concept increasingly captured innovation activities (Gawer & Cusumano, 2014). Today, digital platforms are a pivotal element for technological innovation, as the examples of Apple, Facebook, or Microsoft show. Capturing this essence, Tiwana et al. (2010, p. 675) see digital platforms as the "extensible codebase of a software-based system that provides core functionality shared by the modules that interoperate with it and the interfaces through which they interoperate." Adding to this view, the network of third-party providers (i.e., complementors) that builds around a digital platform is often referred to as a digital platform ecosystem (Hein et al., 2020; Reuver et al., 2018). We adopt this view and see a digital platform as an extensible technological foundation on top of which third parties can build platform-augmenting applications. Within this view, architecture plays a significant role in the overall design of a digital platform (Spagnoletti et al., 2015; Tiwana, 2018). Generally, the architecture of a system refers to the structure of interactions among the subsystems that constitute it (Tiwana, 2018; Ulrich, 1995). The architecture of a digital platform serves thereby as the "conceptual blueprint that describes how the ecosystem is partitioned into a relatively stable platform and a complementary set of modules that are encouraged to vary, and the design rules binding on both" (Tiwana et al., 2010, p. 677). Digital platforms' varying architecture makes it possible to differentiate between them and determines their respective evolutionary path (Agarwal & Tiwana, 2015; Cennamo, 2018).
Digital platforms bring together three important stakeholders: the platform owner, complementors, and users. The platform owner runs and governs the digital platform. Complementors build on the digital platform and broaden its functionality with applications. The users consume the functionalities provided by the digital platform (van Alstyne et al., 2016).
Industrial Internet of Things
The Internet of Things (IoT) integrates technology-enabled physical objects into a global cyber-physical network (Oberländer et al., 2018). It uses recent advances in digital technology such as ubiquitous communication, pervasive computing, or ambient intelligence to connect these objects based on standardized communication protocols. With the help of these technologies, everyday objects turn into so-called smart things (Püschel et al., 2020). Prior research examines the IoT in terms of its architecture, for example, as a layered reference model. This often results in a multi-layer description of services offered at different architectural levels, depending on the business needs, technical requirements, and technologies (Fleisch et al., 2015; Porter & Heppelmann, 2015; Sisinni et al., 2018; Yoo et al., 2010). A common three-layer IoT architecture differentiates the perception, network, and application levels (Jing et al., 2014). The perception level controls objects and collects data, the network level enables information exchange of the data, and the application level supports business services by analyzing the data. The application of the IoT concept in an industrial context received particular interest in recent years as it proved to be a prime example of its applicability and its underlying economic potential (Papert & Pflaum, 2017; Wortmann & Flüchter, 2015). Current trends in the manufacturing industry point towards combining traditional production, automation, and computational intelligence into a complex system known as the industrial IoT. The literature describes the IIoT concept with different names such as Industry 4.0, Industrial Internet, or Internet of Production (Boyes et al., 2018; Wortmann & Flüchter, 2015). The terms IoT and IIoT are occasionally also used synonymously (Hanelt et al., 2020; Hodapp & Gobrecht, 2019; Pauli et al., 2021). Sisinni et al. (2018, p. 4725) describe it as being about "connecting all the industrial assets, including machines and control systems, with the information systems and the business processes." Thus, the IIoT leverages the mechanical engineering industry into the digital era (Kiel et al., 2017). Through the extraction and utilization of machine data, it is a key enabler for the creation of digital networks in manufacturing processes and ultimately lays the foundation for a smart production system (Pauli et al., 2020; Rehman et al., 2019).
Industrial Internet of Things platforms
IIoT platforms function as a middleware that orchestrates the heterogeneous device landscape in the IIoT and provides a technological infrastructure fostering connectivity and interoperability between smart machines, control systems, and enterprise software systems (Petrik & Herzwurm, 2020).On top of the technological infrastructure, applications provide data-driven services to platform users (Hodapp & Gobrecht, 2019;Pauli et al., 2021).These applications consequently extend the machines' functionality by collecting and processing the generated data, thus generating additional value.The recent literature on digital platforms has largely abstracted on the technological characteristics of different platforms, treating all technological platforms as a rather homogenous group (Reuver et al., 2018).However, IIoT platforms significantly differ from other kinds of platforms, in particular those studied by the IS community so far.For one thing, IIoT platforms operate in a B2B environment.This entails higher technological complexity due to heterogenous industrial assets and devices, IT infrastructures, and processes compared to business-to-consumer (B2C) markets in which most digital platforms operate (Hein et al., 2019;Pauli et al., 2020).Further, this implies additional actors involved (e.g., developer, machine manufacturer, or sensor manufacturer) for integrating third-party solutions into users' IT and business processes resulting in higher organizational complexity.For another thing, applications developed on IIoT platforms are often fragmented, addressing only one or few customers, or even being developed by customers for their own use (Pauli et al., 2020).This contradicts digital platforms' underlying assumption of efficiently serving a heterogenous market and in this way attracting complementors and ignite network effects, respectively.Even though IIoT platforms operate in similar context, they specialize in different service offerings (e.g., equipping devices with 
digital technology and connecting them to the Internet, managing the machinery for more flexible production, or deriving new insights through analyzing data). To realize these services, they require different architectural features. As a result, the IIoT platform landscape is scattered among different manifestations, making it challenging to compare IIoT platforms with each other and to understand the value they can create. Research has only recently begun investigating IIoT platforms, covering different aspects such as their business models (Endres et al., 2019; Hodapp et al., 2019), frameworks for classification (Moura et al., 2018), design characteristics (Abendroth et al., 2021), and design criteria (Werner & Petrik, 2019). Regarding the business model, Hodapp et al. (2019) focused on the constituent elements of a business model and developed a taxonomy to understand the IoT platform market. Similarly, Endres et al. (2019) explored IIoT business models to identify their IIoT-specific components and overall business model archetypes. One of the archetypes they identified is the "IIoT platform business model," which is characterized by data-driven analyses through platforms and the applications on them. Regarding IIoT frameworks, Moura et al. (2018) proposed a framework that is divided into layers responsible for describing and accommodating key elements for IIoT implementation in an organization. Lastly, researchers have investigated how IIoT platforms can be set up by elucidating their design criteria (Werner & Petrik, 2019) or the concept of boundary resources (Petrik & Herzwurm, 2019, 2020). However, we still lack a unified classification of architectural features and an understanding of how they interact with each other to enable a transparent evaluation and comparison of existing IIoT platforms. We deem this a practical approach to uncovering underlying differences among IIoT platforms that research has thus far not demonstrated.
Taxonomy development
According to Glass and Vessey (1995, p. 66), taxonomy development refers to a method of "assigning members to categories in a complete and unambiguous way." Taxonomies are schemes with which specific bodies of knowledge can be structured, analyzed, and organized, thus fostering the understanding of a phenomenon (Glass & Vessey, 1995). Embedded in the field of design science research, taxonomies can contain both descriptive and prescriptive knowledge and represent artifacts in the form of models (Nickerson et al., 2013). In information systems research, taxonomy development is well received and has already been successfully applied in different contexts when exploring emerging research fields such as predictive maintenance business models (Passlick et al., 2021), smart things (Püschel et al., 2020), or agile IT setups (Jöhnk et al., 2017). In line with this exemplary work, we follow the iterative taxonomy development method proposed by Nickerson et al. (2013).
This method integrates conceptual and empirical perspectives into one comprehensive method and thus fosters the iterative usage of both paradigms. Figure 1 shows the seven-step structure: (1) determination of a meta-characteristic that reflects the purpose of the taxonomy and its target group, (2) determination of ending conditions, (3) choice of either an empirical-to-conceptual (E2C) or conceptual-to-empirical (C2E) approach, (4) conceptualization of characteristics and dimensions, (5) examination of objects, (6) initial design or revision of the taxonomy, and (7) testing of ending conditions. The taxonomy's purpose is reflected in its meta-characteristic, which the researcher defines, together with the ending conditions, at the beginning of the development process. Several iterations of taxonomy design and revision, choosing either a C2E or an E2C approach, follow. After each iteration, the researcher tests the resulting taxonomy against the ending conditions until they are met.
For step (1), we define our meta-characteristic as follows: architectural features of IIoT platforms. Thus, our meta-characteristic reflects that we seek to guide both further research and practitioners. For step (2), we determine objective as well as subjective ending conditions of the taxonomy development process (Nickerson et al., 2013). As for the formal correctness of the taxonomy development, we test against the following objective criteria after each iteration: (I) every dimension is unique, (II) every characteristic is unique within its dimension, and (III) at least one object is classified under each characteristic of every dimension. Following Nickerson et al. (2013), we define as our subjective ending condition that taxonomy development is finished once the evaluation finds the taxonomy to be concise, robust, comprehensive, extensible, and explanatory. Besides, we follow Jöhnk et al. (2017) and Püschel et al. (2020) in combining mutually exclusive (ME) and non-exclusive (NE) dimensions to allow for a parsimonious taxonomy.
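The three objective criteria can be checked mechanically after each iteration. The following sketch is our own illustration; the data structures (a list of dimension/characteristic pairs and one classification dict per object) are assumptions, not part of Nickerson et al.'s method:

```python
def meets_objective_conditions(taxonomy, classified_objects):
    """Test the three objective ending conditions (I)-(III).

    taxonomy           -- list of (dimension, [characteristics]) pairs
    classified_objects -- one {dimension: set_of_characteristics} per object
    """
    dimensions = [dim for dim, _ in taxonomy]
    if len(dimensions) != len(set(dimensions)):       # (I) every dimension is unique
        return False
    for dim, characteristics in taxonomy:
        if len(characteristics) != len(set(characteristics)):  # (II) unique within dimension
            return False
        for char in characteristics:                  # (III) each characteristic classifies
            if not any(char in obj.get(dim, set())    #       at least one object
                       for obj in classified_objects):
                return False
    return True

# Example: condition (III) fails because "on-premise" classifies no object
taxonomy = [("Platform hosting", ["on-premise", "public cloud"])]
objects = [{"Platform hosting": {"public cloud"}}]
print(meets_objective_conditions(taxonomy, objects))  # False
```

Adding one object classified as "on-premise" would make all three conditions hold.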
For steps (3) to (7), we alternately conducted two C2E and two E2C iterations. In the first iteration (C2E), we searched relevant literature following the guidelines of Webster and Watson (2002) and vom Brocke et al. (2015). We deliberately decided to start with a C2E iteration to account for the growing amount of literature as a means to initially structure the field. Thus, we considered research on IoT, IIoT, and digital platforms to gain a comprehensive perspective on the emerging phenomenon of IIoT platforms and to populate initial dimensions and characteristics in our taxonomy. We searched the scientific databases ACM Digital Library, AIS Electronic Library, IEEE Xplore Digital Library, and SpringerLink with the following search string: TITLE ("IoT platform*" OR "IIoT platform*" OR "internet of things platform*" OR "industrial internet of things platform*" OR "digital platform*") AND ABSTRACT ("architecture" OR "taxonomy" OR "classification"). This search yielded 281 publications, which we subsequently screened for information on architectural features of digital or (I)IoT platforms. Screening the results' titles, abstracts, and, where necessary, full texts, we reduced the results to 91 remaining relevant publications. We used this knowledge base and additional literature from a forward and backward search to extract and consolidate architectural features in a table. Drawing on this list in joint discussions, we developed the first increment of our taxonomy, consisting of 19 dimensions and related characteristics organized in four overarching layers. Considering that the literature only rarely focuses on the IIoT's specifics compared to the IoT and that most architectural features in the literature revolve around security aspects, we decided to continue the taxonomy development process.
In the second iteration (E2C), we sought to back the preliminary insights with empirical evidence. Thus, we examined 22 IIoT platforms for their architectural features. We selected platforms identified through market research (e.g., from Gartner's Magic Quadrant and practitioner reports) and those mentioned in literature from the first iteration. For instance, Guth et al. (2018) describe architectural features for AWS IoT and Microsoft Azure IoT Hub, among others. Thus, the descriptions and analyses from previous work helped us to confront our emerging taxonomy with existing renowned IIoT platforms. We obtained relevant information for our taxonomy development from platform providers' technical documentation, websites, whitepapers, and relevant press releases. These insights helped us to identify new architectural dimensions and characteristics as well as to substantiate and improve the existing ones. By the end of the second iteration, our taxonomy consisted of 21 dimensions organized in four layers.
In the third iteration (C2E), we returned to the literature to ground the new observations in prior work. Thereby, we strengthened and verified the findings from the second iteration. Specifically, we searched for theoretical concepts describing our observations of IIoT platforms' architectural features and dropped or consolidated dimensions and characteristics in line with our meta-characteristic. For instance, while we found information on IIoT platforms' governance in the second iteration, it does not describe their architectural features in the narrower sense, which is why we removed it from the taxonomy. The third iteration resulted in a taxonomy of 13 dimensions and related characteristics organized in four overarching layers.
In the fourth iteration (E2C), we collected and analyzed additional primary data from seven expert interviews (see Table 1). We deemed this iteration necessary to account for IIoT platforms' novelty and peculiarities in developing and evaluating our taxonomy. Our interviews were semi-structured, following an interview guide to ensure coverage and comparability between the interviews (Myers & Newman, 2007). Each interview consisted of four building blocks: introduction (participants, research project, taxonomy research, and clarification of focal terms and concepts), discussing the layers and dimensions of the taxonomy, discussing the characteristics for each dimension in the taxonomy, and overall feedback. We selected interviewees from our industry network (expert sampling) according to their knowledge in the field of IIoT and/or IIoT platforms.
Our experts contribute perspectives from different backgrounds and industries to offset potential biases. The interviews lasted between 55 and 78 minutes, and at least two of the authors were present in each interview. We recorded all interviews with the experts' consent and analyzed them systematically. Thus, all authors engaged in discussing the experts' feedback and further developing the taxonomy. We incorporated the proposed changes between interviews to discuss the improved taxonomy iteratively.
Cluster analysis and archetype identification
Based on our taxonomy, we seek to identify, conceptualize, and elucidate typical architectural setups of IIoT platforms (i.e., typical combinations of architectural features). This serves to better understand the current IIoT platform landscape and to guide scholars as well as practitioners in this field. We identified distinct IIoT platform archetypes using cluster analysis. This statistical technique groups objects with similar characteristics and aims for a high degree of homogeneity within each cluster group and a high degree of heterogeneity between cluster groups (Hair et al., 2010). For this step, we collected data on 78 IIoT platforms that provided real-world cases for the cluster analysis. We used the publicly accessible IIoT supplier database of the market research company IoT ONE to obtain a comprehensive list of relevant IIoT platforms (IoT ONE, 2021). Following a structured selection process, this platform sampling approach helped us to obtain a larger number of IIoT platforms for classification compared to the taxonomy development phase. At the same time, this approach was detached from any focus and platform selection choices in previous work, increasing the transparency and comprehensibility of our cluster analysis.
The IoT ONE database contained information on 2,873 companies at the time of data collection. We narrowed down the search results using the database's filter options to select "platform-as-a-service" entries, resulting in a list of 560 elements. Subsequently, we filtered the list by the five available revenue categories (>$10bn, $1bn-$10bn, $100m-$1bn, $10m-$100m, <$10m) to cover IIoT platforms of different sizes and popularity levels and with different value propositions. We classified every IIoT platform of revenue categories 1 to 3 (see Table 2) that provided sufficient publicly available information (i.e., technical documentation, whitepapers, press releases, website descriptions, etc.). For revenue categories 4 and 5, we limited the number of cases that we classified, since we aimed at a balanced sample with respect to the different revenue categories. Thus, we deliberately emphasized comprehensive sampling across revenue categories (i.e., potentially resulting in greater variety regarding the archetypes) over a 'representative' sampling that mirrors the relative number of IIoT platforms per revenue category (i.e., potentially resulting in archetype dominance that may not reflect the IIoT platform market). The selected IIoT platforms are listed in Table 5 in the Appendix.
The authors jointly engaged in IIoT platform classification, frequently discussing ambiguities within the research team to allow for alignment in applying the taxonomy. We chose agglomerative hierarchical clustering with the Ward algorithm and the Manhattan distance function as our clustering approach for two main reasons. For one thing, the Ward algorithm's property of minimizing intra-cluster variation and maximizing inter-cluster variation (Strauss & Maltitz, 2017) helps us to delineate clearly distinguishable archetypes, especially considering the plethora of potential platform manifestations arising from our taxonomy (see Püschel et al. (2020) for an explanation of theoretical manifestations from taxonomies). For another thing, this combination of cluster algorithm and distance function is an established approach in extant research that has proven to be a valid and effective means to cluster taxonomy classifications (cf. Hodapp et al., 2019; Püschel et al., 2016). We coded every characteristic as binary (1: the IIoT platform offers this architectural feature; 0: the IIoT platform does not offer this architectural feature; see Table 6 in the Appendix for a detailed example of the coding process) and normalized each dimension's distance to [0;1] to avoid overrating dimensions with more characteristics (Püschel et al., 2016). Thereby, we accounted for both the dimensions' exclusivity (mutually exclusive or non-exclusive characteristics) and their number of characteristics. Agglomerative hierarchical clustering shows solutions for all possible numbers of clusters. Thus, we used triangulation to choose the optimal number of clusters based on different statistical measures, visual graph interpretation, as well as interpretability and meaningfulness based on our real-world observations (Jick, 1979). Regarding statistical measures, no cluster solution was dominant, but we were able to narrow the sensible number of clusters to between three and nine (e.g., the C-Index proposing a four-cluster solution). As established visual graph interpretations, we used the dendrogram and the average silhouette width to better understand the agglomeration in our hierarchical clustering. This step showed that the solutions for four, five, six, and seven clusters performed very similarly statistically. Thus, we engaged in joint discussions with all authors to review all four of these cluster solutions for their cluster composition and meaningful interpretation. Considering earlier work on IIoT platforms' architectural features [blinded for review], we finally decided on the five-cluster solution (see Figure 3 in the Appendix for the dendrogram). Subsequently, we conceptualized the archetypes' specifics and implications.
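The clustering setup described above (binary codings, per-dimension normalization of the Manhattan distance, and Ward agglomeration, here expressed via the standard Lance-Williams update) can be sketched in a few lines. This is our reconstruction with illustrative platform profiles, not the authors' actual implementation:

```python
from itertools import combinations

def normalized_manhattan(a, b, dim_sizes):
    """Manhattan distance over binary codings, with each dimension's block
    of characteristics scaled to [0, 1] by its number of characteristics."""
    dist, pos = 0.0, 0
    for size in dim_sizes:
        dist += sum(abs(a[pos + k] - b[pos + k]) for k in range(size)) / size
        pos += size
    return dist

def ward_clusters(profiles, dim_sizes, n_clusters):
    """Agglomerative clustering with the Ward criterion, applied to the
    normalized Manhattan distances via Lance-Williams updates."""
    members = {i: {i} for i in range(len(profiles))}   # cluster id -> object indices
    d2 = {(i, j): normalized_manhattan(profiles[i], profiles[j], dim_sizes) ** 2
          for i, j in combinations(range(len(profiles)), 2)}
    next_id = len(profiles)
    while len(members) > n_clusters:
        a, b = min(d2, key=d2.get)                     # merge the closest pair
        na, nb, dab = len(members[a]), len(members[b]), d2[(a, b)]
        merged = members.pop(a) | members.pop(b)
        for k in members:                              # Lance-Williams (Ward) update
            nk = len(members[k])
            dka = d2[tuple(sorted((k, a)))]
            dkb = d2[tuple(sorted((k, b)))]
            d2[(k, next_id)] = ((na + nk) * dka + (nb + nk) * dkb
                                - nk * dab) / (na + nb + nk)
        d2 = {p: v for p, v in d2.items() if a not in p and b not in p}
        members[next_id] = merged
        next_id += 1
    return sorted(sorted(c) for c in members.values())

# Two dimensions with 2 and 3 binary characteristics; four fictitious platforms
profiles = [(1, 0, 1, 0, 0), (1, 0, 1, 0, 0), (0, 1, 0, 0, 1), (0, 1, 0, 1, 1)]
print(ward_clusters(profiles, (2, 3), 2))  # [[0, 1], [2, 3]]
```

In the study, the cut point (five clusters) was not fixed in advance as here but chosen by triangulating statistical measures, dendrogram inspection, and interpretability.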
Taxonomy of architectural setups of industrial IoT platforms
In the following, we present our final taxonomy (see Fig. 2) and describe the dimensions and characteristics in detail.
The taxonomy consists of 13 dimensions encompassing 38 characteristics that we defined according to the pre-specified meta-characteristic. To improve our taxonomy's comprehensibility and real-world fidelity, we structure the dimensions in four layers, i.e., the infrastructure, network, middleware, and application layers.
Infrastructure layer
Industrial IoT platforms are created and cultivated on top of digital infrastructures (Constantinides et al., 2018). In the context of IIoT platforms, such digital infrastructure is represented by the smart things that are connected to the platform and the technical resources on which the platform operates. In this layer, we found three relevant dimensions.
Hardware support Regarding the devices that can be connected to an IIoT platform, we found that some IIoT platforms constrain connectivity to certified hardware (e.g., proprietary or selected third-party devices) approved by the platform owner, while others are hardware-agnostic, meaning they support any hardware as long as it fits the platform's general technical specifications.
Platform hosting Another differentiation of the infrastructure is how the IIoT platform is hosted. While defining requirements for IIoT platforms, Petrik and Herzwurm (2018) name three ways in which IIoT platforms can be hosted: on-premise, in a cloud, or in a hybrid way using both approaches. We adopt these characteristics and extend them by differentiating between public and private cloud specifications, as experts repeatedly pointed out this difference during the interviews.
Data processing
Our taxonomy research process revealed that IIoT platforms process data at different boundaries of the platform. We found that most IIoT platforms process their data on-platform, meaning that, depending on the platform hosting, this happens on-premise or in the cloud. Many IIoT platforms, though, also offer to process data on the edge, meaning that processing happens in a local network or within the smart things without all generated data being sent to the IIoT platform. As some IIoT platforms offer a mixture of both approaches, we also included fog as a situation-based data processing characteristic.
Network layer
As connectivity and interoperability of devices and applications are core capabilities of any IIoT platform, we defined a network layer to collect the respective dimensions. Generally, two prominent frameworks can be found in the literature to describe the structure of networks: the OSI and TCP/IP models. We used these models to derive two dimensions that describe the network layer of an IIoT platform, similar to the proposed stack-lower and stack-upper layers of Sisinni et al. (2018).
Physical data transportation Options for physically transporting data can be categorized into wired, meaning cable-bound transmission, and wireless, meaning cable-unbound transmission. While the former represents a homogeneous group of transmission methods, the latter contains heterogeneous groupings of different wireless transmission methods. Therefore, we distinguish wireless transmission methods into three sub-categories: short-range wireless, which includes protocols with high performance but high power consumption and limited range (e.g., WiFi or Bluetooth), cellular, which includes protocols with high performance, high power consumption, and long range (e.g., 5G or LTE), and low power wide area networks (LPWAN), which have low performance, low power consumption, and medium to high range (e.g., SigFox or LoRa).
Logical data transmission Consequently, we found that IIoT platforms use different protocols to ensure a common data structure for information exchange. We distinguish between internet protocols, which emerged from the conventional internet (e.g., HTTP, XMPP, or WebSockets), IoT-specific protocols, which meet specific requirements of the IoT and thus overcome many drawbacks of internet protocols (e.g., MQTT, AMQP, or CoAP), and industry-specific protocols, summarizing existing industry standards to connect machines (e.g., Modbus, CAN, or BACnet).
Middleware layer
Integrating data with applications on the IIoT platform leads to different specifications, which we summarize in the middleware layer. This layer is responsible for the accumulation and further processing of collected data (e.g., passing it to applications) and consists of all functionalities required by a cyber-physical system. Thus, the layer integrates the connected hardware with the platform and the software built upon it (Guth et al., 2018).
Data structure When generating data in the IIoT, data can be collected and streamed in different formats and structures. Some IIoT platforms explicitly state that they can deal with unstructured data, while others can only process structured data.
Analytics types Making use of generated data is a central feature of every IIoT platform. We distinguish four types of analytics methods in the domain of IIoT: descriptive analytics, the most basic form, which analyzes historical data to reconstruct events; real-time analytics, which focuses on current data to identify events; predictive analytics, which uses both historical and real-time data to predict future events; and prescriptive analytics, which takes the predictive approach a step further to advise on how to deal with upcoming events.
Analytics technology
Consequently, IIoT platforms use different kinds of technology to analyze data. We found that they can be categorized into basic technologies, such as statistical modeling, and advanced technologies, such as machine learning and neural networks.
External integration
IIoT platforms can not only analyze data collected from devices directly connected to the platforms but also include data from external sources. We found that platforms differ in their offerings to integrate other (enterprise) systems. Business integration includes systems that deal with business processes and data from ERP, CRM, or SCM systems; machine integration includes legacy systems used in factories, such as existing PLC or SCADA systems; and web services integration includes internet-based data sources.
Platform source code
The examination of exemplary IIoT platforms revealed that they leverage different approaches to further develop their software. We distinguish between open source, meaning that platforms provide their complete source code to the public, open components, meaning that platforms release single modular parts of the platform source code to the public or leverage components that are already open source, and closed source, meaning that platforms keep their source code proprietary.
Application layer
Based on the collected data as well as the functionalities provided within the middleware layer, IIoT platforms offer the possibility of integrating applications developed internally or by third parties. We summarize the architectural specifics of this provision in the application layer.
Application Programming Interfaces (APIs)
To integrate not only external systems but also applications, IIoT platforms offer different APIs. On some platforms we only found standardized APIs, which are maintained by the platform owner; in other cases, platforms offered possibilities to build custom APIs based on predefined syntax and specifications (e.g., via an API manager).
Application deployment
The empirical analysis of IIoT platforms revealed that platforms use different approaches to deploy applications built internally or by third-party contributors. In most cases, applications are platform-native, meaning that they have been built with tools provided by, and run directly on, the platform (e.g., rules engines). In other cases, we found that applications were containerized, meaning that they have been developed in an external environment but are deployed on the platform in a containerized environment (e.g., Docker), and in a few cases, we found that applications were deployed off-platform, meaning that they are developed and hosted on a different infrastructure (e.g., Cloud Foundry).
Marketplace For the provision of applications to platform users, we found that IIoT platforms use different approaches. They either run an internal marketplace, which can be understood as analogous to a smartphone's app store, or they make use of an external marketplace, which integrates the app store of another digital platform (e.g., the Eclipse Kura Marketplace) into the IIoT platform, or they have no marketplace at all.
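Summarizing the preceding subsections, the taxonomy's four layers, 13 dimensions, and 38 characteristics can be written down as a simple data structure (our convenience representation for classification work; the labels follow the text above, and the per-dimension ME/NE exclusivity is omitted):

```python
# Taxonomy of architectural features of IIoT platforms, as described in the text
TAXONOMY = {
    "Infrastructure": {
        "Hardware support": ["certified hardware", "hardware-agnostic"],
        "Platform hosting": ["on-premise", "public cloud", "private cloud", "hybrid"],
        "Data processing": ["on-platform", "edge", "fog"],
    },
    "Network": {
        "Physical data transportation": ["wired", "short-range wireless",
                                         "cellular", "LPWAN"],
        "Logical data transmission": ["internet", "IoT-specific", "industry-specific"],
    },
    "Middleware": {
        "Data structure": ["structured", "unstructured"],
        "Analytics types": ["descriptive", "real-time", "predictive", "prescriptive"],
        "Analytics technology": ["basic", "advanced"],
        "External integration": ["business", "machine", "web services"],
        "Platform source code": ["open source", "open components", "closed source"],
    },
    "Application": {
        "APIs": ["standardized", "custom"],
        "Application deployment": ["platform-native", "containerized", "off-platform"],
        "Marketplace": ["internal", "external", "none"],
    },
}

# Sanity check against the counts stated in the text: 13 dimensions, 38 characteristics
n_dimensions = sum(len(dims) for dims in TAXONOMY.values())
n_characteristics = sum(len(chars) for dims in TAXONOMY.values()
                        for chars in dims.values())
assert (n_dimensions, n_characteristics) == (13, 38)
```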
Industrial IoT Platform archetypes
Drawing on our sample of 78 IIoT platforms, we demonstrate the applicability and usefulness of our taxonomy. Thus, we first derive overarching observations on IIoT platforms' architectural features (see Table 3).
Overall, most platforms are hardware-agnostic (87.2%) and hosted via a public cloud service (96.2%), even though many platforms offer other settings as well (on-premise 64.1%, private cloud 55.1%, hybrid 37.2%). While almost all IIoT platforms can process data on-platform (97.4%) or on the edge (66.7%), we found that only a minority is capable of situation-based data processing (fog 17.9%). Most IIoT platforms rely on wired (85.9%) or short-range wireless (89.7%) data transportation technologies (cellular 59.0%, LPWAN 67.9%). Further, they use different combinations of protocols (internet 61.5%, IoT-specific 56.4%, industry-specific 71.8%). Note that we only considered this characteristic as existing if the IIoT platform offered more than one protocol, to account for the diversity of data transmission. Regarding data analysis, most IIoT platforms can handle structured (91.0%) as well as unstructured (75.6%) data. Further, all IIoT platforms can analyze data descriptively (100%), with that number declining the more complex the analysis gets (real-time 89.7%, predictive 56.5%, prescriptive 20.5%). Accordingly, our sample shows a fair split between basic (50%) and advanced (50%) analytics technologies used. For external integration of data, most IIoT platforms can integrate web services (89.7%; business 65.4%, machine 52.6%). As for source code openness, roughly two-thirds (71.8%) are closed source (open source 7.7%, open components 21.8%). Further, we found a majority of IIoT

Based on the cluster analysis among the IIoT platforms, we identified five archetypes, which we describe hereinafter. These archetypes indicate typical combinations of IIoT platforms' architectural features. We emphasize distinctive characteristics per cluster and conceptualize the archetypes with real-world insights. Table 4 provides an overview of the different archetypes as well as their most frequent characteristics per dimension. For non-exclusive dimensions, we included all characteristics that cover more than one-third of the cluster. The Appendix shows to which cluster each platform of the total sample was assigned.
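Descriptive shares like those reported above follow directly from the binary codings; a minimal sketch with fictitious platforms and a reduced set of characteristics:

```python
def characteristic_shares(codings):
    """Share (in %) of platforms exhibiting each characteristic,
    given binary codings as {platform: {characteristic: 0 or 1}}."""
    platforms = list(codings.values())
    return {feature: round(100 * sum(p[feature] for p in platforms) / len(platforms), 1)
            for feature in platforms[0]}

# Illustrative codings for three fictitious platforms
codings = {
    "Platform A": {"hardware-agnostic": 1, "public cloud": 1, "fog": 0},
    "Platform B": {"hardware-agnostic": 1, "public cloud": 1, "fog": 1},
    "Platform C": {"hardware-agnostic": 0, "public cloud": 1, "fog": 0},
}
print(characteristic_shares(codings))
# {'hardware-agnostic': 66.7, 'public cloud': 100.0, 'fog': 33.3}
```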
Archetype 1 - Allrounder IIoT platforms of this archetype typically have strong markedness in many (non-exclusive) characteristics. While they are strong in different platform hosting options, they also offer various network data transportation options and data transmission protocols. Further, they stand out for strong analytics capabilities and external system integration possibilities. As the only cluster, these IIoT platforms strongly leverage external innovations through open components and deploy applications in various ways on the platform, while also maintaining an internal marketplace. IIoT platforms in this cluster offer a full-stack solution to their users. Our data sample shows that these platforms provide comprehensive services and cover a wide range of application scenarios, ranging from device connectivity and monitoring, over data visualizations and prescriptive processes, to over-the-air updates or command execution. Due to the broad coverage across all dimensions, we call this archetype Allrounder. Two prominent examples of this archetype are GE Digital's Predix and Siemens' MindSphere platform. Both platforms provide edge-to-cloud data connectivity, processing, analytics, and distributed application services to support industrial applications in optimizing operations, creating better-quality products, and deploying new business models. Thereby, they leverage the latest big data and machine learning technologies for analytics-driven outcomes. Further, they provide the possibility to self-develop and distribute (via an internal marketplace) application microservices to extend the platforms' functionalities in analysis, data visualization, case management, and other areas.
Archetype 2 - Device Controller This archetype comprises IIoT platforms that typically have strong markedness in only a few characteristics. As they strongly focus on public cloud hosting, they also tend towards on-platform data processing. Further, they offer only selected data transportation options and transmission protocols. Most IIoT platforms in this cluster utilize basic analytics technology, leading to less-developed data analysis. However, to connect the platform with other web services, they often provide the relevant interfaces. Lastly, most platforms of this archetype do not maintain a marketplace for applications. IIoT platforms in this cluster focus on a narrow use and thus provide only the necessary functionalities. They can be extended mostly through applications that are built with platform-native tools such as rules engines or low-code/no-code development environments. Due to the strong focus on data transportation in combination with limited data analytics capabilities, we call this archetype Device Controller. Two examples of this archetype are Airtel IoT and the KITE platform. Both platforms have a strong focus on various connectivity technologies, especially in the cellular area, to enable real-time information, analysis, and control over devices for, e.g., asset tracking or vehicle telematics. Therefore, the platforms are hosted in public clouds, deploy data analysis purely on-platform, and leverage primarily internet or IoT-specific protocols. Finally, interfaces for web service integration are offered to allow for further data distribution.
Archetype 3 - Data Hub IIoT platforms in this cluster show strong markedness in specific characteristics. They are characterized by specifications on data processing and analysis. Consequently, they focus not only on edge and on-platform but also on fog data processing. Their focus is on industry-specific protocols, while different data transportation options are offered. Regarding data analysis, these IIoT platforms provide strong analytics options backed by advanced technologies and comprehensive integration of other company systems. Further, their source code is mostly closed, applications are deployed internally, and they do not maintain a marketplace for applications. Data Hubs are IIoT platforms that place a specific focus on data-driven insights and decision-making using high-end analytics technology. A widespread use case for this archetype is the linkage of production lines and their optimization. We also found that many platforms offer their own sensors or edge devices in an as-a-service model to make better use of data gathering. As the platforms of this cluster focus strongly on data collection and analysis, we call this archetype Data Hub. Two examples of this archetype are Foghorn and Foghub. Both platforms concentrate on edge or fog data processing and primarily support a range of industrial protocols to transfer data from (industrial) machines. In addition, they offer extensive possibilities to analyze the data utilizing advanced analytics (i.e., machine learning algorithms). Furthermore, they provide the capability of various integrations, most notably machine integration, and feature only an internal deployment of applications without operating any marketplaces.

Archetype 5 - Connector This archetype comprises IIoT platforms with strong markedness in the network layer's and middleware layer's characteristics. These IIoT platforms are more critical regarding the connected hardware, with every second platform only supporting certified hardware. Data processing is
possible in multiple ways, with a strong focus on fog processing. Data transportation possibilities and logical transmission protocols are widely offered and are supplemented by rich external system integration options. Regarding data analysis, this archetype uses basic technologies and offers only limited analytics types. Applications can be deployed either on or off the platform, mostly using a marketplace. Connectors are IIoT platforms that specialize in integrating devices into their platforms to extract and gather data. They put stronger restrictions on hardware support or only offer standardized APIs to cope with the technological complexity and provide a reliable basis for additional contributions of platform actors. As their focus is on these topics, they rely on other services and solutions to make use of the data and provide advanced analytics tools, which other users can adopt through the marketplace. As the primary goal of the platforms in this cluster is to enable the network integration of devices into the IoT, we call this archetype Connector. Examples of this archetype are Telit's deviceWISE and Cisco's IoT Control Center. The distinct focus of these two platforms is on extensive connectivity options, as both platforms offer all possible protocols in the dimensions of physical data transportation and logical data transmission. The analytics options are less advanced, as the platforms only offer basic analytics using non-complex analysis methods. However, in the area of external integration options, they again offer a wide variety of capabilities, as both platforms fulfill all possible options in the corresponding dimensions in accordance with their archetype. Furthermore, both platforms can be extended via an internally operated marketplace.
Discussion of cluster results
While exploring the five archetypes and the associated IIoT platforms in detail, we unveiled some specialties that we discuss in the following. Allrounders represent the most holistic archetype, characterized by an extensive list of architectural features that enable a wide range of possible application scenarios. However, this entails increased technical complexity, resulting in a higher initial investment for end-users owing to the necessity of external system integrators, which are usually already partnered with Allrounders.
IIoT platforms of this archetype are suitable for end-users that pursue a comprehensive approach to their IIoT strategy and require an end-to-end solution. Device Controllers, in contrast, are defined by a lower technical complexity and a narrower selection of architectural features. Thus, while they focus on narrow application scenarios that involve device administration and management, they also foster a user-friendly experience and faster implementation. Hence, they are also suitable for smaller companies and applications where the available resources are scarce. Data Hubs are specialized IIoT platforms focusing on advanced data analysis through high-end technology (e.g., artificial intelligence). They often rely on users to provide adequate infrastructure to enable data transmission to the platform and are, thus, particularly suitable for users that already have a multitude of data that they want to exploit. Service Enablers provide advanced data analytics options and IoT solution enablement to enhance business processes. Thereby, self-developed solutions may also be deployed and distributed within the ecosystem of platform participants, thus advancing the functionality of the IIoT platform. Lastly, Connectors focus on connecting heterogeneous devices to their IIoT platform. As they tend to have less developed analytics tools, they rely on third-party developers to provide (individual) solutions via an internal marketplace to the users. While these findings rest upon the platforms' architectural features, they indicate relevant implications for platforms' business models and evolution. Previous research already provides some insights regarding typical characteristics (Abendroth et al., 2021) or business model archetypes for IIoT platforms (Hodapp et al., 2019). Comparing our taxonomy to the respective dimensions of Abendroth et al.
(2021), we find confirmation in selected architectural characteristics (e.g., platform openness, options for extensibility). The "core value propositions" further display stark similarities to our archetypes. Comparing the findings of Hodapp et al. (2019) and ours, we find parallels between archetypes. For instance, Connectors in our sample may contribute to "device connectivity enablement" business models, Device Controllers may contribute to "device data storage" business models because they facilitate the integration and monitoring of IIoT devices, and Allrounders, naturally, enable "multi capability" business models. In fact, we even found overlaps in the specific IIoT platforms across architecture and business model clusters. However, despite these overlaps, we argue that the interplay of IIoT platform architecture and business model is less apparent than the archetype labels suggest. As a platform's architecture constitutes "an information technology artifact's virtually irreversible DNA" (Tiwana, 2018, p. 829), architecture decisions and investments are more persistent than the business models that are built on this foundation. Hence, IIoT platforms from all archetypes may use their specific architectural configuration to define and advertise individual business models. This may result in IIoT platforms from the same architectural archetype offering different business models. A comprehensive understanding of IIoT platforms' architectural features is thus beneficial to a better understanding of the actual value they can offer.
Taking a broader perspective, this understanding may also be relevant for the broader IS literature on digital platforms. While digital platforms' conceptualization in two broad types (transaction versus innovation platforms, cf. Gawer and Cusumano (2014)) is helpful to demarcate their general foci, our results extend this notion by taking a closer look at the different architectural configurations of IIoT platforms as specific innovation platform manifestations. Thereby, we show that technological platforms that are currently treated as a homogeneous group are in fact very heterogeneous, which bears important implications for their orchestration. For instance, we see in many of the archetypes that development activities happen within the platform users' organization for their own use. Schreieck et al. (2019) refer to this as "customers as developers" and show how platform orchestration must change to, among others, account for indirect network effects no longer being applicable. Understanding such differences thereby also helps to clarify the distinction between B2C and B2B platforms.
Last, considering the different revenue categories in our data sample, we find that Allrounders are typically rather big (over 70% of our Allrounders make at least $1bn), while Data Hubs are often smaller IIoT platforms (almost 60% of our Data Hubs make less than $100m). Thus, IIoT platforms' architectural features also help to better understand the antecedents and contingencies of platform evolution (Henfridsson & Bygstad, 2013). For instance, IIoT platforms' architecture may constitute the foundation that enables business model changes over time. In addition, IIoT platforms may proceed from one architectural archetype to another over time, also enabling new or extended business models. We leave it to further research to apply our findings and investigate how the five archetypes may complement each other and how they enable the business models that build upon the architectural features.
Conclusion and outlook
Despite IIoT platforms' increasing importance for businesses, we still miss an understanding of the different architectural setups and associated consequences of such digital platforms. Further, selecting the right IIoT platform in the heterogeneous solution landscape has become increasingly challenging for practitioners. To bridge this research gap and address the underlying practical problem, we developed a taxonomy of IIoT platforms' architectural features. In the development process, we built on empirical data from both analyzing IIoT platforms and conducting semi-structured expert interviews with practitioners involved with the IIoT, as well as conceptual data from the literature on IoT, IIoT, and digital platforms. Our final taxonomy comprises 13 dimensions organized in four layers that help researchers and practitioners to better understand this emerging phenomenon. Further, we identify and conceptualize five IIoT platform archetypes from 78 real-world cases that help us to systematize the IIoT platform landscape and add an architectural perspective to the recent discourse. Thus, our theoretical contribution is threefold. First, our taxonomy adds to the descriptive knowledge in this relatively young research field by structuring and explaining what architectural features constitute prevalent manifestations of IIoT platforms. Thereby, we follow Reuver et al.'s
(2018) recommendation to foster the development of contextualized theories on digital platforms as well as to conduct data-driven research. This may guide not only researchers in the field but also IIoT platform complementors seeking to understand a platform's architecture in order to align their app with the available platform resources (cf. Tiwana, 2018). Second, we offer researchers and practitioners a common nomenclature that specifies IIoT platforms' architectural features. With this, we extend current research, which is largely limited to rather simple category lists built through vague development processes. Third, we elucidate typical architectural setups of IIoT platforms and how these shape their business logic. We see this as the necessary foundation to better understand the reciprocal interplay of both aspects, i.e., how architectural design options enable IIoT platform business models and vice versa. From a managerial perspective, our taxonomy and the five archetypes help practitioners in comparing different IIoT platform solutions and enable them to select the one that not only fits the existing IT infrastructure but also provides the desired solution capabilities.
We acknowledge some limitations in our research that open promising avenues for further research. Our taxonomy rests on the data used and the sequence of iterations. Although our dataset covers a fair number of IIoT platforms of different sizes and with different foci in terms of their value proposition, we did not cover an exhaustive sample of the more than 620 available IIoT platforms. Future research may incorporate additional IIoT platforms and conduct further iterations to validate and update our proposed taxonomy and the resulting archetypes. In this regard, we also acknowledge the potential to substantiate our findings with additional qualitative empirical data to better control for potential specifics of platform users from different industries and their implications for platforms' architectural setup. Further, we did not address potential dependencies between dimensions and characteristics or the architectural success criteria of IIoT platforms. Investigating these aspects may help in the successful design and use of IIoT platforms. Further, it may help to answer some fundamental strategy questions, such as how to earn a competitive advantage based on a distinct technological architecture (cf. Cennamo, 2021; Schilling, 2003; Zhu & Iansiti, 2012). Lastly, future research may test our archetypes' external validity to ensure their generalizability and to explore their evolutionary paths (e.g., IIoT platform sizes within and across clusters).
Table 1
Overview of the seven
Table 2
Distribution of IIoT Platforms across revenue categories
Table 6
Coding example for IIoT platform cluster analysis (ME: dimension is mutually exclusive; NE: dimension is non-exclusive).Siemens mindsphere classification and coding (prevalent characteristics in bold) | 2022-03-11T16:20:20.468Z | 2022-03-09T00:00:00.000 | {
"year": 2022,
"sha1": "d56bfbdbf9cf2a96d8d2c0051bdecc914f6b1e41",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s12525-021-00520-0.pdf",
"oa_status": "HYBRID",
"pdf_src": "SpringerNature",
"pdf_hash": "bac1299142ca39fc943a4dc9025ecd1957ce1be8",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
} |
117037944 | pes2o/s2orc | v3-fos-license | Extended Chaos Theory and Multiparticle Production
First, using the method of the soliton solution, the fermion probability density equation, which corresponds to the Dirac equation, is derived. Next, we extend the chaos theory, in which the period bifurcation is equivalent to particle production. This extended chaos theory can then be used to describe multiparticle production and extensive air showers at high energy. When the parameter takes a suitable value, quantitative results are obtained and an approximate formula is derived. Many properties of multiparticle production and of the chaos theory are universal.
Introduction
Cosmic rays, whose main components are positively charged particles, for example protons and alpha particles, may possess extremely high energies, up to 10^17-10^20 eV [1,2]. When a charged particle at high energy is converted into multiparticle production and extensive air showers (EAS), giant detector arrays with collecting areas in the range 10-1000 km^2 are used to observe them [3,4].
Recently, an important topic in high-energy cosmic-ray physics is gamma-ray bursts and the corresponding high-energy neutrinos [5][6][7]. The usual descriptions of EAS and multiparticle production at high energies (above 10^11 eV) are mainly various semi-phenomenological models: the Fermi statistical model, the Landau hydrodynamic model, the fireball model, the multiperipheral model, the FF quark cascade model [8], and a class of models based on QCD calculations, including the mini-jet model [9] and the Dedenko-Kolomatsky quark-gluon-string model, etc. K. Boruah [10] considered a new mechanism for multiparticle production in the simulation of a cosmic-ray cascade: the decay of a Higgs particle produced through vacuum excitation in a cosmic-ray collision. A. Lindner [11] reconstructed the energy and determined the composition of cosmic rays in EAS by a new Monte Carlo method.
We proposed [12] that the infinite gravitational collapse of any supermassive star should pass through the energy scale of a grand unified theory (GUT). After nucleon decays, the supermassive star converts nearly all its mass into energy and produces GUT-scale radiation. This may provide some ultrahigh-energy sources in astrophysics, for example quasars and gamma-ray bursts (GRBs). It is similar to a time-reversed evolution of the Big Bang Universe on much smaller space and mass scales; in this process the star behaves as a true white hole. In this paper, we describe quantitatively the multiparticle production and the extensive air showers at high energy by an extended chaos theory.
From Dirac Field to Extended Chaos Theory
In quantum mechanics, the probability density of the four-component Dirac spinor is ρ = ψ†ψ = Σ_{i=1}^{4} ψ_i*ψ_i (1). For a free Dirac field, the continuity equation is ∂ρ/∂t + ∇·j = 0 (2). At very high energies the nucleons possess interactions, at least a self-interaction. Based on QCD, the equations should be nonlinear. Since the primary particles of multiparticle production and of EAS are usually fermions (for example, in cosmic rays), they correspond to the Dirac equation. We suppose that the equation is the Heisenberg unified equation or a nonlinear Dirac equation (3). Combining Eqs. (2) and (3), Eq. (4) becomes an ordinary differential equation (5). On the mass surface, this equation may be simplified to [14,15]

dρ/dE = aρ(1 - ρ). (6)

For a Dirac field with nonlinear interaction, the continuity equation is modified to (7). In the momentum representation, with j = 0, it again reduces to dρ/dE = aρ(1 - ρ). According to Eq. (6), the dimension of the factor a must be that of an inverse energy. Discretizing Eq. (6) yields

X_{n+1} = λ X_n (1 - X_n). (8)

It is just the logistic model, whose chaotic solution is well known.
The bifurcation-chaos theory is similar to multiparticle production and EAS at very high energies in many aspects. In the momentum representation, X becomes unstable and branches continuously with the change of a parameter λ, which corresponds to the energy here. We extend the chaos theory, in which the period bifurcations correspond to the bifurcations of the probability density in Eq. (8) and are equal to the unceasing production of particles. The period corresponds to the multiplicity, which increases with energy. Such a formal resemblance may be turned into a quantitative description. In quantum field theory, the total number operator (9) is similar to the total probability of the single-particle theory. The normalized probability ∫W dV = 1 corresponds to the range 0 ≤ X ≤ 1. When 1 < λ < 3, there is a stable fixed point, which corresponds to a stable particle. When the energy is higher than 10^11 eV, the secondary particles produced number from several to tens. When λ is equal to or approaches the accumulation point λ_∞ ≈ 3.5699, the infinite bifurcations correspond to multiparticle production, the cascade shower, EAS, etc.
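The correspondence invoked here, period of the logistic map X_{n+1} = λX_n(1 - X_n) versus multiplicity, can be illustrated numerically. The sketch below (illustrative, not taken from the paper) iterates the map past its transient and counts the attractor period at several values of λ, reproducing the period-doubling cascade toward λ_∞ ≈ 3.5699:

```python
def logistic_attractor_period(lam, x0=0.5, transient=10000, max_period=64, tol=1e-6):
    """Iterate X_{n+1} = lam * X_n * (1 - X_n) past a transient, then return
    the smallest p such that the orbit repeats with period p (None if no
    period up to max_period is found, i.e., effectively chaotic)."""
    x = x0
    for _ in range(transient):
        x = lam * x * (1.0 - x)
    orbit = []
    for _ in range(2 * max_period):
        x = lam * x * (1.0 - x)
        orbit.append(x)
    for p in range(1, max_period + 1):
        if all(abs(orbit[i + p] - orbit[i]) < tol for i in range(max_period)):
            return p
    return None

# The attractor period doubles as lam grows toward lam_inf ~ 3.5699; in the
# extended chaos picture each doubling corresponds to particle production.
for lam in (2.8, 3.2, 3.5, 3.55):
    print(lam, logistic_attractor_period(lam))  # periods 1, 2, 4, 8
```

The same routine returns None for λ slightly above λ_∞, where the orbit no longer settles into a finite cycle.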
Application of Extended Chaos Theory
Based on the above equations and some experimental results, we make some quantitative investigations. Assume that the parameter λ is determined by the energy through a constant m 0 whose dimension is energy.
For the first approximation, we solve Eq.
These formulas are consistent with the prediction that the average particle number grows as <n> ∝ ln s, made by the multiperipheral model, the Mueller-Regge analysis, the bremsstrahlung model [16], etc., for pp interactions in the energy range 3-152 GeV. They are the relations between the evolution of a single particle, or the collision process of a particle cluster, and their energies. Combined with other models, for example the fireball model, this is namely the evolution of one fireball into two or many fireballs.
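The logarithmic multiplicity law quoted above, <n> ∝ ln s, has a simple numerical consequence: each fixed multiplicative increase in s adds the same constant to <n>. A quick check (the coefficients a and b below are hypothetical, purely for illustration, not fitted values):

```python
import math

def avg_multiplicity(s, a=1.0, b=2.0):
    """Hypothetical <n> = a + b*ln(s) parametrization of the logarithmic
    multiplicity law; a and b are illustrative placeholders."""
    return a + b * math.log(s)

# Quadrupling s twice in a row adds the same increment b*ln(4) each time:
d1 = avg_multiplicity(40.0) - avg_multiplicity(10.0)
d2 = avg_multiplicity(160.0) - avg_multiplicity(40.0)
print(d1, d2)
```

This equal-increment property is what distinguishes a logarithmic growth law from any power-law growth of the multiplicity.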
Using an equation in space-time, the dimension of the parameter a^{-1} is the same as that of time (a lifetime) or space (a distance). Therefore, the equation can similarly describe the relation between the multiplicity and time (lifetime) or distance in multiparticle production when the energy is fixed.
The universality of the chaos theory and of its general constants (for example, the Feigenbaum constants) is independent of the specific form of the transformation f(x) (i.e., of the equations). This may explain why the general scaling of multiparticle production is independent of the particle types and of the interaction mechanisms at high energy, and why the law for the average multiplicity <n> is the same for various hadrons. Further, some quantitative relations in multiparticle production should be independent of the | 2019-04-12T17:58:35.477Z | 2008-08-02T00:00:00.000 | {
"year": 2008,
"sha1": "966f2c190983da2d953e6bb6ec9a0a8beae84082",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "966f2c190983da2d953e6bb6ec9a0a8beae84082",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
118641158 | pes2o/s2orc | v3-fos-license | Classical Aspects of Higher Spin Topologically Massive Gravity
We study the classical solutions of three-dimensional topologically massive gravity (TMG) and its higher spin generalization, in the first-order formulation. The action of higher spin TMG has been proposed in arXiv:1110.5113 to be of a Chern-Simons-like form. The equations of motion are more complicated than the ones in pure higher spin AdS$_3$ gravity, but are still tractable. As all the solutions of higher spin gravity are automatically solutions of higher spin TMG, we focus on other solutions. We manage to find AdS pp-wave solutions with higher spin hair, and find that the nonvanishing higher spin fields may or may not modify the pp-wave geometry. In order to discuss the warped spacetimes, we introduce the notion of a special Killing vector, which is defined to be a symmetry of the frame-like fields. We reproduce various warped spacetimes of TMG in our framework, with the help of special Killing vectors.
Introduction
Three dimensional topologically massive gravity (TMG) [1,2] provides an interesting modification of Einstein's general relativity. Besides the Einstein-Hilbert term and the cosmological constant Λ, the action of the TMG theory also includes a gravitational Chern-Simons term. Unlike the usual three-dimensional pure gravity, which has no local degrees of freedom, there is generically a massive propagating degree of freedom in TMG. The equation of motion in TMG is a third order differential equation, involving the Cotton tensor. As a result, the usual solutions of Einstein gravity with/without a cosmological constant, which must be Einstein metrics, are also solutions of the TMG theory. Moreover, there are new solutions which have nonvanishing Cotton tensor [3]. For TMG with a cosmological constant, these solutions include the spacelike, timelike, and null warped spacetimes, AdS pp-waves, and also regular black hole solutions. Though the full classification of the solutions of the TMG theory has not been achieved, much progress has been made in the past few years. Especially, in [4,5,6], the Petrov-Segre classification of 3D TMG solutions has been discussed carefully. On the other hand, there has been much progress on higher spin AdS 3 gravity in the past two years. In 3D, higher spin gravity is simpler and can be put into a Chern-Simons form even for finite spin. In a remarkable paper [7], the authors showed that the higher spin AdS 3 gravity theory with spins up to n can be written as a Chern-Simons gravity theory with SL(n, R) × SL(n, R) gauge group. By imposing appropriate boundary conditions and gauge choices, they showed that the asymptotic symmetry group of higher spin AdS 3 gravity is a classical W n symmetry with the same central charge as in the pure AdS 3 gravity found in [8]. This indicates that the higher spin AdS 3 gravity is holographically dual to a conformal field theory with W n symmetry.
A similar observation has been made in [9], starting from the n → ∞ theory.
Like the spin-2 graviton in pure 3D gravity, the higher spin fields in AdS 3 are not dynamical either. It is quite interesting to consider the generalization of TMG to the higher spin case. It was actually in 1988 that the Chern-Simons-like coupling for the spin-3 field in flat spacetime was proposed [10], based on gauge symmetry considerations. Similar Chern-Simons-like terms for higher spin fields in AdS 3 were suggested in [13]. The full action describing the topologically massive higher spin fields coupled to AdS 3 gravity has been proposed in [11,12]. As in the usual higher spin AdS 3 gravity, it is constructed in terms of the frame-like fields. In such a first order formulation, it is remarkable that the action of the higher spin topologically massive gravity can be written as a Chern-Simons-like action as well. Besides two Chern-Simons actions with different levels, there is an extra term which imposes torsion-free conditions via a Lagrange multiplier. The presence of the Lagrange multiplier field makes the gauge symmetry not manifest. Fortunately, it was shown in [12] that in the AdS 3 vacuum the gauge symmetry is recovered and plays an essential role in relating the frame-like fields to the physical fields. The analysis of the linearized action of the fluctuations around the AdS 3 vacuum shows that the equations of motion of the higher spin fields are the same as the ones obtained from the Fronsdal fields with higher spin Chern-Simons terms [14].
In this paper, we would like to study the classical solutions of the higher spin topologically massive gravity, in the first-order formulation. In the pure higher spin AdS 3 gravity, the classical solutions asymptotic to AdS 3 have been studied carefully in [16,17,18]. In this case, the solutions are flat gauge connections, under appropriate boundary conditions. 1 In the case of higher spin TMG, the equations of motion are much more involved. To make things worse, we do not know how to impose appropriate boundary conditions for the warped spacetimes in this formulation. Therefore in this paper, we take the strategy that we do not fix any boundary conditions and solve the equations of motion directly. We manage to get a class of AdS pp-wave solutions with nontrivial higher spin fields. However, it turns out to be difficult to find warped AdS 3 spacetimes with nontrivial higher spin hair. Nevertheless, we find the classical solutions of the 3D TMG theory in the first-order formulation corresponding to the spacelike, timelike, and null warped AdS 3 spacetimes and also the spacelike warped AdS 3 black hole. Such an investigation has not been carried out in the literature. This is another motivation behind our study.
In our study, we develop the notion of a special Killing vector (SKV), which is defined to be a symmetry generator acting on the frame-like fields via the Lie derivative. The special Killing vectors form a subalgebra of the isometry algebra. As an SKV is also a symmetry acting on the gauge potentials, it helps us to solve the equations of motion. It turns out that the first-order differential equations of motion can be transformed into a set of matrix equations with an appropriate SKV ansatz, which can be solved in principle.
Another remarkable fact from our study is that there are usually many solutions corresponding to one spacetime. For example, corresponding to the same spacelike warped AdS 3 spacetime, we find a two-parameter class of solutions in the first-order formulation. We also find the same spacelike warped AdS 3 spacetime appearing in different metric forms, due to the exchange symmetry in the matrix equations. Furthermore, we find that a non-principal embedding of SL(2, R) in SL(3, R) leads to a different warped AdS 3 vacuum with vanishing higher spin fields.
The remaining parts of the paper are organized as follows. In Sect. 2, we review the HSTMG theory and its properties. In Sect. 3, we discuss the AdS pp-wave solutions with higher spin hair, after imposing appropriate boundary conditions to ensure that the solutions are asymptotically AdS 3. In Sect. 4, we investigate the various warped AdS 3 spacetimes. We pay special attention to the spacelike warped AdS 3 spacetime, though the techniques and results can be generalized to the timelike and null warped cases as well. We end with some discussions in Sect. 5.
Action and symmetry
The action of higher spin topologically massive gravity in the first order formulation can be cast into a Chern-Simons-like form [12], in which the first two terms are proportional to the Chern-Simons action. Note that our definition of the Chern-Simons action S_CS[A] is slightly different from the conventional one, since we have taken out the Chern-Simons level k. There are two other parameters in our action. The parameter l is related to a negative cosmological constant Λ = -1/l^2 and is the radius of the AdS vacuum. µ has the dimension of mass and is proportional to the mass of the massive higher spin modes [12]. There is a one-form β in the last term, which imposes the torsion-free condition. Without it, the action becomes the sum of two pure Chern-Simons actions with different levels. The gauge fields A and Ā are related to the vielbein e and spin connection ω by A = ω + e/l, Ā = ω - e/l, and they take values in a suitable Lie algebra. If the gauge fields take values in SL(2, R), the above action is equivalent to the one for topologically massive gravity. Therefore, for the general gauge group SL(n, R), the above action is a natural choice for higher spin topologically massive gravity. Actually, the study of the perturbations around the AdS 3 vacuum in [12] shows that at the linearized level the action is equivalent to the one with the higher spin Chern-Simons terms suggested by gauge symmetry.
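As a point of reference, the conventional Chern-Simons functional that S_CS[A] is modeled on (written here with the level k stripped off, matching the convention described above; the precise trace normalization is the textbook one and may differ from the one used in [12]) reads:

```latex
S_{CS}[A] \;=\; \frac{1}{4\pi}\int_{\mathcal{M}}
\mathrm{tr}\!\left( A \wedge \mathrm{d}A + \frac{2}{3}\, A \wedge A \wedge A \right),
\qquad
A = \omega + \frac{e}{l}, \quad \bar{A} = \omega - \frac{e}{l}.
```

The two copies S_CS[A] and S_CS[Ā] with different levels, plus the β term enforcing the torsion-free condition, then assemble into the HSTMG action described in the text.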
Varying the action with respect to A, Ā and β respectively, we find the following equations of motion,2 where we have defined the two-forms K and K̄ in (7). The last equation (6) gives the torsion-free condition for the fields of various spins, and the first two equations (4) and (5) modify the flatness condition of the gauge connections. However, to have a well-defined variation, we need to deal with the boundary terms appropriately. The variation of our action, including the boundary terms, is given in (8). There are two kinds of methods to cancel the boundary terms. The first one is to add counter-terms to cancel them, while the second one is to impose suitable boundary conditions on the fields. In this paper we always assume that we have a well-defined variational principle from which to obtain the equations of motion. In other words, we regard the equations of motion as our starting point. Of course, if we are interested in solutions asymptotic to AdS 3, we require the reasonable condition that near the boundary the extra Lagrange multiplier is asymptotic to zero. But one should note that this cannot be the whole set of boundary conditions. As we have shown in [12], for general β, the higher spin topologically massive gravity is invariant only under an infinitesimal gauge transformation which can be interpreted as a generalized local Lorentz transformation. In our case, we ignore the possible subtlety of large gauge transformations. Then the finite gauge transformation of β can be written as in (11). 2 From now on, we set l = 1.
We may use β to classify the classical solutions of the theory. The solutions of higher spin gravity with β = 0 and β ≠ 0 are not equivalent. The reason is that once β ≠ 0, one cannot make a gauge transformation (11) to set it to zero. At the same time, one cannot make a coordinate transformation to set it to zero, since β is a coordinate-invariant one-form. This distinction is important: once one finds a "new" solution, one can check the one-form β to see whether it is really a new solution.
Note that the point β = 0 is quite special, since it is the point where the gauge symmetry is enlarged. In this case, the equations of motion show that the gauge potentials must be pure gauge, which is exactly the case in higher spin AdS 3 gravity. In other words, all the solutions of higher spin AdS 3 gravity are automatically solutions of higher spin TMG.
In this paper, we would like to discuss nontrivial solutions with β ≠ 0. As we are only interested in searching for solutions, we do not impose any boundary conditions in advance. For the principal embedding, once we solve the equations of motion (4)-(6), we may read off the frame-like fields and obtain the metric and higher spin fields via the relation (12) and similar expressions for the other higher spin fields. Our definition of the metric and higher spin fields in terms of the vielbein is the same as in pure higher spin gravity. The reasons are: 1. The physical fields φ_{µ1···µs} must be locally Lorentz invariant.
2. Once β → 0, we need to recover the corresponding definition in higher spin gravity.
It is easy to show that the above definition satisfies these conditions. It is interesting to ask whether such constraints force a unique definition.
Higher Spin AdS pp-wave Solutions
In this section we discuss higher spin AdS pp-wave solutions. These solutions are higher spin realizations of the AdS pp-wave.
Since the equations of motion for µ > 1 and µ = 1 are quite different, we consider the µ > 1 case first. As we do not want to deform the conventional higher spin gravity much, we require (14) and we also assume (15). Note that such requirements should be regarded as part of our ansatz, in contrast to the higher spin gravity case, in which one can always choose a gauge such that the solution obeys (15). In our case, we cannot perform this gauge fixing, so (15) is just part of the ansatz, which will turn out to be consistent. Under this ansatz, one can solve the ρ+ and ρ- components of the equations of motion, with the results (16) and (17). Note that the above solutions must satisfy the constraints (18) from the +- components of the equations of motion. Solving these equations is still quite difficult, so we make the further ansatz (19) to simplify them. If we further require that all the fields are independent of the coordinate z̄, and take the condition (14) into account, then for the gauge group SL(2, R) we find the solution A- = 0 (20). The solution (20) plus (15) gives us the metric (21) via (12). This is exactly the AdS pp-wave solution found in [20].3 Note that we have replaced z (z̄) with x+ (x-) to match the usual conventions for pp-wave solutions. We can redefine β(x+) to absorb the factor 1/(µ^2 - 1) which appears in the metric. We do the same in what follows.
For the gauge group SL(n, R), one kind of higher spin AdS pp-wave solution takes the form (22). Note that we have used the convention of the principal embedding and the identifications W^2_{-1} = L_{-1}, W^2_1 = L_1. It is straightforward to translate these solutions into the metric-like fields. For the special case n = 3, we have the metric (23), and the nonzero components of the spin-3 field are given in (24), up to a normalization constant for the spin-3 field. It is surprising that the metric takes the same form as the AdS pp-wave solution (21) while the higher spin fields are nonvanishing at the same time! Namely, turning on the higher spin hair does not affect the AdS pp-wave geometry. However, for more general solutions this situation changes. We find that when β+, β-, βρ, A-, Aρ, Ā+, Ā-, Āρ take the same form as in (22), A+ has a large number of degrees of freedom. Actually, we only require that a+(z, z̄) be an SL(n, R) matrix function of z.
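The mode conventions behind identifications like W^2_{-1} = L_{-1} rest on the sl(2, R) algebra [L_m, L_n] = (m - n) L_{m+n}, the standard convention in this literature. It can be checked explicitly with a 2x2 matrix realization (the particular matrices below are one common choice, not necessarily the ones used in the paper):

```python
import numpy as np

# A 2x2 realization of the sl(2,R) modes L_{-1}, L_0, L_1
L = {
    -1: np.array([[0.0, 1.0], [0.0, 0.0]]),
     0: np.array([[0.5, 0.0], [0.0, -0.5]]),
     1: np.array([[0.0, 0.0], [-1.0, 0.0]]),
}

def comm(a, b):
    """Matrix commutator [a, b] = ab - ba."""
    return a @ b - b @ a

# Verify [L_m, L_n] = (m - n) L_{m+n} for every mode pair in range
for m in (-1, 0, 1):
    for n in (-1, 0, 1):
        if -1 <= m + n <= 1:
            assert np.allclose(comm(L[m], L[n]), (m - n) * L[m + n])
print("sl(2,R) mode algebra verified")
```

For SL(3, R), the same check extends to the spin-3 modes W^3_m of the principal embedding, with W^3_m built from symmetrized products of the L's in a 3x3 representation.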
The more general solutions are To see what happens, we choose n = 3 and set a(z) = L_1 + κ_2(z) W^3_2. Then the metric becomes Therefore we find that in this case the higher spin fields do contribute to the AdS pp-wave geometry. For the spin-3 field, the non-vanishing components include φ +++ , φ ++− , and φ +−− . The analysis of the µ = 1 case parallels that of the µ > 1 case. We give the final result as where β ′ + denotes ∂ ρ β + . We can generalize this solution using the same method as from (22) to (25).
In the previous discussion, we have assumed (19) to simplify the constraints (18). However, there are other ways to simplify those constraints.
Here we just give one such way. We can assume that the matrix functions X + , X − , a + , ā + , a − , ā − in (16, 17) are constants, namely, independent of any coordinates. Then the constraints (18) become These equations are independent of the choice of gauge group. If we choose SL(2, R) and assume that we find that there are many solutions which lead to a well-defined metric. For example, one kind of solution is in which there are five undetermined parameters. The corresponding nonvanishing metric components are Note that we have replaced x^{-1}/(µ² − 1) (y^{-1}/(µ² − 1)) by x^{-1} (y^{-1}). Certainly, we may choose the gauge group to be any SL(n, R).
Warped AdS 3 spacetime
In 3D topologically massive gravity, there are other solutions besides the AdS pp-wave solutions discussed in the last section. Among them, the warped AdS 3 spacetimes are of particular interest. These spacetimes can be viewed as U(1) fibrations over AdS 2 with a warping factor, so they have isometry group SL(2, R) × U(1). More interestingly, it has been conjectured that spacelike warped AdS 3 admits a holographic 2D CFT description under appropriate boundary conditions [15]. Therefore it would be very interesting to discuss such spacetimes in our framework.
Since the warped spacetimes have a large isometry group, it is natural to ask whether the Killing symmetry can help in finding the solutions. In the next subsection, we introduce the notion of a special Killing symmetry. In the following subsections we then use it to find the solutions corresponding to the warped spacetimes.
Special Killing Vector
In the usual formulation of gravity, Killing symmetry is important for making a metric ansatz to solve Einstein's equations of motion. A suitable assumption about the symmetry of the solution always simplifies the equations of motion. In the first-order formulation of gravity, it is the frame-like fields that appear in the Lagrangian and the equations of motion, so it is interesting to generalize the notion of Killing symmetry to the frame-like fields. The key observation is that once the condition δ ξ e µ = 0 holds, the metric must satisfy the Killing equation 2. The set of SKVs forms an algebra which is a subalgebra of the whole Killing symmetry algebra.
Actually, since the Lie derivative commutes with the exterior derivative, it follows from Cartan's equation that the torsion-free spin-connection 1-form satisfies provided that ξ is an SKV. Therefore, we have From the equations of motion, it is natural to set The relations (35, 36) are quite restrictive. We have shown that the SKV condition is not local Lorentz invariant. One resolution is to require that δ ξ e µ = 0 holds only in some special frames. Another resolution is to change the definition of the SKV so as to make it local Lorentz invariant. To ensure δ ξ g µν = 0, the constraint δ ξ e µ = 0 may be too strong: actually, δ ξ e µ = [e µ , G(ξ)] is sufficient, where G(ξ) is an arbitrary zero-form depending on the choice of vector. One can easily verify that this modification does not change the first three properties of the SKV, and moreover the concept of an SKV becomes local Lorentz invariant. However, though this modification sounds attractive, it may break the argument leading to (35) and (36). In the following discussion we do not need to take local Lorentz invariance into account, so we still use Eq. (32) as the definition of the SKV.
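The commutativity of the Lie derivative with the exterior derivative invoked above is a direct consequence of Cartan's magic formula: for any vector field ξ and any form ω,

```latex
\mathcal{L}_{\xi}\omega = i_{\xi}\,\mathrm{d}\omega + \mathrm{d}\,(i_{\xi}\omega)
\quad\Longrightarrow\quad
\mathcal{L}_{\xi}(\mathrm{d}\omega)
 = i_{\xi}\,\mathrm{d}^{2}\omega + \mathrm{d}\,(i_{\xi}\,\mathrm{d}\omega)
 = \mathrm{d}\big(i_{\xi}\,\mathrm{d}\omega + \mathrm{d}\,(i_{\xi}\omega)\big)
 = \mathrm{d}\,(\mathcal{L}_{\xi}\omega),
```

using d² = 0 twice. In particular, δ ξ e = 0 implies δ ξ (de) = 0, which is why the torsion-free spin connection inherits the special Killing symmetry.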
Timelike warped AdS 3
For the timelike warped AdS 3 , the isometry is SL(2, R) L × U(1). The generators of SL(2, R) L can be parameterized as which satisfy the commutation relations If we require that the above SL(2, R) L algebra generators are SKVs, then the gauge potentials and the field β should satisfy (35, 36) with ξ being the J i 's, and we find that A = iC 0 dτ + (−i sin τ C 1 + cos τ C 2 )dρ + i(sinh ρ C 0 + cosh ρ(cos τ C 1 − i sin τ C 2 ))dφ, Ā = iC̄ 0 dτ + (−i sin τ C̄ 1 + cos τ C̄ 2 )dρ + i(sinh ρ C̄ 0 + cosh ρ(cos τ C̄ 1 − i sin τ C̄ 2 ))dφ, where C i , C̄ i , B i are constant matrices. Moreover, the equations of motion (4-6) lead to a set of equations It is interesting that this set of equations is independent of the choice of gauge algebra, so the structure underlying these equations should be quite rich. Note that we have nine unknown matrices to be determined and the same number of equations; hence there is the potential for nontrivial solutions beyond the warped AdS 3 solutions, though the structure may prevent us from giving an explicit result. One can also observe that the equations (38) are invariant under an SL(n, R) transformation This is related to the global Lorentz invariance of the SKVs.
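The matrix equations above hold for any gauge algebra, but a concrete realization helps fix conventions. As a sketch (the sign conventions for J 0 , J 1 , J 2 here are an assumption; the paper's appendix fixes its own), one 2×2 realization of sl(2, R) ≅ so(2, 1) with a compact timelike generator J 0 is:

```python
import numpy as np

# An illustrative 2x2 realization of sl(2, R) ~ so(2, 1); the paper's
# appendix conventions for J0, J1, J2 may differ by signs.
J0 = 0.5 * np.array([[0.0, -1.0], [1.0, 0.0]])   # compact (timelike) generator
J1 = 0.5 * np.array([[1.0, 0.0], [0.0, -1.0]])
J2 = 0.5 * np.array([[0.0, 1.0], [1.0, 0.0]])

def comm(a, b):
    return a @ b - b @ a

# Commutation relations in this realization:
# [J0, J1] = J2, [J2, J0] = J1, [J1, J2] = -J0
print(np.allclose(comm(J0, J1), J2))    # True
print(np.allclose(comm(J2, J0), J1))    # True
print(np.allclose(comm(J1, J2), -J0))   # True

# The trace form tr(Ji Jj) is Lorentzian, diag(-1, 1, 1)/2,
# reflecting that J0 is the timelike direction
print(np.trace(J0 @ J0), np.trace(J1 @ J1), np.trace(J2 @ J2))  # -0.5 0.5 0.5
```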
For a general gauge group, the above equations are hard to solve. Even for pure gravity with gauge group SL(2, R), one has to make an ansatz to simplify the equations. It is tempting to take C i , C̄ i , B i proportional to the SL(2, R) generators J i respectively, namely Here the SL(2, R) generators satisfy The definitions of J i , i = 0, 1, 2 are given in the appendix. With this ansatz, the gauge potential and β can be rewritten as where σ 0 ≡ dτ + sinh ρ dφ, σ 1 ≡ − sin τ dρ + cos τ cosh ρ dφ, σ 2 ≡ cos τ dρ + sin τ cosh ρ dφ (42) satisfy the identities The field strengths are now If we further assume a 1 = a 2 , ā 1 = ā 2 , u 1 = u 2 , then we can solve the equations and obtain From the definition of the metric, we find that where we have introduced the parameter ν = µ/3. This is exactly the timelike warped AdS 3 spacetime. It has the SL(2, R) isometry generated by the SKVs, and in addition another Killing symmetry, the translation along τ. Note that the U(1) Killing symmetry generated by ∂ τ is not an SKV; in a sense this symmetry is emergent from our solution.
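The identities satisfied by the one-forms σ 0 , σ 1 , σ 2 in (42) are elided above; a direct computation with the stated components gives dσ 0 = −σ 1 ∧ σ 2 , dσ 1 = σ 2 ∧ σ 0 , dσ 2 = σ 0 ∧ σ 1 . This can be verified symbolically, representing one-forms by their (τ, ρ, φ) components and two-forms by antisymmetric coefficient matrices:

```python
import sympy as sp

tau, rho, phi = sp.symbols('tau rho phi', real=True)
x = [tau, rho, phi]

# one-forms as component vectors [a_tau, a_rho, a_phi]
s0 = [sp.Integer(1), sp.Integer(0), sp.sinh(rho)]
s1 = [sp.Integer(0), -sp.sin(tau), sp.cos(tau) * sp.cosh(rho)]
s2 = [sp.Integer(0), sp.cos(tau), sp.sin(tau) * sp.cosh(rho)]

def d(a):
    # exterior derivative: (da)_{ij} = d_i a_j - d_j a_i
    return sp.Matrix(3, 3, lambda i, j: sp.diff(a[j], x[i]) - sp.diff(a[i], x[j]))

def wedge(a, b):
    # (a ^ b)_{ij} = a_i b_j - a_j b_i
    return sp.Matrix(3, 3, lambda i, j: a[i] * b[j] - a[j] * b[i])

print(sp.simplify(d(s0) + wedge(s1, s2)) == sp.zeros(3, 3))   # d s0 = -s1 ^ s2
print(sp.simplify(d(s1) - wedge(s2, s0)) == sp.zeros(3, 3))   # d s1 =  s2 ^ s0
print(sp.simplify(d(s2) - wedge(s0, s1)) == sp.zeros(3, 3))   # d s2 =  s0 ^ s1
```

These Maurer-Cartan-type relations are what allow the field strengths to close on the same basis of two-forms.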
From the equations of motion (44), it is easy to see the exchange symmetry among a 0 , a 1 , a 2 . This suggests making the ansatz a 0 = a 1 etc. or a 0 = a 2 etc. instead. In the former case we then find another solution At first glance, the metric seems different from the one in (47), but in fact both of them describe the same spacetime.
More generally, all of C i , C̄ i , B i can be combinations of J 0 , J 1 , J 2 . First, we assume that C i , C̄ i , B i have similar forms and are proportional to each other: C̄ i = k i C i and B i = l i C i . With this ansatz, we can fix the coefficients from the equations of motion or or These are very similar to each other, with the subscripts exchanged, and the solutions are consistent with those obtained above. Let us consider the first solution, without loss of generality. Substituting the k i 's and l i 's into the equations, we get We can set C j = a^i_j J i , which gives algebraic equations for the a^i_j. There are only nine unknown numbers, but the equations cannot be solved easily. If we set two of the a^i_j to zero, then according to the equations another two vanish as well, so there are only five unknown numbers left, which can be determined from the equations. For example, if we set a^1_0 = a^2_0 = a^0_1 = a^0_2 = 0, the solutions can be written as follows, with a^1_1 free If we set only one of the a^i_j to zero, for example a^0_0 = 0, the solutions have two free parameters a^1_0 and a^0 where we have defined k = (ν² + 3)/((ν + 3)(ν − 1)) and q = (ν + 3)/(ν − 1). For the timelike case, the solutions are similar up to a change of sign. Even though there are two free parameters in the above solutions, they all lead to the same spacelike warped spacetime, as the two parameters cancel and do not appear in the explicit form of the metric.
We have emphasized before that the matrix equations make sense for a general gauge group. We may take the gauge group to be SL(3, R), whose generators include the SL(2, R) generators L 0,±1 and the additional generators W ±2,±1,0 . Let us first choose C̃ 0 = aL 0 , Then we find the one-parameter class of solutions characterized by c −1 where k, q are defined as before. The metric is then and the spin-3 field vanishes, φ µνρ ≡ 0. This is no surprise, since we have just used the principal embedding of SL(2, R) into SL(3, R). On the other hand, as we know, we can choose another embedding, like C̃ The one-parameter class of solutions characterized by c −2 becomes The corresponding metric becomes This kind of non-principal embedding changes the radius of the warped AdS 3 . So we see that non-principal embeddings exist for the warped spacetime solutions as well.
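The role of the embedding can be illustrated numerically. In the sketch below (one standard choice of matrices, not the paper's explicit solution), both the principal and a non-principal (block-diagonal) embedding of sl(2, R) into sl(3, R) satisfy the same commutation relations, but the trace form tr(L 0 ²), which sets the length scale in the metric, differs by the embedding index. This is the mechanism behind the rescaled warped AdS 3 radius noted above:

```python
import numpy as np

def check_sl2(L0, L1, Lm1):
    """Verify [L0, L1] = -L1, [L0, Lm1] = +Lm1, [L1, Lm1] = 2 L0."""
    c = lambda a, b: a @ b - b @ a
    return (np.allclose(c(L0, L1), -L1)
            and np.allclose(c(L0, Lm1), Lm1)
            and np.allclose(c(L1, Lm1), 2 * L0))

# Principal embedding: sl(2) acts irreducibly on the 3 of sl(3)
L0p = np.diag([1.0, 0.0, -1.0])
L1p = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], dtype=float)
Lm1p = np.array([[0, -2, 0], [0, 0, -2], [0, 0, 0]], dtype=float)

# Non-principal embedding: sl(2) acts as a 2 + 1 block
L0n = np.diag([0.5, -0.5, 0.0])
L1n = np.array([[0, 0, 0], [1, 0, 0], [0, 0, 0]], dtype=float)
Lm1n = np.array([[0, -1, 0], [0, 0, 0], [0, 0, 0]], dtype=float)

print(check_sl2(L0p, L1p, Lm1p), check_sl2(L0n, L1n, Lm1n))  # True True
# The trace form fixes the length scale of the metric; the ratio below is the
# embedding index, so the non-principal solution comes with a rescaled radius.
print(np.trace(L0p @ L0p), np.trace(L0n @ L0n))  # 2.0 0.5
```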
Warped black hole solution
In the 3D TMG theory, there exist warped black hole solutions. They are locally warped spacetimes and can be constructed by quotient identifications of the global warped spacetimes. However, such quotient identifications are of no use in finding the solution in the Chern-Simons-like formulation of TMG. Nevertheless, the singular coordinate transformations between warped black holes and warped spacetimes are still useful for finding the solutions. It turns out that for the spacelike stretched black holes, whose metric is , with θ ∼ θ + 2π, the gauge potentials and β are respectively Unlike in pure AdS 3 gravity, due to the shortage of gauge symmetry, the holonomies of the gauge potential do not give the global charges of the black hole.
Null Solution
For the null warped AdS 3 , the Killing vectors are given by One could take the SL(2, R) generators N ±1 , N 0 as the SKVs; then one obtains with A 0 , G 0 , F 0 being constant matrices. Such a gauge potential cannot lead to the null warped AdS 3 spacetime, though it may give us some other configurations.
It turns out that we should choose the U(1) × U(1) N Killing vectors N 0 and N as the SKVs. As a result, we find that the gauge potential should take the following form where C + , C u − , C − − , C u u , C − u are the constant matrices to be determined. To simplify our discussion, we set C − − and C − u to vanish and denote C u − and C u u as C − and C u respectively; then we get a set of equations from the equations of motion As above, solving the equations of motion, the null solution turns out to be and one finds the corresponding metric to be
Conclusion and Discussion
We have studied the classical solutions of topologically massive gravity and its higher spin generalization in the first-order formulation. We found the AdS pp-wave solution and its higher spin cousins by requiring suitable asymptotic behavior of the Lagrange multiplier β. These AdS pp-wave solutions can receive higher spin modifications, and it would be interesting to study such higher-spin-modified AdS pp-wave spacetimes.
To find the solutions that are not asymptotically AdS 3 , we made ansätze and tried to solve the equations of motion directly without imposing any boundary condition. We introduced the notion of a special Killing vector and applied it to find the solutions. We managed to rediscover the timelike, spacelike, and null warped AdS 3 spacetimes, whose SKVs were taken to be SL(2, R) R , SL(2, R) L , and U(1) × U(1) N respectively. It turned out that the SKVs are powerful enough to fix the ansatz for the gauge potentials and the Lagrange multiplier field. The equations of motion are transformed into a set of matrix equations, which can be solved in an algebraic way. It would be interesting to see whether this set of matrix equations leads to spacetimes beyond warped AdS 3 . As the SKVs form a subalgebra of the isometry algebra, the less restrictive the SKV assumption, the more difficult the matrix equations are to solve.
From our study, we also noticed that there are usually many gauge potentials corresponding to exactly one spacetime. One class of degeneracy resides in the gauge potentials themselves, which can form one- or even two-parameter families of solutions corresponding to the same metric. The other class of degeneracy comes from the fact that different metrics can describe the same spacetime, where the different metrics arise from the intrinsic symmetry of the matrix equations. This poses an interesting question of how to classify the gauge potentials. Notice that though there is no gauge symmetry corresponding to diffeomorphisms, there is a gauge symmetry corresponding to local Lorentz transformations, which relates different gauge potentials to each other.
Moreover, we obtained the spacelike warped AdS black hole through a singular coordinate transformation in our framework. Unfortunately, due to the shortage of gauge symmetry, the holonomy of the gauge potential does not encode the information of the black hole, so it seems hard to read off the global charges of the black hole from the gauge potentials. Similarly, there are many gauge potentials corresponding to the same black hole.
Another remarkable fact is that for the warped AdS 3 spacetime, non-principal embeddings exist as well. In our study, we are free to choose a gauge group other than SL(2, R). We showed that in the case of the SL(3, R) gauge group, it is possible to consider a non-principal embedding, which leads to a spacelike warped AdS 3 with a different radius. This phenomenon occurs for the other warped spacetimes as well.
There are many open questions: • How to classify all the solutions in our framework? This question is two-fold. On one side, we need to find a way to classify the gauge potentials in our framework. On the other side, we would like to know whether it is possible to find all the solutions of HSTMG.
• In our study of the warped spacetimes, the final task is to solve a set of matrix equations. In principle, the equations do not prevent us from considering gauge groups other than SL(2, R). As we have found AdS pp-waves with nonvanishing higher spin fields, it would be nice to see whether there exist nontrivial higher spin warped spacetimes; • It would be interesting to discuss warped black holes with higher spin hair. In our study of the warped spacetimes, we started from the ansatz constrained by the SKVs. This usually gives a global warped spacetime; after a singular coordinate transformation, one then arrives at the corresponding black hole solution. Therefore, once we find a global warped spacetime with nonvanishing higher spin fields, it should be possible to get a warped black hole with higher spin hair. Another relevant question is: once we find black hole solutions with higher spin hair, how do we study their physical properties? It seems that we cannot define higher spin charges. This is one of the most fundamental questions that hinders us in higher spin topologically massive gravity. Though at the linearized level we have shown that HSTMG makes perfect sense [12], we do not know how to deal with it at the non-linear level beyond constructing classical solutions.
• We did not discuss the higher spin perturbations around the warped spacetime. It would be interesting to study this issue.
"Energy and Fundamental Theory" supported by the Special Fund for Theoretical Physics from the National Natural Science Foundations of China with grant no. 10947203 for stimulating discussions and comments.
Evaluating the effects of curcumin nanomicelles on clinical outcome and cellular immune responses in critically ill sepsis patients: A randomized, double-blind, and placebo-controlled trial
Introduction In sepsis, the immune system overreacts to infection, leading to organ dysfunction and death. The purpose of this study was to investigate the impacts of curcumin nanomicelles on clinical outcomes and cellular immune responses in critically ill sepsis patients. Method For 10 days, 40 patients in the intensive care units (ICU) were randomized between the nano curcumin (NC) and placebo groups. We evaluated serum levels of biochemical factors, inflammatory biomarkers, the mRNA expression levels of FOXP3, NLRP3, IFN-γ, and NF-κB genes in the PBMCs, and clinical outcomes before the beginning of the supplementation and on days 5 and 10. Results NLR family pyrin domain containing 3 (NLRP3), interferon gamma (IFN-γ), and nuclear factor kappa-light-chain-enhancer of activated B cells (NF-κB) mRNA expression levels decreased significantly (P = 0.014, P = 0.014, and P = 0.019, respectively), but forkhead box P3 (FOXP3) mRNA expression levels increased significantly (P = 0.008) in the NC group compared to the placebo group after 10 days. NC supplementation decreased serum levels of IL-22, IL-17, and high mobility group box 1 (HMGB1) (P < 0.05). Nevertheless, biochemical factors and nutritional status did not differ significantly (P > 0.05). NC supplementation resulted in decreased sequential organ failure assessment and multiple organ dysfunction syndromes scores, while it did not have significant impacts on length of stay in the ICU, systolic blood pressure, diastolic blood pressure, oxygen saturation (%), respiratory rate (breaths/min), or PaO2/FiO2 (P > 0.05). Conclusion For critically ill patients with sepsis, NC supplementation may be an effective therapeutic strategy. More randomized clinical trials involving longer follow-up periods and different doses are needed to achieve the best results.
Introduction
Sepsis is a complex and severe disorder caused by an overwhelming response of the body's immune system to an infection. It is by far the most serious medical problem related to acute organ dysfunction and carries a high risk of death in the intensive care unit (ICU) (1). The immune system's excessive response leads to increased inflammatory oxidative stress and a rise in organ failure (2, 3). Worldwide, the disease continues to be a leading cause of death (2, 3). An estimated 30 million people are affected by sepsis globally, and this number has increased annually by nine to 13 percent. According to global statistics, sepsis affected 48.9 million people worldwide in 2017 and caused 11.0 million deaths (4).
There are two arms of the human immune system: innate and adaptive (5). The innate immune system responds nonspecifically to infections (6). Adaptive immunity is slower than innate immunity but can recognize unique antigens and establish immunity after repeated exposures. Innate immune cells include basophils, mast cells, eosinophils, natural killer (NK) cells, dendritic cells, macrophages, and neutrophils (7). B and T cells comprise the adaptive immune system responding to pathogens (8). B cells generate antibodies and plasma cells for long-term immunity, whereas T cells, namely gamma delta (γδ), CD8+, CD4+, and regulatory T cells (Tregs), mediate cellular immune responses (9). In addition to activating immune responses, the entry of pathogens into the body leads to the activation of various inflammatory pathways, namely NLRs, HMGB-1, and NF-κB (10). The NLRs play a significant role in recognizing invading bacteria and initiating the innate immune response. Inflammasomes can be activated during sepsis to augment inflammatory responses (11). As a result of NLRP3 inflammasome activation, caspase-1 is activated, producing the pro-inflammatory cytokines IL-18 and IL-1β (12).
On the other hand, the activation of NLRP3 leads to an upregulated NF-κB pathway, which can drive the production of various inflammatory factors. The nuclear protein HMGB1 regulates innate immune responses both intracellularly and extracellularly and is found ubiquitously in almost all cells (13). HMGB1 also functions as an acute-phase cytokine during infection. Serum and tissue HMGB1 levels rise during infection, particularly in sepsis, and play a crucial role in systemic inflammation (14). Moreover, the rise in the serum levels of inflammatory cytokines causes a decrease in the levels of albumin, urea, BUN, bilirubin, and LDH in sepsis patients (15). Impaired nutritional variables (energy intake and serum albumin) are expected to exacerbate clinical outcomes, namely sequential organ failure assessment (SOFA) and multiple organ dysfunction syndromes (MODS) scores, PaO2/FiO2, duration of mechanical ventilation, and duration of ICU stay (16). In addition, sepsis is associated with an excessive reduction of forkhead box P3 (FOXP3) (12). FOXP3 regulates the development and activity of CD25+CD4+ Treg cells, which play an important role in the immune response (12).
Notwithstanding the complexity of sepsis in patients admitted to the ICU, a variety of treatments, such as corticosteroids and broad-spectrum antibiotics, are used today, but efforts to find an effective treatment with minimal side effects are still ongoing (17). Accordingly, various studies indicate that natural immunomodulatory agents might ameliorate bacterial and viral diseases when combined with routine treatment (18). Rolta et al. (19, 20) showed that herbal compounds can be used as an adjunctive treatment in COVID-19. In other studies, Rolta et al. (21) indicated that phytocompounds (emodin, rhein13c6, chrysophanol dimethyl ether, and resveratrol) have antibacterial and antifungal properties.
Curcumin, a hydrophobic polyphenol derived from Curcuma longa rhizomes, is an active component of turmeric. Curcuminoids comprise three components: bisdemethoxycurcumin (10 to 15%), demethoxycurcumin (20 to 27%), and curcumin (60 to 70%) (22). Numerous lines of evidence show that curcumin has many pharmacological and therapeutic activities, including antimicrobial, antioxidant, antiviral, anti-cancer, and anti-inflammatory effects (23). Curcumin works by targeting multiple biochemical pathways, such as reducing lipid peroxidation, increasing the expression of antioxidant-producing genes, attenuating the NF-κB and NLRP3 signaling pathways, and, most importantly, modulating the immune system response (24). Although curcumin has many medicinal benefits, it is unfortunately absorbed in very small amounts due to its low bioavailability and rapid metabolism, a problem exacerbated in patients admitted to the ICU (23). However, effective delivery methods, such as liposomes and nanoparticle formulations, including nonpolar sandwich technology, nano micelles, complexation with phospholipids or piperine, and solid lipid particle formulations, lead to a substantial rise in the absorption of hydrophobic substances such as curcumin (25). Given the proven advantageous impacts of curcumin in cell lines and in models of septic rats, as well as some human studies on sepsis (24), this randomized clinical trial (RCT) aims to evaluate the effects of NC on immune system responses and clinical outcomes in critically ill patients with sepsis.
Study design
We conducted this study on 40 patients hospitalized in the ICUs at Imam Reza and Shohada Hospitals (Tabriz University of Medical Sciences, Tabriz, Iran). Inclusion criteria were critically ill patients receiving enteral nutrition who had been in the ICU for at least 10 days. Exclusion criteria included participants with the following conditions: intestinal ischemia, pancreatitis, intolerance to enteral feeding, short bowel syndrome, pregnancy or lactation, intestinal obstruction, and use of non-steroidal anti-inflammatory drugs (NSAIDs). The study was registered at the Iranian Registry of Clinical Trials (IRCT) website (IRCT20110123005670N7). Nosocomial infection was defined according to the most recent guidelines of the CDC (26).
Randomization and intervention
Patients, in blocks arranged according to gender and age, were randomly divided into placebo or NC groups in a 1:1 ratio using RAS software. Both patients and researchers were blinded to the study allocation. Patients in the supplementation group received routine therapy, namely antibiotics (meropenem, imipenem, ciprofloxacin), together with two 80 mg NC capsules, while the placebo group received routine therapy with a placebo for 10 days. Enteral feeding was administered through a nasogastric tube to all patients from their first 24 h of admission (Karen Company, Tehran, Iran; Table 1). Depending on each patient's metabolic status and weight, the energy requirement was calculated at 25 to 30 kcal/kg. Starting with 25 ml/h of enteral feeding, the rate was increased by 25 ml/h every 4 h until the target rate was reached. In cases where the gastric residual volume exceeded 150 ml, prokinetic agents were administered. The Exir-Nano-Sina company produced the placebo and NC capsules (batch …). A specialist evaluated patients according to inclusion criteria before enrolling them in the study. Nurses, blinded to which capsules contained NC and which contained placebo, administered the NC or placebo capsules, identical in form and size, as a solution through the nasogastric tube every 12 h (9:00 a.m. and 9:00 p.m.), one hour after enteral feeding (to prevent interaction with the contents of the formula). Since the vast majority of patients with sepsis have a low Glasgow Coma Score, in this study we obtained informed consent from first-degree (and legal) relatives, such as the mothers, fathers, sons, or daughters of patients, before entering them into the study.
Laboratory evaluates
Before the intervention and on days 5 and 10, venous blood samples were taken from each patient between 12:00 and 3:00 p.m. The biochemical factors, namely blood urea nitrogen (BUN), albumin, fasting blood sugar (FBS), hemoglobin, indirect bilirubin, direct bilirubin, lactate dehydrogenase (LDH), and total plasma protein, were measured using Abbott ALCYON-350 auto-analyzer kits. The blood samples were centrifuged for 10 min at 2500 rpm (Beckman Avanti J-25, Beckman Coulter, Brea, CA). The serum was stored at −70°C until biochemical assessment. Inflammatory markers (IL-17 and IL-22) were assessed by the enzyme-linked immunosorbent assay (ELISA) method based on dual biotin antibody sandwich technology. In the present study, human IL-17 and IL-22 ELISA kits and an HMGB1 kit made by the Assessment Technology Laboratory (Crystal Day Biotech Co., Ltd., Shanghai, China) were used.
Peripheral blood mononuclear cells and RNA isolation
Whole blood samples were used directly for the isolation of peripheral blood mononuclear cells (PBMCs). PBMCs were separated by density gradient centrifugation using Ficoll-Histopaque solution. Total RNA was isolated from the blood using TRIzol (Sigma-Aldrich, Germany). The quantity and quality of the extracted RNA were determined using a NanoDrop spectrophotometer (NanoDrop One/OneC, Thermo Scientific). We then performed reverse transcription with random hexamer and oligo(dT) primers to convert the total RNA into complementary DNA (cDNA) according to the manufacturer's instructions (BioFact RTase, South Korea). Gel electrophoresis on a 1% agarose gel was used to assess RNA integrity.
Real-time polymerase chain reaction for genes
The mRNA expression levels of FOXP3, NLRP3, IFN-γ, and NF-κB were measured using real-time polymerase chain reaction (RT-PCR) (Sigma-Aldrich, Germany). The manufacturer's instructions were followed for the reverse transcription of 25 ng of total RNA and for constructing complementary DNA using reverse transcription reagent kits (Thermo Scientific, EU). qRT-PCR was performed on a LightCycler 480 instrument (Roche, Germany) in a volume of 10 µl using SYBR Green PCR Master Mix (Sigma-Aldrich, Germany). A three-phase thermal cycling procedure was conducted: phase one (primary denaturation: 95°C for 2 min), phase two (34 to 42 cycles of 30 s at 96°C, 30 s at 63°C, and 30 s at 74°C), and a final phase to form the melt curve (5 min at 74°C). The primers were designed using PrimerBank sequences. The specific primers for the human FOXP3, NLRP3, IFN-γ, NF-κB, and β-actin genes are summarized in Table 2.
Using the 2^−ΔΔCT method for the placebo/post-intervention comparison, the relative expression levels for each gene were calculated, with each reaction performed in triplicate.
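For concreteness, the 2^−ΔΔCT (Livak) calculation works as follows; the Ct values below are purely illustrative (not the study's data), with β-actin as the reference gene as in Table 2:

```python
# Relative gene expression by the 2^(-delta-delta-Ct) (Livak) method.
# All Ct values here are hypothetical, for illustration only.

def ddct_fold_change(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Fold change of a target gene (treated vs. control), normalized to a
    reference gene such as beta-actin."""
    dct_treated = ct_target_treated - ct_ref_treated
    dct_control = ct_target_control - ct_ref_control
    ddct = dct_treated - dct_control
    return 2 ** (-ddct)

# Example: FOXP3 in the NC group vs. placebo (hypothetical triplicate means)
fold = ddct_fold_change(ct_target_treated=26.0, ct_ref_treated=18.0,
                        ct_target_control=28.0, ct_ref_control=19.0)
print(fold)  # 2.0 -> FOXP3 expression doubled relative to placebo
```

A fold change above 1 indicates upregulation relative to the comparison group, below 1 downregulation.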
Statistical analysis
In this study, the sample size was calculated using the standard equation (Pocock) based on means and standard deviations, with a significance level of 5% (α = 0.05), 80% power (β = 0.2), and a 95% confidence interval. According to this method for calculating sample sizes for clinical trials, each group's sample size was 17 individuals (27). In addition, a 20% dropout rate was taken into account, increasing this to 20 people per group. The data were analyzed using SPSS software version 24 (Chicago, IL, USA). The Kolmogorov-Smirnov test was used to assess the normality of continuous variables. Categorical data are presented as frequency (%). Normally distributed data are shown as mean ± standard deviation (SD); non-normally distributed data are shown as median (Q1, Q3). We used Mann-Whitney U, independent t-, and chi-square tests to compare between-group changes (endpoint minus baseline). A paired t-test was used to determine whether there were significant differences between baseline and post-intervention values within groups. Analysis of covariance (ANCOVA) was used to adjust for confounding variables and examine differences between the groups after the intervention.

Figure 1. Study flow diagram.
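The per-group sample size described above follows the standard two-sample comparison-of-means formula n = 2(z_{1−α/2} + z_{1−β})² σ²/δ². The effect size and SD the authors used are not reported here, so the inputs below are illustrative only:

```python
import math
from statistics import NormalDist

def sample_size_two_means(sigma, delta, alpha=0.05, power=0.80):
    """Per-group n to detect a mean difference delta with common SD sigma,
    two-sided test. Standard formula; the inputs are illustrative, not the
    study's actual parameters."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)          # ~0.84 for 80% power
    return math.ceil(2 * (z_a + z_b) ** 2 * sigma ** 2 / delta ** 2)

# e.g. detecting a 1-SD mean difference at alpha = 0.05, 80% power:
print(sample_size_two_means(sigma=1.0, delta=1.0))  # 16 per group
```

The computed n is then inflated for anticipated dropout, as done above (17 per group plus a 20% allowance, giving 20).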
Characteristics of patients participating
In this clinical study, 81 patients were assessed for eligibility. Of these, 41 patients were excluded due to discharge, refusal to participate, or intolerance to enteral nutrition. A total of 20 patients in the NC group and 20 patients in the placebo group participated in the current study, as shown in Figure 1. Table 3 summarizes the demographic data of the participants. The baseline characteristics did not differ significantly between the two groups.
Effect of nano curcumin on nutritional status
Between the NC group and the placebo group, there were no considerable differences in energy intake during the study period, as shown in Table 4.
Effect of nano curcumin on biochemical factors
Table 5 shows the effect of curcumin on biochemical factors during the study stages. Although serum levels of BUN, FBS, albumin, hemoglobin, total bilirubin, direct bilirubin, lactate, and total plasma protein decreased significantly within the NC group, the between-group changes were not statistically significant.
Effect of nano curcumin on clinical outcomes
Table 7 presents the effects of curcumin on clinical outcomes. At the end of the study, the NC group's MODS and SOFA scores had decreased significantly compared to the placebo group (P < 0.05). Furthermore, there were no remarkable differences between the two groups in length of stay in the ICU, systolic blood pressure, diastolic blood pressure, oxygen saturation (%), respiratory rate (breaths/min), or PaO2/FiO2.
Discussion
The present RCT evaluated the effects of NC supplementation on the immune response of septic patients admitted to the ICU. The study revealed that 10 days of NC supplementation significantly reduced serum levels of IL-17 and IL-22 as well as SOFA and MODS scores. It also decreased the mRNA expression of the NLRP-3, NF-κB, HMGB-1, and IFN-γ genes and increased the mRNA expression of FOXP3. As far as we are aware, this is the first study to evaluate the effect of NC on the immune response in patients with sepsis. Sepsis, which is defined as excessive activity of the immune system in response to pathogenic factors, leads to systemic inflammatory response, coagulation disorders, endothelial dysfunction, and immune dysregulation (28), and it is a major cause of death in ICUs (29). We found that NC supplementation significantly decreased serum concentrations of IL-17 and IL-22 in septic patients after 10 days of intervention. Various studies have assessed the effect of curcumin on pro-inflammatory cytokines. A study conducted by Silva et al. (30) showed that treatment of septic rats with 100 mg/kg of curcumin markedly lessened the pro-inflammatory cytokines IL-1β and IL-6. In another study, Djalali et al. (31) reported that 2 months of NC supplementation lowered the serum concentration of IL-17 and its mRNA expression in patients with episodic migraine. There is growing evidence that high concentrations of IL-17 are correlated with a higher risk of sepsis, which could provide a biomarker for the prognosis of sepsis (32). IL-17 interacts with various mediators, namely IL-1β, TNF-α, and IL-22, to exert its pro-inflammatory effect (33). IL-22 also plays a pivotal role in chronic inflammatory diseases and polymicrobial sepsis (34). In a trial conducted by Antiga et al. (27), curcumin (2 g/day) supplementation considerably decreased the serum levels of IL-22 in participants with mild-to-moderate psoriasis vulgaris.
However, unlike the results of our study, curcumin did not significantly affect the serum level of IL-17 (27). This contrary finding might be due to the different underlying diseases of the participants and the distinct forms of curcumin used in the trials.
The inflammatory responses during sepsis might lead to the dysfunction of vital organs, including the lungs, kidneys, heart, and liver, and thus cause MODS (28). The number of organs involved in MODS is positively correlated with sepsis mortality (35). Scores such as SOFA and MODS are used to identify septic patients at higher risk of mortality (29). In the present study, although NC supplementation did not affect the respiratory rate, PaO2/FiO2, or blood pressure, it significantly decreased the SOFA and MODS scores after 10 days. This finding is in line with the results of several animal studies (36, 37). Chen et al. (36) showed that supplementation with curcumin diminished tissue injury and improved survival rates in septic mice. Moreover, the results of another experimental study indicated that curcumin could prevent dysfunction of the kidneys, liver, and small bowel in rats with experimentally induced sepsis (37).

Figure: The effects of the intervention on FOXP3, NLRP3, IFN-γ, and NF-κB expression in the two study groups. (A) Fold change of FOXP3; (B) fold change of NLRP3; (C) fold change of IFN-γ; (D) fold change of NF-κB. Values are mean fold change ± SEM. Data were analyzed using ANCOVA (adjusted for sex, age, type of disease, and baseline values; *p < 0.05 vs. placebo) and repeated-measures ANOVA (**p < 0.05 vs. baseline). P < 0.05 was considered statistically significant. FOXP3, forkhead box P3; NF-κB, nuclear factor kappa B; NLRP3, NLR family pyrin domain containing 3; IFN-γ, interferon gamma.
Additionally, the present study showed that septic patients who received NC for 10 days had significantly lower levels of NLRP-3. This finding agrees with an animal study that reported suppression of NLRP-3 inflammasome activation in mice treated with a curcumin analog (38). Moreover, Gong et al. (39) indicated in an in vivo study that curcumin could decrease the level of IL-1β by inhibiting the activation of NLRP-3. On the contrary, 12 weeks of curcumin supplementation among hemodialysis patients did not substantially influence NLRP-3 mRNA expression (40). This discrepancy might be explained by differences in sample size, the curcumin carrier, and the study populations. NLRP3 is a major component of the innate immune system that is activated by pathogens and drives the release of pro-inflammatory cytokines (41, 42).
Additionally, NC significantly reduced NF-κB expression in septic patients. In an animal study (43), both treatment and pretreatment with curcumin lessened NF-κB activation in the renal tissues of septic rats. Xie et al. (44) also revealed that curcumin exerts its protective effects on lipopolysaccharide (LPS)/D-galactosamine (D-GalN)-induced acute liver injury in rats by up-regulating nuclear Nrf-2 and down-regulating NF-κB. Activated NF-κB is a chief regulator of inflammatory gene expression, including NLRP-3 (Figure 3) (45).
Several mechanisms have been suggested for the effects of curcumin administration on the components of the immune response mentioned above. Curcumin can down-regulate the Th1 and Th17 cell pathways and help modulate T-helper immune responses (27). It is speculated that curcumin restores the Treg/Th17 balance by inhibiting the IL-23/Th17 pathway (46). Another hypothesis is that curcumin down-regulates the expression of IL-22 and IL-17 indirectly by repressing IL-1β and IL-6, owing to their synergistic activities with IL-17 (47). In addition, curcumin blocks the phosphorylation and degradation of IκB, the inhibitor protein of NF-κB, and averts the nuclear translocation of NF-κB (30). By inhibiting the activation of NF-κB, the transcription of genes engaged in the expression of pro-inflammatory cytokines is suppressed (48). Additionally, curcumin increases the expression of peroxisome proliferator-activated receptor gamma (PPARγ), which contributes to the suppression of NF-κB and lowers the release of pro-inflammatory cytokines (49).
The inhibition of the NF-κB pathway can alleviate the severity of MODS (50). Also, myeloperoxidase, which represents

Figure: The effects of curcumin on the immune response pathway. Curcumin can bind directly to MD2 (a protein that appears to associate with toll-like receptor 4 on the cell surface). Curcumin also inhibits LPS-induced activation of the MyD88- and TRIF-dependent TLR4 pathways, resulting in suppression of both IRF3 and NF-κB. Curcumin promotes the expression of the Nrf-2 gene, boosting antioxidant capacity and the synthesis of antioxidant enzymes such as SOD, GPX, and CAT. TBK1, TANK-binding kinase 1; TRAF, TNF receptor-associated factor; TRAM, TRIF-related adaptor molecule; TRIF, TIR-domain-containing adapter-inducing interferon-β; Nrf-2, nuclear factor erythroid 2-related factor 2; p-IκBα, phosphorylated IκBα; ROS, reactive oxygen species; STAT1, signal transducer and activator of transcription 1; TGF-β1, transforming growth factor-β1; TIRAP, toll-interleukin 1 receptor domain-containing adaptor protein; TLR4, toll-like receptor 4; TNF-α, tumor necrosis factor-α.
The mRNA expression level of HMGB1 was considerably reduced after 10 days of NC supplementation in patients with sepsis. During sepsis, high concentrations of HMGB-1 stimulate the production of pro-inflammatory cytokines, which are associated with MODS and mortality (54, 55). In accordance with our study, Ahn et al. (56) reported inhibition of HMGB1 production in endotoxemic mice treated with a Curcuma longa extract-loaded nanoemulsion. In addition, in a study by Kim et al. (57), curcumin inhibited the LPS-mediated release of HMGB-1 by endothelial cells and down-regulated the expression of HMGB-1 receptors. The proposed mechanism for the effect of curcumin on HMGB-1 is the blocking of nitric oxide through suppression of c-Jun N-terminal kinase, which leads to inhibited release of HMGB-1 by macrophages (56).
Another finding of the current study was that the mRNA expression of IFN-γ was considerably reduced after 10 days of NC supplementation. An experimental study by Gao et al. (58) indicated that curcumin treatment markedly suppressed IFN-γ gene expression by splenic T lymphocytes. In addition, Kang et al. (59) demonstrated that in macrophages stimulated with LPS or heat-killed Listeria monocytogenes, pretreatment with curcumin decreased the production of IFN-γ. When concentrations of IFN-γ exceed a particular level in sepsis, resistance to infections is impaired, leading to an increased lethality rate (60, 61). Therefore, suppressing IFN-γ to normal levels is beneficial to the host by preventing bacterial outflow (62). One mechanism by which curcumin reduces IFN-γ is that it decreases CD4+ IFN-γ+ cells and thereby inhibits the Th1 response (63). In addition, curcumin inhibits the Th1 cytokine profile by inhibiting IL-12 production (59).
The other immune response factor assessed in the current study was FOXP3. The results showed that NC supplementation considerably increased the mRNA expression of FOXP3. In line with our finding, Chen et al. (36) reported that curcumin administration elevates the expression of FOXP3 in septic mice compared to mice treated with corn oil. FOXP3, a key regulator of T regulatory (Treg) cell development and function, is expressed on CD4+CD25+ Treg cells (64, 65). In a study by Chai et al. (66), curcumin attenuated acute lung injury in a cecal ligation and puncture-induced mouse model by boosting the differentiation of naïve CD4+ T cells into CD4+CD25+FOXP3+ Tregs. Regarding the mechanism underlying the effect of curcumin on FOXP3, it is suggested that curcumin increases CD4+CD25+FOXP3+ Treg cells (36), which in turn increases the expression of the anti-inflammatory cytokine IL-10 and decreases the proliferative activity of CD4+CD25− T cells (36, 67).
So far as we are aware, no previous research has assessed the effect of NC on immune response among septic patients in ICU. In addition, randomizing the participants minimized the possibility of confounding factors. However, this study is not without limitations. First, the results cannot be generalized since we did not include refractory septic shock patients. Second, a longer supplementation duration and higher NC doses might lead to greater efficacy.
In conclusion, our results indicated that NC supplementation for 10 days in ICU patients with sepsis significantly decreased pro-inflammatory cytokines, MODS and SOFA scores, and the mRNA expression of NF-κB, NLRP-3, and IFN-γ, and increased the expression of FOXP3. Further trials with a longer intervention period and larger sample size are warranted to confirm these findings.
Data availability statement
The original contributions presented in this study are included in the article/supplementary material, further inquiries can be directed to the corresponding author.
Ethics statement
The studies involving human participants were reviewed and approved by Tabriz University of Medical Sciences (IR.TBZMED.REC.1396.762). The patients/participants provided their written informed consent to participate in this study.
Should Cut-Off Values of the Risk of Malignancy Index be Changed for Evaluation of Adnexal Masses in Asian and Pacific Populations?

Ali Yavuzcan 1*, Mete Caglar 1, Emre Ozgu 2, Yusuf Ustun 1, Serdar Dilbaz 1, Ismail Ozdemir 3, Elif Yildiz 1, Tayfun Gungor 2, Selahattin Kumru 1

Abstract

Background: The risk of malignancy index (RMI) for the evaluation of adnexal masses is a sensitive tool in certain populations. The best cut-off value for RMI-1, 2 and 3 is 200. The cut-off value of RMI-4 to differentiate benign from malignant lesions is 450. Our aim was to evaluate the efficiency of four different malignancy indexes (RMI 1-4) in a homogeneous population. Materials and Methods: We evaluated a total of 153 non-pregnant women with adnexal masses who did not have a history of malignancy and who were above 18 years of age. Results: A cut-off value of 250 for RMI-1 provided 95.9% inter-observer agreement, yielding 95.9% specificity, 93.5% negative predictive value, 75.0% sensitivity and 82.8% positive predictive value. A cut-off value of 250 for RMI-1 showed high performance in the preoperative diagnosis of invasive malign

Introduction

It is of particular importance to establish accurate preoperative diagnosis for adnexal masses. Reliable recognition of benign masses would reduce the number of redundant surgeries for asymptomatic benign lesions. Ovarian cancer (OC) is the most fatal of all gynaecologic malignancies in women. Optimal cytoreductive surgery is the most significant prognostic factor in the management of OC (Harlan et al., 2003; Gultekin et al., 2009). In the event of high index of suspicion for ovarian cancer, patients should undergo surgery in tertiary care units where optimal cytoreductive surgery could be performed.

Cancer antigen 125 (CA-125) is a cell surface glycoprotein of 220-kDa molecular weight. Elevated CA-125 levels are found in 80% of non-mucinous epithelial ovarian cancers. A cut-off value of 35 U/mL yields 83.1% sensitivity but low specificity (39.3%) (Benjapibal and Neungton, 2007). Menopausal status provides limited information about the nature of the adnexal masses. Menopausal status yields 55% sensitivity and 80% specificity in differentiating benign from malignant adnexal masses (Aktürk et al., 2011). Ultrasonography (USG) is the most commonly performed imaging modality used to evaluate pelvic pathologies and adnexal masses (Khattak et al., 2013). Hafeez et al. (2013) reported that USG provides 91% diagnostic accuracy in adnexal masses depending on the structure pattern of the mass. However, this high rate applies only to experienced radiologists. Inexperienced physicians attain lower success rates in recognising the mass pattern, and the operator-dependent subjective nature precludes reliable use of this method.
In the past 20 years, various investigators have proposed risk of malignancy indexes (RMIs) to successfully differentiate benign from malignant masses on an objective basis (Jacobs et al., 1990; Tingulstad et al., 1996; Tingulstad et al., 1999; Yamamoto et al., 2009). Four different indexes utilizing CA-125 levels, menopausal status and findings of malignancy on performed USG as the basic variables have yielded a sensitivity ranging from 71-86.8%, and a specificity ranging from 91-96% (Jacobs et al., 1990; Tingulstad et al., 1996; Tingulstad et al., 1999; Yamamoto et al., 2009). On the other hand, some studies indicate that RMI is not a sensitive tool in certain populations, while other studies call for a change in universally accepted cut-off values to differentiate benign from malignant lesions (Ashrafgangooei and Rezaeezadeh, 2011; Ong et al., 2013).
In this study, our aim was to evaluate the efficiency of four different malignancy indexes in a homogeneous population.
Materials and Methods
Medical records of patients who underwent surgery with the pre-diagnosis of adnexal mass in the Department of Obstetrics and Gynaecology in Düzce University Faculty of Medicine and in Ankara Zekai Tahir Burak Training and Research Hospital between November 2009 and May 2013 were retrieved from the hospital records. We evaluated a total of 153 non-pregnant women who did not have a history of malignancy and who were above 18 years of age. All patients were evaluated by USG 2 weeks prior to surgery. All patients provided written informed consent. Surgical staging was performed in accordance with the International Federation of Gynaecology and Obstetrics if the diagnosis from frozen section examination was suggestive of malignancy (Benedet et al., 2000). Invasive malignant neoplasms, metastatic masses and borderline ovarian lesions which did not invade the epithelial basement membrane were considered as malignant adnexal masses (Andersen et al., 2003). All other masses were considered benign lesions. A total of 32 patients (20.9%) appeared to have a malignant lesion and 121 patients (79.1%) had a benign lesion. Histopathological diagnoses of the adnexal masses are presented in Table 1.
In our study, patients were considered postmenopausal in the absence of menstrual flow for the last 1 year. Women above 50 years of age who had undergone hysterectomy and those above 55 years of age who did not remember the date of their last period were also considered postmenopausal (Ashrafgangooei and Rezaeezadeh, 2011). CA-125 levels were determined using electrochemiluminescence immunoassay and expressed in IU/mL. The upper limit of normal for serum CA-125 was set at 30 IU/mL.
Analysis of RMI
The RMI score was calculated by multiplying the transvaginal USG result (U), menopausal status (M) and preoperative CA-125 level (IU/mL). For this calculation, different coefficients were used in each RMI scale (RMI-1, RMI-2 and RMI-3) (Jacobs et al., 1990; Tingulstad et al., 1996; 1999). In RMI-4, the calculation also included mass size (S), measured on transvaginal ultrasonography, as one of the variables (Yamamoto et al., 2009) (Table 2). The total USG score (U) was constructed on the basis of findings on transvaginal USG that would be suspicious for malignancy. These findings included the appearance of multilocular cystic lesions, solid areas, bilaterality, ascites and the presence of intra-abdominal metastasis.
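The coefficients of Table 2 are not reproduced in this excerpt. As an illustration only, the RMI-1 scheme of Jacobs et al. (1990) is commonly cited as U = 0, 1, or 3 (for zero, one, or two to five suspicious ultrasound features), M = 1 for premenopausal and 3 for postmenopausal status, and RMI = U × M × CA-125; the sketch below assumes those values, so it should be checked against the original table before any real use.

```python
# Illustrative RMI-1 calculation (assumed coefficients per the commonly cited
# Jacobs et al., 1990 scheme; RMI-2/3/4 use different coefficients, see Table 2).
# U scores the count of suspicious USG features (multilocular cyst, solid
# areas, bilaterality, ascites, intra-abdominal metastasis); M encodes
# menopausal status; RMI = U * M * CA-125.

def rmi1(n_usg_features: int, postmenopausal: bool, ca125: float) -> float:
    u = 0 if n_usg_features == 0 else (1 if n_usg_features == 1 else 3)
    m = 3 if postmenopausal else 1
    return u * m * ca125

# A postmenopausal woman with two suspicious features and CA-125 of 40 IU/mL:
score = rmi1(2, True, 40.0)
print(score)  # 3 * 3 * 40.0 = 360.0, above the study's proposed cut-off of 250
```

With these assumed coefficients, a patient scoring above the chosen cut-off would be referred for staging surgery, which is exactly the decision the cut-off analysis in this paper calibrates.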
Statistical analysis
Descriptive statistics included mean, standard deviation, minimum and maximum values, median, proportion and frequency. The level of impact was measured using ROC curve analysis. Kappa analysis was used to assess agreement. p values <0.05 were considered statistically significant. SPSS 21.0 statistical software was used in statistical analyses.
Results
Mean age of the study participants was 46.0 ± 11.3 years. Mean size of the adnexal masses was 84.4 ± 39.2 mm. Mean preoperative CA-125 level was 75.8 ± 112.5 IU/mL. Of the patients, 54 (35.3%) were menopausal. General features of the patients are shown in Table 3.
The kappa value for RMI-2 was 0.539 (p = 0.000) with a cut-off value of 200 (Tingulstad et al., 1996). Inter-observer agreement was 87.0% for RMI-2 with a cut-off value of 200, which yielded 85.1% specificity, 98.2% negative predictive value, 75.0% sensitivity, and 57.1% positive predictive value. While evaluating an adnexal mass preoperatively based on RMI-2 in our study, a cut-off value of 350 provided good agreement with histopathological results (kappa: 0.700, p = 0.000). A cut-off value of 350 provided 94.5% inter-observer agreement, yielding 94.2% specificity, 93.4% negative predictive value, 75.0% sensitivity and 77.4% positive predictive value. RMI-2 showed higher performance when the cut-off value was set at 350.
The kappa value for RMI-3 was 0.579 (p = 0.000) with a cut-off value of 200. Inter-observer agreement was 89.0% for RMI-3 with a cut-off value of 200 (Tingulstad et al., 1999), which yielded 87.6% specificity, 93.0% negative predictive value, 71.0% sensitivity, and 61.5% positive predictive value. While evaluating an adnexal mass preoperatively based on RMI-3 in our study, a cut-off value of 250 provided good agreement with histopathological results (kappa: 0.717, p = 0.000). A cut-off value of 250 provided 95.2% inter-observer agreement, yielding 95.0% specificity, 93.2% negative predictive value, 75.0% sensitivity, and 88.0% positive predictive value. In our study, RMI-3 showed the highest performance to diagnose malignant adnexal masses when the cut-off value was set at 250.
Discussion
Early diagnosis is crucial in OC (Ashrafgangooei and Rezaeezadeh, 2011). Early detection of ovarian cancer offers as high as an 80% cure rate, and the mortality rate declines by half (Zalel et al., 1996). There is no screening test for routine use to diagnose OC. CA-125 has a low specificity in early stages of the disease and may also be found elevated in other conditions such as benign ovarian cysts, irregular cycles, and anaemia, which do not require surgical intervention (Cure et al., 2012). CA-125 levels increase with increasing age. Hormone replacement therapy and smoking reduce CA-125 levels in menopausal women (Dehaghani et al., 2007). On the other hand, Alcázar et al. (2013) reported false positive results by non-expert ultrasonography operators to be as high as 12%, even in the presence of findings strongly suggestive of malignancy such as ascites, bilaterality, solid component, septa formation and metastasis, or even if the pattern recognition method has been used.
As the basic components of RMI scales, serum CA-125 levels and positive findings on USG show extensive variability depending on numerous factors, and this seems to affect the reliability of RMIs. In a study conducted in Thailand, Moolthiya et al. (2009) used a cut-off value of 200 and found lower sensitivity rates for RMI-1 and RMI-2 as compared to studies conducted in European countries (Jacobs et al., 1990; Tingulstad et al., 1996; Andersen et al., 2003). The study by Ong et al. (2013) conducted in Singapore yielded 12.5% sensitivity and 84.9-90.1% specificity for RMI 1-3. The results suggest that these values are of no diagnostic value in women of Singapore. In our study, a cut-off value of 200 yielded 90.1% specificity and 75.0% sensitivity. Jacobs et al. (1990) reported 71% sensitivity and 96% specificity when they first used RMI-1. The investigators who proposed the use of RMI-2 reported 92% specificity and 80% sensitivity with a cut-off value of 200 (Tingulstad et al., 1996). According to our findings, a cut-off value of 200 did not show good performance, and yielded 85.1% specificity and 75% sensitivity. The cut-off value of 200 for RMI-3 yielded 71% sensitivity and 92% specificity for differentiation of benign from malignant adnexal masses (Tingulstad et al., 1999). However, the efficacy of the same cut-off value was found to be lower for RMI-3 in our study (87.6% specificity and 75.0% sensitivity). RMI-4, which is thought to possess the highest efficacy, has been advocated to yield 86.8% sensitivity and 91.0% specificity with a cut-off value of 450 (Yamamoto et al., 2009). However, in our study, we found higher specificity (91.0%) but lower sensitivity (75.0%) for RMI-4 as compared to previous reports. Many studies evaluating RMI scales in Asian and Pacific countries have reported different cut-off values compared to those originally reported by the investigators who proposed these indexes in the first place (Lou et al., 2010; Ashrafgangooei and Rezaeezadeh, 2011; Bouzari et al., 2011). On the other hand, according to the report by van den Akker et al. from Holland, a cut-off value of 200 for RMI-3 and 450 for RMI-4 showed the best performance and yielded success rates similar to those reported by the original investigators (Tingulstad et al., 1999; Yamamoto et al., 2009; van den Akker et al., 2011). In England, Bailey et al. (2006) reported 88.5% sensitivity for RMI with a cut-off value of 200. This finding was similar to that found in other European studies (Jacobs et al., 1990; Tingulstad et al., 1996; 1999). However, we found cut-off values for RMI 1-4 different from those of the other studies. We used a cut-off value of 250 for RMI-1 and 3, 350 for RMI-2, and 400 for RMI-4. With these cut-off values, specificity ranged from 94.3-95.9% and sensitivity was 75%. A cut-off value of 400 or 450 for RMI-4 does not produce a significant difference in terms of efficiency. However, the new cut-off values set in our study for RMI 1-3 yielded better PPV and NPV. When a cut-off value is set at 250 for RMI-1 and RMI-3, a patient with the pre-diagnosis of OC is more likely to be diagnosed with OC during surgery. Besides, these new cut-off values would reduce the number of redundant surgeries in asymptomatic patients with a benign adnexal mass. Similar to our study, Ashrafgangooei and Rezaeezadeh (2011) reported a cut-off value of 238 for RMI-1 to be performing better in their population. Likewise, Bouzari et al. (2011) reported a cut-off value of 265 for RMI-1 and 3, and 355 for RMI-2 in their study conducted in Iran, which is Turkey's neighbouring country.
Table 2. Coefficients in RMI Indexes
In this study, we showed successful utilization of RMIs in preoperative differentiation of benign from malignant masses. Many studies conducted in Asian and Pacific countries have reported different cut-off values, as was the case in our study. We think that it is difficult to determine universally accepted cut-off values for RMIs for common use around the globe.
Figure 1. Comparison of A) new cut-off values for RMI 1-4 in this study with B) traditional cut-off values
Analysis of factors affecting hemorrhagic diathesis and overall survival in patients with acute promyelocytic leukemia
Background/Aims: This study investigated whether patients with acute promyelocytic leukemia (APL) truly fulfill the diagnostic criteria of overt disseminated intravascular coagulation (DIC), as proposed by the International Society on Thrombosis and Haemostasis (ISTH) and the Korean Society on Thrombosis and Hemostasis (KSTH), and analyzed which component of the criteria most contributes to bleeding diathesis. Methods: A single-center retrospective analysis was conducted on newly diagnosed APL patients between January 1995 and May 2012. Results: A total of 46 newly diagnosed APL patients were analyzed. Of these, 27 patients (58.7%) showed initial bleeding. The median number of points per patient fulfilling the diagnostic criteria of overt DIC by the ISTH and the KSTH was 5 (range, 1 to 7) and 3 (range, 1 to 4), respectively. At diagnosis of APL, 22 patients (47.8%) fulfilled the overt DIC diagnostic criteria by either the ISTH or KSTH. In multivariate analysis of the ISTH or KSTH diagnostic criteria for overt DIC, the initial fibrinogen level was the only statistically significant factor associated with initial bleeding (p = 0.035), but it was not associated with overall survival (OS). Conclusions: Initial fibrinogen level is associated with initial presentation of bleeding of APL patients, but does not affect OS.
INTRODUCTION
Acute promyelocytic leukemia (APL) is a distinct subtype of acute myeloid leukemia (AML). It is classified as an aggressive form of AML with the chromosomal translocation t(15;17)(q22;q12) occurring in myeloid cells, according to the World Health Organization classification [1]. This balanced translocation results in fusion between the retinoic acid receptor α gene (RARA) on chromosome 17q12 and a nuclear regulatory factor gene (promyelocytic leukemia or PML gene) on chromosome 15. The PML-RARA fusion gene produces a chimeric protein that arrests maturation of myeloid cells at the promyelocytic stage [2].
The current standards of induction therapy with simultaneous all-trans retinoic acid (ATRA) and anthracycline-based chemotherapy yield a complete remission rate of 90% to 95% [3], and the cure rate of APL is approximately 80% to 90% [4].
Bleeding in patients with APL can appear in various forms, such as widespread bruising, petechiae, mucus membrane bleeding, central nervous system bleeding, pulmonary hemorrhage, gastrointestinal hemorrhage, and excessive blood loss from sites of trauma [5]. Coagulopathy causing such bleeding is life threatening, and is the leading cause of death for patients with APL. An approximately 10% early death rate has been reported in cooperative group clinical trials; however, it appears to be nearly twice as high in population-based studies [6]. A retrospective analysis of 134 Brazilian patients with APL reported a death rate of 32% during induction, with the majority of deaths (60.5%) due to hemorrhage [7]. Therefore, immediate treatment with ATRA should be initiated in suspected APL cases, even before a definitive diagnosis can be made. After administration of ATRA, APL has a high cure rate and coagulopathy typically improves after 5 to 7 days of treatment [8]. Similar to classical disseminated intravascular coagulation (DIC), APL-associated coagulopathy is characterized by activation of a coagulation cascade leading to thrombus formation, hypoperfusion, and bleeding due to widespread consumption of platelets and clotting factors [5]. In addition, fibrinolysis occurs secondary to DIC due to the release of proteolytic enzyme granules from APL blasts, resulting in thrombosis, hyperfibrinolysis, and coagulopathy [9].
The International Society on Thrombosis and Haemostasis (ISTH) and the Korean Society on Thrombosis and Hemostasis (KSTH) DIC scoring system provides objective measurement of DIC [10]. The concordance rate between the two diagnostic systems is 84.7%. When DIC occurs, the scoring system correlates with key clinical observations and outcomes [11]. However, overt DIC criteria have not been established for APL patients. Although bleeding diathesis is not always solely related to DIC, the diagnostic criteria of overt DIC are used to determine which patients have bleeding tendencies that can be prevented by supportive care.
This study investigated whether Korean APL patients truly fulfill the diagnostic criteria of overt DIC proposed by the ISTH and the KSTH, and analyzed which component of the criteria most contributes to bleeding diathesis.
Patients and samples
A retrospective analysis was conducted on 46 newly diagnosed APL patients at Dong-A University Medical Center in Busan, South Korea, between January 1995 and May 2012. All of the patients were treated with ATRA alone or ATRA plus anthracycline for induction therapy. The study was approved by the Dong-A University Medical Center Institutional Review Board.
Diagnosis of APL
AML was diagnosed based on bone marrow biopsy and aspiration findings, flow cytometry, cytogenetic analyses, and molecular genetics analyses. A blast count of 20% from bone marrow aspirate or peripheral blood was diagnostic for AML. Cell surface markers identified by flow cytometry included CD13, CD33, and/or CD34, which are found on normal immature myeloid cells. We also routinely tested for specific cytogenetic and molecular genetic abnormalities. APL was diagnosed when APL morphology was observed and the presence of t(15;17) or the PML-RARA hybrid gene was confirmed by cytogenetic or molecular analysis, respectively.
DIC score
DIC scores of patients were calculated based on both the ISTH and KSTH scoring systems (Table 1).
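Table 1 is not reproduced in this extract, but the ISTH overt DIC algorithm sums points for platelet count, a fibrin-related marker (e.g., D-dimer), prothrombin time (PT) prolongation, and fibrinogen, with a total of 5 or more compatible with overt DIC. As an illustration only (the thresholds follow the published ISTH algorithm; the function and its argument names are ours, not the paper's, and the KSTH system — which, per the text, uses a fibrinogen cutoff of 150 mg/dL — is not shown), a minimal sketch:

```python
def isth_dic_score(platelets, fibrin_marker, pt_prolong_sec, fibrinogen_mg_dl):
    """Compute the ISTH overt DIC score.

    platelets: platelet count in x10^9/L
    fibrin_marker: 'none', 'moderate', or 'strong' increase (e.g., D-dimer)
    pt_prolong_sec: PT prolongation in seconds over the upper normal limit
    fibrinogen_mg_dl: fibrinogen level in mg/dL
    Returns (total score, overt-DIC flag); a score >= 5 is compatible
    with overt DIC.
    """
    score = 0
    # Platelet count: >100 -> 0 points, 50-100 -> 1, <50 -> 2
    if platelets < 50:
        score += 2
    elif platelets < 100:
        score += 1
    # Fibrin-related marker: no increase 0, moderate 2, strong 3
    score += {'none': 0, 'moderate': 2, 'strong': 3}[fibrin_marker]
    # PT prolongation: <3 s -> 0, 3-6 s -> 1, >6 s -> 2
    if pt_prolong_sec > 6:
        score += 2
    elif pt_prolong_sec >= 3:
        score += 1
    # Fibrinogen: >=100 mg/dL -> 0, <100 mg/dL -> 1
    if fibrinogen_mg_dl < 100:
        score += 1
    return score, score >= 5

# A hypothetical APL-like presentation: low platelets, strong D-dimer
# increase, mildly prolonged PT, low fibrinogen -> 2+3+1+1 = 7 points.
print(isth_dic_score(40, 'strong', 4, 90))  # (7, True)
```

This illustrates why, as reported below, fibrinogen contributes at most one point to the total even though it was the component most associated with bleeding in this cohort.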
Statistical analysis
Patient characteristics were summarized using descriptive statistics. The associations between initial bleeding and continuous or categorical variables were analyzed by Student's t test or the chi-square test, respectively. Logistic regression was used to analyze the factors associated with initial bleeding. Survival analyses were performed using Kaplan-Meier estimates and log-rank tests. The Cox proportional hazards regression model was also employed in both univariate and multivariate analyses of overall survival (OS). p values less than 0.05 were considered statistically significant. All statistical tests were performed using SPSS version 20.0 (IBM Co., Armonk, NY, USA).
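The Kaplan-Meier estimate used for the survival analyses is the product-limit estimator: at each observed event time, the running survival probability is multiplied by (1 − deaths/number at risk), with censored patients simply leaving the risk set. A minimal stdlib-only sketch on hypothetical follow-up times (the study itself used SPSS; these numbers are not patient data):

```python
def kaplan_meier(times, events):
    """Product-limit (Kaplan-Meier) survival estimator.

    times: follow-up times; events: 1 = death observed, 0 = censored.
    Returns a list of (event time, survival probability) pairs.
    """
    data = sorted(zip(times, events))
    n = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < n:
        t = data[i][0]
        at_risk = n - i          # patients still under observation at time t
        deaths = 0
        while i < n and data[i][0] == t:   # group ties at the same time
            deaths += data[i][1]
            i += 1
        if deaths:
            surv *= 1.0 - deaths / at_risk
            curve.append((t, surv))
    return curve

# Hypothetical: deaths at months 1, 2, 4; censoring at months 3 and 5.
print(kaplan_meier([1, 2, 3, 4, 5], [1, 1, 0, 1, 0]))
```

Note how the censored observation at month 3 reduces the risk set for the month-4 event (2 at risk) without itself dropping the curve.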
RESULTS
There were 27 patients (58.7%) who exhibited initial bleeding. Gum bleeding was the most common manifestation (nine cases), followed by petechiae or easy bruising (eight cases), vaginal bleeding (seven cases), epistaxis (two cases), and melena (one case). The median number of points per patient fulfilling the diagnostic criteria of overt DIC by the ISTH and the KSTH was 5 (range, 1 to 7) and 3 (range, 1 to 4), respectively. In total, 22 patients (47.8%) fulfilled the overt DIC diagnostic criteria of either the ISTH or the KSTH at the diagnosis of APL. Fulfilling the diagnostic criteria of overt DIC by the KSTH was significantly associated with bleeding at initial presentation (p = 0.008). Multivariate analysis revealed that fibrinogen level was the only statistically significant factor associated with initial bleeding (p = 0.035) (Table 3). Early hemorrhagic death (within the first 14 days of treatment) occurred in six patients (6/46, 13%), including four patients with intracranial hemorrhage and two patients with pulmonary hemorrhage. The mortality rate during remission induction treatment (including voluntary cessation of treatment) was 23.9% (11/46). Causes of death other than fatal bleeding included sepsis (three cases), uncontrolled ATRA syndrome (one case), and unknown cause due to early voluntary discharge (one case).
The median follow-up duration was 22.6 months, and the median OS of the analyzed patients was 122.6 months. The 2- and 5-year survival rates were 69.1% and 60.8%, respectively (Fig. 1). Univariate and multivariate analyses revealed that the factors making up the ISTH and KSTH diagnostic criteria of overt DIC did not significantly affect OS. There were no differences in OS between patients who fulfilled the diagnostic criteria of overt DIC (by either the ISTH or KSTH) and those without overt DIC (p = 0.188 and p = 0.334, respectively). There were no differences in OS according to initial bleeding (p = 0.102) (Fig. 2). In addition, initial fibrinogen level grouped by the ISTH criterion (< 100 or ≥ 100 mg/dL) and by the KSTH criterion (< 150 or ≥ 150 mg/dL) did not affect OS (p = 0.177 and p = 0.334, respectively). Interestingly, OS markedly improved in patients diagnosed after 2005 (Fig. 3).
DISCUSSION
APL differs from other subtypes of AML in that it typically presents with a life-threatening hemorrhagic diathesis. The clinical and laboratory features of the coagulopathy are useful for the diagnosis of DIC [8,12]. However, there have been no reports on whether the laboratory results of APL patients at initial diagnosis fulfill the diagnostic criteria of overt DIC proposed by the ISTH in 2001. The KSTH proposed diagnostic criteria for overt DIC, and high agreement between the ISTH and the KSTH criteria has been reported [10]. Only 47.8% of the patients in this study fulfilled the diagnostic criteria of overt DIC by either the ISTH or KSTH. There were no cases of thrombosis, and the only factor significantly associated with initial bleeding was the initial fibrinogen level. These findings suggest that describing the coagulopathy of APL as DIC may require caution, and prospective studies are needed. The consensual definition of DIC proposed by the ISTH is as follows: "DIC is an acquired syndrome characterized by the intravascular activation of coagulation with loss of localization arising from different causes. It can originate from and cause damage to the microvasculature, which if sufficiently severe, can produce organ dysfunction" [13]. In APL, coagulopathy triggered by release of proteolytic enzyme granules from APL blasts not only damages the organ microvasculature but also induces bleeding. Since the early 1970s, the clotting abnormalities of APL have been ascribed to DIC; thus, it seemed logical to propose heparin to control intravascular clotting, with subsequent use of hemostatic factors [12]. However, the beneficial effects of heparin or antifibrinolytic agents have never been proven by prospective randomized trials. According to the PETHEMA leucemia promielocitica aguda (LPA) 99 trial, use of systemic tranexamic acid for the prevention of hemorrhage did not decrease hemorrhagic mortality.
However, there was a trend towards a higher incidence of thrombosis [14]. Thrombotic complications, in many cases fatal, have also been reported; however, as these are less well-recognized features of APL, their incidence may be underestimated [9]. In this study, multivariate analysis revealed that the initial fibrinogen level was the only factor associated with initial bleeding. These findings suggest that initial bleeding in APL patients may not be caused by overt DIC. However, fulfillment of the overt DIC diagnostic criteria may help to predict bleeding tendency and the need for more aggressive prophylaxis against bleeding in APL patients with fibrinogen less than 150 mg/dL, even without hemorrhage at presentation. Generally, platelet, fresh frozen plasma, and cryoprecipitate transfusions are needed to manage APL-associated coagulopathy. The results of this study suggest that maintaining sufficient fibrinogen levels is just as important as maintaining platelet levels.
Hemorrhagic complications are associated with high rates of morbidity and are the leading cause of death in APL, particularly at presentation [9,14-16]. However, deterioration of coagulation parameters and major bleeding during induction therapy are of critical importance and significantly affect initial mortality. Yanada et al. [16] reported that aggressive transfusion on the day of bleeding achieved the targeted platelet count (30 × 10⁹/L) and fibrinogen level (150 mg/dL) in only 71% and 40% of APL patients, respectively. The authors suggested that a more intensive transfusion policy could be beneficial for patients at high risk of hemorrhage, and showed that patients who did not experience hemorrhagic complications had an excellent long-term outcome. Our data showed no correlation between initial bleeding and OS (p = 0.102). In addition, none of the individual coagulation parameters making up the diagnostic criteria of overt DIC proposed by the ISTH and KSTH, including fibrinogen, significantly affected OS. Additional prospective studies with larger numbers of patients are warranted to confirm whether fulfilling the diagnostic criteria of overt DIC affects OS in patients with APL.
In total, 13% of the analyzed patients died due to fatal bleeding within the first 14 days of remission induction treatment. In APL, the major cause of treatment failure is death during induction therapy; this has ranged from 5% to 10% in recent multicenter trials, and most deaths have been the result of hemorrhage, infection, or differentiation syndrome [17]. Our data showed that the mortality rate during induction therapy was 23.9%. This is considerably higher than the results from trials conducted in Europe and the United States and may be due to a lack of intensive supportive care. Transfusions, antibiotics, and/or antifungal agents are important components of supportive care in acute leukemia. From 2005 onwards, supportive care for APL patients was intensified at our institution with aggressive transfusion and antifungal strategies. As a result, OS markedly improved after 2005 (Fig. 3).
This study had some limitations, including its retrospective nature and inclusion of patients diagnosed many years ago (from 1995 onwards). In addition, the study population was small because APL has a relatively low incidence and lower prevalence than other types of AML, and the study population originated from a single medical center. Nevertheless, our results represent novel data on the applicability of overt DIC criteria proposed by the ISTH and KSTH for the diagnosis of bleeding tendency in APL, and provide an evaluation of the impact of each parameter on initial bleeding and OS.
In conclusion, the initial fibrinogen level was the only factor among the diagnostic criteria of overt DIC contributing to bleeding at presentation in APL patients; the remaining diagnostic criteria may not contribute to the manifestation of initial bleeding.
Conflict of interest
No potential conflict of interest relevant to this article was reported.
KEY MESSAGE
1. The initial fibrinogen level in newly diagnosed patients with acute promyelocytic leukemia (APL) was the only contributing factor among the diagnostic criteria of overt disseminated intravascular coagulation for bleeding presentation.
2. Maintaining sufficient fibrinogen levels is as important as maintaining platelet levels for preventing hemorrhagic complications in patients with newly diagnosed APL.
"year": 2015,
"sha1": "c39c5ff1f00acc4cac56be148ba2d02b0e4c36d8",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.3904/kjim.2015.30.6.884",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c39c5ff1f00acc4cac56be148ba2d02b0e4c36d8",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Isotonic Glycerol and Sodium Hyaluronate Containing Artificial Tear Decreases Conjunctivochalasis after One and Three Months: A Self-Controlled, Unmasked Study
Dry eye complaints are ranked as the most frequent symptoms of patients visiting ophthalmologists. Conjunctivochalasis is a common dry eye disorder, which can cause an unstable tear film and ocular discomfort. Severe conjunctivochalasis, characterized by a high LId-Parallel COnjunctival Folds (LIPCOF) degree, usually requires surgical intervention, so a conservative therapy would be highly desirable. Here we examined the efficacy of a preservative-free, inorganic salt-free, unit-dose artificial tear called Conheal, containing isotonic glycerol and 0.015% sodium hyaluronate, in a prospective, unmasked, self-controlled study involving 20 patients. Regular use of the glycerol/hyaluronate artificial tear over three months caused a significant improvement in the recorded parameters. Conjunctivochalasis decreased from a mean LIPCOF degree of 2.9±0.4 on both eyes to 1.4±0.6 on the right eye (median decrease of -2 points, 95% CI from -2.0 to -1.0) and to 1.4±0.7 on the left eye (median decrease of -1 point, 95% CI from -2.0 to -1.0) (p<0.001 for both sides). The tear film breakup time (TFBUT) lengthened from 4.8±1.9 seconds on both eyes to 5.9±2.3 seconds (mean increase of 1.1 seconds, 95% CI from 0.2 to 2.0) and 5.7±1.8 seconds (mean increase of 0.9 seconds, 95% CI from 0.3 to 1.5) on the right and left eyes, respectively (p(right eyes) = 0.020, p(left eyes) = 0.004). The corneal lissamine staining (Oxford Scheme grade) was reduced significantly (p<0.001) from 1.3±0.6 on the right and 1.4±0.6 on the left eye to 0.3±0.4 and 0.2±0.4, respectively. The Ocular Surface Disease Index (OSDI) questionnaire score indicating the subjective complaints of the patients also decreased, from a mean value of 36.2±25.3 to 15.6±16.7 (p<0.001).
In this study, the artificial tear Conheal decreased the grade of conjunctivochalasis significantly after just one month of regular use, from LIPCOF degree 3, considered an indication for conjunctival surgery, to LIPCOF degree 2 or lower, requiring only conservative therapy. Our results raise the possibility that vision-related quality of life can be significantly improved by conservative therapies even in severe conjunctivochalasis. Trial Registration: Controlled-Trials.com ISRCTN81112701 (http://www.isrctn.com/ISRCTN81112701)
Introduction
Dry eye complaints occur in 5.5 to 33.7% of the population [1] and are ranked as the most frequent symptoms of patients visiting ophthalmologists [2-5]. Dry eye disease is caused by the reduced production and/or the improper quality of the tear film. Evolution of dry eye disease may involve chronic inflammation of the ocular surface and may lead to changes in the ocular surface. These changes may include injuries of the conjunctival and corneal epithelium, sagging of the conjunctiva, as well as the appearance of LId-Parallel COnjunctival Folds (LIPCOF). LIPCOF grading measures the number and severity of the lid-parallel conjunctival folds: LIPCOF degree 0 means the lack of conjunctival folds, LIPCOF 1 signifies just one conjunctival fold, LIPCOF 2 stands for multiple conjunctival folds not extending beyond the tear meniscus, and LIPCOF 3 represents multiple conjunctival folds extending beyond the tear meniscus. The LIPCOF degree correlates with the subjective dry eye symptoms [4,6].
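The grading rule above maps directly to a small decision function. As a toy sketch (the function and parameter names are ours, not part of the published LIPCOF scheme, which is assessed at the slit lamp):

```python
def lipcof_grade(n_folds, exceeds_tear_meniscus=False):
    """Map lid-parallel conjunctival fold findings to a LIPCOF degree.

    n_folds: number of permanent lid-parallel conjunctival folds observed.
    exceeds_tear_meniscus: True if the folds extend beyond the tear
    meniscus (only meaningful when multiple folds are present).
    """
    if n_folds == 0:
        return 0                 # no conjunctival folds
    if n_folds == 1:
        return 1                 # a single fold
    # multiple folds: degree depends on their height vs. the tear meniscus
    return 3 if exceeds_tear_meniscus else 2

print(lipcof_grade(0), lipcof_grade(1), lipcof_grade(2),
      lipcof_grade(2, exceeds_tear_meniscus=True))  # 0 1 2 3
```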
Severe conjunctivochalasis, characterized by a high LIPCOF degree, is also considered a cause of dry eye disease and not only its consequence [7,8]. The LIPCOF degree shows a strong correlation with both the subjective and objective complaints of the dry eye syndrome, as well as with the severity of the disease [9]. For all these reasons, the assessment of LIPCOF degree changes was by itself of great significance as the primary outcome measure of our study. However, tests studying the objective and subjective symptoms and functions of the ocular surface should be evaluated together [10]. Therefore, in the course of our study, besides the changes in the LIPCOF degree, we assessed the tear film breakup time (TFBUT) and the Oxford Scheme corneal staining [11] as well. The subjective complaints of our patients were recorded by the OSDI (Ocular Surface Disease Index), which is considered the method easiest to follow [12]. These were the secondary outcome measures of our study.
In this examination we assessed the effects of a unit-dose packaged, preservative-free, inorganic salt-free, isotonic glycerol- and 0.015% sodium hyaluronate-containing artificial tear, with special respect to its effect on conjunctivochalasis. A preliminary report on the results of the one-month treatments of the current study was published in Hungarian [13]. Our current results, including both one- and three-month treatments, showed that conjunctivochalasis of LIPCOF degree 3, which is generally considered an indication for surgery [14], can be reduced with the help of a conservative therapy to LIPCOF degree 2 or lower, not requiring invasive therapy.
Patients and Methods

Patients
Twenty adult patients from the general outpatient unit of the Department of Ophthalmology of Semmelweis University were enrolled between 27th August 2012 and 24th July 2013 into our prospective study, approved by the Hungarian Scientific and Research-Ethics Committee (permission No. 21455-1/2011-EKU, given on 7th December 2011). The research followed the tenets of the Declaration of Helsinki. All participants gave their written informed consent to the examination. The trial was registered in the ISRCTN database (registration number: ISRCTN81112701) after the completion of the study, since trial database submission is not compulsory in Hungary before starting such a single-center study involving only a few patients. The authors confirm that all ongoing and related trials for this drug are registered. Sixteen female and 4 male patients participated in the study, with a mean age of 64.0 ± 17.8 years (between 25 and 85 years of age). The complete date range of participant recruitment was between 27th August 2012 and 24th July 2013, and the follow-up studies were performed between 30th August 2012 and 4th November 2013. The number of patients was reduced from the originally planned 40 to 20, since power analysis of the first cohort of patients after a one-month treatment showed a sufficient change of LIPCOF degree even after this short period. The examinations were finished at the Mária street section of the Department of Ophthalmology of Semmelweis University, since the Tömő street section, mentioned as the study location in the study protocol, was moved to the Mária street section at the end of the study. The study was open-label; both patients and examiners knew the content of the eye drops.
Patient inclusion criteria were severe conjunctivochalasis (LIPCOF degree 2 or higher) and lissamine green staining of grade 1 or higher on the Oxford Scheme, indicating a more advanced dry eye disease. In the inclusion criteria we applied a more stringent criterion than the LIPCOF degree of 1 or higher given in the study protocol, since we were interested in whether the eye drops are effective even in severe conjunctivochalasis. In this severe condition the examination of the personal satisfaction rate was omitted to reduce the examination time and patient stress, and the examination of subjective symptoms was reduced to the self-completed OSDI questionnaire. Exclusion criteria included pregnancy or lactation, pterygium, prolonged treatment with eye drops other than artificial tears, active allergic keratoconjunctivitis, current keratitis or conjunctivitis of infectious origin, surgery affecting the eye surface, as well as eye injuries occurring within 3 months before starting the treatment. There were no patients with tearing symptoms, including punctal occlusion cases, in this study. The patients had already used commercially available artificial tears regularly (17 out of the 20 for many months or years, 3 out of the 20 for a few weeks) before entering the study. Enrolled patients stopped the use of their earlier artificial tears 3 days before the first visit. At the first visit, inclusion criteria were re-checked, and objective and subjective symptoms were recorded. No significant changes in either the objective or the subjective symptoms occurred during the 3-day wash-out period. A related former study [15] using a hyaluronic acid-containing artificial tear applied a 1-day wash-out period, which gives further support to our observation that 3 days are enough to overcome the potential delayed effects of artificial tears.
Description of treatments and examinations
Although all of our dry eye patients had been continuously using artificial tears prior to our study, they still had subjective symptoms, and their objective dry eye signs reached the advanced stage of LIPCOF 3. Since each patient reached conjunctivochalasis LIPCOF grade 3 in spite of the use of commercially available artificial tears, each patient served as his/her own control during our study.
At the first visit the required number of unit doses of the preservative-free, inorganic salt-free artificial tear Conheal (provided by Pannonpharma Ltd., Pécsvárad, Hungary), containing isotonic glycerol and 0.015% hyaluronic acid in purified water as described in reference [16], was given to our patients. Patients were instructed to apply the artificial tears to both eyes four times a day during the three months of the study. Due to the prior use of artificial tears and the detailed discussion of the study at the first visit, as well as the lack of serious adverse events with the eye drops used, patient compliance was very high throughout the whole study, as checked by discussions during the one-month and three-month visits.
At the first visit, the best corrected visual acuity, the grade of conjunctivochalasis, and the tear film breakup time (TFBUT) were determined on both eyes, as was the extent of the epithelial damage of the cornea and conjunctiva using lissamine green staining. The subjective complaints of the patients, as well as the impact of the dry eye complaints on their everyday life, were recorded with the help of the OSDI questionnaire [12]. Patients were asked to self-complete the OSDI questionnaire translated into Hungarian after receiving general instructions. The Schirmer tests and tear osmolality measurements planned in the study protocol were not performed, to reduce the invasiveness of the study and the stress on patients having the advanced condition of severe conjunctivochalasis. After one and three months of regular use of the Conheal artificial tear, our patients were subjected to the same examinations.
The severity of the conjunctivochalasis was determined in terms of LIPCOF degrees according to the Höh method [17]. The TFBUT was measured by a standard method [18] using fluorescein. We opted for this standard method instead of the Tearscope examination planned in the original protocol, due to sudden damage to our Tearscope apparatus at the start of the study causing inconsistency in its measurements, and since fluorescein staining of the eye was planned in the study protocol for assessing epithelial damage besides lissamine green staining. The lissamine green staining was evaluated according to the Oxford Scheme grade [11].
To increase the validity of the measurements, all measurements were performed by the same person during the whole study. The investigator was not aware of the stage of the patient when performing the analysis. Measurements were supervised by an independent expert in a randomly selected 10% of the cases. Both the investigator and the independent expert had a Good Clinical Practice Certificate. LIPCOF degree measurements were performed on the same slit lamp, with the same position and slit width, throughout the whole study.
Statistical evaluation
The results of the above tests recorded at the first visit were compared to the results of the same examinations after one and three months of treatment. Additionally, results after one month of treatment were compared to the results after three months of use of the artificial tears. For the comparison of ordinal data (LIPCOF degree, Oxford Scheme grade) and non-normally distributed data (OSDI), the non-parametric Wilcoxon Signed Rank Test was used, while the normally distributed data (TFBUT) were compared with the help of the parametric Paired T Test, using the SPSS Statistics 22 software (IBM Corporation, Armonk, NY, USA). The statistical evaluation was refined compared to that planned in the study protocol, to include the evaluation of ordinal and non-normally distributed data. The results were expressed as mean ± standard deviation for each objective test and separately for the right and the left eyes. The OSDI test naturally represents the personal satisfaction from the treatment of both eyes (Tables 1 and 2).
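The paired comparisons above test whether the within-patient differences between visits are systematically shifted from zero. For small samples such as this one, an exact sign-flip permutation test is a close, assumption-light analogue of the Wilcoxon Signed Rank / Paired T approach used in SPSS. A stdlib-only sketch on made-up paired TFBUT values (illustrative numbers only, not the study data):

```python
from itertools import product

def paired_permutation_p(before, after):
    """Exact two-sided sign-flip permutation test for paired data.

    Enumerates all 2^n sign assignments of the paired differences and
    returns the fraction whose |sum| is at least as extreme as observed.
    Feasible for small n (2^n terms).
    """
    diffs = [a - b for a, b in zip(after, before)]
    observed = abs(sum(diffs))
    count = 0
    for signs in product((1, -1), repeat=len(diffs)):
        if abs(sum(s * d for s, d in zip(signs, diffs))) >= observed:
            count += 1
    return count / 2 ** len(diffs)

# Hypothetical TFBUT (seconds) on 8 eyes, baseline vs. after treatment;
# every eye improves, so only the all-plus and all-minus sign patterns
# are as extreme as observed: p = 2 / 2^8.
tfbut_before = [4.1, 5.0, 3.8, 6.2, 4.5, 5.1, 4.0, 4.8]
tfbut_after = [5.0, 5.9, 4.6, 7.1, 5.2, 6.0, 4.7, 5.5]
print(paired_permutation_p(tfbut_before, tfbut_after))  # 0.0078125
```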
To study the influence of sample size reduction from 40 to 20 subjects in the study, a post hoc power analysis was performed for the primary efficacy outcome measure, mean grade of conjunctivochalasis.
For a conservative power estimation, the highest observed within-group standard deviation of 0.9 (Tables 1 and 2) was assumed, together with a low correlation value of 0.2 for correlated measurements performed on the same subject at month 0 and month 3. A mean decrease from baseline of 1.0 LIPCOF degree was assumed to be clinically relevant. Power estimation was performed for a paired t-test at a two-sided significance level of 5%, and corrected for the asymptotic relative efficiency of 3/π because the Wilcoxon Signed Rank Test was applied instead during the statistical evaluation. Power calculations were performed in SAS v9.4.
For the assumptions described above, the power to show a clinically relevant mean decrease of 1 degree in the primary outcome measure was 95.1%. Since this was an exploratory study, no adjustment for multiplicity was made, although the primary outcome measure was tested at both the right and the left eye.
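The calculation above can be reproduced approximately with stdlib Python: the standard deviation of the paired differences is sqrt(2·sd²·(1 − corr)), the sample size is deflated by the 3/π asymptotic relative efficiency, and power follows from the resulting noncentrality. The sketch below uses a normal approximation rather than SAS's exact noncentral-t computation, so it slightly overstates power relative to the paper's 95.1% (the function and its parameter names are ours):

```python
from math import pi, sqrt
from statistics import NormalDist

def paired_power_normal(delta, sd, corr, n, alpha=0.05, are=3.0 / pi):
    """Approximate power of a paired comparison via the normal approximation.

    delta: clinically relevant mean change; sd: within-group SD;
    corr: correlation between the two measurements on the same subject;
    n: number of subjects; are: asymptotic-relative-efficiency correction
    (3/pi when a Wilcoxon Signed Rank Test replaces the paired t-test).
    """
    sd_diff = sqrt(2.0 * sd ** 2 * (1.0 - corr))  # SD of paired differences
    n_eff = n * are                               # efficiency-corrected n
    nc = (delta / sd_diff) * sqrt(n_eff)          # noncentrality parameter
    nd = NormalDist()
    z_crit = nd.inv_cdf(1.0 - alpha / 2.0)        # two-sided critical value
    return nd.cdf(nc - z_crit) + nd.cdf(-nc - z_crit)

# The paper's assumptions: delta = 1.0 LIPCOF degree, sd = 0.9,
# corr = 0.2, n = 20, alpha = 0.05, ARE = 3/pi.
power = paired_power_normal(delta=1.0, sd=0.9, corr=0.2, n=20)
print(round(power, 2))  # 0.97
```

The ~2-point gap to 95.1% reflects the normal approximation replacing the noncentral t-distribution at roughly 18 effective degrees of freedom.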
Results
Fig 1 shows the CONSORT flow chart of the study. The TREND checklist (S1 TREND Checklist) and the protocol of the study (S1 Protocol) can be found as supporting information files. Study details are given in the Methods section. After both one and three months of properly scheduled treatment with Conheal, regularly instilled four times a day, both the subjective and objective symptoms of our patients improved. The numerical results of the examinations performed at the first visit (starting visit), after one month (one-month visit), and after three months of use (three-month visit) of the artificial tears are summarized in Tables 1 and 2. The primary outcome measure of our study, the mean grade of the conjunctivochalasis, was reduced significantly on both eyes by the one-month visit (Fig 2, Table 1) and decreased further by the three-month visit (Fig 2, Table 2). After three months, conjunctivochalasis had decreased from a mean LIPCOF degree of 2.9±0.4 on both eyes to 1.4±0.6 on the right eye (median decrease of -2 points, 95% CI from -2.0 to -1.0) and to 1.4±0.7 on the left eye (median decrease of -1 point, 95% CI from -2.0 to -1.0) (p<0.001 for both sides).
The other two recorded objective secondary outcome measures of the dry eye disease also improved during our study. The tear film breakup time (TFBUT) lengthened until the one-month visit (right eye median increase of 0.6 sec, 95% CI from 0.2 to 1.0 sec; left eye median increase of 0.9 sec, 95% CI from 0.1 to 1.7 sec), but no significant further increase was found thereafter (Fig 3, Tables 1 and 2). With lissamine green staining, the mean Oxford Scheme grade decreased significantly during the entire period of the examination (right eye median decrease of -1.0 grade, 95% CI from -1.0 to -1.0 grade; left eye median decrease of -1.0 grade, 95% CI from -1.0 to -1.0 grade after three months; Fig 4, Tables 1 and 2). The OSDI scores also decreased significantly (Tables 1 and 2). The high standard deviation was caused by the broad spectrum of the OSDI data, since the subjective complaints showed a high individual scatter. In all of our patients a decrease in the OSDI values could be observed at the end of the three-month examination; the extent of the decrease ranged from 2.1 to 61.7 OSDI scores.
We have also evaluated the changes of LIPCOF degrees and OSDI scores in separate groups of patients having different initial LIPCOF degrees (14 patients having an initial LIPCOF degree 3 on both eyes, 6 patients having an initial LIPCOF degree 3 on one and LIPCOF degree 2 on the other eye. No patients had LIPCOF degree 2 or lower on both eyes).
Only one (2.94%) out of the 34 eyes that had LIPCOF degree 3 conjunctivochalasis initially had an unchanged LIPCOF degree 3 during the study. LIPCOF degree decreased from the initial degree 3 to degree 2 or degree 1 in 44.1% or 47.1% of the eyes, respectively. LIPCOF degree 0 was reached in two eyes (5.9%). The conjunctivochalasis decreased to degree 1 in 100% of the 6 eyes having LIPCOF degree 2 conjunctivochalasis at the beginning of the study.
The OSDI score decreased in both patient groups, having bilateral LIPCOF degree 3 or LIPCOF degrees 2 and 3 on different eyes initially. The average OSDI score of the group having bilateral LIPCOF degree 3 conjunctivochalasis decreased from the initial value of 29.6 to 20.5 at the end of the first month and to 15.9 at the end of the three months. The OSDI score was surprisingly higher at the beginning of the study in the group having LIPCOF degree 3 on one eye and LIPCOF degree 2 on the other, but it decreased dramatically from the initial value of 51.5 to 27.5 at the end of the first month and to 14.8 at the end of the three months. At the end of the three-month trial no significant difference was found between the results of the two groups (p = 0.304). (Caption of the corresponding figure: Artificial tear treatment and measurement of the OSDI score in 20 patients were performed as described in Methods. The OSDI scores showed a significant decrease at the end of the first month and decreased further significantly at the end of the third month. Means and their standard errors of the OSDI scores are shown. Statistical evaluation was performed using the Wilcoxon Signed Rank Test. One and three asterisks denote p<0.05 and p<0.001, respectively.) During the study period two patients complained about a greasy sensation on the eyelids, but they did not stop the treatment. No other adverse reactions were reported.
Discussion
In the course of our study, the preservative-free, inorganic salt-free, isotonic glycerol and 0.015% sodium hyaluronate containing Conheal artificial tear administered four times a day resulted in a significant favorable change on the ocular surface after just one month of regular use, which progressed during the three months of the examination. The treatment of the patients, who had used other commercially available artificial tears earlier, resulted in improved objective symptoms and increased the patients' subjective satisfaction.
The effect of the artificial tears on the degree of conjunctivochalasis, which was the main purpose of the study and is the main point of our paper, is a novel finding in the literature. A decrease of the LIPCOF degree has been reported earlier in evaporative dry eye disease after treatment with a liposome eye spray [19-22]. However, this study is the first demonstrating a decrease of conjunctivochalasis in patients suffering from keratoconjunctivitis sicca using eye drops not containing lipids. This result is important, since it demonstrates the existence of a conservative therapy of severe conjunctivochalasis leading to a reduction of the LIPCOF degree. Using the artificial tears applied in this study, conjunctivochalasis of LIPCOF degree 3, considered an indication for surgery [14], became controllable, resulting in less corneal damage and hence fewer complaints.
Our results are in agreement with the results of an earlier clinical study performed with an artificial tear preparation of identical composition. That study also showed that the improvement of rose bengal staining led to improved personal satisfaction. However, the earlier report did not measure the extent of conjunctivochalasis [16], which is the major novelty of the current paper. In agreement with earlier studies carried out with sodium hyaluronate-containing products, the tear film breakup times were lengthened [23], and the epithelial damage was resolved due to the reduction of chronic harm to the corneal epithelium [24]. In agreement with our results, prolongation of the tear film breakup time was recorded with glycerol-containing artificial tears earlier [25]. The subjective complaints of the patients, measured with the OSDI questionnaire [12] ensuring a good follow-up, decreased in our study, which is in agreement with the general correlation between OSDI data and the grade of conjunctivochalasis described by Németh et al. [9].
It is known from the literature that in conjunctivochalasis of severe grade, showing high lissamine green staining, the human leukocyte antigen-DR (HLA-DR) level is elevated [26]. Thus it is possible that Conheal decreased the severity of the conjunctivochalasis through lowering the HLA-DR level, since in keratinocytes glycerol decreases the toll-like receptor 2 (TLR2)- and TLR3-activation-induced upregulation of HLA-DR expression [27]. It is important to mention that other authors found that sodium hyaluronate-containing eye drops were able to increase the TFBUT [23,28] and induced a decrease in corneal staining [29,30]. The eye drops used in our study contained a combination of glycerol and sodium hyaluronate and caused significant improvement in both the objective and subjective symptoms of the patients. In light of the results of previous studies [31,32] and of the fact that HLA-DR is increased in conjunctivochalasis [26], it is likely that during our study the HLA-DR levels decreased without the use of a corticosteroid [31] or another anti-inflammatory agent [33]. A tear proteomics study of conjunctivochalasis patients highlighted apoptosis- and inflammation-related proteins, as well as an increased level of tear defensin, associated with conjunctivochalasis [34]. Further studies are needed to ascertain whether these changes were induced as a consequence of the severe condition and/or were also contributing to the etiology of conjunctivochalasis.
Hyaluronic acid retains water on the ocular surface and improves lubrication [35]. These effects (together with the similar effects of glycerol) may have contributed to the overall efficacy of the eye drops used, besides potential specific mechanisms such as attenuation of the upregulation of HLA-DR and of apoptosis- and inflammation-related proteins.
The separate use of glycerol and sodium hyaluronate in artificial tear preparations has been known for a long time and was shown to be safe [29,35,36]. Glycerol [37] and hyaluronic acid [23,28] treatments alone induced prolongation of the non-invasive tear film breakup time (NIBUT) and healing of epithelial injuries that developed as a consequence of dry eye disease [24]. A decrease in the LIPCOF degree has been demonstrated in evaporative dry eye in several studies using a liposome eye spray [19][20][21][22]. In those studies the dry eye patients were diagnosed primarily by tear-film instability, not by conjunctivochalasis. However, the decrease in LIPCOF degree observed in those studies was smaller than that observed in our study using a lipid-free artificial tear containing isotonic glycerol and 0.015% hyaluronic acid (Fig 6). The liposome eye spray was effective in increasing the TFBUT and decreasing lid-margin inflammation [19][20][21][22].
The administration of preservative-free preparations is generally more beneficial than the use of products containing preservatives, even recently introduced ones [38]. The use of unit-dose packaged artificial tears is both simple and safe, since they do not become infected within 24 hours after opening [39]. However, the infection-free period has to be determined for every unit-dose preparation individually, as was done for Conheal, for which a 12-hour infection-free period was established.
The primary treatment of dry eye syndrome is the chronic administration of artificial tears. However, in the case of severe conjunctivochalasis (LIPCOF degree 3), invasive therapy may also be necessary [14]. A spectrum of invasive therapies is well known, including classical surgery against conjunctivochalasis and other invasive methods such as treatment of the conjunctival folds with argon laser [40] or with heat cauterization [41]. There are few population survey data on conjunctivochalasis. A study in Shanghai [42] reported that among people over 60 years of age, 4% had "very severe" (grade 3) conjunctivochalasis (at least 0.25% needing urgent surgery) and 16% had "severe" (grade 2) conjunctivochalasis; therefore, grade 2 and grade 3 conjunctivochalasis occurred in about 20% of the population over the age of 60. In the EU, about 20% of the people are over the age of 65 [43]. From these data it might be assumed that in the total EU population 0.8% have grade 3 conjunctivochalasis (approximately 0.03% needing urgent surgery) and 3.2% have grade 2 conjunctivochalasis: in total 4%.
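The extrapolation above can be verified with a few lines of arithmetic. The sketch below is an illustration only: the input percentages are those cited from the Shanghai survey [42] and EU age statistics [43], and the over-60 and over-65 age bands are conflated exactly as in the text.

```python
# Rough check of the EU prevalence estimates derived in the text.
grade3_over60 = 0.04   # grade 3 conjunctivochalasis among people over 60 [42]
grade2_over60 = 0.16   # grade 2 conjunctivochalasis among people over 60 [42]
eu_elderly = 0.20      # share of the EU population over 65 [43]

grade3_total = grade3_over60 * eu_elderly   # share of the total EU population
grade2_total = grade2_over60 * eu_elderly

print(f"{grade3_total:.1%} grade 3, {grade2_total:.1%} grade 2, "
      f"{grade3_total + grade2_total:.1%} in total")
# Prints: 0.8% grade 3, 3.2% grade 2, 4.0% in total
```

This reproduces the 0.8%, 3.2%, and 4% figures quoted in the paragraph.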
Since dry eye complaints occur in 5.5 to 33.7% of the population [1] (with an average prevalence of 10.3% weighted by the number of patients participating in the studies), we might assume that conjunctivochalasis-caused dry eye syndrome accounts for about one-third of all dry eye cases in the population.
All of our patients had used various artificial tears regularly, which did not alleviate their symptoms: in spite of the regular use, the patients' LIPCOF degree rose to a mean of 2.9±0.4. The lack of a satisfactory effect of the previous therapies on the objective and subjective symptoms of dry eye disease made our examination a self-controlled study. The power analysis showed that the study was adequately powered (power>95%) even after reducing the sample size from 40 to 20 patients.
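The reported power figure is consistent with a standard paired-design calculation. The sketch below approximates paired t-test power with a normal approximation; the effect size of 0.85 is a hypothetical illustration, not a value reported by the study, chosen because it is roughly the smallest standardized mean change that gives >95% power with 20 patients.

```python
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def paired_power(effect_size, n, z_crit=1.959964):
    """Approximate power of a two-sided paired t-test at alpha = 0.05,
    using the normal approximation to the noncentral t distribution."""
    return norm_cdf(effect_size * sqrt(n) - z_crit)

# A standardized mean change (Cohen's d) of about 0.85 already exceeds
# 95% power with only 20 patients; the observed LIPCOF drop (about 0.9
# with SD 0.4) implies a much larger d, so the halved sample remains
# adequately powered.
print(round(paired_power(0.85, 20), 3))
# Prints: 0.967
```

For the large effect actually observed, the approximation returns power indistinguishable from 1 even at n = 20.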
In conclusion, the artificial tear Conheal significantly decreased the grade of the conjunctivochalasis after only one month of regular use, from LIPCOF degree 3, considered an indication for conjunctival surgery, to LIPCOF degree 2 or lower, which requires only conservative therapy. Our results raise the possibility that vision-related quality of life can be significantly improved by conservative therapies even in severe conjunctivochalasis.
Supporting Information
S1 TREND Checklist. TREND Statement Checklist for the trial. (PDF)
S1 Protocol. Study protocol for the examination of the efficiency of Conheal sodium-hyaluronate containing eye drops in conjunctival and corneal epithelial injuries, as approved by the Hungarian Scientific and Research-Ethics Committee (permission No. 21455-1/2011-EKU). (PDF)
THE IMPLEMENTATION OF COMMUNICATIVE LANGUAGE TEACHING PRINCIPLES IN TEACHING BUSINESS LETTER WRITING AT GENTA ENGLISH COURSE
Asrotinengseh
Letter writing is a form of communication that can only be done in writing. Writing is a skill that cannot be acquired naturally, unlike speaking; White in Khoo (1981) states that writing is not a natural talent. Due to this difficulty, CLT is one of the highly recommended approaches to teaching English, with the goal of developing students' communicative competence in both speaking and writing. The aim of this study is to describe the teaching of the business letter writing class at GENTA English Course: the process of teaching, the obstacles, the factors causing the obstacles, and the strategies to solve them. The design of this research is descriptive qualitative; the qualitative method was employed for analyzing the implementation of CLT principles in teaching business letter writing at GENTA English Course. The instruments used to collect the data were an interview guideline, an observation checklist, and documentation. The results comprise four findings: (1) the process, which includes the preparation (syllabus, lesson plan, material, and attendance list), implementation (pre-teaching, whilst-teaching, and post-teaching), and evaluation; (2) the obstacles faced by teachers and students: some groups did not work effectively, time sometimes ran out, and the students were too noisy; (3) the factors that caused the obstacles: the random grouping, time management, and a topic so interesting that students got carried away enjoying it; and (4) the strategies to solve the obstacles: the teacher may form groups with the active students as the leading agent of each group, plan the time management carefully, put some limits on the students' participation in the classroom, and add more meetings so that the goals are achieved. Key words: business letter, CLT principles, teaching writing
INTRODUCTION
Writing and speaking are two forms of communication. Writing is not as easy as speaking, which is an effective way to communicate in any language, and written competence is hard to acquire. Yet, even though writing is difficult, letters and other documents (memos, formal proposals, etc.) can only be produced in written form. People can express their ideas systematically through writing. In the English classroom, developing students' writing skill is a priority, considering the importance of writing among the other skills.
Writing is a skill that cannot be acquired naturally, unlike speaking; White in Khoo (1981) states that writing is not a natural talent. Scholars believe that writing is difficult to teach because there are rules and procedures that must be understood and taught. Writing becomes very difficult for non-native speakers because they are expected to write something that shows mastery of the elements of a new language (Rachmayanti, 2013).
A business letter is a letter written for business purposes. Crowther (2007) states that the purpose of a business letter is to lead to some kind of action from both the sender and the recipient of the letter. Thus, writing business letters requires good communication skills and knowledge of business letter conventions. Unlike in a personal letter, the language in a business letter should be formal, direct, and clear.
Like other functional writing, business letter writing is also communicative; as stated by Gartside (1986), writing a letter is just like holding a conversation by post. He added that a business letter represents the person who writes it. In other words, in order to create a good impression of the writer, the letter should be written in an effective way.
In order to write an effective business letter, there are some points the writer should consider. First, the writer should consider the recipient of the letter: the addressee influences the language used in the letter and the tone and style of writing. The next consideration is the reason or purpose for writing the letter, which may be to ask a company for information, to apply for a job, to order a product, and so on. The writer should then consider what information needs to be included in the letter to be conveyed to the recipient; this depends on the purpose of the letter. Finally, the writer should consider what he or she expects from the recipient, whether it is information, consideration of an application, or something else.
Besides the above considerations, there are other business letter writing conventions that the writer should follow in order to write good business letters. The conventions depend on the type of business letter. General conventions that should be included in any business letter are the salutation, the introduction, the body of the letter, the closing, the complimentary closing, and the signature of the writer. In short, writing an English business letter is not an easy task, as there are many things to consider and many conventions to learn in order to produce effective business letter writing.
Brown (2001) states that a simplistic view of writing would assume that written language is simply the graphic representation of spoken language. Writing is more complex than this; hence writing pedagogy is important, as Brown claims that writing is as different from speaking as swimming is from walking (2001).
Due to this difficulty, an approach should be selected so that teachers who teach writing can guide learners through the writing process and make it easier for them to write. In a course, the curriculum is especially designed to enable the students to use practical English in their real life. Hence, the teaching approach applied should be communicative.
Communicative language teaching (CLT) is one of the highly recommended approaches to teaching English. This approach aims to develop students' communicative competence in both spoken and written language. In Indonesia, communicative language teaching has also been applied in the English classroom, especially at courses where students are prepared with certain ready-to-use skills and the ability to communicate in English in both spoken and written form. Savignon (2002) proposes that communicative language teaching is not exclusively concerned with the teaching of English for oral communication, but that its principles can be applied equally to the teaching of English reading and writing. Accordingly, it can be used in teaching the writing skill as well as the other skills, and English teachers should treat the teaching of writing as they treat the teaching of speaking, so that the goal of teaching writing can be achieved.
Communicative language teaching (CLT) is an approach that emphasizes communicative activity. Garton and Grave (2017) define CLT as a teaching approach with meaningful communication as its ultimate goal, while Barrot (2018) considers CLT a way of teaching in which the utilization of communicative activities and the target language aims to develop learners' competence in understanding and exchanging different ideas, behavioral modes, values, beliefs, and cultures. Although the definitions provided by scholars vary, they all stress the essence of CLT as genuine communication rather than simply learning linguistic knowledge such as vocabulary, grammar, and the structure of a language. In Yim's words (2016), the aim of CLT is to foster the capacity of individuals to create and construct utterances (spoken and written) which have the desired social value or purpose.
The principles of Communicative Language Teaching argued by Jack C. Richards are (Richards and Schmidt, 2002):
1. Learners learn a language through using it to communicate.
2. Authentic and meaningful communication should be the goal of classroom activities.
3. Fluency and accuracy are both important goals in language learning.
4. Communication involves the integration of different language skills.
5. Learning is a process of creative construction and involves trial and error.
This is a simple but sound theory: Richards (2002) claims that the important activities are those in which English is used. Whichever skill the students use poses no problem in communicative language teaching theory, and although this theory is not as detailed as Harmer's, it is easier to apply in Indonesia, where English is acquired as a second language (SLA). The weakness of this theory lies in the third principle, because pursuing fluency and accuracy at the same time is difficult, especially with new material.
Having reviewed some general principles of CLT, Hong (2008) outlined four principles mainly concerned with the skill of writing: understanding cultural differences; adjusting the roles of teacher, student, and material; incorporating the process of learning into the product of writing; and combining all four basic skills. A previous study entitled "Development of Story Writing Skills through Communicative Approach at Secondary Level in Pakistan" by Ahmad et al. (2020) aimed to observe the effect of communicative teaching on story writing skills among 9th graders. Relevant literature revealed that story writing skills can be increased through a communicative approach. A quasi-experimental design, that is, a pretest-posttest non-equivalent control group design, was used. Threats to internal/external validity were addressed properly. Two groups were selected to collect data to achieve the above-stated aim. The creative compositions related to story writing skills were evaluated in the light of scoring rubrics. The data were analyzed using t-statistics. The communicative approach is recommended for teaching dialogue writing.
Therefore, the writer was very curious to know more about the teaching of business letter writing using CLT principles in Kampung Inggris. Hence, in this research the writer investigates "The Implementation of Communicative Language Teaching Principles in Teaching Business Letter Writing at GENTA English Course", Pare, Kediri. GENTA English Course is situated in the English Village (Kampung Inggris Pare), which provides a good atmosphere for English learners, and has been licensed by the Education Authorities for 17 years; it is located at Kemuning Street No. 39, Tulungrejo, Pare, Kediri. This institution has a variety of programs that can improve English language skills. GENTA's motto is to be a master of character, learning not only English but also attitude. GENTA's mission is "Mastering English easily and in a short time, preparing professional English teachers, and cultivating a harmonious, sincere, simple, independent and innovative spirit", and its vision is fostering a generation of the nation with quality and noble (karimah) morals. With this vision and mission, students learn not only English but also attitude.
There are three programs created by GENTA: GENTA English Course, GENTA Diploma, and GENTA Holiday. GENTA English Course is the first GENTA program, an English language course that offers Basic, Intermediate, Advanced, specific-purposes, TOEFL, IELTS, and other classes. Basic Regular (program + dormitory, two months) is a basic English package for the general public. Basic Intensive (program + dormitory, one month) is the same program as Basic Regular, but more condensed in material and schedule, and more serious. Students go to class five times a day (vocabulary class, grammar class, speaking class, grammar club, speaking club), five days a week (Monday-Friday).
Intermediate Regular (program + dormitory, two months) is an advanced class program following the Basic Class; its material continues from the Basic Class material. Intermediate Intensive (program + dormitory, one month) is the same as Intermediate Regular, the difference being that it is even more intense because the class level is higher and the time is only one month. Students go to class five times a day (vocabulary class, grammar class, speaking class, grammar club, speaking club), five days a week (Monday-Friday).
At the Advanced level (program + dormitory, one month) the lessons taught are writing, reading, and translation; students go to class three times a day (writing class, reading class, translation class), three times a week (Monday-Friday). One of the specific-purposes classes is business letter writing (program + dormitory, one month), in which the students study letters in one meeting a day, five times a week (Monday-Friday). Based on this research context about CLT principles in teaching business letter writing, the research questions are: (1) How is the process (in terms of preparation, implementation, and evaluation) of teaching business letter writing using communicative language teaching (CLT) principles at GENTA English Course? (2) What are the obstacles faced by teacher and students in the class during the English teaching and learning process using CLT principles in business letter writing at GENTA English Course? (3) What factors caused the obstacles faced by students and teacher in the teaching and learning of business letter writing using CLT principles at GENTA English Course? (4) What are the strategies to solve those obstacles? Therefore, the researcher had a strong intention to conduct research at GENTA English Course, Kampung Inggris Pare. Through this research the writer analyzes how communicative language teaching principles are implemented in teaching business letter writing at GENTA English Course.
METHOD
The researcher used a qualitative approach in this research, which describes the results of discovery and provides an overview. The main characteristic of qualitative data is that it does not deal with numbers as quantitative data does. The research design used is descriptive qualitative, aiming to obtain information related to the current phenomenon and to determine the nature of the situation and conditions at the time of the research activities, without giving any treatment, describing the facts as they exist naturally.
This research was held at GENTA English Course, on Kemuning Street, Pare, Kediri, East Java, on 2 March 2020; the course applies communicative language teaching as its approach to teaching writing. It was established in 2003 by M. Qomar, M.Pd as the director and owner. It has 30 teachers teaching speaking, writing, and all subjects related to English, and it offers a special program (business letter writing), for which GENTA usually uses CLT as the teaching approach. The key informants of the research were the owner, the manager, 3 tutors, and some students.
The researcher collected data through observation, interview, and documentation. In addition, the researcher also used other instruments to collect the data; the researcher herself can also be called an instrument, which means that she had a big role in conducting the research. In other words, the success of the research greatly depended on this role. In this research, the researcher used three instruments in collecting the data: an observation checklist, interview guidelines, and documentation. The researcher found a program that has been offered consistently at GENTA English Course: business letter writing, a program needed by anyone who wants to join a company. The writer then decided the title, sent the research framework to the academic advisor, and collected information while revising the framework based on the advisor's feedback. For the research implementation, the researcher asked permission to observe at this course, then observed the research subjects, submitted the research schemes to the supervisor, and took and interpreted the data in the form of description patterns under the findings.
The report follows the research report format of the faculty. The report starts from Chapter 1 (Introduction), which comprises five subchapters: research context, research focus, research objectives, significance of the research, and definition of key terms. Chapter 2 presents the literature review under the focus of the research, containing theories and previous research findings related to the research focus. Chapter 3 presents the research method, consisting of eight subchapters: (1) research design, (2) research setting, (3) key informants, (4) researcher as key instrument, (5) technique of collecting data, (6) data analysis, (7) trustworthiness of the data, and (8) research procedure. Chapter 4 discusses the research findings and discussion, containing at least the description of data related to the findings and discussion of the research results. The last chapter is the closure, containing the conclusion of the data and the suggestions.
FINDINGS AND DISCUSSION
According to the research focuses, there are four points to discuss: the process of teaching business letter writing using CLT principles, the obstacles faced by teacher and students, the factors that caused the obstacles, and the strategies to solve those obstacles. The details are as follows:
The Process of Teaching Business Letter Writing using CLT Principles
After analyzing the data, the researcher found that the process of teaching business letter writing using CLT principles at GENTA, which includes preparation (syllabus, lesson plan, material, and attendance list), implementation (pre-teaching, whilst-teaching, and post-teaching), and evaluation, is applied in the classroom as follows: (1) The preparation done by the teacher at GENTA English Course includes preparing the syllabus, lesson plan, material, and attendance list. (2) The implementation consists of three different steps. In pre-teaching, the teacher followed the prepared lesson plan and GENTA's teaching guideline well: setting up the class, reciting the Qur'an, checking attendance, and giving motivation; the last point, giving motivation to students, must be highlighted. In whilst-teaching, the teacher sets the rule to use English during the teaching and learning process, with an agreed punishment for students who break it; divides the class into 5 groups; informs the students about the material to be learned; asks the students to identify the parts of the letter in groups; evaluates the students' work in front of the class; has the students individually write a letter of the type explained; and corrects the students' worksheets. In post-teaching, the teacher gives and asks for student feedback to evaluate the whole teaching process and to improve the quality of the next teaching; afterwards, the teacher gives motivation to the students related to the material taught to make sure that the students enjoy the class and know why they learned the material. (3) The students' writing output at GENTA English Course is measured by giving 2 different types of tests, written and spoken, to obtain their scores for further discussion to determine their writing output.
The process applied in the classroom suits the theories of communicative language teaching implementation, such as those of Garton and Grave (2017), Barrot (2018), Yim (2016), and Hong (2008), which agree that communicative language teaching activities involve a desire to communicate, a communicative purpose, content rather than form, language variety, no teacher intervention, and no material control.
The Obstacles Faced by Teacher and Students
The obstacles found by the researcher did not much influence the whole teaching process and performance. This is in line with the previous research by Ahmad (2020) and Ngozi (2018), which found that CLT is suited to teaching writing, such as business letter writing, and has shown better results in students' understanding of writing. The obstacles found were: some groups accidentally did not work effectively, as the members were chosen randomly, while some groups were too active since they had more active members; time sometimes ran out before the goal of teaching was achieved; and the students were too noisy when super-active students dominated the class discussions. Regardless of those obstacles, the teaching of business letter writing in the class was successful, and it can be said that the obstacles were tackled well.
The Factors Caused the Obstacles
Some factors were listed by the researcher, as follows: the random grouping sometimes put the active students together with other active ones and the less active students with those who were also less active or even passive; the planned time management sometimes did not suit the students' condition, so the time was sometimes barely enough or ran out before the goal of the teaching was achieved; the topic was so interesting and the class so fun that the students were sometimes too active and the class got noisier; and, further, the students enjoyed the topic so much that the communication between teacher and students sometimes went beyond the topic. The factors mentioned above then caused obstacles such as some students not actively participating in the discussion and the time not being enough to do all the activities in class.
These facts are in line with Nunan (1989), who states that the students' role is that of negotiator of meaning. He believes that to reach the goal of the communicative language teaching approach, the students should actively participate in the language classroom. In the writing classroom, for example, the students should be given more time to practice writing rather than spending a long time listening to the teacher's presentation about writing. Ansyar (2004) proposes that language competence can only be reached through a great deal of language experience. So, no matter how well students comprehend the theories of a language, without experiencing it they will never achieve competence in the language, which is the main purpose of learning it.
The Strategies to Solve Those Obstacles
There are some strategies to solve the obstacles in teaching business letter writing at GENTA: (1) the teacher may form the groups not randomly but with the active students as the leading agent of each group; (2) the teacher should carefully plan the time management and suit it to the topic taught; if the time seems insufficient, the teacher should either add meetings or control the session to be more effective; (3) the teacher may put some limits on the students' participation in the classroom; and (4) more meetings may be added so that the goals are achieved.
Based on those findings, the researcher concludes that the strategies used by teachers aim to achieve the best writing class experience and to motivate students to accomplish the specific goal of writing. This is in line with Murcia (1991), who states that in the English classroom the teacher should present activities which are meaningful to the students and which will motivate them to become committed to sustaining the communication intended to accomplish a specific goal.
CONCLUSIONS
Based on the findings and discussion, the researcher concludes that the process of teaching business letter writing using CLT principles at GENTA, which includes preparation (syllabus, lesson plan, material, and attendance list), implementation (pre-teaching, whilst-teaching, and post-teaching), and evaluation, is applied in the classroom as follows: (1) The preparation done by the teacher at GENTA English Course includes preparing the syllabus, lesson plan, material, and attendance list. (2) The implementation consists of three different steps. In pre-teaching, the teacher followed the prepared lesson plan and GENTA's teaching guideline well: setting up the class, reciting the Qur'an, checking attendance, and giving motivation; the last point, giving motivation to students, must be highlighted. In whilst-teaching, the teacher sets the rule to use English during the teaching and learning process, with an agreed punishment for students who break it; divides the class into 5 groups; informs the students about the material to be learned; asks the students to identify the parts of the letter in groups; evaluates the students' work in front of the class; has the students individually write a letter of the type explained; and corrects the students' worksheets. In post-teaching, the teacher gives and asks for student feedback to evaluate the whole teaching process and to improve the quality of the next teaching; afterwards, the teacher gives motivation to the students related to the material taught to make sure that the students enjoy the class and know why they learned the material. (3) The students' writing output at GENTA English Course is measured by giving 2 different types of tests, written and spoken, to obtain their scores for further discussion to determine their writing output.
The obstacles faced by teacher and students in the class during the English teaching and learning process using communicative language teaching (CLT) principles in business letter writing at GENTA English Course are as follows. The obstacles faced by the teacher are: (1) some groups accidentally did not work effectively, as the members were chosen randomly, while some groups were too active since they had more active members; and (2) time sometimes ran out before the goal of teaching was achieved. The obstacles faced by the students are: (1) the class was too noisy when super-active students dominated the class discussions; and (2) time sometimes ran out before the goal of the program was achieved.
The factors that caused the obstacles faced by students and teacher in the teaching and learning of business letter writing using communicative language teaching (CLT) principles at GENTA English Course are as follows. For the teacher: (1) the random grouping sometimes put the active students together with other active ones and the less active students with those who were also less active or even passive; and (2) the planned time management sometimes did not suit the students' condition, so the time was sometimes barely enough or ran out before the goal of the teaching was achieved. For the students: (1) the topic was so interesting and the class so fun that the students were sometimes too active and the class got noisier; and (2) the students enjoyed the topic so much that the communication between teacher and students sometimes went beyond the topic.
The strategies to overcome the obstacles faced by the teacher are: (1) the teacher may form groups deliberately rather than randomly, assigning an active student as the leader of each group, and (2) the teacher should plan the time management carefully to suit the topic taught; if the time seems insufficient, the teacher should either add meetings or make the sessions more efficient. The possible strategies to overcome the obstacles faced by the students are: (1) the teacher may place some limits on the students' participation in the classroom, and (2) more meetings may be added so that the goals are achieved.
Chemical and Organoleptic Characteristics of Seaweed Jelly Candy (Eucheuma cottonii) with the Addition of Red Ginger (Zingiber officinale Roscoe) Extract
Eucheuma cottonii is a seaweed species with considerable economic value that is widely cultivated in Indonesia. Diversifying it into seaweed jelly candy is one way to utilize this resource. The distinctive aroma of seaweed is one problem in producing jelly candy, so an ingredient with a strong aroma, such as red ginger, is needed to mask the smell. This research aimed to determine the optimum concentration of red ginger (Zingiber officinale Roscoe) extract and seaweed (Eucheuma cottonii) to produce high-quality jelly candy based on chemical and organoleptic characteristics. The research used an experimental method consisting of 4 concentrations of red ginger extract (0%, 40%, 50%, and 60%) with 20 panelists as evaluators. The chemical composition (water, protein, fat, carbohydrate, and crude fiber content) and organoleptic characteristics (appearance, aroma, texture, taste) of the jelly candy were observed as parameters. The results showed that the addition of 50% red ginger extract produced the jelly candy with the best organoleptic characteristics and the one most preferred by panelists (Original Research Article: Amalia et al., AJFAR 12(5): 33-43, 2021; Article no. AJFAR.69013). Chemical analysis showed this jelly candy contained 6.22% water, 0.88% protein, 0.19% fat, 96.82% carbohydrate, and 1.54% crude fiber. The addition of 50% red ginger extract is therefore recommended to produce seaweed jelly candy with the best and most preferred characteristics.
INTRODUCTION
Indonesia is a country with great potential for seaweed production and is known as one of the seaweed exporters in Asia [1]. Seaweed is a fishery commodity with considerable economic value and potential for development. Seaweed production in Indonesia reached 10.32 million tons in 2018 and 9.91 million tons in 2019 [2].
Eucheuma sp. is a type of seaweed that is widely cultivated in Indonesia and has important economic value [3]. This type of seaweed is generally used for food. Eucheuma cottonii is a seaweed species that produces carrageenan in the form of polysaccharide compounds [4]. It is rich in dietary fiber, which is good for health, but its use in food products in Indonesia is still very limited. Therefore, efforts are needed to promote the utilization of this fishery product. One such use is diversification into seaweed jelly candy.
Jelly candy is a soft-textured candy processed with the addition of hydrocolloid components such as agar, gum, pectin, starch, carrageenan, and gelatin, which modify the texture so that a chewy candy is produced [5]. Jelly candy is liked by many people, from children to adults, because of its chewy and distinctive texture.
The use of Eucheuma cottonii in making jelly candy is intended as a hydrocolloid material to produce chewy candy products. One of the hydrocolloid ingredients in seaweed is carrageenan. One of the problems in using Eucheuma cottonii, both fresh and dry, in making jelly candy is the distinctive aroma of seaweed. Therefore, a material that has a strong aroma is needed to cover the distinctive aroma of the seaweed. One of the ingredients that has a strong aroma is ginger [5].
Ginger has a distinctive aroma due to its essential oil content and a specific spicy taste derived from oleoresin compounds [6]. Red ginger has the highest contents of volatile (essential oil) and non-volatile (oleoresin) components among ginger types, namely an essential oil content of around 2.58%-3.90% and 3% oleoresin [7]. Red ginger was selected for the seaweed jelly candy because it has a higher essential oil content than other types of ginger. In addition, red ginger has various benefits that are good for the body. Essential oil is the component that gives ginger its distinctive aroma [8].
The addition of red ginger extract to seaweed jelly candy can mask the distinctive aroma of the seaweed, improving the quality of the resulting jelly candy. Therefore, this research was needed to find the right formulation of red ginger extract and seaweed (Eucheuma cottonii) to produce jelly candy of good quality, based on chemical and organoleptic characteristics, that complies with the jelly candy quality standard SNI 3547.2-2008. The organoleptic test was carried out because jelly candy on the market rarely contains ginger. Organoleptic testing plays an important role in measuring product acceptance and assessing the quality of food products; it was therefore necessary to carry out an organoleptic test to find out whether this product would be liked by consumers.
Time and Place
This research was conducted during August-October 2020. The study was conducted in the Fishery Product Processing Laboratory, Faculty of Fisheries and Marine Sciences, Padjadjaran University, Sumedang Regency, West Java, Indonesia.
Procedure of making red ginger extract
The procedure for making red ginger extract followed the study of Bactiar et al. [5]. Red ginger (300 g) was weighed and peeled. The red ginger was then washed under running water and cut into several pieces. Next, it was crushed in a blender with mineral water added at a ratio of 1:1. The crushed red ginger porridge was then filtered through a sieve to obtain the red ginger extract. A total of 400 ml of extract was obtained from 300 g of red ginger.
Procedure of making seaweed jelly candy with the addition of red ginger extract
The procedure for making seaweed jelly candy followed the study of Sukotjo and Asmira [1]. Cleaned dried seaweed (Eucheuma cottonii) was soaked in rice washing water [9] for 24 hours and then in clean water for 18 hours. The seaweed was then washed, and 100 g was weighed out and crushed in a blender at a seaweed-to-water ratio of 1:5. The crushed seaweed was cooked at 95°C until the volume of the solution was reduced by half. The red ginger extract and granulated sugar were then added, and the solution was heated at 95°C until it thickened. The jelly candy dough was then poured into a baking sheet and left to stand for 1 hour at room temperature, after which it was placed in a refrigerator for 24 hours and cut with a knife. Finally, the jelly candy was sun-dried for three days until it was dense and chewy.
Research Methods
This research used an experimental method consisting of 4 treatments of red ginger extract (0%, 40%, 50%, and 60%) with 20 panelists as evaluators. The panelists were semi-trained panelists [10], students of the Faculty of Fisheries and Marine Science, Universitas Padjadjaran, who had previously learned about organoleptic testing. The parameters observed were the chemical composition (water, protein, fat, carbohydrate, and crude fiber content), tested in all treatments, and the organoleptic characteristics (appearance, texture, aroma, taste) of the jelly candy based on the panelists' preference level using the hedonic test. The hedonic test determines the panelists' preference level on a value scale: very dislike (1), dislike (3), neutral/ordinary (5), like (7), and very like (9) [10]. Water content was tested by the gravimetric method, protein content by the Kjeldahl method, fat content by the Soxhlet method, carbohydrate content by the by-difference method, and crude fiber content by the gravimetric method [11].
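The by-difference carbohydrate determination mentioned above simply assigns to carbohydrate whatever mass fraction the other measured components do not account for. A minimal sketch of that arithmetic follows; the inclusion of ash in the subtraction is an assumption (an ash determination is not described in this section), and the input values below are hypothetical, not the study's data.

```python
def carbohydrate_by_difference(water: float, protein: float,
                               fat: float, ash: float = 0.0) -> float:
    """Carbohydrate (%) by difference: 100% minus the other
    measured proximate components (all values in percent)."""
    return 100.0 - (water + protein + fat + ash)

# Hypothetical proximate values, in percent:
print(carbohydrate_by_difference(water=20.0, protein=5.0,
                                 fat=3.0, ash=2.0))  # → 70.0
```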
Data Analysis
The data obtained from the preference (hedonic) test were analyzed using Friedman's two-way analysis of variance by ranks to determine the effect of adding red ginger extract on the preference level for the seaweed jelly candy. If the Friedman test showed a significant result, it was followed by multiple comparison tests, while the Bayes method was used to determine the best treatment [12]. The chemical test data were analyzed descriptively; the water, protein, fat, carbohydrate, and crude fiber contents were obtained from single determinations.
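The Friedman test used here ranks each panelist's scores across the treatments and compares the treatment rank sums. As an illustration only (the scores below are invented, not the study's data), here is a pure-Python sketch of the uncorrected Friedman chi-square statistic; in practice a library routine such as `scipy.stats.friedmanchisquare` would normally be used.

```python
from typing import List

def friedman_statistic(scores: List[List[float]]) -> float:
    """Friedman chi-square statistic (no tie correction).
    scores[i][j] = hedonic score from panelist i for treatment j."""
    n, k = len(scores), len(scores[0])   # n panelists, k treatments
    rank_sums = [0.0] * k
    for row in scores:
        order = sorted(range(k), key=lambda j: row[j])
        ranks = [0.0] * k
        i = 0
        while i < k:                     # average ranks over ties
            j = i
            while j + 1 < k and row[order[j + 1]] == row[order[i]]:
                j += 1
            for m in range(i, j + 1):
                ranks[order[m]] = (i + j) / 2 + 1
            i = j + 1
        for j in range(k):
            rank_sums[j] += ranks[j]
    return (12.0 / (n * k * (k + 1)) * sum(r * r for r in rank_sums)
            - 3.0 * n * (k + 1))

# Two hypothetical panelists scoring four treatments:
print(friedman_statistic([[5, 7, 9, 7], [3, 7, 9, 5]]))  # ≈ 5.55
```

The statistic is compared against the chi-square distribution with k - 1 degrees of freedom; a significant result would then trigger the multiple comparison tests described above.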
Chemical Characteristics of Seaweed Jelly Candy
Water content
The results of the water content test of seaweed jelly candy with the addition of red ginger extract are presented in Table 1.
Based on the test, the highest water content, 7.36%, was found in the jelly candy with 60% added red ginger extract, while the lowest, 5.36%, was found in the jelly candy without red ginger extract. Jelly candy with added red ginger extract has a higher water content than jelly candy without it, and the greater the concentration of red ginger extract added, the higher the water content of the jelly candy. This is because fresh ginger has a high water content, which affects the water content of the product; fresh red ginger contains 70.48% water [13].
Overall, the jelly candy in all treatments had a low water content, well within the quality standard for jelly candy in SNI 3547.2-2008, which specifies a maximum of 20%. The low water content is due to the evaporation that occurs during cooking at 95°C. In addition, the granulated sugar used in making the jelly candy can absorb and bind water in the product, reducing its water content. The drying process, which dries the surface of the jelly candy, also lowers the water content. Furthermore, the seaweed used in this study can bind water and thus lower the water content of the product. Seaweed contains many hydrocolloid components in the form of agar, carrageenan, and alginate. Hydrocolloids are polymer components derived from plants, animals, or microbes that generally have the ability to absorb and bind water well [14]. A low water content in jelly candy can also result from an even stirring process, which causes substantial water evaporation [15].
Protein content
The results of the protein content test of seaweed jelly candy with the addition of red ginger extract are presented in Table 2.
Based on the test, the highest protein content, 1.37%, was found in the jelly candy without red ginger extract, while the lowest, 0.71%, was found in the jelly candy with 60% added red ginger extract. The protein content of the jelly candy decreased as more red ginger extract was added. This is because the fresh red ginger used in making the jelly candy has a high water content, so higher concentrations of red ginger extract lower the protein content. Protein content is influenced by water and fat content; there is an inverse relationship between protein and water content, as a higher water content in an added food ingredient lowers the protein content because myogen and protein are water soluble [16].
The seaweed jelly candy in this study has a low protein content because the protein was denatured by the heating during candy making and by the drying process. Boiling can reduce the protein content of food products [17]. Processing at high temperatures causes protein denaturation, resulting in coagulation and decreased solubility. The reactions that occur when protein is heated can damage it, so the protein content decreases [17]. Heating degrades protein, and this not only reduces the nutritional value but also destroys protein activity as enzymes and hormones [18].
Fat content
The results of the fat content test of seaweed jelly candy with the addition of red ginger extract are presented in Table 3.
Based on the test, the highest fat content was found in the jelly candy without red ginger extract, which was 0.16%, while the lowest fat content was found in the jelly candy with 60% added red ginger extract, which was 0.92%.
The fat content of the jelly candy decreased as more red ginger extract was added. This is because water content is inversely proportional to fat content: the higher the water content, the lower the fat content [19]. This is in accordance with the results of this study, where the fat content of the jelly candy decreased and the water content increased as more red ginger extract was added.
The seaweed jelly candy in this study has a low fat content because the fat in the jelly candy was damaged by the heating during candy making and by the drying process. Continuous heating degrades fat, and fat damage increases with the temperature used [20]. In addition, both the seaweed and the red ginger have low fat contents, so the fat content of the resulting jelly candy is low: Eucheuma cottonii contains 0.37% fat [21] and fresh ginger 1% fat [22]. In this study, the fat content of the jelly candy decreased as more red ginger extract was added.
Carbohydrate content
The results of the carbohydrate content test of seaweed jelly candy with the addition of red ginger extract are presented in Table 4.
Based on the test, the lowest carbohydrate content, 95.92%, was found in the jelly candy without red ginger extract, while the highest, 96.93%, was found in the jelly candy with 60% added red ginger extract. The carbohydrate content of seaweed jelly candy with added red ginger extract is thus higher than that of jelly candy without it, and the greater the concentration of red ginger extract added, the higher the carbohydrate content. This is because the fresh ginger used in making the jelly candy contains carbohydrates, so the ginger also contributes to the carbohydrate content of the candy; fresh ginger contains 10% carbohydrate [22]. Ingredients that contain carbohydrates increase the carbohydrate content of a food product when added during cooking [23].
The seaweed jelly candy in this study has a high carbohydrate content due to the seaweed and the granulated sugar added to the jelly candy dough. Carbohydrates from seaweed consist of cellulose and amorphous material in the form of agar or carrageenan [24]; Eucheuma cottonii can yield 65.75% kappa carrageenan [21]. The high carrageenan content of the seaweed leads to a high carbohydrate content in the jelly candy. The high carbohydrate content in this study was also caused by the low levels of the other nutritional components, namely water, fat, and protein. Carbohydrate content is influenced by the other nutritional components: the lower the contents of protein, fat, water, and ash, the higher the carbohydrate content [25].
Crude fiber content
The results of the crude fiber content test of seaweed jelly candy with the addition of red ginger extract are presented in Table 5.
Based on the test, the lowest crude fiber content, 1.42%, was found in the jelly candy without red ginger extract, while the highest, 1.75%, was found in the jelly candy with 60% added red ginger extract. The crude fiber in the jelly candy comes from the seaweed and the red ginger; Eucheuma cottonii contains 1.39% crude fiber [14].
The results showed that the crude fiber content of the jelly candy increased as more red ginger extract was added. This is because ginger contains crude fiber, so the higher the concentration of ginger extract added, the higher the crude fiber content; fresh ginger contains 7.53% crude fiber [15].
Organoleptic Characteristics of Seaweed Jelly Candy
Appearance
The hedonic average values for the appearance of the seaweed jelly candy are presented in Table 6, and descriptions of the jelly candy color are presented in Table 7.
The results showed that the average appearance value of the jelly candy across all treatments ranged from 6.5 to 7.4. The product acceptance limit is an average value of ≥5, meaning that a product scoring 5 or higher is still accepted or liked by the panelists [10]. This indicates that the panelists' acceptance of the appearance parameter ranged from neutral to like, so all treatments were still accepted or liked by the panelists. The Friedman test showed that the addition of red ginger extract did not have a significant effect on the panelists' preference for the appearance of the seaweed jelly candy. This is because the jelly candy had a uniform shape in all treatments, so the panelists liked all the candies even though their colors differed. The addition of ginger extract to a product, regardless of its concentration, affects only the taste and not the color [7].
In the control treatment, without red ginger extract, the seaweed jelly candy was brownish yellow. With 40%, 50%, and 60% red ginger extract, the color of the seaweed jelly candy ranged from light brown (+) to dark brown (+++). The color of the jelly candies is determined mainly by the natural color of the red ginger extract and by browning reactions during candy making. Cooking at high temperatures for a long time can caramelize the sugar, giving the product a brownish color [26]. The color component of ginger is oleoresin, a dark brown phenolic compound, so products with added ginger turn brown [27]. Ginger oleoresin is easily oxidized: oxygen activates the polyphenol oxidase (PPO) enzyme, which catalyzes the phenolic compounds in the oleoresin so that a brownish melanoidin pigment is formed. This affects the color of food products with added ginger, which tend to be yellow to brownish [28].
Aroma
The hedonic average value of seaweed jelly candy aroma is presented in Table 8.
The results showed that the average aroma value of the jelly candy across all treatments ranged from 5 to 7. The product acceptance limit is an average value of ≥5, meaning that a product scoring 5 or higher is still accepted or liked by the panelists [10]. This indicates that the panelists' acceptance of the aroma parameter ranged from neutral to like, so all treatments were still accepted. The Friedman test showed that the addition of red ginger extract had a significant effect on the panelists' preference for the aroma of the seaweed jelly candy. The control treatment (0%) was significantly different from the 40%, 50%, and 60% treatments, which were not significantly different from one another. This is because the panelists preferred jelly candy with red ginger extract over jelly candy without it: the distinctive aroma of ginger masks the distinctive aroma of seaweed. The jelly candy without red ginger extract had the lowest average aroma value because it smelled of sugar with a slight smell of seaweed, which the panelists did not prefer.
The distinctive aroma of seaweed comes from organic compounds. Dimethylsulfoniopropionate (DMSP) is a compound found in algal cells, where it acts as an osmolyte (maintaining cell volume and water levels). This compound can be broken down by enzymes and bacteria to produce dimethylsulfide (DMS). DMS is considered a major component of the smell of the sea and gives seaweed its distinctive aroma [29].
Ginger's distinctive aroma is due to the essential oils it contains [5]. Essential oils are the volatile compounds in ginger that give it its distinctive aroma. Red ginger has the highest contents of volatile (essential oil) and non-volatile (oleoresin) components among ginger types, namely an essential oil content of around 2.58%-3.90% and 3% oleoresin [6]. The fragrant aroma of ginger is produced by its essential oil, whose main components are sesquiterpenes such as zingiberene, along with zingiberol, phenols, acetates, linalool, citral, and methyl heptenone [30]. Zingiberene and zingiberol are the components of ginger essential oil that cause its distinctive scent [30]. The essential oil content of red ginger gives it a sharp aroma [8]. The red ginger extract added to the jelly candy acts as a masking agent that covers the distinctive aroma of the seaweed. Masking agents are complex compounds combined with modifiers, inhibitors, and enhancers that mask unwanted flavor characteristics by producing a new flavor sensation [31]. A masking agent acts by covering unwanted flavor characteristics through the presence of other sensations, by competing for specific receptors, or by enhancing other flavors [32]. Flavor here refers to both aroma and taste [32].
Texture
The hedonic average value of seaweed jelly candy texture is presented in Table 9.
The results showed that the average texture value of the jelly candy across all treatments ranged from 7.4 to 7.6. The product acceptance limit is an average value of ≥5, meaning that a product scoring 5 or higher is still accepted or liked by the panelists [10]. This shows that the panelists' acceptance of the texture parameter was "like", so all treatments were liked by the panelists. The Friedman test showed that the addition of red ginger extract did not have a significant effect on the panelists' preference for the texture of the seaweed jelly candy. This is because the same amount of seaweed was used in all treatments, so the texture of the jelly candy was essentially the same across treatments. The texture of jelly candy is determined by the hydrocolloid material, so the addition of red ginger extract has no effect on it.
The texture of the jelly candy in all treatments was chewy, dense, and sandy. The chewy texture is due to the Eucheuma cottonii seaweed in the jelly candy dough, which contains a hydrocolloid material, carrageenan, that makes the candy chewy. Eucheuma cottonii produces kappa carrageenan, which forms a strong and dense gel [33].
Eucheuma cottonii contains 65.75% carrageenan [21]. The sandy texture of the jelly candy is due to sugar crystals forming on its surface during drying. The high carrageenan content of Eucheuma cottonii seaweed gives the candy a chewy, strong texture: the carrageenan polymer chains trap water, forming a strong, rigid gel structure [1]. Seaweed with a high carrageenan content forms gels when given heat treatment [34]. Gel formation is the merging or crosslinking of polymer chains into a three-dimensional network, which binds the water within it and forms a strong texture [5].
The texture of the jelly candy is also affected by its water content. A higher water content can reduce the hardness of the candy, as water diffuses into the gel, softening it and decreasing its hardness [35]. A low water content results in chewy jelly candy, and vice versa [36]. This is in accordance with the results of this study, where the jelly candy had a low water content and therefore a chewy texture.
Taste
The hedonic average value of seaweed jelly candy taste is presented in Table 10.
The results showed that the average taste value of the jelly candy across all treatments ranged from 6.1 to 7.9. The product acceptance limit is an average value of ≥5, meaning that a product scoring 5 or higher is still accepted or liked by the panelists [10]. This shows that the panelists' acceptance of the taste parameter ranged from neutral to like, so all treatments were liked by the panelists. The Friedman test showed that the addition of red ginger extract had a significant effect on the panelists' preference for the taste of the seaweed jelly candy. The control treatment (0%) was significantly different from the 40%, 50%, and 60% treatments, which were not significantly different from one another. This is because the panelists preferred jelly candy with red ginger extract over jelly candy without it: the red ginger extract provides another flavor, the distinctive pungent taste of ginger.
The distinctive taste of ginger is caused by its oleoresin content; red ginger contains as much as 3% oleoresin [7]. Ginger owes its distinctive taste to its phenolic components. Ginger contains phenolic derivatives that produce its characteristic heat, sharpness, and stinging sensation in the mouth, called pungency (spiciness). The pungency of fresh ginger comes from the phenylalkylketones in its oleoresin, which are derivatives of vanillin; this group of compounds is known as the gingerols [37]. Ginger oleoresin consists of gingerol, shogaol, and resin components, which cause the spicy taste of ginger [38]. The characteristic spicy taste of ginger is felt in the throat and causes a warm sensation. The 50% red ginger extract treatment had the highest average organoleptic taste value of all treatments; the seaweed jelly candy in this treatment has a sweet taste from the added granulated sugar together with the distinctive pungent taste of ginger. The lowest average taste value was found in the control treatment, without red ginger extract. The jelly candy without red ginger extract had only the sweet taste of sugar, with no other flavor variation. In addition, the seaweed affected the taste of the jelly candy without red ginger extract, so the panelists tended to score it in the neutral category.
Decision Making Using the Bayes Method
Decision making with the Bayes method is carried out by considering the criterion weights for the appearance, aroma, texture, and taste of the jelly candy. The results of the calculation of the criterion weights for appearance, aroma, texture, and taste are presented in Table 11.
Based on the Bayes test, taste is the most important criterion, ahead of appearance, texture, and aroma. The seaweed jelly candy with 50% added red ginger extract obtained the highest alternative value, 7.47; the lowest alternative value, 6.20, was found in the treatment without red ginger extract. Based on the preference test parameters, the seaweed jelly candy with 50% added red ginger extract is thus the best treatment and the one most preferred by panelists: it had the highest median values for appearance, aroma, texture, and taste, and its alternative value was higher than those of the other treatments.
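The Bayes-style selection described above amounts to a weighted sum of each treatment's criterion scores, with taste carrying the largest weight. The sketch below illustrates the idea only: the weights and the per-treatment scores are hypothetical (the study reports only that taste was weighted most heavily; the actual weights in Table 11 are not reproduced here).

```python
def best_alternative(scores, weights):
    """Pick the treatment with the highest weighted criterion score.
    scores: {treatment: [appearance, aroma, texture, taste]}."""
    totals = {t: sum(s * w for s, w in zip(vals, weights))
              for t, vals in scores.items()}
    best = max(totals, key=totals.get)
    return best, round(totals[best], 2)

# Hypothetical criterion weights (taste weighted most, as in the study):
weights = [0.15, 0.20, 0.25, 0.40]
# The 50% row uses the reported hedonic means; the other rows are invented:
scores = {
    "0%":  [6.5, 5.0, 7.4, 6.1],
    "40%": [7.0, 6.8, 7.5, 7.0],
    "50%": [7.4, 7.0, 7.6, 7.9],
    "60%": [7.2, 6.9, 7.4, 7.3],
}
print(best_alternative(scores, weights))  # → ('50%', 7.57)
```

With these assumed weights the 50% treatment still comes out on top, consistent with the study's conclusion, though the weighted value differs from the reported 7.47 because the true weights are not shown here.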
CONCLUSION
Based on the research, it can be concluded that the seaweed jelly candy with 50% added red ginger extract is the most preferred by panelists and has the best organoleptic characteristics in terms of appearance, aroma, texture, and taste. The average hedonic test values for this candy were 7.4 (like) for appearance, 7.0 (like) for aroma, 7.6 (like) for texture, and 7.9 (like) for taste. The jelly candy contained 6.22% water, 0.88% protein, 0.19% fat, 96.82% carbohydrate, and 1.54% crude fiber. The addition of 50% red ginger extract is therefore recommended to produce seaweed jelly candy with the best and most preferred characteristics.
Downregulated XBP-1 Rescues Cerebral Ischemia/Reperfusion Injury-Induced Pyroptosis via the NLRP3/Caspase-1/GSDMD Axis
Ischemic stroke is a major condition that remains extremely problematic to treat. A cerebral reperfusion injury becomes apparent after an ischemic accident, when reoxygenation of the afflicted area produces pathological side effects different from those induced by the initial oxygen and nutrient deprivation. Pyroptosis is a form of lytic programmed cell death, distinct from apoptosis, that is initiated by inflammasomes and depends on the activation of Caspase-1. Caspase-1 then mobilizes the N-terminal domain of gasdermin D (GSDMD), resulting in the release of cytokines such as interleukin-1β (IL-1β) and interleukin-18 (IL-18). X-box binding protein 1 (XBP-1) is activated under endoplasmic reticulum (ER) stress to form spliced XBP-1 (XBP-1s), an important transcription factor. Cerebral ischemia/reperfusion (CI/R) causes cytotoxicity, which correlates with the splicing of XBP-1 mRNA and the activation of NLRP3 (NOD-, LRR-, and pyrin domain-containing 3) inflammasomes, along with increased expression and secretion of proinflammatory cytokines and upregulation of pyroptosis-related genes in HT22 cells and in the middle cerebral artery occlusion (MCAO) rat model. However, whether XBP-1 plays a role in regulating the pyroptosis involved in CI/R is still unknown. Our present study showed that behavioral deficits, cerebral ischemic lesions, and neuronal death resulted from CI/R. CI/R increased the mRNA levels of XBP-1s, NLRP3, IL-1β, and IL-18 and the expression of XBP-1s, NLRP3, Caspase-1, GSDMD-N, IL-1β, and IL-18. We further repeated this process in HT22 cells and C8-B4 cells and found that OGD/R decreased cell viability and increased LDH release, XBP-1s, NLRP3, Caspase-1, GSDMD-N, IL-1β, IL-18, and especially the proportion of pyroptosis; these effects were reversed by Z-YVAD-FMK and by downregulation of XBP-1. Our results suggest that downregulated XBP-1 inhibits pyroptosis through the classical NLRP3/Caspase-1/GSDMD pathway to protect neurons.
Introduction
Stroke is a leading cause of permanent disability and death that affects about 15 million people around the world [1]. It is characterized by high rates of incidence, disability, and recurrence. Ischemic stroke is caused by the interruption of cerebral blood flow or the obstruction of the cerebral vasculature by a thrombus, which deprives local brain tissues of oxygen and glucose [2,3]. At present, pharmacological, physical, and modern mechanical reperfusion therapy is usually the first line of care in acute ischemia patients. In ischemic stroke therapy, reperfusion injury remains problematic [4]. Because reperfusion can elicit other life-threatening sequelae, studies have been pursued using the cerebral ischemia/reperfusion (CI/R) animal model. This undertaking entails identifying novel targets to reduce the neuroinflammation induced by reoxygenation and glucose replenishment in order to revive compromised neural function. Neuroinflammation and neural cell injury in the infarct region are inescapable consequences of focal cerebral ischemia. Studies have shown that an inflammatory response not only accompanies the pathological development of CI/R but is also a primary cause of neuronal death [5]. Occlusion of the middle cerebral artery brings about a prompt shutdown of the oxygen and glucose supply. Oxygen-deprived cells in the ischemic region upregulate and secrete damage-associated molecular patterns (DAMPs) such as TNF-α, interleukin-1β, interleukin-18, high mobility group box 1 (HMGB1), and extracellular cold-inducible RNA-binding protein (eCIRP). These can evoke dangerous inflammatory signaling mediators such as Caspase-1/Gasdermin, JAK/STAT-1, and TLR4/MyD88/NF-κB [6].
An inflammasome is a type of tissue-damage sensor that is necessary for the conversion of the proform of interleukin-1β (IL-1β) to the mature, active form and is also implicated in pyroptosis [7][8][9][10]. Pyroptosis is distinct from apoptosis and was first observed in macrophages infected with Shigella flexneri, an intracellular bacterium [11]. It enables the release of immunogenic cellular content, including DAMPs; these stressors activate NLRP3 (NOD-, LRR-, and pyrin domain-containing 3), and NLRP3 then activates Caspase-1 by means of the adaptor apoptosis-associated speck-like protein containing a CARD (ASC) [12]. Caspase-1 processes and activates inflammatory cytokines such as IL-1β and interleukin-18 (IL-18) and also cleaves gasdermin D (GSDMD) to release the membrane pore-forming N-terminal GSDMD domain (GSDMD-N). GSDMD-N pores promote the release of activated IL-1β and IL-18, triggering inflammation [13]. NLRP3 of the NLR family, together with IL-1β and Caspase-1, has been reported to play critical roles in rodent models of ischemic brain injury [14], and stroke is linked to a single-nucleotide polymorphism of IL-1β [15].
Spliced X-box binding protein 1 (XBP-1s) serves as a significant transcription factor and is activated under endoplasmic reticulum (ER) stress; it can positively regulate both cell proliferation and angiogenesis. Previous studies have shown that transient cerebral ischemia can activate XBP-1 mRNA splicing to protect cells from ischemia/reperfusion-induced cell damage and apoptosis [16,17]. XBP-1 can activate the NLRP3 inflammatory body [5]. XBP-1 deficiency in mice increases their susceptibility to bacterial infections and impairs their host defense [18].
A recent study indicated that NLRP3 inflammasome deficiency or use of its selective inhibitor (MCC950) can improve cerebral injury after ischemic stroke [19]. The IRE-1α inhibitor STF-083010 or genetic silencing of XBP-1 can selectively inhibit the IRE-1α/XBP-1s branch and thereby attenuate Cd-induced NLRP3 inflammasome activation and pyroptosis in HK-2 cells. Sirtuin-1 can ameliorate cadmium-induced ER stress and pyroptosis through XBP-1s deacetylation [20]. However, clear evidence that the increased expression and splicing of XBP-1 induces pyroptosis in cerebral ischemia/reperfusion has not yet been demonstrated. Accordingly, targeting XBP-1 and pyroptosis to suppress the abnormal inflammatory response may lead to a novel therapeutic strategy for CI/R injury [21].
In the present study, we explored the effects of XBP-1 on pyroptosis in HT22 cells, C8-B4 cells, and rats treated by OGD/R or middle cerebral artery occlusion (MCAO). We found that OGD/R decreased the cell viability, whereas pyroptosis increased, which could be mostly inhibited by knockdown of XBP-1. OGD/R or MCAO increased the levels of XBP-1s, NLRP3, Caspase-1, GSDMD, IL-1β, and IL-18, which were restored by downregulated XBP-1 expression.
Animals.
Male Sprague-Dawley rats (weighing 250-280 g) were provided by the Shanghai Laboratory Animal Center, Chinese Academy of Sciences, and were housed under a 12 : 12 h light/dark cycle at 21 ± 1°C with ad libitum access to food and water. Animals were divided into four experimental groups: (i) sham group, rats without carotid occlusion (S); (ii) I/R group, carotid artery occlusion for 60 minutes followed by reperfusion for 24 hours (24 h); (iii) I/R group, carotid artery occlusion for 60 minutes followed by reperfusion for 48 hours (48 h); and (iv) I/R group, carotid artery occlusion for 60 minutes followed by reperfusion for 72 hours (72 h).
Middle Cerebral Artery Occlusion Model.
We followed the methods of Deng et al. [22]. Rats were anesthetized with 1.5% isoflurane (Abbott, Abbott Park, IL, USA), and the left middle cerebral artery was occluded by the intraluminal suture technique as previously described (reversible middle cerebral artery occlusion without craniectomy in rats). Briefly, the middle cerebral artery was occluded by a 4-0 nylon monofilament coated with a silicone tip. Reperfusion was established by gently withdrawing the filament after 60 min of occlusion. In the sham control group, all of the surgical procedures were included except for the occlusion of the MCA.
Behavior Tests
2.4.1. Gait Analysis. Gait was measured using the footprint test to assess limb coordination and stride length. For gait analyses, rats were habituated to the experimenter and the behavior room for 2-3 days before the test. The rats were then habituated to walk straight to their home cage. The rats were allowed to walk freely on the track, and their paw prints were recorded by software (Bihaiwei Software Technology, Anhui, China). The mean stride length and mean stride width were analyzed by measuring the distances between paw prints.
Mediators of Inflammation

2.4.2. Neurological Assessment. We followed the methods of Li et al. [23]. Behavior testing is also critical for assessing the degree of ischemia after 24 h, 48 h, and 72 h of reperfusion. Three blinded researchers rated and recorded the neurological deficits of the rats, and the scores of all the groups were then calculated. The 5-level, 4-point Longa method was used in this study to evaluate the neurological deficit of each rat.
The criteria for scoring were as follows:

Grade 0: no neurological deficits

Grade 1: the contralateral forelimb cannot be stretched completely when the rat is lifted by its tail

Grade 2: the rat spontaneously circles to the paralytic side when walking

Grade 3: the rat involuntarily falls down to the contralateral side when walking

Grade 4: the rat cannot walk spontaneously and loses consciousness

2.4.3. Evaluation of Infarct Volume. 2,3,5-Triphenyltetrazolium chloride- (TTC-) stained brain sections were used to assess cerebral infarct volume. At 24, 48, and 72 hours after reperfusion, the brains were removed immediately after sacrifice and cut coronally into five serial 2 mm slices. Samples were then incubated for 15 min in 2% TTC (Solarbio, Beijing, China) at 37°C and fixed in 4% paraformaldehyde overnight. Infarcted areas remain unstained by TTC; the unstained area of each brain section was evaluated quantitatively using Image-Pro Plus software to calculate the percentage infarct volume.
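The area-ratio arithmetic behind the percentage infarct volume can be sketched as follows. This is a minimal illustration only; the function name and the numbers are hypothetical, not study data, and the actual measurements were made in Image-Pro Plus.

```python
def percent_infarct_volume(infarct_areas, total_areas):
    """Percent infarct volume from per-slice TTC measurements.

    infarct_areas / total_areas: per-slice areas (e.g., in mm^2);
    unstained (white) regions on a TTC slice are the infarction.
    With slices of equal thickness (2 mm here), the ratio of the
    summed areas equals the ratio of the volumes.
    """
    if len(infarct_areas) != len(total_areas):
        raise ValueError("one infarct area is required per slice")
    return 100.0 * sum(infarct_areas) / sum(total_areas)

# Five serial 2 mm slices (illustrative numbers only)
infarct = [12.0, 18.5, 22.0, 15.0, 8.5]
total = [55.0, 60.0, 62.0, 58.0, 54.0]
print(round(percent_infarct_volume(infarct, total), 1))  # → 26.3
```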
2.4.4. Haematoxylin-Eosin (H&E) Staining. Deeply anaesthetized rats were transcardially perfused with PBS followed by 4% paraformaldehyde. The brains were quickly removed by decapitation and carefully postfixed. The samples were then paraffinized and sliced into 5 μm thick sections. The sections were dewaxed in 2 changes of xylene (10 min each), rehydrated in 2 changes of absolute ethanol (5 min each), and then rinsed in running tap water. Haematoxylin-eosin (H&E) staining was performed to observe the histomorphology. Histological assessment was performed by a blinded investigator.
Nissl Staining. For the Nissl staining, paraffin-embedded brain tissue sections (5 μm) were immersed in xylene (5 min, 2 times), rehydrated in absolute ethanol (5 min, 2 times) followed by 95%, 75%, and 50% solutions of ethanol in water (5 min each), and then washed in distilled water twice, 5 min each. Slides were stained in FD cresyl violet solution (FD Neurotechnologies, Baltimore, MD, USA) for 10 min, briefly rinsed in 100% ethanol, and differentiated in 100% ethanol containing 0.1% glacial acetic acid for 1 min. The slides were then dehydrated in absolute ethanol (2 min, 4 times), followed by clearing in xylene (3 min, 2 times). Coverslips were mounted with resinous mounting medium. The staining of the hippocampal CA1 region was routinely analyzed by a researcher blinded to the experimental protocol.
siRNA Transfection. HT22 cells and C8-B4 cells were seeded in 6-well plates at a density of 4 × 10⁴ cells per well. For each well, 0.33 μg siRNA and 5 μl siTran transfection reagent (Origene, MD, USA) were diluted separately in serum-free Opti-MEM to a final volume of 250 μl each, gently mixed, and incubated for 5 min at room temperature. The diluted siRNA solution and the diluted siTran transfection reagent were then mixed gently and incubated for 20 min at room temperature. The siRNA/siTran transfection reagent complex was added to the plates. After transfection with siRNA for 24 h, the cells were exposed to OGD/R and then harvested for assay.
Oxygen-Glucose Deprivation and Reoxygenation (OGD/R) Model. To mimic the CI/R conditions in vitro, the OGD/R modelling method was used. HT22 cells and C8-B4 cells were cultured under normal conditions for 24 h, then moved to glucose-free DMEM (Gibco) and placed under ischemic conditions (3% O₂, 92% N₂, and 5% CO₂) at 37°C for 2 h. The medium was then discarded, and the cells were cultured in normal medium under normoxic conditions for another 24 h to model reperfusion.
2.8. The Cell Viability Assay. We followed the methods of Bai et al. [24]. HT22 cells and C8-B4 cells were seeded into a 96-well plate overnight, pretreated with XBP-1 siRNA for 24 h and with Z-YVAD-FMK (20 μM) and polyphyllin VI (4 μM) for 30 min before OGD/R, and then incubated for 24 h after exposure to OGD/R. Cell viability was measured using the Cell Counting Kit-8 (CCK-8) according to the manufacturer's instructions: 10 μl of CCK-8 was added to each well, followed by a 2 h incubation at 37°C. Absorbance at 450 nm/630 nm was detected using an enzyme-labeled instrument. The results were obtained from three independent experiments, each performed in triplicate. Viability was calculated as the mean OD of a group divided by the mean OD of the control.
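The viability calculation described above (mean OD of a treated group over mean OD of the control) reduces to a one-line ratio; a minimal sketch, with hypothetical OD values rather than study data:

```python
def cell_viability(group_ods, control_ods):
    """CCK-8 viability: mean OD of a treated group divided by the
    mean OD of the control group (returned as a fraction)."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(group_ods) / mean(control_ods)

# Triplicate wells from one illustrative experiment (not study data)
control = [1.20, 1.18, 1.22]
after_ogdr = [0.66, 0.60, 0.63]
print(round(100 * cell_viability(after_ogdr, control), 1))  # → 52.5 (% viability)
```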
2.9. Lactate Dehydrogenase (LDH) Assay. Cell death was evaluated by quantifying plasma membrane damage, which results in the release of lactate dehydrogenase (LDH). The level of LDH released into the cell culture supernatant was detected with an LDH cytotoxicity assay detection kit (Beyotime, China) following the manufacturer's instructions.
Flow Cytometry (FCM). After OGD/R, HT22 cells and C8-B4 cells were washed with PBS (C0221A; Beyotime, Shanghai, China), digested with pancreatin (C0203; Beyotime, Shanghai, China), and centrifuged at 168 × g for 5 min. The supernatant was discarded, and the collected cells were resuspended in PBS. The cells in suspension were counted and briefly labeled with Annexin V (AV) and propidium iodide (PI), after which flow cytometry (Guava easyCyte™ 8, Millipore, USA) was employed to quantitatively determine and analyze the pyroptotic cells.
Quantitative Real-Time PCR. PCR amplification was carried out at 95°C for 30 s, followed by 45 cycles of 95°C for 5 s and 55°C for 30 s. GAPDH was used as an endogenous control to normalize differences. All fluorescence data were processed by a PCR post-run data analysis software program. Differences in gene expression levels were analyzed with the 2^(−ΔΔCT) method.
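The relative quantification step can be made concrete with a short sketch of the 2^(−ΔΔCT) calculation; the function name and the CT values below are hypothetical illustrations, not study data.

```python
def fold_change_2_ddct(ct_target_treated, ct_ref_treated,
                       ct_target_control, ct_ref_control):
    """Relative expression by the 2^(-ΔΔCT) method.

    ΔCT = CT(target) - CT(reference, here GAPDH);
    ΔΔCT = ΔCT(treated) - ΔCT(control);
    fold change vs. control = 2^(-ΔΔCT).
    """
    ddct = (ct_target_treated - ct_ref_treated) - \
           (ct_target_control - ct_ref_control)
    return 2.0 ** (-ddct)

# Illustrative CT values: the target crosses threshold two cycles
# earlier (relative to GAPDH) in the treated sample than in control
print(fold_change_2_ddct(24.0, 18.0, 26.0, 18.0))  # → 4.0
```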
2.13. Statistical Analysis. Data are expressed as mean ± SEM. Statistical analysis was performed using SPSS software. One-way ANOVA followed by a post hoc Bonferroni multiple comparison test was used to compare control and treated groups. A p value less than 0.05 was considered statistically significant. All blots are representative of experiments that were performed at least three times.
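As a sketch of the statistics pipeline (run in SPSS in the study), the one-way ANOVA F statistic and the Bonferroni adjustment reduce to simple arithmetic; the data below are illustrative only.

```python
def one_way_anova_f(groups):
    """One-way ANOVA F statistic: between-group mean square divided
    by within-group mean square."""
    k = len(groups)                              # number of groups
    n = sum(len(g) for g in groups)              # total observations
    grand = sum(sum(g) for g in groups) / n      # grand mean
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2
                     for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g)
                    for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

groups = [[1, 2, 3], [2, 3, 4], [6, 7, 8]]       # illustrative data
print(one_way_anova_f(groups))  # → 21.0

# Bonferroni correction: each post hoc p value is multiplied by the
# number of pairwise comparisons (capped at 1), or equivalently the
# significance threshold is divided by that number
n_comparisons = 3
bonferroni_alpha = 0.05 / n_comparisons
```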
3.1. CI/R Caused Behavior Deficits, Cerebral Ischemic Lesions, and Neuronal Death in the Rat MCAO Model. To confirm whether the rat MCAO model could successfully induce CI/R injury, the gait analysis, neurological deficit score, and infarct volume were evaluated in the sham, 24 h, 48 h, and 72 h groups. Compared to the sham group, the 24 h, 48 h, and 72 h groups exhibited an abnormal gait (Figure 1(a)), and their stride length decreased significantly (Figure 1(b)) (F(3,32) = 60.3, p < 0.001). Meanwhile, the neurological deficit scores of the 24 h, 48 h, and 72 h groups were significantly higher than that of the sham group (Figure 1(c)) (F(3,36) = 136.1, p < 0.001). TTC staining was used to evaluate the infarct volume (Figure 1(d)). Compared to the sham group, the 24 h, 48 h, and 72 h groups exhibited 42.3%, 38.3%, and 32.6% infarct rates, respectively (Figure 1(e)) (F(3,20) = 75.3, p < 0.001). Neuronal injury was further examined histologically (Figure 1(f)). Compared to the sham group, an apparent decrease in neuronal density in the CA1 region was noted in the 24 h, 48 h, and 72 h groups (Figure 1(g)). These results demonstrated that the rat MCAO model was able to induce behavior deficits and neuronal death.
3.2. CI/R Activated XBP-1 Splicing, Neuroinflammation, and Neuron Pyroptosis in the Rat MCAO Model. Previous studies confirmed that XBP-1 is associated with CI/R, so the Xbp-1s mRNA expression level was measured. Compared to the sham group, the mRNA expression level of Xbp-1s was significantly increased in the 24 h, 48 h, and 72 h groups (Figure 2(a)) (F(3,20) = 132.5, p < 0.001), and the protein expression of XBP-1s was also significantly increased in these groups (Figures 2(e) and 2(f)) (F(3,20) = 57.07, p < 0.001). XBP-1s can activate NLRP3 inflammatory bodies [20], and NLRP3 inflammasome signaling is a basic mechanism in CI/R. Compared to the sham group, the mRNA expression level of NLRP3 was significantly increased in the 24 h, 48 h, and 72 h groups (Figure 2). Parallel changes were observed in HT22 cells after OGD/R (Figures 3(c)-3(i)). These results showed that OGD/R decreased cell viability, increased cytotoxicity, and induced XBP-1 splicing, inflammation, and pyroptosis in HT22 cells. XBP-1 may therefore be associated with pyroptosis in HT22 cells after OGD/R.
OGD/R Decreased Cell Viability and Increased Cytotoxicity, XBP-1 Splicing, and Inflammation in C8-B4 Cells. To confirm the above results in a different cell line, the murine C8-B4 microglia cell line was used (Figures S1C-S1I). These results showed that OGD/R decreased cell viability, increased LDH release, and induced XBP-1 splicing, inflammation, and pyroptosis in C8-B4 cells. Therefore, XBP-1 may be involved in pyroptosis in C8-B4 cells after OGD/R. Downregulation of XBP-1 increased cell viability, decreased LDH release, and reduced XBP-1 splicing, inflammation, and pyroptosis in C8-B4 cells (Figures S2E-S2K).
Discussion
Ischemic stroke has become a major human threat, and the potential for reperfusion injury of cerebral ischemic tissue has attracted the attention of scientists [27]. Studies have indicated that an inflammatory response is a major component of the pathological development of CI/R and is also one of the main causes of neuronal death [5]. In this study, we used an in vivo rat model and an in vitro cell culture model to investigate the potential mechanism of CI/R injury-induced neurotoxicity. Our results showed that pyroptosis mediated OGD/R-induced toxicity in both HT22 cells and C8-B4 cells. We also demonstrated that genetic reduction of XBP-1 mRNA attenuated OGD/R-induced activation of the NLRP3 inflammasome and pyroptosis through the NLRP3/Caspase-1/GSDMD pathway (Figure 5). Unlike apoptosis, pyroptosis, which depends on the activation of Caspase-1 [28], is less understood in CI/R injury.
The activation of Caspase-1 leads to the cleavage of GSDMD [29], generating the N-domain of GSDMD, which in turn forms membrane pores and mediates the release of cytokines, cellular swelling due to water influx, membrane rupture, and finally lysis of the cell [23, 30-32]. Pyroptosis is characterized by pore formation, osmotic swelling, and early loss of membrane integrity. Acting as a key pyroptotic executor, the GSDMD N-terminal fragment is able to induce the formation of membrane pores [26]. In this study, the inflammation-associated pyroptosis in both the rat MCAO model and the OGD/R cell model is described in detail (Figures 1-4, S1, and S2). H&E staining in the rat MCAO model (Figure 1(f)) revealed the characteristics of pyroptotic changes. These results suggest that pyroptosis is likely to be involved in the responses induced by CI/R.
In microglia, active Caspase-1 can bring about pyroptosis by means of intramembranous pores [33]. Driven by inflammasome activation, pyroptotic cell death in microglia exacerbates brain damage in ischemic stroke [34,35]. Therefore, HT22 cells and C8-B4 cells were chosen to elucidate the mechanism by which XBP-1 influences pyroptosis. The NLRP3 inflammasome signal is the initial response that mediates the inflammatory response in the process of ischemic stroke [21]. In accordance with previous studies, our results confirmed that NLRP3 inflammasome signaling was activated in CI/R injury, and the mRNA level and protein expression of NLRP3 increased in the rat MCAO model (Figure 2). In addition, the expression of NLRP3 was increased by OGD/R (Figures 3 and 4), which was reversed by Z-YVAD-FMK.
The assembly of the NLRP3 inflammasome triggers autocleavage of pro-Caspase-1 into active Caspase-1 [25], which is considered a regulator of pyroptosis. Consistent with previous studies, this study indicated that pro-Caspase-1 was activated in CI/R injury; the expression of Caspase-1 was increased in the rat MCAO model (Figure 2). The expression of Caspase-1 was also increased by OGD/R (Figures 3 and 4) and was reversed by Z-YVAD-FMK. Caspase-1 cleaves GSDMD to release the GSDMD-N domain and activates IL-1β and IL-18 [13]. Our results showed that the expression levels of GSDMD-N, IL-1β, and IL-18 were increased in rats with CI/R injury (Figure 2), consistent with the rises in the mRNA expression levels of IL-1β and IL-18. In addition, the expression levels of GSDMD-N, IL-1β, and IL-18 were also increased by OGD/R (Figures 3 and 4) and were reversed by Z-YVAD-FMK. These results suggest that the inhibition of pyroptosis may prevent neuronal damage caused by CI/R injury.
Increasing in vitro and in vivo evidence has demonstrated that CI/R injury activates ER stress. Upon unfolded protein response (UPR) activation, inositol-requiring transmembrane kinase/endonuclease-1 (IRE1) activates its endoribonuclease (RNase) activity through dimerization and autophosphorylation [37]. The IRE1 RNase removes a 26-nucleotide intron from the mRNA of the leucine zipper transcription factor XBP-1 and produces a frameshift in the XBP-1 mRNA transcript (the spliced mRNA of XBP-1). The spliced XBP-1 (XBP-1s) mRNA is then translated into a potent transcription factor that is responsible for the upregulation of genes encoding ER chaperones and proinflammatory gene production [38]. The activation of XBP-1 serves to reestablish the homeostasis of the ER and the secretory pathway. Moreover, in addition to its importance in protein secretion and lipid metabolism, XBP-1s also modulates immune responses [37]. The transcription factor XBP-1 represents a key component of the ER stress response and is required for the sustained generation of proinflammatory cytokines during the inflammatory response. Previous research has not clarified the relationship between XBP-1 and pyroptosis. In addition, previous studies indicate that XBP-1 deficiency increases the susceptibility of mice to bacterial infections and impairs host defense [39]. Therefore, XBP-1 has a close relationship with NLRP3-associated inflammation and Caspase-1-mediated pyroptosis. Importantly, the inhibition of Caspase-1 activation using Z-YVAD-FMK or the knockdown of XBP-1 with siRNA substantially mitigated OGD/R-induced activation of the NLRP3 inflammasome and the resulting cell death. Similarly, the IRE1/XBP-1 branch of UPR signaling can regulate the inflammatory responses of macrophages.
Using the IRE-1α inhibitor STF-083010 or genetic silencing of XBP-1 can selectively inhibit the IRE-1α/XBP-1s branch and thereby attenuate cadmium-induced NLRP3 inflammasome activation and pyroptosis in HK-2 cells [20]. Our results showed that XBP-1s was increased after CI/R injury in both the rat model and the cellular model (Figure 2), and XBP-1 may therefore be a key factor. Downregulated XBP-1 decreased the expression of NLRP3, Caspase-1, GSDMD-N, IL-1β, and IL-18 (Figure 4 and Figure S2), with the same effects as Z-YVAD-FMK. The decrease of IL-18 in particular has a meaningful influence on CI/R [40] in that it reduces the accumulation of macrophages and neutrophils and decreases the expression of proinflammatory molecules downstream of IL-18.
Our results showed that XBP-1 may be involved in pyroptosis and could play an important role in regulating pyroptosis in CI/R injury. Downregulated XBP-1 can inhibit NLRP3 inflammasome activation and potentially pyroptosis induced by OGD/R or MCAO. These effects may protect the neurons through the NLRP3/Caspase-1/GSDMD pathway (Figure 5). These findings may offer a better understanding and novel therapeutic strategies for CI/R injury.

Figure 5: A schematic diagram of the mechanisms of XBP-1 in the regulation of neuronal pyroptosis following cerebral ischemia. The expression of XBP-1 is upregulated after cerebral ischemia/reperfusion. Cerebral ischemia/reperfusion mediates Xbp-1u mRNA splicing to generate the active XBP-1s form. Active XBP-1s promotes NLRP3-ASC/Caspase-1 inflammasome assembly, which generates inflammatory mediators and cytokines. The triggered Caspase-1 cleaves GSDMD to promote the release of the N-terminal domain, which executes pore formation on the neuronal membrane. The mature forms of IL-1β and IL-18 that are secreted through these pores are also increased. Downregulated XBP-1 can facilitate an anti-inflammatory effect and inhibit pyroptosis. XBP-1: X-box binding protein 1; NLRP3: NOD-, LRR-, and pyrin domain-containing 3; ASC: apoptosis-associated speck-like protein containing a CARD; GSDMD: gasdermin D; GSDMD-N: N-terminal GSDMD domain; IL-1β: interleukin-1β; IL-18: interleukin-18.
Conclusion
This study demonstrated that downregulated XBP-1 could inhibit pyroptosis in the hippocampus by suppressing the NLRP3/Caspase-1/GSDMD pathway, thereby rescuing cerebral ischemia/reperfusion injury.
Data Availability
All datasets presented in this study are included in the article. All data are authentic and support the validity of the experimental results.
Ethical Approval
This study followed the conventional requirements of experimental operation and was approved by the local Committee on Animal Use and Protection of Yunnan province (No. LA2008305).
Conflicts of Interest
All the authors declare no financial or nonfinancial conflicts of interest.
134622920 | pes2o/s2orc | v3-fos-license | Determination of Ionizing Radiation Exposure Levels within Four Local Mining Sites Selected from Sardauna Local Government Area of Taraba State - Nigeria
Miners and the people living close to mining sites are exposed to elevated levels of ionizing radiation with or without their consent. This study determined the background ionizing radiation of four mining sites in Sardauna local government area of Taraba state using an Inspector Alert nuclear radiation meter manufactured by S.E. International, Inc., USA, with serial number 35440. The meter has a halogen-quenched Geiger-Muller tube with a ±45 mm effective diameter and a mica window density of 1.5-2.0 mg/cm³. The Geiger tube in the meter generates a pulse of electrical current each time radiation is incident on the tube and causes ionization. The measured values ranged from 0.19-0.40 mSv/yr across all the mining sites and at 500 m away from the sites. These results are far less than the limit of 1.0 mSv/yr set by the International Commission on Radiological Protection (ICRP) for the general public and 20.0 mSv/yr set by the Nigeria Basic Ionization Radiation Regulation (NiBIRR) for the whole body of adult radiation protection workers, which means that the miners and inhabitants of these areas are safe. Nevertheless, there could be long-term variations in the consequences arising from the effects of ionizing radiation among the miners and even the inhabitants. Strong correlations were found between the equivalent doses at the excavating and processing points of the sites, which means that the miners and people living close to these mining sites are subject to a uniform distribution of the consequences arising from ionizing radiation. We recommend that policy makers and regulatory bodies apply mitigation measures by creating awareness among the miners at the various mining sites and by using modern mining strategies to protect other natural resources, especially water.
Introduction
The exposure of human beings to ionizing radiation from natural sources is a continuing and inescapable phenomenon on earth. Ionizing radiation and radioactivity are found naturally within the environment, and their level depends generally on the distribution of natural radionuclides within the environment. Human activities involving mining and the use and processing of radionuclides or items which contain radionuclides can enhance the levels of environmental radiation [1,2]. Some people are exposed to enhanced levels of natural radiation at their places of work; such workers include underground miners, some workers involved in mineral processing, and aircraft flight crews [3].
Mining workers and their neighbors, such as those in Sardauna local government area of Taraba state, have long been exposed to ionizing radiation with or without their consent [4]. Exposure to radiation leads to damage at different levels of the biological system of an organism.
The clinical risk of the damage posed by radiation and the resulting radiation syndromes may vary to a great extent depending on exposure conditions such as the nature of the radiation, the exposure time, and the affected organs [5]. These injuries and clinical symptoms may include chromosomal transformation, cancer induction, free radical formation, bone necrosis, and radiation cataractogenesis [6].
The two main contributors to natural ionizing radiation exposure are high-speed cosmic ray particles incident on the earth's atmosphere and the primordial radionuclides present in the earth's crust, which are ubiquitous and occur even in the human body. Some exposure to natural radiation sources is modified by human activities such as mineral processing and quarrying. The people of Sardauna local government area of Taraba state are known for their engagement in local mining of sapphires, gold, gravel, and other precious stones at commercial levels. The miners are exposed to radioactive elements such as Uranium-238, Uranium-235, and Thorium-232; it has been found that all soil samples contain some concentration of Uranium-238 [7]. Exposure to ionizing radiation also arises from other terrestrial radionuclides present at trace levels in all soil types. Radiation emitted by these radionuclides within 15-30 cm of the top soil reaches the earth's surface [8]. Only those radionuclides with half-lives comparable to the age of the earth, and their decay products, exist in significant quantities in these materials. One such heavy radionuclide decay product is naturally occurring radon gas, which contributes a large share of the potentially harmful dose; radon has also been reported to be the cause of the majority of lung cancer deaths, and the risk of lung cancer from exposure to radon is reported to be higher than that from other causes [9,10].
Ike et al. [4] and Agba et al. [9] investigated the radiation levels in mining sites of Plateau and Benue states, respectively, and found that ionizing radiation was present in the mines, though within a healthy range. Because of the lethal effects of ionizing radiation, the assessment of exposure to it is an important goal of regulatory bodies and radiation scientists. This study is intended to monitor and assess the levels of exposure to ionizing radiation in some selected mining sites in Sardauna local government area of Taraba state and hence to recommend measures for keeping the miners' exposure to ionizing radiation as low as possible.
Since little or no data exist on this subject, it is hoped that the data generated in this study will form a baseline for policy makers and regulatory bodies in the state and also assist them to put in place proper checks and regulations on the activities of the miners in order to achieve low exposure levels to ionizing radiation and sustainable development in Sardauna local government area of Taraba state and Nigeria in general.
Sampler and Analytical Procedures
The sampling tool used in this study was an Inspector Alert nuclear radiation meter manufactured by S.E. International, Inc., USA, with serial number 35440. The meter has a halogen-quenched Geiger-Muller tube with a ±45 mm effective diameter and a mica window density of 1.5-2.0 mg/cm³. The Geiger tube generates a pulse of electrical current each time radiation is incident on the tube and causes ionization. Each pulse is electronically detected and registered as a count in the mode chosen by the operator. The meter was held one meter above the ground surface, corresponding to the abdominal level of a human, and twelve readings in counts per minute (CPM) and roentgens per hour (R/hr) were taken at each point of the four sampling sites. The readings were taken at the excavation and processing points of each of the mining sites. Areas of normal background were also selected 500 m away from each mining site based on the metropolitan population or the presence of an access path leading to the village and the mining sites. The mean readings in roentgens per hour were then calculated from the expression in equation 1.
Mean = ΣF / N (1)

Where ΣF = sum of all the observed values at a site and N = total number of times observations were recorded at a site. The mean readings in roentgens per hour were further converted to microsieverts per hour, which serve as the equivalent dose, according to the conversion factor in Equation 2. UNSCEAR (1988) recommended an outdoor occupancy factor of 0.2 as the proportion of the total time during which an individual is exposed to a radiation field, and 8760 hr/yr was used to convert the hourly readings from Equation 2 to annual equivalent doses in mSv/yr according to Equation 3:

Ed_A = β × 8760 × 0.2 × 10⁻³ (3)

Where Ed_A = annual equivalent dose rate in mSv/yr and β = observed reading in µSv/hr.
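The conversion chain described above (mean reading, then annual equivalent dose) can be sketched in Python. The readings below are hypothetical, and the µSv/hr inputs are assumed to have already been converted from R/hr via Equation 2, which is not reproduced in the text:

```python
def mean_reading(values):
    # Equation 1: arithmetic mean of the observed values at a site.
    return sum(values) / len(values)

def annual_equivalent_dose(beta_usv_per_hr, occupancy=0.2, hours_per_year=8760):
    # Equation 3: Ed_A (mSv/yr) = beta (uSv/hr) * 8760 hr/yr * 0.2,
    # with a factor of 10^-3 converting microsieverts to millisieverts.
    return beta_usv_per_hr * hours_per_year * occupancy * 1e-3

# Hypothetical hourly readings (uSv/hr) at one sampling point
readings = [0.20, 0.22, 0.21]
beta = mean_reading(readings)
print(round(annual_equivalent_dose(beta), 2))  # -> 0.37
```

With an occupancy factor of 0.2, a mean reading of about 0.21 µSv/hr yields roughly 0.37 mSv/yr, the same order of magnitude as the values reported in Table 3.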
Study Area
The study area covers four mining sites in Sardauna local government area of Taraba state, located in the Nguroje and Gembu towns, both in the southern part of Taraba state. The area lies within latitude 06°05′-06°90′N and longitude 11°15′-11°65′E and covers approximately 4,603 km². The population of the inhabitants is approximately 397,438 according to the 2006 census figure [11]. About 90% of the people are engaged in mining, farming and grazing activities as their major occupations. The miners use blunt implements such as hoes, diggers and bare hands to dig the earth in search of the precious stones. The cold weather of the area provides a clement atmosphere for the miners to work throughout the day.
Sampling Sites Selection/Site Code
The sampling sites selected from the villages of the study area and their assigned codes are presented in Table 1.
The four sites were chosen based on the high level of mining activity in the local government area, as carefully observed by the researcher.
Results and Discussion
The mean of the ionizing radiation data collected in roentgens per hour and the total count in counts per minute at the excavation point, the processing point, and 500 m away from each of the mining sites was calculated using Equation 1 and is presented in Table 2. Table 3 presents the results converted using Equations 2 and 3, respectively.
Discussion
The mean ionizing radiation measured at the excavation point, the processing point and 500 m away from the mining sites, as presented in Table 2 and Figure 1, shows that the Mayo-Sina sapphire mining site has the highest value of ionizing radiation at all three points, followed by the Mayo-Ndaga gold mining site, which has the next highest values at the excavation and processing points. At 500 m away from the mining sites, however, it is the Mayo-Ndaga quarry site that came second to the Mayo-Sina sapphire mining site, with a value of 0.015 R/hr. The lowest values were recorded at the Mbamnga sand mining site at all three points, and these values were the same. The mean annual background radiation at the excavation point was highest at the Mayo-Sina sapphire mining site (M1), with a value of 0.35 mSv/yr, followed by 0.28 mSv/yr measured at the excavation point of the Mayo-Ndaga gold mining site (N1) in Nguroje, then the results from the Mayo-Ndaga quarry site (M2) and the Mbamnga sand mining site (G1) in decreasing order (Table 3 and Figure 2). These high values are probably due to local evaporation processes involved in the excavation work as well as the geographical altitude of the sites. The investigation also revealed that a maximum annual dose rate contribution to the background radiation of 0.37 mSv/yr was measured at the processing point of the Mayo-Sina sapphire mining site (M1), while the Mayo-Ndaga gold mining site (N1) and the Mayo-Sina quarry site (M2) followed with 0.32 mSv/yr and 0.21 mSv/yr, respectively (Table 3 and Figure 2). For the Mbamnga sand mining site (G1), where the value is least, the annual background radiation is 0.19 mSv/yr and is the same for all the points. The higher value recorded at the processing point could be attributed to escaping steam containing radionuclides in the form of aerosols as a result of the miners' activities at that point.
All the annual background radiation values recorded 500 m away from the mining sites are lower than those recorded at the excavation and processing points of all the sites, except for the Mayo-Ndaga quarry site (M2), where the background radiation is 0.26 mSv/yr, and the Mbamnga sand mining site (G1), where the result is the same at all the points. The higher levels of ionizing radiation recorded at the excavation and processing points confirm that mining activities contribute to enhanced levels of ionizing radiation, and the identical results at all the points of the Mbamnga sand mining site establish that background radiation there depends on geographical/meteorological factors of the area such as location, altitude and temperature. In addition to the assessment of the annual equivalent dose rate of all the mining sites, strong correlations were found to exist between the equivalent dose rates at the excavation point and the processing point of the mining sites. This means that the contribution to the background radiation of the mining sites arises from similar natural and anthropogenic influences, and hence the sites have a uniform distribution of the negative consequences arising from ionizing radiation. The correlation coefficient r is described by the relation in Equation 4:

r = [n Σ(Ed_e Ed_p) − (ΣEd_e)(ΣEd_p)] / √{[n ΣEd_e² − (ΣEd_e)²][n ΣEd_p² − (ΣEd_p)²]} (4)

Where n is the number of pairs of data, and Ed_e and Ed_p are the equivalent dose rates at the excavation and processing points of a mining site. The correlation coefficient r ranged from 0.70 to 1.0 across all the sites. The highest value of r, 0.97, was determined for the Mbamnga sand mining site, while the least value, 0.76, was found for the Mayo-Ndaga gold mining site.
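Equation 4 is the standard Pearson product-moment correlation. A minimal Python sketch, using hypothetical paired dose rates, is:

```python
import math

def pearson_r(ed_e, ed_p):
    # Equation 4: Pearson correlation between paired equivalent dose
    # rates at the excavation (ed_e) and processing (ed_p) points.
    n = len(ed_e)
    s_e, s_p = sum(ed_e), sum(ed_p)
    s_ep = sum(e * p for e, p in zip(ed_e, ed_p))
    s_ee = sum(e * e for e in ed_e)
    s_pp = sum(p * p for p in ed_p)
    num = n * s_ep - s_e * s_p
    den = math.sqrt((n * s_ee - s_e ** 2) * (n * s_pp - s_p ** 2))
    return num / den

# Perfectly proportional readings give r = 1.0
print(round(pearson_r([0.1, 0.2, 0.3], [0.2, 0.4, 0.6]), 2))  # -> 1.0
```

Values of r near 1, as reported for the four sites, indicate that excavation-point and processing-point dose rates rise and fall together.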
Conclusion
The background ionizing radiation of four mining sites in Sardauna local government area of Taraba state has been measured, and the results were found to be within the range of 0.19-0.40 mSv/yr at the mining sites and 500 m away from the sites. These results are within the range of literature values published by many other authors, which supports our sampling procedures. The results are far below the permissible limit of 20.0 mSv/yr set by the Nigerian Basic Ionizing Radiation Regulation (NiBIRR) for the whole body of an adult radiation protection worker and 1.0 mSv/yr set by the International Commission on Radiological Protection (ICRP) for the general public [12]. This signifies that the miners and the people living close to these mining sites are safe. The strong correlations between the equivalent dose rates at the excavation point and the processing point of the mining sites mean that there is a uniform distribution of the consequences arising from background ionizing radiation across all the mining sites. We recommend that policy makers apply mitigation measures by creating awareness among the miners at the various mining sites, and also enforce compliance with the use of modern mining strategies to protect our natural resources, especially water.
A novel circular RNA hsa_circRNA_103809/miR-377-3p/GOT1 pathway regulates cisplatin-resistance in non-small cell lung cancer (NSCLC)
Background Cisplatin is the first-line chemotherapeutic drug for non-small cell lung cancer (NSCLC), and emerging evidence suggests that targeting circular RNAs (circRNAs) is an effective strategy to increase cisplatin-sensitivity in NSCLC, but the detailed mechanisms are still not fully delineated. Methods Cell proliferation, viability and apoptosis were examined using the cell counting kit-8 (CCK-8) assay, the trypan blue staining assay and the Annexin V-FITC/PI double staining assay, respectively. The expression levels of cancer-associated genes were measured by Real-Time qPCR and Western Blot analysis at the transcriptional and translational levels. A dual-luciferase reporter gene system assay was conducted to validate the targeting sites among hsa_circRNA_103809, miR-377-3p and the 3′ untranslated region (3′UTR) of GOT1 mRNA. The expression status, including expression levels and localization, was determined by immunohistochemistry (IHC) assay in mice tumor tissues. Results Here we identified a novel hsa_circRNA_103809/miR-377-3p/GOT1 signaling cascade which contributes to cisplatin-resistance in NSCLC in vitro and in vivo. Mechanistically, parental cisplatin-sensitive NSCLC (CS-NSCLC) cells were subjected to continuous low-dose cisplatin treatment to generate cisplatin-resistant NSCLC (CR-NSCLC) cells, and we found that hsa_circRNA_103809 and GOT1 were upregulated, while miR-377-3p was downregulated, in CR-NSCLC cells but not in CS-NSCLC cells. In addition, hsa_circRNA_103809 sponged miR-377-3p to upregulate GOT1 in CS-NSCLC cells, and knock-down of hsa_circRNA_103809 enhanced the inhibitory effects of cisplatin on cell proliferation and viability and induced cell apoptosis in CR-NSCLC cells, which were reversed by downregulating miR-377-3p and overexpressing GOT1. Consistently, overexpression of hsa_circRNA_103809 increased cisplatin-resistance in CS-NSCLC cells by regulating the miR-377-3p/GOT1 axis.
Finally, silencing of hsa_circRNA_103809 aggravated the inhibitory effects of cisplatin treatment on NSCLC cell growth in vivo. Conclusions Analysis of the data suggested that targeting the hsa_circRNA_103809/miR-377-3p/GOT1 pathway increased the susceptibility of CR-NSCLC cells to cisplatin, and this study provided novel targets to improve the therapeutic efficacy of cisplatin for NSCLC treatment in the clinic. Supplementary Information The online version contains supplementary material available at 10.1186/s12885-020-07680-w.
Background Non-small cell lung cancer (NSCLC) is a common malignancy with high morbidity and mortality, and its incidence has increased in the last decades [1,2]. According to a 2015 report, the crude and age-adjusted incidences of NSCLC in China are 54.20 per 100,000 people [3], and the 5-year survival rate for patients with metastatic NSCLC is less than 5% [1,2]. The efficacy of the current therapeutic strategies, which include surgical resection [4,5], chemotherapy [6], radiotherapy [7,8], and immunotherapy [9,10], is limited by advanced-stage disease, chemoresistance and radio-resistance [11,12]. The chemotherapeutic drug cisplatin is currently used as first-line treatment for NSCLC [13,14]. Recent evidence shows that continuous long-term stimulation of NSCLC cells by cisplatin causes alteration of multiple cancer-associated circular RNAs (circRNAs), resulting in a decrease in the effectiveness of the drug [13,14]. Uncovering the underlying mechanisms leading to this resistance might solve this problem. Based on the above information, by searching the online PubMed database (https://pubmed.ncbi.nlm.nih.gov/), we selected hsa_circRNA_103809 for further investigation in this study, mainly because hsa_circRNA_103809 acts as an oncogene to promote cancer development in colorectal cancer [15], breast cancer [16], hepatocellular carcinoma [17], gastric cancer [18] and lung cancer [19]. However, to date, the role of hsa_circRNA_103809 in drug resistance in cancer remains largely unknown and is therefore important to investigate.
Glutamate oxaloacetate transaminase 1 (GOT1) mainly regulates cellular glutaminolysis, which converts glutamate (Glu) into α-ketoglutaric acid (α-KG) and is crucial for sustaining cancer progression [31,32]. Inhibition of GOT1 has been validated as an effective strategy to impair cancer growth in pancreatic cancer [32] and lung cancer [31]. Notably, previous data suggest that cisplatin regulated mitochondrial GOT1 to induce nephrotoxicity in rats [33], which raised the possibility that targeting GOT1 might help to increase the therapeutic efficacy of cisplatin in NSCLC. In addition, recent studies have suggested that miRNAs could bind to the 3′ untranslated regions (3′UTRs) of GOT1 mRNA, resulting in GOT1 degradation and downregulation [34,35], and Zhang K et al. found that miR-9 targeted GOT1 to regulate cell ferroptosis in melanoma [34]. Using the online miRDB software (http://mirdb.org/), we predicted that miR-377-3p potentially binds to the 3′UTR of GOT1 mRNA. Given that hsa_circRNA_103809 sponged miR-377-3p in NSCLC cells, we speculated that hsa_circRNA_103809 might regulate GOT1 through miR-377-3p in a competing endogenous RNA (ceRNA)-dependent manner.
Based on the published literature and on in vitro and in vivo experiments, this study identified that the hsa_circRNA_103809/miR-377-3p/GOT1 pathway regulates cisplatin-resistance in NSCLC cells, and that targeting this pathway improves cisplatin-sensitivity in NSCLC, which provides potential avenues for improving NSCLC treatment in the clinic.
Cell culture and induction of cisplatin-resistant NSCLC (CR-NSCLC) cells
The parental CS-NSCLC cell lines, including A549 (ATCC® CCL-185™), H1299 (ATCC® CRL-5803™) and Calu-3 (ATCC® HTB-55™), were purchased from the American Type Culture Collection (ATCC, USA) in January 2019, and cultured in an incubator under standard culture conditions (37°C and a 5% CO2 humidified atmosphere). The cells were authenticated by STR profiling and were identified as mycoplasma-free by a commercial third-party company (Abace Biotechnology, Beijing, China). Roswell Park Memorial Institute 1640 medium (RPMI-1640, HyClone, USA) containing 10% fetal bovine serum (FBS, Gibco, USA) was used for cell cultivation. According to the experimental procedures provided in previous work [36,37] and our preliminary experiments (data not shown), the CS-NSCLC cells were exposed to continuous low-dose cisplatin treatment, ranging from 0.5 μg/ml to 5 μg/ml, for 80 days in a step-wise manner to generate descendent CR-NSCLC cells (A549/DDP, H1299/DDP and Calu-3/DDP). After that, the CR-NSCLC cells were stimulated with high-dose cisplatin (25 μg/ml) for 0 h, 24 h, 48 h and 72 h to validate the successful induction of CR-NSCLC cells.
Vectors transfection
The overexpression and downregulation vectors for hsa_circRNA_103809 and GOT1, and the miR-377-3p mimic and inhibitor, were designed and synthesized by Sangon Biotech (Shanghai, China), and the above vectors were delivered into CS-NSCLC and CR-NSCLC cells to manipulate gene expression using the commercial Lipofectamine 2000 reagent (Invitrogen, USA), based on the experimental protocols provided by the producer. After that, Real-Time qPCR was conducted to validate the transfection efficiency of the above vectors. The sequence of the siRNA for hsa_cir-
Cell counting kit-8 (CCK-8) assay
The NSCLC cells were pre-transfected with the above vectors, cultured in 96-well plates under standard culture conditions, and subjected to cisplatin (25 μg/ml) stimulation for 0 h, 24 h, 48 h and 72 h, respectively. After that, the CCK-8 reaction solution (AbMole, USA) was incubated with the cells in a volume of 20 μl per well for 4 h in the incubator. Then, the plates were vortexed to thoroughly mix the cells with the solution and placed in a microplate reader (ThermoFisher Scientific, USA) to measure the optical density (OD) values at a wavelength of 450 nm, which were used to represent relative cell proliferation.
Trypan blue staining assay
The CR-NSCLC and CS-NSCLC cells were pre-transfected with the different vectors and stimulated with cisplatin. Then, the cells were prepared and stained with trypan blue staining solution obtained from Invitrogen (USA) for 20 min at room temperature. After that, a light microscope was used to observe and count the number of dead (blue) cells, which was used to evaluate cell viability according to the following formula: Cell viability (%) = (Total cells − Dead blue cells)/Total cells × 100%.
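The viability formula above can be expressed as a short helper function; the counts used in the example are hypothetical:

```python
def cell_viability(total_cells, dead_blue_cells):
    # Trypan blue exclusion: viable fraction of the counted cells,
    # expressed as a percentage.
    return (total_cells - dead_blue_cells) / total_cells * 100.0

print(cell_viability(200, 50))  # -> 75.0
```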
Annexin V-FITC/PI double staining assay
An apoptosis detection kit (Invitrogen, USA) was used to examine cell apoptosis in CS-NSCLC and CR-NSCLC cells, based on the protocols provided by the manufacturer. In brief, the cells were harvested and prepared, and subsequently stained with Annexin V-FITC and propidium iodide (PI) for 25 min at room temperature without light exposure. After that, a flow cytometer (FCM, ThermoFisher Scientific, USA) was used to examine the cell death ratio in the NSCLC cells. Specifically, early apoptotic cells were stained with Annexin V-FITC alone, late apoptotic cells were stained with both Annexin V-FITC and PI, and necroptotic cells were stained with PI alone.
Real-time qPCR
The NSCLC cells were subjected to the different treatments, and the TRIzol reagent (Invitrogen, USA) was employed to extract the total RNA. Specifically, 5 × 10⁶ cells were treated with 1 ml TRIzol solution for 5 min and subsequently treated with chloroform for 15 min at room temperature. Next, the upper aqueous phase was collected and treated with 0.5 ml isopropyl alcohol for 10 min. After centrifugation at 12,000 g for 10 min, 75% ethyl alcohol was used to isolate and purify the total RNA. Next, Real-Time qPCR was conducted to determine the expression levels of hsa_circRNA_103809, miR-377-3p and GOT1 mRNA, and the experimental procedures have been documented in previous publications [36,37]. Of note, to detect hsa_circRNA_103809 levels, the total RNA must be pre-treated with RNase R enzyme (3 U/μg) for 20 min at 37°C to eliminate linear RNA. The primer sequences for the involved genes are as follows: hsa_circRNA_103809 (Forward: 5′-ACG CAT TCT TCG AGA CCT
Western blot analysis
RIPA lysis buffer was purchased from Solarbio (Beijing, China) to lyse the NSCLC cells/tissues and extract the total protein. According to the experimental procedures recorded in previous publications [36,37], the expression levels of GOT1, β-actin, cyclin D1, CDK2, cleaved caspase-3 and Bax were examined by Western Blot analysis. Specifically, 40 μg/lane of protein lysates was separated by 10-15% SDS-PAGE, and the target protein bands were transferred onto PVDF membranes (Millipore, USA). Next, the membranes were blocked with 5% skim milk for 70 min at room temperature and probed with the primary antibodies against GOT1 (1:1500, MW: 50 kDa, #PA5-24634, Thermo, USA), β-actin and Bax (1:1500, MW: 21 kDa, #ab32503, Abcam, UK) overnight at 4°C. After washing with PBS buffer 3 times, the PVDF membranes were incubated with the secondary antibody (Abcam, UK) for 120 min at room temperature. Finally, the protein bands were visualized by an ECL system (GE Healthcare Bio-science, USA) and quantified using Image J software.
Dual-luciferase reporter gene system assay
The binding sites of miR-377-3p with hsa_circRNA_103809 and the 3′UTR region of GOT1 mRNA were predicted by the online miRDB software (http://mirdb.org/) and validated using the dual-luciferase reporter gene system; the detailed experimental procedures have been well documented in previous literature [36,37]. Briefly, the targeting sites in hsa_circRNA_103809 and GOT1 were mutated and named Mut-circRNA and Mut-GOT1, respectively. Correspondingly, the original wild-type (Wt) sequences were named Wt-CircRNA and Wt-GOT1. The above sequences were cloned into luciferase reporter vectors by Sangon Biotech (Shanghai, China), and a schematic image of the luciferase reporters is shown in Figure S3. The above vectors were delivered into NSCLC cells co-transfected with miR-377-3p mimic and inhibitor for 48 h. After that, a commercial dual-luciferase reporter assay kit (Promega, USA) was used to measure relative luciferase activities in the cells.
Xenograft tumor-bearing mice models
The CR-NSCLC cells were pre-transfected with the different vectors and subcutaneously injected into the dorsal flank of male nude mice (N = 20) aged 6 to 8 weeks. Each mouse was injected with 5 × 10⁶ cells. At 7 days post-injection, the tumors were subjected to high-dose cisplatin (10 μg/ml) treatment every 3 days for 2 weeks. The mice were equally divided into 4 groups (Control, Cisplatin, KD-circRNA and Cisplatin + KD-circRNA), with 5 mice per group. The mice were sacrificed at 35 days post-injection. After that, the mice tumor tissues were collected, the expression levels of proliferation-associated proteins (cyclin D1 and CDK2) and apoptosis-associated proteins (cleaved caspase-3 and Bax) were examined by Western Blot analysis, and the expression/localization of Ki67 protein in the mice tissues was determined by immunohistochemistry (IHC). All the animal experiments were approved by the Ethics Committee of The Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Institute of Basic Medicine and Cancer (IBMC), approval number 2020-12-002.
Immunohistochemistry (IHC)
The mice tumor tissues were collected and sliced into sections of 5 μm thickness, and the IHC assay was conducted to determine the expression and localization of Ki67 protein in the mice tissues; the detailed experimental procedures can be found in previous publications [36,37]. The antibody against Ki67 protein was bought from Abcam (UK) and was diluted at a ratio of 1:400.
Statistical analysis
Data analysis was conducted using SPSS 18.0 software, and the data are represented as mean ± standard deviation. Comparisons between two groups were performed using Student's t-test, and comparisons among multiple groups were conducted using one-way ANOVA analysis. Each experiment was repeated at least 3 times, and *P < 0.05 was regarded as statistically significant.
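As a sketch of the summary statistics described above, the following pure-Python helpers compute the mean ± standard deviation of a replicate series and the pooled two-sample Student's t statistic (the p-value lookup, done here by SPSS, is omitted); all numbers are hypothetical:

```python
import math
import statistics

def summarize(values):
    # Report a replicate series as (mean, sample standard deviation).
    return statistics.mean(values), statistics.stdev(values)

def t_statistic(a, b):
    # Pooled-variance two-sample Student's t statistic.
    na, nb = len(a), len(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    pooled = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    diff = statistics.mean(a) - statistics.mean(b)
    return diff / math.sqrt(pooled * (1 / na + 1 / nb))

print(summarize([1.0, 2.0, 3.0]))                    # -> (2.0, 1.0)
print(round(t_statistic([1, 2, 3], [4, 5, 6]), 2))   # -> -3.67
```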
Results
The expression patterns of hsa_circRNA_103809, miR-377-3p and GOT1 in CS-NSCLC and CR-NSCLC cells

The CS-NSCLC cell lines (A549, H1299 and Calu-3) were subjected to continuous low-dose cisplatin treatment to generate CR-NSCLC cells (A549/DDP, H1299/DDP and Calu-3/DDP), which simulated the realistic conditions of cisplatin-resistance in NSCLC patients in vitro. Next, the NSCLC cells were stimulated with high-dose cisplatin (25 μg/ml) for 0 h, 24 h, 48 h and 72 h, and cell proliferation was evaluated by the CCK-8 assay (Fig. 1a-c). The results showed that the proliferation abilities of CS-NSCLC cells, but not of CR-NSCLC cells, were significantly inhibited by cisplatin treatment (Fold changes (72 Figure 1d-f). Next, the cells were stained with Annexin V-FITC and PI, and cell apoptosis was detected by flow cytometry (FCM) (Fig. 1g). As expected, the data suggested that cisplatin induced apoptotic cell death in CS-NSCLC cells compared to the CR-NSCLC cells (Fold changes: 9.07, 8.56 and 6.38 vs. CR-NSCLC cells, Fig. 1g), suggesting that CR-NSCLC cells were much more resistant to cisplatin treatment. Next, the expression status of hsa_circRNA_103809, miR-377-3p and GOT1 was examined in the NSCLC cells, and we found that hsa_circRNA_103809 (Fig. 1h) and GOT1 (Fig. 1j, k) were upregulated, while miR-377-3p (Fig. 1i) was downregulated, in CR-NSCLC cells, suggesting that continuous low-dose cisplatin pressure altered the expression status of hsa_circRNA_103809, miR-377-3p and GOT1 in CR-NSCLC cells.
The regulatory mechanisms of hsa_circRNA_103809, miR-377-3p and GOT1 in NSCLC cells

By using the online miRDB software (http://mirdb.org/), we predicted a relationship among hsa_circRNA_103809, miR-377-3p and GOT1 (Fig. 2). Mechanistically, binding sites for miR-377-3p with hsa_circRNA_103809 (Fig. 2a) and the 3′ untranslated region (3′UTR) of GOT1 mRNA (Fig. 2f) were predicted, and these were validated by the subsequent dual-luciferase reporter gene system. The results showed that miR-377-3p mimic targeted the binding sites in hsa_circRNA_103809 (Fig. 2b-d) and the 3′UTR of GOT1 mRNA (Fig. 2g-i) to decrease the relative luciferase activities in CS-NSCLC cells, while miR-377-3p inhibitor had the opposite effects (Fig. 2b-d, g-i). Additionally, the RNA pull-down assay verified that miR-377-3p could be enriched by biotin-labelled hsa_circRNA_103809 (Fig. 2e) and GOT1 mRNA (Fig. 2j) probes but not by the control probes. Next, the overexpression and downregulation vectors for hsa_circRNA_103809 were transfected into CS-NSCLC cells (Figure S2A), and the results showed that hsa_circRNA_103809 positively regulated GOT1 expression in CS-NSCLC cells (Fig. 2k, l). In addition, the miR-377-3p mimic and inhibitor were delivered into CS-NSCLC cells (Figure S2B), and the results showed that miR-377-3p inhibited GOT1 expression in CS-NSCLC cells (Fig. 2m, n). Of note, the promoting effects of hsa_circRNA_103809 overexpression on GOT1 were abrogated by upregulating miR-377-3p (Fig. 2o, p). The above results indicated that hsa_circRNA_103809 sponged miR-377-3p to upregulate GOT1 in CS-NSCLC cells.

[Fig. 1 legend: d-f Trypan blue staining assay was conducted to evaluate NSCLC cell viability. g Cell apoptosis ratio was measured using the Annexin V-FITC/PI double staining method. Real-Time qPCR was used to examine the expression levels of h hsa_circRNA_103809, i miR-377-3p and j GOT1 mRNA in NSCLC cells. k Western Blot analysis was employed to determine the protein levels of GOT1 in NSCLC cells; full-length blots/gels are presented in Supplementary Figure S4. Each experiment was repeated at least 3 times. *P < 0.05]

Knock-down of hsa_circRNA_103809 sensitized CR-NSCLC cells to cisplatin by regulating the miR-377-3p/GOT1 axis

Further experiments were conducted to investigate the regulating effects of hsa_circRNA_103809 on cisplatin-resistance in NSCLC. To achieve this, the silencing vectors for hsa_circRNA_103809 were transfected into CR-NSCLC cells to knock down hsa_circRNA_103809 (Figure S1A), and the CCK-8 results showed that either hsa_circRNA_103809 ablation or cisplatin treatment alone had little effect on the cell proliferation abilities of CR-NSCLC cells, while silencing of hsa_circRNA_103809 enhanced the inhibitory effects of cisplatin on cell growth (Fig. 3a-c). Then, the miR-377-3p downregulation ( Figure S1B Figure 3g).
Targeting hsa_circRNA_103809 enhanced the inhibiting effects of cisplatin on CR-NSCLC cell growth in vivo
Next, we validated the above cellular results in vivo. To achieve this, the CR-NSCLC cells were pre-transfected with the hsa_circRNA_103809 downregulation vectors, and the cells were subcutaneously injected into the dorsal flank of nude mice to establish xenograft tumor-bearing mice models. At 7 days post-injection, the tumors were subjected to high-dose cisplatin treatment every 3 days. The mice were sacrificed at day 35, and the tumor tissues were collected, prepared and analyzed by Western Blot analysis and immunohistochemistry (IHC) (Fig. 5). As shown in Fig. 5a-f, either cisplatin alone or hsa_circRNA_103809 downregulation alone had little effect on the proliferation- and apoptosis-associated proteins, while the combination of hsa_circRNA_103809 knock-down and cisplatin treatment downregulated Cyclin D1 and CDK2 to hamper the cell cycle and upregulated cleaved Caspase-3 and Bax to trigger apoptotic cell death in A549/DDP, H1299/DDP and Calu-3/DDP cells in vivo. Consistently, the expression and localization of Ki67 protein were examined by IHC, and the images in Fig. 5g show that cisplatin significantly decreased the expression levels of Ki67 protein in CR-NSCLC cells with hsa_circRNA_103809 downregulation in the mice tumor tissues.
Discussion
Cisplatin is the first-line chemotherapeutic drug for non-small cell lung cancer (NSCLC) treatment in the clinic [13,14]; however, long-term cisplatin treatment causes cisplatin-resistance in NSCLC cells, which seriously limits the therapeutic efficacy of this drug, resulting in cancer recurrence and poor prognosis in NSCLC patients [13,14]. Based on the above information, recent studies have focused on uncovering the underlying mechanisms of cisplatin-resistance in NSCLC and have sought to identify potential therapeutic targets to improve cisplatin-sensitivity in NSCLC cells [38,39]. Among the cancer-associated genes, researchers have noticed that circular RNAs (circRNAs) are closely associated with cancer progression and drug resistance in NSCLC, making the identification of novel circRNAs that regulate NSCLC pathogenesis and drug resistance necessary and meaningful [13,14]. Therefore, in the present study, we identified a novel circRNA, hsa_circRNA_103809, that regulated cisplatin-resistance in NSCLC. Mechanistically, according to previous publications [36,37], the cisplatin-resistant NSCLC (CR-NSCLC) cells were induced from their corresponding parental cisplatin-sensitive NSCLC (CS-NSCLC) cells, and we found that hsa_circRNA_103809 tended to be highly expressed in CR-NSCLC cells compared to CS-NSCLC cells. Interestingly, further experiments evidenced that knock-down of hsa_circRNA_103809 enhanced the inhibitory effects of cisplatin on cell proliferation and viability in CR-NSCLC cells. Furthermore, upregulation of hsa_circRNA_103809 increased cisplatin-resistance in CS-NSCLC cells, implying that targeting hsa_circRNA_103809 could potentially improve cisplatin-sensitivity in NSCLC cells. Previous data suggested that hsa_circRNA_103809 acted as an oncogene to promote cancer progression [15-19], and this study evidenced that hsa_circRNA_103809 also modulates drug resistance in NSCLC, which broadened our knowledge in this field.

[Fig. 4 legend: Upregulation of hsa_circRNA_103809 promoted cisplatin-resistance in CS-NSCLC cells. a-c Cell proliferation abilities were examined using the CCK-8 assay. d-f Cell viability was evaluated by performing the trypan blue staining assay. g Cell apoptosis was determined using the Annexin V-FITC/PI double staining assay. (Note: "Control: without vectors transfection and cisplatin treatment"). Each experiment was repeated at least 3 times. *P < 0.05]
Glutamate oxaloacetate transaminase 1 (GOT1) is crucial for promoting cancer progression by regulating glutamate metabolism [31,32], and inhibition and silencing of GOT1 have been validated as an effective strategy to impair cancer growth [31,32]. Interestingly, previous data suggested that GOT1 could be regulated by cisplatin [33], and our experiments validated that hsa_circRNA_103809 positively regulated GOT1 in NSCLC cells through miR-377-3p. Mechanistically, there exist binding sites between miR-377-3p and the 3′UTR of GOT1 mRNA, and miR-377-3p negatively regulated GOT1 in NSCLC cells at both the transcriptional and translational levels. In addition, we noticed that upregulation of hsa_circRNA_103809 increased GOT1 expression levels, which was reversed by overexpressing miR-377-3p, implying that hsa_circRNA_103809 sponged miR-377-3p to upregulate GOT1 in NSCLC cells. Furthermore, knock-down of hsa_circRNA_103809 increased cisplatin-sensitivity in CR-NSCLC cells, while overexpression of hsa_circRNA_103809 increased cisplatin-resistance in CS-NSCLC cells, and these effects were reversed by overexpressing and silencing GOT1, respectively, suggesting that hsa_circRNA_103809 upregulates GOT1 to modulate cisplatin-resistance in NSCLC cells. Finally, by performing the in vivo experiments, we evidenced that knock-down of hsa_circRNA_103809 triggered apoptotic cell death to inhibit tumorigenesis in the xenograft tumor-bearing mice models.

[Fig. 5 legend fragment: (Figure S6E-F). g IHC was performed to examine the expression and localization of Ki67 protein in mice tumor tissues; the signal intensity in the different groups was assessed as follows: Control (+++), Cisplatin (+++), KD-circRNA (+++) and Cis + KD-circRNA (+). (Note: "Control: without vectors transfection and cisplatin treatment"). Each experiment was repeated at least 3 times. *P < 0.05]
Interestingly, recent data showed that NRF2-mediated glutamine metabolism is closely associated with chemo-resistance in pancreatic cancers [40]; given that GOT1 serves as a crucial regulator of glutamate metabolism, we hypothesized that there might be a connection between NRF2 and GOT1 in regulating cisplatin-resistance in NSCLC. However, the detailed mechanisms still need to be studied. In addition, since Kirsten rat sarcoma viral oncogene homolog (KRAS) is one of the driver genes of NSCLC [41], it is worth investigating the interplay between the KRAS gene and the hsa_circRNA_103809/miR-377-3p/GOT1 pathway in regulating NSCLC development in our future work.
Conclusions
Taken together, through in vitro and in vivo experiments, this study found that targeting the hsa_circRNA_103809/miR-377-3p/GOT1 pathway inhibited cell proliferation and viability and triggered apoptosis, thereby increasing cisplatin sensitivity in NSCLC cells. Our work broadens knowledge in this field and provides potential therapeutic targets to improve the efficacy of current chemotherapeutic drugs for NSCLC. | 2020-08-06T09:04:03.821Z | 2020-08-04T00:00:00.000 | {
"year": 2020,
"sha1": "5c8ff862099488bf34c32899eaef21465d060eca",
"oa_license": "CCBY",
"oa_url": "https://bmccancer.biomedcentral.com/track/pdf/10.1186/s12885-020-07680-w",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9620c16d8ba87c80bc76706f60c1c89e51bb46c5",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
220732441 | pes2o/s2orc | v3-fos-license | miR-200a contributes to the migration of BMSCs induced by the secretions of E. faecalis via FOXJ1/NFκB/MMPs axis
Background Upon migrating to injured sites, bone marrow mesenchymal stem cells (BMSCs) play critical roles in the repair of bone lesions caused by chronic apical periodontitis. Emerging evidence has shown that Enterococcus faecalis is always associated with apical periodontitis, especially refractory apical periodontitis, but the mechanism underlying how Enterococcus faecalis affects the migration of BMSCs remains unclear. Methods The effects of Enterococcus faecalis supernatants on the migration of BMSCs were determined by transwell migration assays. miRNA sequencing was performed to detect significantly differentially expressed miRNAs of BMSCs. Proteomics analysis was used to detect protein expression alterations in BMSCs. Luciferase reporter assays were used to verify the targets of the miRNA. Western blot analysis was performed to examine the expression of matrix metalloproteinase-3, matrix metalloproteinase-13, Forkhead Box Protein J1 (FOXJ1), and nuclear factor kappa B (NFκB). Activation of NFκB was detected by luciferase assays with an NFκB-luc reporter. Results We found that Enterococcus faecalis supernatants could promote the migration of BMSCs. The upregulation of miR-200a-3p in this process contributed to BMSC migration by downregulating its target Forkhead Box Protein J1. Moreover, the FOXJ1/NFκB axis was found to regulate matrix metalloproteinases (MMPs) in this process. Conclusions These results suggest that miR-200a contributes to the migration of BMSCs induced by the secretions of E. faecalis via the FOXJ1/NFκB/MMPs axis.
Background
Apical periodontitis, which is characterized by inflammation and destruction of the apical periodontium, is usually caused by the host immune response to microbial infection in the root canal system [1]. Species originating from the intestinal flora, such as E. faecalis, have been identified as dominant factors causing apical periodontitis [2]. E. faecalis is often isolated as a monoculture in retreated root canals [3]. It is a facultative anaerobic species and is usually found in secondary infections or post-treatment apical periodontitis, especially in refractory inflammation [4]. E. faecalis is tolerant to antimicrobials and is able to survive in a nutrient-deficient environment; thus, persistent infections in the root canal or apical periodontium are often associated with E. faecalis [5]. E. faecalis produces various virulence factors, such as lipoteichoic acid, aggregation substance protein, and surface adhesins [6,7]. During the development of pulpitis, E. faecalis can invade and colonize dentinal tubules. Studies on the etiology of refractory apical periodontitis have revealed that E. faecalis biofilms in the dentinal tubules contribute to the persistence of apical periodontitis [8].
Since apical periodontitis is characterized by inflammation and bone resorption [9], the cells associated with this process should be carefully taken into consideration. As multipotent stem cells widely present in bone marrow, bone marrow mesenchymal stem cells (BMSCs) can differentiate into osteoblasts, chondrocytes, or adipocytes [10]. Meanwhile, BMSCs have shown a certain ability to move from their niche to the peripheral circulation, and further to target tissues [11]. The recruitment of BMSCs is required for the repair of bone lesions, and the migration of BMSCs is usually attracted by environmental factors at the site of injury [12]. Various factors gather at the injured site, including infectious factors and those produced by injured tissues.
Upon bacterial infection, BMSCs contact bacterial components and recognize them through receptors on the cell membrane. Studies on human BMSCs have revealed that lipopolysaccharide (LPS), a cell wall component of gram-negative bacteria, can increase their migration [13], while a synthetic lipopeptide could inhibit the migration of mouse BMSCs [14]. The migration of human dental pulp stem cells was also increased by stimulation with Toll-like receptor 2 (TLR2) ligands [15]. These studies suggest that the migratory response to bacterial components depends on the type of mesenchymal stem cell. In fact, it is hard to ensure tissue regeneration without the efficient migration of BMSCs to the injured sites. More knowledge is needed in this field to induce BMSCs to migrate and generate new tissues.
In recent years, it has been widely accepted that miRNAs play important roles in the biological regulation of stem cells. miRNAs are highly conserved endogenous non-coding RNAs with a length of 19 to 25 nucleotides. They usually act as negative regulators by binding to the 3′UTR sites of their target mRNAs [16] and thereby modulate cell signaling transduction [17]. They also take part in various biological processes, including cell apoptosis, metabolism, migration, and differentiation [18]. Specific miRNAs have been taken as biomarkers and therapeutic targets for their roles in pathological processes and human diseases [19]. A growing number of miRNAs have been explored for their roles in the migration of BMSCs as either inhibitors or activators. Abnormal miRNA expression can also lead to obvious alterations in the osteogenic differentiation of BMSCs [20]. A previous study has shown that miR-335 overexpression downregulates the proliferation, migration, and differentiation of human BMSCs [21]. By upregulating the expression of MMP-2 and MMP-9, miR-21 can promote the migration of BMSCs via the PI3K/Akt pathway [22]. The migration of rat BMSCs could be inhibited by miR-375 via Akt signaling [23].
The objective of this article is to identify miRNAs related to the migration of BMSCs induced by E. faecalis, the main pathogen in refractory apical periodontitis. In this study, we investigate the variations in miRNA signaling of BMSCs in response to E. faecalis supernatants (EfS) and evaluate how the miRNAs participate in cell migration. We examine and validate a series of miRNAs that are differentially expressed upon stimulation with E. faecalis supernatants, and further elucidate that adjusting the miRNA can regulate the migration of BMSCs. We provide insight into the mechanism by which miR-200a-3p is involved in NFκB signaling and affects the expression of MMPs during the migration of BMSCs. Together, the outcomes of this study provide a better understanding of the movement of BMSCs upon the invasion of E. faecalis in apical periodontitis and suggest a novel way to drive BMSCs to injured sites and complete tissue regeneration.
Cell culture
To obtain BMSCs, femurs from rats were dissected, isolated, and flushed with PBS. Bone marrow was aspirated and suspended in PBS with 5% FBS (Gibco). After centrifugation at 400g for 5 min and two washes with PBS, the isolated cells were cultured in DMEM supplemented with 10% FBS and 100 μg/ml each of penicillin and streptomycin at 37°C and 5% CO2. Enterococcus faecalis ATCC33186 was cultured in brain heart infusion (BHI) medium, and growth was measured by the optical density at 600 nm.
Proteomics analysis
After treatment with medium containing EfS or BHI for 48 h, cells were collected and analyzed as previously described [24]. In brief, label-free, peptide MS1 intensity-based methods were used to quantify protein levels in the different groups, and LC-MS/MS analysis was performed on a Q Exactive mass spectrometer (Thermo Scientific). Proteins with a level change > 2 and p < 0.05 were taken as upregulated in EfS- vs. BHI-treated BMSCs, while proteins with a level change < 0.5 and p < 0.05 were taken as downregulated. Blast2GO (https://www.blast2go.com) was used for the functional annotation of proteins, and KEGG (http://www.kegg.jp) was employed to conduct pathway enrichment. Fisher's exact test was used in the GO and KEGG analyses.
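The fold-change / p-value filter described above can be sketched as a simple function; `classify_proteins` and the demo tuples below are hypothetical illustrations, not the paper's actual pipeline:

```python
def classify_proteins(results):
    """results: iterable of (protein_id, fold_change, p_value).

    Proteins with fold change > 2 and p < 0.05 are called upregulated,
    those with fold change < 0.5 and p < 0.05 downregulated
    (the thresholds used for the EfS- vs. BHI-treated comparison).
    """
    up, down = [], []
    for pid, fc, p in results:
        if p < 0.05:
            if fc > 2.0:
                up.append(pid)
            elif fc < 0.5:
                down.append(pid)
    return up, down

# Illustrative values only:
demo = [("MMP3", 3.1, 0.01), ("FOXJ1", 0.3, 0.02), ("ACTB", 1.1, 0.90)]
up, down = classify_proteins(demo)  # up: ["MMP3"], down: ["FOXJ1"]
```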
MicroRNA-sequencing
Cells were treated with the indicated mediums for 24 h, and miRNA sequencing was performed as previously described [24]. In brief, total RNA was obtained with TRIzol reagent, and small RNA sequencing libraries were then constructed. The libraries were quantified with an Agilent 2100 Bioanalyzer. The raw Illumina sequence data were prepared and converted to fastq files.
Transwell migration assays
A two-chamber transwell system with 8 μm pore size was used in these assays. After treatment with the indicated medium for 24-48 h, cells were seeded into the upper chamber of the inserts with serum-free medium, while medium with 10% FBS was added to the lower chamber. After incubation for 12 h, the inserts were fixed with 100% methanol and subsequently stained with 0.1% crystal violet. The migrated cells on the lower side of the inserts were imaged and counted.
Scratch wound assays
After a confluent monolayer was obtained, cells were incubated in serum-free medium for 12 h and then physically wounded with a sterile pipette tip. Before culture medium with BHI or EfS was added, detached cells were washed away with PBS. The scratches were recorded with a microscope at defined positions after 0 and 48 h. Scratch wound closure, expressed as the percentage of the scratch surface area covered by migrated cells, was analyzed with ImageJ.
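The closure metric described above reduces to a one-line computation once the scratch areas at 0 and 48 h are measured (in practice from ImageJ); the function name and demo areas are illustrative:

```python
def wound_closure_percent(area_t0, area_t48):
    """Percentage of the initial scratch area covered by migrated cells."""
    return 100.0 * (area_t0 - area_t48) / area_t0

# e.g. a scratch shrinking from 1.0 mm^2 to 0.35 mm^2 over 48 h
closure = wound_closure_percent(1.0, 0.35)  # 65.0
```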
Western blot analysis
Bone marrow mesenchymal stem cells between passage 3 and passage 5 were seeded in plates and grown to 70-80% confluence at 37°C in a 5% CO2 incubator. Stimulation was applied for the indicated time. Cells were then washed with PBS and lysed in RIPA buffer with protease inhibitors. Proteins were extracted, separated by SDS-PAGE, and transferred to PVDF membranes. The membranes were blocked with 5% non-fat milk and incubated with the indicated primary antibodies at 4°C overnight, and then probed with HRP-conjugated secondary antibodies for 1 h at room temperature. Primary antibodies against MMP3 (#14351, CST), MMP13 (#69926, CST), NFκB (#8242, CST), p-NFκB (#3033, CST), β-actin (#4970, CST), and Foxj1 (#ab235445, Abcam) were used in this study according to the manufacturers' instructions. Signals were captured using an enhanced chemiluminescence kit and a ChemiDoc MP System. β-actin served as the internal control.
Real-time PCR analysis
After treatment with the indicated culture medium, cells were lysed and total RNA was obtained with a total RNA extraction kit according to the manufacturer's protocol. After reverse transcription into cDNA, gene expression was detected on a LightCycler 480 real-time PCR system.
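The paper does not state its quantification method; the 2^-ΔΔCt (Livak) method is the standard choice for relative qPCR quantification and is sketched here as an assumption, with illustrative Ct values:

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative quantification by the 2^-ddCt (Livak) method.

    ct_*: threshold cycles for the target gene and the reference gene
    in the treated sample and in the control sample.
    """
    dd_ct = (ct_target - ct_ref) - (ct_target_ctrl - ct_ref_ctrl)
    return 2.0 ** (-dd_ct)

# Target crosses threshold one cycle earlier (relative to the reference
# gene) in the treated sample than in the control -> 2-fold upregulation.
fold = relative_expression(24.0, 18.0, 25.0, 18.0)  # 2.0
```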
Oligonucleotide transfection
Synthetic miRNA mimics, inhibitors, and negative control oligonucleotides were designed and produced by Ribobio. BMSCs were transfected with miRNA using DharmaFECT Transfection Reagents according to manufacturer's protocols. Briefly, transfections were conducted when cells reached 50-60% confluence. Total RNA and proteins were obtained after 24 and 48 h, respectively.
Luciferase assays
After transfection with the appropriate plasmids for 48 h, cells were lysed for luciferase assays. A dual-luciferase reporter assay system was used for detection according to the manufacturer's protocol, and Renilla luciferase activities were taken as an internal control.
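Normalization to the Renilla internal control is a per-well ratio of the two luminescence readings; a minimal sketch with hypothetical raw values:

```python
def normalized_luciferase(firefly, renilla):
    """Firefly readings divided by the matched Renilla internal control."""
    return [f / r for f, r in zip(firefly, renilla)]

# Illustrative raw luminescence readings from two wells:
ratios = normalized_luciferase([1200.0, 900.0], [400.0, 300.0])  # [3.0, 3.0]
```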
Statistical analysis
All statistical analyses were performed with GraphPad Prism 8.0. Statistical results in the figures are shown as the mean ± standard deviation calculated from at least three independent experiments. The statistical significance of differences was analyzed by unpaired Student's t test at a significance level of p < 0.05.
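The unpaired Student's t test on triplicate measurements can be computed directly from the pooled variance; the data below are illustrative, not the paper's:

```python
import math
import statistics

def unpaired_t(a, b):
    """Student's t statistic for two independent samples (pooled variance)."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * statistics.variance(a)
           + (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))

# Illustrative triplicates (e.g. migrated-cell counts, arbitrary units):
t = unpaired_t([12.1, 13.4, 11.8], [18.2, 19.5, 17.9])
# |t| exceeds 2.776, the two-tailed critical value for df = 4 at
# alpha = 0.05, so the difference is significant at p < 0.05.
```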
E. faecalis supernatants (EfS) in late stationary phase promote the migration of BMSCs
To address the unresolved question of how E. faecalis affects the role of BMSCs during the repair of periapical bone loss, we obtained cell-free supernatants of E. faecalis ATCC33186 cultured in brain heart infusion (BHI) medium. Growth of E. faecalis over 15 h of culture was recorded by the optical density at 600 nm, and culture supernatants were harvested in the late stationary phase (Fig. 1a). To detect the effect of the supernatants on the migration of BMSCs, we performed transwell assays and scratch wound assays. Compared with the control groups supplemented with BHI medium, supernatants added to the culture medium dramatically promoted the migration of BMSCs to the lower side of the inserts in transwell assays (Fig. 1b) and increased the proportion of wound closure area after scratching (Fig. 1c).
miR-200a-3p participates in the regulation of BMSCs migration
To further address how E. faecalis supernatants affect the migration of BMSCs and whether any miRNAs play roles in this regulation, we performed miRNA sequencing to detect significantly differentially expressed miRNAs. Thirty-eight miRNAs were defined by setting padj < 0.05 as the threshold, and 10 miRNAs were upregulated in BMSCs with E. faecalis supernatant treatment (Fig. 2a). Considering that the miR-200 family has been shown to take part in the regulation of cell migration, we examined the expression of miR-200a-3p, miR-200b-3p, and miR-429 by qRT-PCR. The results showed that the expression of miR-200a-3p and miR-200b-3p was dramatically upregulated, while no significant result was obtained for miR-429 (Fig. 2b). The expression of miR-200a-3p was also verified after transfection of miR-200a-3p mimics or inhibitors (Fig. 2c, d). To investigate the function of miR-200a-3p in BMSC migration, cells were transfected with miR-200a-3p mimics, inhibitors, or negative control, and transwell assays showed that miR-200a-3p mimics promoted the migration of BMSCs, which could be attenuated by inhibitors (Fig. 2e). With the stimulation of E. faecalis supernatants, the migration of BMSCs transfected with miR-200a-3p mimics could still be decreased by miR-200a-3p inhibitors, while no significant difference was obtained in comparison with the control group (Fig. 2e).
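padj thresholds of the kind used above are typically Benjamini-Hochberg-adjusted p values (the paper does not state which adjustment its pipeline applied, so this is an assumption); a minimal sketch of the step-up procedure:

```python
def benjamini_hochberg(pvals):
    """Benjamini-Hochberg adjusted p values (step-up FDR procedure)."""
    n = len(pvals)
    order = sorted(range(n), key=lambda i: pvals[i])
    adj = [0.0] * n
    running_min = 1.0
    # Walk from the largest p value down, enforcing monotonicity.
    for k, i in enumerate(reversed(order)):
        rank = n - k  # 1-based rank of pvals[i] in ascending order
        running_min = min(running_min, pvals[i] * n / rank)
        adj[i] = running_min
    return adj

padj = benjamini_hochberg([0.001, 0.01, 0.03, 0.4])
# padj ~ [0.004, 0.02, 0.04, 0.4]; the first three pass padj < 0.05
```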
Proteomic analysis on the BMSCs treated with E. faecalis supernatants
Following the miRNA detection, we next sought to observe the protein expression alterations of BMSCs during this process. After performing proteomic analysis on BMSCs with or without E. faecalis supernatant treatment, we defined 63 significantly differentially expressed proteins by setting an absolute level change > 2 and p value < 0.05 as thresholds. All the differentially expressed proteins in BMSCs after E. faecalis supernatant treatment are listed in the heat map (Fig. 3a). Among these proteins, 38 were upregulated and 25 were downregulated. Next, gene ontology (GO) analysis showed that these proteins participated in a series of biological processes, such as the regulation of cell migration, the response to molecules of bacterial origin, and the positive regulation of cell motility (Fig. 3d). Furthermore, Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment of the differentially expressed proteins was analyzed, and the top 20 enriched pathways, including the NFκB signaling pathway, are shown (Fig. 3e).
miR-200a-3p affects the expressions of MMP-3 and MMP-13 in BMSCs stimulated with E. faecalis supernatants
As we chose to focus on the proteins involved in cell migration, further analyses of the mRNA levels of MMP-3 and MMP-13 were performed to identify their alterations in BMSCs with E. faecalis supernatant treatment. After cells were stimulated with the indicated concentrations of E. faecalis supernatants for 24 h, a concentration-dependent response was observed at the mRNA expression level (Fig. 4a). Compared with the control group, the expression of MMP-3 and MMP-13 showed a significant upregulation with increasing concentration of E. faecalis supernatants. Next, western blot results demonstrated that the expression of MMP-3 and MMP-13 in BMSCs at the protein level could also be promoted by E. faecalis supernatants (Fig. 4b and c). Further detection of the protein expression of MMP-3 and MMP-13 showed that they were obviously upregulated by miR-200a-3p restoration, while inhibition of miR-200a-3p decreased their expression (Fig. 4d, e).
Fig. 1 Enterococcus faecalis supernatants (EfS) increased the migration of rat bone marrow mesenchymal stem cells (BMSCs). a Harvesting of Enterococcus faecalis ATCC33186 cell-free culture supernatants. Enterococcus faecalis ATCC33186 was cultured in brain heart infusion (BHI) medium and culture supernatants were harvested at late stationary phase. The growth was recorded as the optical density of the medium over time. b Observation of BMSC migration with the stimulation of Enterococcus faecalis supernatants (EfS) in transwell assays. BMSCs were cultured on the upper inserts of the transwell in DMEM with EfS. Cells that migrated to the lower side of the transwell membrane were fixed, stained with crystal violet, and counted. c Observation of BMSC migration with the stimulation of EfS in scratch wound assays. The quantitative results are the means ± SD of three independent experiments. *p < 0.05
miR-200a-3p downregulates the expression of FOXJ1 by binding to its 3′UTR
To further identify the functional target of miR-200a-3p, data were collected through public algorithms, and computational prediction indicated that miR-200a-3p downregulates FOXJ1 expression by directly binding to its 3′UTR (Fig. 5a). To verify the repression of FOXJ1 by miR-200a-3p binding to its 3′UTR, luciferase reporter assays containing either the wild-type or mutant FOXJ1 3′UTR sequence were conducted (Fig. 5b). The luciferase activity of the wild-type FOXJ1 3′UTR reporter was repressed by overexpression of miR-200a-3p compared with the control groups (Fig. 5c). Furthermore, we found that BMSCs showed a dramatic decrease of FOXJ1 with E. faecalis supernatant treatment (Fig. 5d). With or without E. faecalis supernatant treatment, miR-200a-3p mimics repressed the expression of FOXJ1 while inhibitors showed the converse effect.
miR-200a-3p increases BMSCs migration through FOXJ1/ NFκB pathway
Considering that FOXJ1 acts as a repressor of NFκB activation and that the proteomic analysis indicated that the NFκB pathway is involved in this process, we sought to investigate whether the FOXJ1/NFκB axis plays a role during the migration of BMSCs. First, we determined the level of NFκB activation by luciferase reporter assays and found that the activation of NFκB was obviously upregulated by E. faecalis supernatants (Fig. 6a). Furthermore, transfection of miR-200a-3p could also increase the activity of NFκB (Fig. 6b). Following miR-200a-3p transfection, the expression of p-NFκB in BMSCs was detected by immunofluorescence. Compared with the control group, cells transfected with miR-200a-3p mimics showed more p-NFκB in the cell nucleus (Fig. 6c). Consistent with the expression of MMP-3 and MMP-13, the expression of p-NFκB could also be increased by miR-200a-3p mimics (Fig. 6d). Meanwhile, transwell assays showed that inhibition of NFκB with PDTC attenuated the migration of BMSCs (Fig. 6f-h).
Discussion
With multiple differentiation potential, BMSCs play an important role during the repair of bone lesions at the injured sites of apical periodontitis. BMSCs reside in stem cell niches in the bone marrow, and it is necessary for them to migrate to the damaged area and differentiate into osteoblasts to regenerate new tissue. Previous studies have described that BMSCs can be chemoattracted by inflammatory factors around injured tissues [25,26]. However, it is still unclear whether the migration of BMSCs can be affected by substances from microbes. Herein, we found that cell-free culture supernatants from the late stationary phase of E. faecalis could increase the migration of BMSCs compared with BHI medium. This suggests that secreted molecules or debris of E. faecalis can activate the motility of BMSCs.
A growing body of evidence has shown that miRNAs contribute to the migration of BMSCs under both physiological and pathological conditions [22,27]. To explore whether specific miRNAs were involved, we detected the miRNA alterations of BMSCs treated with E. faecalis supernatants by miRNA sequencing. miR-200a-3p was at the top of the list of upregulated miRNAs; the miR-200 family has been reported to take part in the regulation of cell migration [28]. In the present study, the migration of BMSCs was increased after miR-200a-3p mimic transfection, while this promotion could be inhibited when inhibitors of miR-200a-3p were applied. Together with the results above, these observations provide clues that miR-200a-3p takes part in the regulation of BMSC migration caused by E. faecalis.
Considering that cell motility mainly relies on protein rearrangements, we performed proteomic analysis to detect the molecular mechanisms underlying BMSC migration, and proteins with statistically significant alterations were listed. Among the proteins highly expressed in the EfS-treated groups, we found MMP-3 and MMP-13. Matrix metalloproteinases (MMPs) belong to a large family and play critical roles in tissue remodeling and extracellular matrix (ECM) degradation [29]. Various physiological and pathological processes, including cell migration and invasion, require the involvement of MMPs [30]. It has been revealed that MMP-3 and MMP-13 mediate the remodeling of the ECM and contribute to the metastasis of cancer cells [31,32]. Increased expression of MMP-3 and MMP-13 is associated with augmented cell migration in lung cancer [33]. Previous studies have shown that the migration of colorectal cancer cells is attenuated by downregulating MMP-3 [34], and knockdown of either MMP-3 or MMP-13 represses the invasion and migration of anaplastic thyroid carcinoma cells [35]. At the same time, knockdown of MMP-13 dramatically downregulates the migration of ESCC (esophageal squamous cell carcinoma) cells [36]. An in vitro study of adult neural stem/progenitor cells (aNPCs) found increased expression of MMP-3 during migration in response to chemokines [37]. Further gene ontology (GO) enrichment analysis of BMSCs also demonstrated that the top 8 affected biological processes (BP) included the response to molecules of bacterial origin and the positive regulation of cell motility. All the results above suggest that molecules in E. faecalis supernatants promote the migration of BMSCs by regulating the expression of MMP-3 and MMP-13.
Given that the expression of MMPs showed obvious alterations during the migration of BMSCs induced by E. faecalis, we sought to determine the effect of miR-200a on the expression of MMPs. Initially, the expression of MMP-3 and MMP-13 was confirmed at the mRNA and protein levels, and their expression showed a concentration-dependent response to stimulation with E. faecalis supernatants. These results are consistent with what we obtained in the proteomic analysis. Previous research has revealed that miR-200 can affect the expression of MMPs [38]. In this study, BMSCs transfected with miR-200a-3p mimics showed results similar to those treated with E. faecalis supernatants. A miRNA usually functions pleiotropically, meaning that a single miRNA can affect multiple RNA transcripts [39]. Thus, it is necessary to identify the target genes of miR-200a in BMSCs in order to dissect the mechanism underlying BMSC migration. First, bioinformatics analysis was carried out to explore the potential targets of miR-200a-3p. Next, we identified FOXJ1 as a target gene of miR-200a by luciferase reporter assay. Previous studies have demonstrated that miR-200a can increase the migration of non-small cell lung cancer cells [40], while controversial results have also shown that upregulation of miR-200a suppresses the migration of triple-negative breast cancer cells [41]. These findings suggest that how miR-200a affects cell migration may depend on the cell type. FOXJ1, which belongs to a DNA-binding protein family with the forkhead domain [42], can antagonize NFκB activation through IκB protein [43]. Interestingly, we also observed the NFκB signaling pathway in the KEGG analysis.
The NFκB transcription factor family forms various protein complexes and plays critical roles in the control of cell migration [44,45]. Previous studies have demonstrated that the NFκB/MMP-3 pathway plays roles in the migration of various cell types, including fibroblasts [46], prostate cancer cells [47], and chondrosarcoma cells [48], and that the NFκB/MMP-13 axis contributes to cell migration in lung cancer and glioma [49,50]. In our study, both supernatants from E. faecalis and miR-200a-3p promoted NFκB activation, while the inhibitor suppressed the activation. Consistent with the finding that MSC migration induced by IL1β could be impaired by blockade of NFκB [51], we also found that inhibition of NFκB activation attenuated the migration of BMSCs, as well as the expression of MMP-3 and MMP-13. Based on these findings, we provide evidence that E. faecalis supernatants induce BMSC migration through the miR-200a-3p/FOXJ1/NFκB/MMPs axis.
Conclusions
In this study, we provided evidence of the involvement of miR-200a-3p and its downstream target FOXJ1 in the migration of BMSCs induced by E. faecalis supernatants. Furthermore, activation of the NFκB pathway was detected and contributed to the migration by promoting the expression of MMP-3 and MMP-13 (see Fig. 6i for an overview). These findings provide a new perspective and may help in understanding the mechanism of BMSC migration in response to E. faecalis infection, though further investigations of other miRNAs involved in this process are warranted. | 2020-07-25T14:19:43.386Z | 2020-07-25T00:00:00.000 | {
"year": 2020,
"sha1": "cf2208236ad9e70714bad1a3bca4f62d677284cd",
"oa_license": "CCBY",
"oa_url": "https://stemcellres.biomedcentral.com/track/pdf/10.1186/s13287-020-01833-1",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "cf2208236ad9e70714bad1a3bca4f62d677284cd",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
14168563 | pes2o/s2orc | v3-fos-license | Black hole dynamics from thermodynamics in Anti-de Sitter space
We study the relation between local thermodynamic instability and the dynamical instability of large black holes in four-dimensional anti-de Sitter space proposed by Gubser and Mitra. We find that all perturbations suppressing the metric fluctuations at linear order become dynamically unstable when black holes lose local thermodynamic stability. We discuss how dynamical instabilities can be explained by the Second Law of Thermodynamics.
Introduction
Black holes are very interesting objects, from their causal structures in general relativity to their quantum mechanical properties. To establish their physical relevance, we need to answer whether the complete gravitational collapse of a body results in a black hole rather than a naked singularity. The conjecture [1] that nature censors naked singularities was proposed in this respect. One motivation for this is the fact that black holes in 4-dimensional asymptotically Minkowski space are stable: linear perturbations around black hole solutions do not give any evolution.
However, it was found in [2,3] that black strings and p-branes are unstable. The basic idea of the Gregory-Laflamme instability is that whatever has the biggest entropy is favored. Since a black string has a different horizon topology from that of a black hole, and entropy is proportional to the area of the horizon, an array of black holes has bigger entropy than an uncompactified black string of the same mass [4]. The instability of black strings was shown [2,3] by perturbation theory. It is a very interesting question to see what happens during the transition between them. It has been argued that violation of cosmic censorship does occur during this process.
Recently it has been suggested that a black string settles down to a new static black string solution which is not translationally invariant along the string [5].
The entropy argument used above was revisited by Gubser and Mitra, who proposed that a black brane becomes dynamically unstable when it is locally thermodynamically unstable [6,7]. Local thermodynamic stability is defined as having an entropy which is concave down as a function of the mass and the conserved charges [8]. This conjecture was made from the perspective of the AdS/CFT correspondence [9,10,11], which identifies two low-energy excitations, both decoupled from supergravity in flat space, in two low-energy descriptions of superstring theory [12]. Some unstable fluctuation modes may be excited when there is a thermodynamic instability in the field theory, and according to AdS/CFT the same thing would happen in AdS [7]. A semi-classical proof of the above conjecture using the Euclidean path integral approach to quantum gravity was given in [13].
The motivation of the Gubser-Mitra (GM) conjecture is that Lorentzian time evolution should proceed so as to increase the entropy. In this paper, we find that their argument holds both in the case in which the metric fluctuations are suppressed and in the case in which only the metric fluctuations are turned on. In the former case, any perturbation, for all equal charges of the AdS_4-RN solutions, becomes dynamically unstable when the system loses thermodynamic stability, and all evolutions increase the entropy. In the latter case, there is no dynamical instability even though the system is thermodynamically unstable; the stability can be explained by the fact that the entropy would decrease if the perturbation were unstable. We discuss these in section 2 and section 3. We try to explain dynamical instabilities of black holes from the Second Law of Thermodynamics in section 4.
Gubser-Mitra analysis and its generalization
AdS_4-RN black hole
An electrically charged black hole in asymptotically AdS_4 space was found in [14]. Starting from N = 8 supergravity in four dimensions, they gauged the rigid SO(8) symmetry of the 28 gauge bosons [15], and the potential induced by this gauging makes AdS_4 a vacuum solution of the theory. The AdS_4 black hole solution is constructed by focusing on the U(1)^4 Cartan subgroup of SO(8), which is believed to be a consistent truncation. Only three of the 70 scalar fields in the original theory are kept by working in symmetric gauge. The Lagrangian of this truncated theory contains the metric, three scalar fields, and four U(1) gauge fields; the metric signature is (− + + +) and G_4 = 1/4. In the electrically charged solutions, the quantities Q_A are the physical conserved charges. The mass is given in [16], and the entropy is expressed in terms of z_H, the largest root of F(z_H) = 0. It is possible to express M directly in terms of the entropy and the physical charges in the large black hole limit, M ≫ L, as in [7]. We are going to study the case where all four charges are equal, q_A = q. In this case the solution can be written in terms of a new radial variable, r = z + q. It is known [17] that the consistent S^7 truncation of 11-dimensional supergravity is equivalent to N = 8 four-dimensional gauged supergravity. Also, the equivalence of large R-charged black holes in D = 4, D = 5, and D = 7 with spinning near-extreme M2, D3, and M5 branes, respectively, is demonstrated in [16]. It is important to check that our black holes can be embedded into higher-dimensional black objects, because the GM conjecture requires non-compact translational symmetry. Any instability found in the large black hole limit, M ≫ L in our case, implies an instability of the M2-brane.
2.2 Thermodynamic instability and adiabatic evolution
Local thermodynamic stability is defined as having an entropy that is concave down as a function of the extensive variables. This means that the Hessian matrix of the entropy has no positive eigenvalues. For all equal charges, Q A = Q, it is straightforward to show that the thermodynamic instability is present when χ > 1 [7]. We can also see that the most positive eigenvector of the Hessian increases entropy the most when the entropy is at its extremum.
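The stability criterion above is a statement about the eigenvalues of the Hessian of S. A minimal numerical sketch of the criterion — using a finite-difference Hessian and a toy, hypothetical entropy function S(M, Q) that is not the paper's formula — could look like:

```python
import numpy as np

def hessian(f, x, h=1e-5):
    """Central finite-difference Hessian of a scalar function f at point x."""
    n = len(x)
    H = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            x_pp = x.copy(); x_pp[i] += h; x_pp[j] += h
            x_pm = x.copy(); x_pm[i] += h; x_pm[j] -= h
            x_mp = x.copy(); x_mp[i] -= h; x_mp[j] += h
            x_mm = x.copy(); x_mm[i] -= h; x_mm[j] -= h
            H[i, j] = (f(x_pp) - f(x_pm) - f(x_mp) + f(x_mm)) / (4 * h * h)
    return H

def S(x):
    # Toy (hypothetical) entropy S(M, Q) -- NOT the black hole formula
    M, Q = x
    return np.sqrt(M) - 0.5 * Q**2 / M

H = hessian(S, np.array([1.0, 0.5]))
eigvals = np.linalg.eigvalsh(H)
stable = eigvals.max() < 0   # entropy concave down => thermodynamically stable
print(eigvals, stable)
```

For the real problem, S(M, Q_A) is a five-variable function and instability sets in when the largest eigenvalue crosses zero (χ = 1 for equal charges).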
Even though the entropy is not at an extremum, we can drop the first-derivative terms by energy and charge conservation in the microcanonical ensemble. With this, it was found that in the direction of the positive eigenvector of the Hessian for all equal charges, the dynamical instability coincides with the thermodynamic instability, up to a small discrepancy due to numerical errors. It is interesting to ask what happens in other directions. Gubser and Mitra analyzed the linear perturbation in which fluctuations of the metric are suppressed. The most unstable eigenvector is (δM, δQ A ) = (0, 1, 1, −1, −1) for all equal charges. The condition for the metric to decouple at linear order is Q A · δQ A = 0: in this case δT ab vanishes at linear order, and we can also see from (2) that the metric does not change at linear order. It is not difficult to write down the linear perturbation equations in this decoupling case beyond the eigenvector direction. Our original motivation was to check two things: first, that even though the system loses thermodynamic stability, it should not be dynamically unstable if we perturb the system in a way that decreases its entropy; second, that because the eigenvector direction increases entropy the most, it should be the fastest way of increasing entropy.
The general perturbation in which the metric decouples for all equal charges is parameterized by two real numbers a and b. From (2), we can make an ansatz for the relevant perturbation, relating the three scalar fields to one scalar field and the four U(1) fields to one other U(1) field. This ansatz must be checked against the equations of motion, and it turns out to be consistent. The linear perturbation equation for each φ i is the same up to an overall factor, (∇ µ ∇ µ + 2/L 2 − 8F µν F µν ) δφ − 16F µν δF µν = 0, (11) and the linear perturbation equation for each F (A) µν is also the same up to an overall factor (12). Here F is the background field strength in (6); it is the same for all four F (A) . It is remarkable that all directions obey the same perturbation equation. The case a = 1, b = −1 is the unstable eigenvector, and we can see that (11) and (12) are exactly what Gubser and Mitra found [6]. From their analysis, we can conclude that all perturbations suppressing the metric fluctuation at linear order have dynamical instabilities when the system is thermodynamically unstable. The eigenvector direction is the fastest way of increasing entropy not because it evolves fastest but because it increases entropy the most. One might suspect, by continuity of (8), that some of the perturbations (9) would decrease entropy when χ is slightly greater than 1, but the Second Law of Thermodynamics is not violated, in another remarkable way 2,3 . The second-derivative part of (8) with the perturbation (9) factorizes with an overall factor of (χ − 1) times A(M, Q A ), where A(M, Q A ) is positive when χ ∼ 1. We can see that all perturbations (9) increase entropy when χ > 1. The factorization of the eigenvalue factor χ − 1 explains why we obtain the same linear perturbation equations (11) and (12) for all perturbations (9).
3 Stability from the metric perturbation analysis
3.1 Metric perturbation equation
In the previous section, we observed that there is a dynamical instability in every case in which the metric perturbation is suppressed. It would be very interesting to see what happens when the metric is also involved. This is very difficult, and we have not succeeded in the case where all fields are involved.
In this section, we analyze a simple case: the three scalar fields and the four U(1) fields are suppressed. This perturbation is in the direction δM ≠ 0, δQ A = 0 for all equal charges. From (5) and (8) we can see that entropy decreases under this perturbation.

2 We thank a referee of JHEP for pointing this out, and we would like to apologize to the authors of [7] for incorrectly disclaiming their argument in the previous version of this paper.
3 A violation of the Second Law of Thermodynamics was suspected from the fact that AdS is not globally hyperbolic: we thought the area law for black holes might not hold. However, the area law requires only a partial Cauchy surface [20], so it holds in AdS. The φ i do not satisfy the dominant energy condition, but they satisfy the strong energy condition defined in [18], or the timelike convergence condition defined in [20]. Therefore the horizon area cannot decrease under classical evolution, by the area law.
Varying (1) yields the equations of motion
We expect a relevant perturbation of the form (15). This ansatz must be checked against the equations of motion, and it is consistent provided (16) holds, where γ = γ a a = g ab γ ba . The linear perturbation equations from (14.a) and (14.b) are automatically satisfied for all equal charges. We now need to check that (15) is consistent with (14.c). In the linear perturbation from (14.c) we use the totally symmetric notation for ( ), Λ = −3/L 2 and F 2 = F ab F ab ; the four U(1) fields become identical. It is a well-known fact from electromagnetism that when a source term is present, a simple gauge choice is not easy. However, taking the trace of (17) and using the condition γ t t = −γ r r yields (18), which is a homogeneous equation for γ, so we can choose a transverse traceless gauge; see [18] for details about this gauge choice. It should be noted that this gauge choice is only possible with our ansatz that the scalar fields and U(1) fields are not fluctuating. Finally, we obtain the perturbation equation for the metric from (17), following [18].
It is the even wave in the canonical form [19] that is relevant to our case. It can easily be checked that in this form γ t t and −γ r r satisfy the same equation in (20), which proves that our ansatz (15)-(16) is completely consistent with the equations of motion. This equality between γ t t and −γ r r is expected from the δM perturbation of the metric in (6). The equations for γ tt and γ tr are coupled. Here f is defined in (6) and f ′ = ∂ r f . Using the form (21), we can derive a fourth-order ordinary differential equation for γ tr , which is what we study numerically.
3.2 Numerical analysis
To carry out a numerical study of (22), we cast the equation in terms of a dimensionless radial variable u, a dimensionless charge parameter χ, a dimensionless mass parameter σ, and a dimensionless frequency w introduced in [6]. We then combine the two equations in (22). Next, we need to specify the boundary condition for γ rt . We want an initial data surface touching the horizon at one end and ending on the boundary of AdS at the other. Because AdS does not have a Cauchy surface [20], the future domain of dependence of this initial data lies inside the Cauchy horizon of AdS. To define 'small' for the perturbation at the horizon, we can use non-singular Kruskal coordinates [21]. Dropping the S 2 piece in (6) and introducing a tortoise coordinate r * and Kruskal coordinates (T, R), the near-horizon metric is regular. We can express the Kruskal components γ ′ ab in terms of the original components γ ab , all of which should be finite as r → r H on our initial data surface. To avoid the issue of mode superposition, and for a better physical sense in which black holes form in a collapse situation, we require a surface ending on the future horizon [22]. When we approach the future horizon from outside the black hole region, R = T + O(r − r H ). This implies that normalizable wavefunctions γ tr must be O(r − r H ) as we approach the event horizon. We also require a fall-off like 1/r 2 near the boundary of AdS 4 ; this boundary condition follows from the asymptotic behavior γ tt ≃ r 3 γ tr . Using Maple, we solved (24) numerically and did not find any unstable mode. At σ = 0, thermodynamic stability is lost at χ = 1; the smallest w 2 in this case is w 2 = 2, and there is no normalizable wavefunction with negative w 2 . A negative mode is found at χ = 3.7, which lies in the naked singularity region and is therefore not relevant.
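The actual problem is a fourth-order ODE with horizon and AdS boundary conditions, solved here with Maple. As a toy sketch of the underlying shooting strategy for locating unstable modes (w 2 < 0), consider a one-dimensional Schrödinger-type stand-in with a constant potential well and Dirichlet boundary conditions (the potential and domain are purely illustrative):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

V0 = -50.0   # hypothetical constant potential well (stand-in for the true operator)

def psi_end(w2):
    """Integrate -psi'' + V0*psi = w2*psi from u=0 with psi(0)=0, psi'(0)=1.
    A zero of psi(1) as a function of w2 is an eigenvalue."""
    rhs = lambda u, y: [y[1], (V0 - w2) * y[0]]
    sol = solve_ivp(rhs, (0.0, 1.0), [0.0, 1.0], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

# Bracket a sign change of psi(1) at negative w2 and refine it: a normalizable
# mode with w2 < 0 grows like exp(|w|t), i.e. it signals a dynamical instability.
w2_unstable = brentq(psi_end, -45.0, -35.0)
print(w2_unstable)   # close to pi**2 + V0 for this toy well
```

In the real calculation the same scan over w 2 is done for the fourth-order equation (24), with the horizon regularity and 1/r 2 fall-off replacing the Dirichlet conditions.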
We can conclude that for the perturbation δM ≠ 0, δQ A = 0 there is no dynamical instability, which makes sense because this perturbation would decrease entropy if it were unstable.
4 Discussions and Perspectives
We have been working on the Gubser-Mitra conjecture, which relates thermodynamics to dynamics of black holes. When the metric fluctuations are suppressed at linear order, all perturbations are dynamically unstable if the black holes are thermodynamically unstable, and all evolutions increase entropy. However, when only the metric fluctuations are turned on, the perturbations are dynamically stable, and entropy does not decrease in this case. This result strengthens both the motivation of the Gubser-Mitra conjecture, which claims that Lorentzian time evolution should proceed so as to increase entropy, and the validity of the conjecture. The Second Law of Thermodynamics is not a rigorous law of nature; it is correct on a certain macroscopic time scale. It is therefore remarkable that dynamics obeys this law, even though we are interested in classical instabilities.
As suggested in [7], entropic arguments can give good information not only on the existence of dynamical instabilities but also on the direction in which they point. We can give a heuristic argument for why there is a dynamical instability when the perturbation (δM, δQ A ) increases entropy. Entropy can be understood as a functional of the fields describing the black hole. Let X i (t) denote the metric, scalar and U(1) fields. Writing equation (8) in terms of X i (t), it becomes S(X i (t 0 + dt)) = S(X i (t 0 )) + (1/2) (δ 2 S/δX i δX j ) δX i δX j . Here X i (t 0 ) is the solution describing our black hole, and the derivative of the entropy with respect to X i is a functional derivative. δX i (t) can be understood as a solution of the linear perturbation equation in this case. If we assume that δ 2 S/δX i δX j gives a positive definite inner product between fields, we can see that entropy increases if and only if the time dependence of X i (t) is e iwt with w 2 negative. This argument does not hold for a Schwarzschild black hole, which might be due to our assumption on the positivity of δ 2 S/δX i δX j . It would be very interesting to understand this from the quantum theory of gravitation, and it might explain why a non-compact translational symmetry of a black object is needed in the Gubser-Mitra conjecture. This is also very interesting from a mathematical point of view. Suppose we have a hypersurface defined by S = S(M, Q A ). Each perturbation (δM, δQ A ) corresponds, up to a normalization factor, to a tangent vector originating from p = (M 0 , Q 0A ) in the tangent space T p S of the hypersurface at p 4 . This tangent space can be understood as the real projective space RP 4 . The second-derivative part of equation (8) is a homogeneous polynomial of degree 2 in (δM, δQ A ), and the zero locus of this polynomial gives an algebraic subvariety of RP 4 . This variety separates RP 4 into two parts: δS > 0 and δS < 0.
We can also separate RP 4 in another way: into a stable region in which a perturbation (δM, δQ A ) gives no dynamical evolution, and an unstable region in which a perturbation (δM, δQ A ) gives a dynamical evolution. If our argument in the previous paragraph is correct, these two separations are the same, which would give a very interesting relation between algebraic equations and differential equations 5 .
It was argued that an unstable black string settles down to a new static black string solution which is not translationally invariant along the string and can be viewed as a local entropy maximum but not a global one [5]. If the final stage of the evolution is a local entropy maximum, we cannot say that evolutions from different perturbations end up at the same final solution. In our case, we have three eigenvectors of the Hessian with the same positive eigenvalue for all equal charges: (a = 1, b = −1), (a = −1, b = 1) and (a = −1, b = −1) in (9). Taking the signs of these vectors into account, there are six most-unstable perturbation vectors in the tangent space. It would be very interesting to see what the final solutions for these perturbations are. Finally, it is an open question whether evolutions from perturbations near the eigenvectors result in the same final solutions as the evolutions from the nearby eigenvectors themselves. We leave these questions for future work.
Acknowledgments
We would like to thank V. Balasubramanian, F. A. Brito, M. Cvetic and M. Strassler for useful discussions. We would like to express special thanks to A. Naqvi for his help with numerical analysis and to R. M. Wald for his comprehensive book [18]. This work is supported by DOE grant DE-FG02-95ER40893.
"year": 2001,
"sha1": "89da816fb805710f735f1fe6fa969cfbccc0e870",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "fd42a9084431f340980d808556222ae444be8c18",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Multi-band character revealed from Weak-antilocalization effect in Platinum thin films
Platinum (Pt) has been widely used for spin-charge conversion in spintronics research due to its large intrinsic spin-orbit interaction. Magnetoconductance originating from the weak-antilocalization effect in the quantum interference regime is a powerful tool to obtain microscopic information about spin-orbit interaction and coherence phase-breaking scattering processes among itinerant electrons. To acquire knowledge of the different types of scattering processes, we have performed a magnetoconductance study on Pt thin films, which manifest multi-band (multi-channel) conduction. An extensive analysis of the quantum-interference-originated weak-antilocalization effect reveals the existence of strong (weak) inter-band scattering between two similar (different) orbitals. Coherence phase-breaking lengths ($l_{\phi}$) and their temperature dependence are found to be significantly different for these two conducting bands. The observed effects are consistent with the theoretical prediction that there exist three Fermi sheets with one $s$ and two $d$ orbital characters. This study provides evidence of two independent non-similar conducting channels and of the presence of anisotropic spin-orbit interaction along with $e$-$e$ correlation in Pt thin films.
I. INTRODUCTION
Gaining control over the electron spin degree of freedom is highly desirable in the field of spintronics research 1,2 . In recent years, spin-orbit interaction (SOI) has been found to provide a promising strategy for the electrical manipulation of spin/magnetism in spintronic devices [3][4][5][6] . This includes the creation of spin current from a transverse charge current via the spin Hall effect (SHE) [7][8][9] , and the exertion of a torque on a local magnetization by an electrical current via the spin-orbit torque effect [10][11][12][13] . At a microscopic level, the asymmetric spin-dependent electron scattering induced by SOI lies at the heart of the above phenomena 14,15 . A broader implication of SOI is realized in the design of topological materials, with their potential use in low-energy-dissipation and faster magnetization switching [16][17][18][19] . Materials with high-Z elements (Z is the atomic number) are ideal candidates for spin-orbit-interaction-induced phenomena (SOI strength ∝ Z 4 20 ). In particular, the 5d transition metal Pt has drawn a lot of attention for spin-charge conversion and the spin-torque effect in Pt/magnetic-layer-based heterostructures due to its intrinsically high SOI [21][22][23][24] . Further, the observation of the inverse spin Hall effect (ISHE) (the reverse process of SHE, i.e., the creation of charge current from a transverse spin current) in Ni 81 Fe 19 /Pt and in a Pt wire at room temperature provided an effective way to detect spin current 25,26 . In view of the extensive use of Pt in spintronics, owing to its chemical inertness, easy device fabrication and, most importantly, its strong intrinsic spin-orbit interaction, it is important to have a comprehensive understanding of the effect of spin-orbit interaction on electronic transport in bare metallic Pt thin films.
The quantum-interference-originated weak-localization (WL) or weak-antilocalization (WAL) effect is very sensitive to the spin-orbit interaction (SOI) scattering process in conducting systems 27 . In a real system, any deviation from a perfect crystalline structure acts as a source of scattering potential for itinerant electrons. Scattered electrons propagating along time-reversed, identical self-intersecting trajectories, known as "Cooperon loops" (CL) 28,29 , interfere constructively/destructively to give rise to a suppression (WL)/enhancement (WAL) of conductivity. Experimentally, WL/WAL is usually determined from the conductance correction to the classical Drude conductivity at low temperature and in the presence of an external magnetic field. The interference correction tends to vanish for most trajectories after averaging over the random scattering potentials, except for those scattered electrons which propagate in a CL.
The application of a uniform external magnetic field breaks the time-reversal symmetry required for the interference effect and induces an additional relative phase shift (due to the enclosed magnetic flux) between the two electrons traversing a CL. This suppresses the constructive/destructive interference; as a result, the conductance is enhanced (positive)/decreased (negative) with the application of a magnetic field in the WL/WAL regime 30,31 . Therefore, the variation of magnetoconductance is used as a sensitive probe to detect the quantum interference effect. Traditionally, the manifestation of WAL has been attributed to SOI in the material. For WAL to occur, it is crucial to have a π phase shift between two electron trajectories. The rotation of a spin-1/2 particle by 4π is equivalent to the identity operation; however, in Pt thin films, a 2π rotation of the itinerant electron spin induced by SOI gives rise to a π phase shift, resulting in the WAL effect 32,33 .
In this work, we have carried out an in-depth magnetotransport study to examine the underlying SOI scattering of itinerant electrons through the WAL effect in Pt thin films. We find that the SOI scattering strength (proportional to the characteristic spin-orbit magnetic field, B so ∼ 2 T) is much stronger than the coherent phase-breaking scattering strength (B φ ∼ 0.004 T at 2 K). The corresponding B so and B φ are extracted using the Hikami-Larkin-Nagaoka equation (Eq. 7). Further, it is found that a single conduction channel cannot fit the experimental data well; rather, two independent conduction channels must be considered to reconcile with the experimental data. To obtain more information about the channels and their orbital symmetry, we examined the temperature-dependent behavior of B φ . We find that B 1 φ (T ) for one channel exhibits a prominent temperature dependence, whereas B 2 φ (T ), corresponding to the other channel, shows a weak temperature dependence. The observed difference in the temperature variation of B 1 φ (T ) and B 2 φ (T ) is attributed to two conducting channels made of orbitals with different symmetry: the band originating from more symmetric orbitals is less sensitive to disorder, whereas the band originating from anisotropic orbitals is more sensitive to disorder, and this difference is reflected in their phase-coherence behavior. The observed effects are consistent with the theoretical prediction that there exist three Fermi sheets (FS) with one s and two d orbital characters [34][35][36] . This study provides evidence of two independent non-similar conducting channels (one channel made of the s-orbital FS and the other originating from the combination of the two d-orbital FS, illustrated in Fig. 9) and of the presence of anisotropic spin-orbit interaction along with e-e correlation in Pt thin films.
II. Experiment
Platinum thin films were grown on Si/SiO 2 substrates using DC magnetron sputtering at room temperature with a base pressure of 5 × 10 −8 mbar. Before deposition, photolithographic patterning with a Hall-bar geometry was done with an MDA-400M-N mask aligner on 5 × 5 mm 2 substrates. The thickness variation of the Pt layer was achieved in a single deposition run by rotating the substrate plate holder. The structural characterization was carried out using a high-resolution X-ray diffractometer. The films were found to be polycrystalline with grains mainly oriented in the (111) plane. The thicknesses of the films were determined to be 16 nm and 23 nm, with a roughness of about 3 Å, from X-ray reflectivity (XRR) measurements. The magnetotransport measurements were performed in a Cryogenic physical property measurement system (PPMS) with magnetic fields applied parallel and perpendicular to the film surface. We used a Keithley 6221 as the current source and a Keithley 2182A nanovoltmeter for better data resolution.
III. Resistivity at low temperature
The temperature-dependent resistivity (ρ) of the 16 and 23 nm thick films (Fig. 1) exhibits a positive temperature coefficient (dρ/dT > 0) above 6 K, indicating metallic character. Moreover, in the intermediate temperature range 15 K < T < 26 K, the resistivity follows a quadratic temperature dependence, which is attributed to the e-e Coulomb interaction (EEI) [37][38][39] , as shown in the inset of Fig. 1. The experimental data in the temperature range 15 K < T < 26 K fit well with Eq. (1), where ρ 0 is the residual resistivity and A accounts for the EEI contribution. From the fitting, we obtain A equal to 3.6 × 10 −4 µΩ·cm·K −2 and 3.36 × 10 −4 µΩ·cm·K −2 for the 16 nm and 23 nm thick films, respectively. The obtained value of A is higher for the 16 nm film than for the 23 nm film, which is consistent with theory, where E F and m * are the Fermi energy and the effective mass of the electron, respectively 40 . The extracted values of A are slightly higher than the reported value (A ∼ 10 −5 µΩ·cm·K −2 ) for bulk Pt 39 , which could be due to the reduction of the system dimension.
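The quadratic fit of Eq. (1) is a standard least-squares problem. A minimal sketch follows; the synthetic data only mimic the quoted orders of magnitude (the ρ 0 value, noise level and point count are illustrative, not measured values):

```python
import numpy as np
from scipy.optimize import curve_fit

def rho_model(T, rho0, A):
    # Eq. (1): residual resistivity plus e-e (EEI) quadratic term
    return rho0 + A * T**2

# Synthetic data mimicking the 15-26 K fitting window (illustrative only)
T = np.linspace(15.0, 26.0, 30)
rng = np.random.default_rng(0)
rho = 10.0 + 3.6e-4 * T**2 + rng.normal(0.0, 1e-4, T.size)   # micro-ohm.cm

(rho0_fit, A_fit), cov = curve_fit(rho_model, T, rho, p0=(10.0, 1e-4))
print(rho0_fit, A_fit)
```

Applied to the measured ρ(T), the fitted A is the EEI coefficient compared against the bulk value in the text.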
In the low-temperature regime (2 K < T < 6 K), ρ(T ) exhibits an upturn, which is attributed to the combined effect of quantum interference (WL/WAL) and the EEI correction in the quasi-2D limit. This regime is denoted by the dark shaded region in Fig. 1 and is discussed in Sec. IV in terms of the sheet conductance correction.
IV. Signature of WAL Effect from Temperature-Dependent Sheet Conductance

[Figure caption: Solid lines represent the fitting of ∆σ(T ) vs ln(T /T d ) using Eq. (4) in the low-temperature regime; the upper x-axis denotes the real temperature and T d = 1 K 41 . Plots are vertically offset for clarity.]

At low temperature, due to weak thermal agitation, electrons are able to move a longer distance (on average) along their trajectories while maintaining phase coherence; this distance is known as the phase coherence length (l φ ). Quantum interference manifests prominently when the phase coherence length is much longer than the mean free path (l e ) of
electrons (l φ ≫ l e ). One indication of the quantum interference correction is a ln(T ) dependence of the sheet conductance (∆σ QI ) in the 2D limit (l φ > t, where t is the film thickness). In general, for N independent conducting channels or Fermi sheets, ∆σ QI can be expressed as in Eq. (2) [42][43][44] , where γ int is related to the interaction coupling strength among independent Fermi sheets (strong coupling and zero coupling lead to N γ int ∼ 1 and N γ int ∼ N , respectively), p is related to the temperature exponent of the phase coherence length (l 2 φ ∝ T −p 45-47 ), T 0 depends on l e , and α takes different values depending upon the dominant scattering process involved. In three extreme situations α behaves as follows: α ∼ −1/2 for strong spin-orbit scattering in the absence of magnetic impurity scattering (WAL); α ∼ 1 in the quantum coherent regime (l φ > l e ) in the absence of spin-orbit and magnetic impurity scattering (WL); α ∼ 0 for strong magnetic impurity scattering (which drives the system towards the classical scenario) 42 . Therefore, the coefficient of ln(T ) contains crucial information about the microscopic scattering process. Further, the EEI contribution to the sheet conductance (∆σ e ) also exhibits a ln(T ) dependence; for N independent channels in the 2D limit, it can be expressed as in Eq. (3) [48][49][50][51][52] , where F is the screened Coulomb interaction averaged over the Fermi surface, normalized to zero momentum transfer in the EEI scattering process. Therefore, the total correction (σ total ) can be expressed as in Eq. (4), where A = N (γ int αp + (2 − 2F )) and T is a constant. Hence, it is not straightforward to extract the value of α from the zero-magnetic-field ∆σ(T ) vs ln(T ) experimental data when the contributions from quantum interference and EEI coexist. To overcome this issue, one needs to exploit the temperature dependence of the sheet conductance at constant weak magnetic fields, since these mainly suppress the quantum interference contribution.
The quantum interference effect (WL/WAL) originates from particle-particle-channel interaction between two electrons traversing a "Cooperon loop" (CL) and is very sensitive to the magnetic flux enclosed by the CL due to an applied external magnetic field. The EEI effect, however, is governed by particle-hole-channel interaction and cannot be influenced by a weak external magnetic field 48 (EEI can be influenced at higher magnetic field if gµ B B > 1/τ so , where τ so , g and µ B are the SOI scattering time, the Landé g-factor and the Bohr magneton, respectively) 43,52-56 . As a consequence of these fundamental differences between the quantum interference effect (WL/WAL) and EEI, one can disentangle the WL/WAL and EEI contributions by applying constant weak external magnetic fields (perpendicular to the film surface). The applied field suppresses the WL/WAL contribution, while the EEI effect remains unaffected. Hence, the change in the ln(T ) coefficient upon application of a weak magnetic field provides only the WL/WAL contribution (i.e., A QI in Eq. (2)).
We investigated the variation of the ln(T ) coefficient (A) in σ(T ) upon applying constant perpendicular external magnetic fields ranging from 0 to 0.8 T, as shown in Fig. 2. A systematic change in A was observed with increasing external magnetic field, and we obtained a maximum change in the ln(T ) coefficient of A QI = A| B=0 − A| B=0.8 = −0.56 (in quantum conductance units, e 2 /2π 2 ). To evaluate from A QI the value of α, which contains the information about the dominant scattering process, one requires the values of N γ int and p. The exponent p adopts a universal value depending upon the dominant interaction responsible for the inelastic scattering process in the system. In particular, for EEI-originated inelastic scattering, l −2 φ (l −2 φ ∝ T p ) exhibits a linear temperature dependence at low temperature and shows a crossover from T to T 2 ln(T ) behavior with increasing temperature 47 .
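Extracting A QI amounts to comparing the ln(T ) slopes of σ(T ) with and without field. A sketch with synthetic data, where the two slopes are chosen to reproduce the quoted −0.56 difference (the intercepts and point count are illustrative):

```python
import numpy as np

def lnT_slope(T, sigma):
    """Slope of sheet conductance vs ln(T), from a least-squares linear fit."""
    slope, _ = np.polyfit(np.log(T), sigma, 1)
    return slope

# Synthetic sheet conductance (units of e^2 / 2 pi^2), illustrative numbers:
# at B = 0 both WAL and EEI contribute to the slope; at B = 0.8 T the
# quantum-interference part is suppressed and only EEI survives.
T = np.linspace(2.0, 6.0, 20)
sigma_B0  = 0.66 * np.log(T) + 5.0    # slope A|B=0
sigma_B08 = 1.22 * np.log(T) + 5.0    # slope A|B=0.8 ~ N*2*(1-F)

A_QI = lnT_slope(T, sigma_B0) - lnT_slope(T, sigma_B08)
print(A_QI)   # the quantum-interference part of the ln(T) coefficient
```

On the measured data the same slope difference isolates A QI from the field-independent EEI background.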
Two possible cases can be invoked to assess the value of α: (i) For Pt thin films, the resistivity follows a quadratic temperature dependence (14 K < T < 23 K) (Sec. III), which signifies the dominance of electron-electron over electron-phonon scattering and gives rise to p = 1 at lower temperature. Considering a single channel, N = 1, or strong interaction coupling, N γ int ∼ 1, one obtains α ∼ −1/2 (Eq. 2). (ii) Effectively two independent channels, N = 2, and p = 1 (extracted from the magnetoconductance analysis, Sec. V A) give rise to α ∼ −0.28 (Eq. 2).
From the detailed magnetotransport analysis (discussed in Sec. V), it is realized that conduction takes place through two independent channels (N = 2) in Pt thin films. Considering two-channel conduction, Eq. (2) provides α = −0.56/(N γ int p) = −0.56/(2 × 1) ∼ −0.28 (as discussed above), which is very unexpected for a strongly spin-orbit-coupled system. To find the source of this discrepancy, we re-examined how the explicit temperature dependence arises in the sheet conductance correction (Eq. 2) due to the quantum interference effect. One can visualize that, with increasing temperature, random thermal agitation enhances the inelastic scattering among itinerant electrons and, as a consequence, l φ becomes temperature dependent (the explicit functional form is determined by the dominant scattering mechanism (e-e, e-phonon) at nonzero temperature). l φ fixes the upper cut-off spatial dimension for the quantum interference effect, and the interference-originated correction is proportional to the available coherence area (∝ l 2 φ ). Thus, for a single channel with phase coherence length l φ , ∆σ QI ∝ − ln(l 2 φ /l 2 e ). On the contrary, dominant impurity-induced inelastic scattering leads to a temperature-independent l φ 57 , and such a conducting channel cannot contribute considerably to ∆σ QI (T ). Therefore, for a system with N non-similar independent conduction channels, ∆σ QI (T ) can be expressed as ∆σ QI (T ) = −α N n=1 e 2 2π 2 ln((l n φ /l n e ) 2 ). (5)
It has been observed from the magnetoconductance analysis that the examined Pt thin films possess two independent conducting channels, one of which shows a prominent temperature dependence (B 1 φ (T ) ∝ T , Fig. 3) in the low-temperature regime (2 K < T < 6 K), while B 2 φ (T ), associated with the other channel, varies negligibly with temperature. This implies that the contribution to the ln(T ) slope (Eq. 5) from the second channel is negligible. Thus, Eq. (5) effectively becomes ∆σ QI (T ) = −α e 2 2π 2 ln(l 1 φ (T )/l 1 e ) 2 + ln(l 2 φ /l 2 e ) 2 ∼ α e 2 2π 2 ln(T /T 1 0 ) + c, where c is a temperature-independent constant, since l 2 φ exhibits a negligible temperature dependence. Since only one channel contributes a logarithmic temperature correction to ∆σ QI (T ), we obtain α = −0.56/(N γ int p) = −0.56/(1 × 1) ∼ −1/2, which is consistent with the theoretical value of α in the presence of strong spin-orbit scattering. We now turn to the estimation of the EEI strength, which is related to the screened Coulomb potential F . For strong interaction, F is a small fractional number, and for free electrons F ∼ 1. F can be evaluated by assuming that at a magnetic field of 0.8 T the quantum interference effect is completely exhausted; one can then approximate the coefficient A| B=0.8T ∼ N × 2(1 − F ) = 1.22 (N = 2, as discussed before), which leads to F ∼ 0.7. The 5d correlated transition metal Pt thus exhibits a higher value of F , indicating that the e-e Coulomb interaction is weaker in comparison with the strongly electron-correlated 3d transition metal Cu (F Cu ∼ 0.5 58 ).
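The quoted value F ∼ 0.7 follows from inverting A| B=0.8T ∼ N · 2(1 − F ). As a quick arithmetic check:

```python
# Screened Coulomb parameter F from the residual ln(T) slope at 0.8 T,
# where quantum interference is assumed fully suppressed:
#   A|B=0.8T ~ N * 2 * (1 - F)   =>   F = 1 - A_ee / (2 N)
N = 2          # two independent conducting channels
A_ee = 1.22    # ln(T) coefficient at B = 0.8 T (quantum conductance units)
F = 1.0 - A_ee / (2.0 * N)
print(F)       # ~ 0.695, i.e. F ~ 0.7 as quoted in the text
```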
A. Perpendicular Magnetic field (B ⊥ )
Magnetotransport measurement is a powerful tool to extract the different microscopic scattering lengths (i.e., spin-orbit and inelastic scattering lengths) by exploiting the quantum-interference-originated correction (WAL/WL) to the sheet conductance. The quantum interference effect (WL/WAL) manifests more prominently with reduced dimensionality. For a quasi-2D system (where the phase coherence length l φ is greater than the film thickness t) with N independent conducting channels, the variation of the sheet conductance (∆σ(B ⊥ ) = σ(B ⊥ ) − σ(0)) with an external perpendicular magnetic field B ⊥ is described by the Hikami-Larkin-Nagaoka (HLN) equation (Eq. 7) 42,52,[59][60][61] , where ψ(x) is the digamma function and B n e , B n φ and B n so are the characteristic magnetic fields of the nth channel, related to the scattering lengths by B i = ℏ/(4el 2 i ), where l e , l φ and l so denote the elastic, phase coherence and spin-orbit scattering lengths, respectively. B e is determined from a semi-classical approximation; a detailed discussion is given in the supplemental material 62 .
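Fitting magnetoconductance data to HLN-type expressions is done with digamma functions. As a hedged sketch — using the widely used single-parameter simplification of the HLN formula in the strong spin-orbit limit, not the full two-channel Eq. (7), and with illustrative parameter values — a curve_fit workflow looks like:

```python
import numpy as np
from scipy.special import digamma
from scipy.optimize import curve_fit

def hln_simplified(B, alpha, B_phi):
    """Simplified one-parameter HLN magnetoconductance (strong-SOI limit),
    in units of e^2 / (2 pi^2 hbar):
        dsigma(B) = -alpha * [psi(1/2 + B_phi/B) - ln(B_phi/B)]
    """
    x = B_phi / B
    return -alpha * (digamma(0.5 + x) - np.log(x))

# Synthetic WAL data with parameters of the order reported in the text
B = np.linspace(0.001, 0.5, 200)
data = hln_simplified(B, -0.5, 0.004)

# Bounds keep B_phi positive during the fit (log argument must stay > 0)
popt, _ = curve_fit(hln_simplified, B, data, p0=(-1.0, 0.01),
                    bounds=([-2.0, 1e-5], [0.0, 1.0]))
print(popt)
```

The real analysis replaces this model by the sum over two channels with independent B^n_e, B^n_so, B^n_φ, as described below.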
If the independent channels are similar to each other, the characteristic magnetic fields B_i^n of each channel are nearly equal, and the N-channel HLN equation (Eq. (7)) reduces to N times the single-channel form (Eq. (8)).

Fig. 3(a) shows the sheet-conductance variation with perpendicular magnetic field for the 16 nm thick film at different temperatures. The sharp cusp at low temperature indicates the WAL effect; however, the cusp broadens rapidly with increasing temperature. The experimental data ∆σ(B⊥) are fitted well by Eq. (8); nevertheless, the fit yields a fractional value of N, which cannot be interpreted in terms of independent conducting channels. The fits and the corresponding best-fit parameters (B_φ, B_so, N) are shown in Figs. 3(a) and 3(b), respectively. A fractional value of N can arise for the following reasons: (i) weak but non-negligible inter-orbital scattering among the channels [63,64], or (ii) channels possessing different sets of B_e^n, B_so^n, B_φ^n. To obtain more meaningful insight into the conducting channels in Pt thin films, the magnetoconductance data were further analyzed with the more general equation (Eq. (7)), allowing the independent channels to have different sets of B_e^n, B_so^n, B_φ^n. The fit is shown in Fig. 4(a) and the best-fit parameters in Fig. 4(b) for the 16 nm thick Pt film.

FIG. 3-5. HLN fits (Eq. (7) or (8)) to the magnetoconductance of the 16 nm and 23 nm Pt films (curves vertically offset for clarity); panels (b) show the extracted B_φ^n of the two independent channels (n = 1, 2) at different temperatures, with insets displaying B_so^n.

From the fit we obtain N = 2, with the phase-coherence field of one channel (B_φ^1 ∼ 0.004 T at 2 K) considerably lower than that of the other (B_φ^2 ∼ 0.15 T at 2 K). Further, the spin-orbit scattering fields, B_so^1 ∼ 2 T and B_so^2 ∼ 2.5 T, are much higher than both B_φ^1 and B_φ^2. The fitted parameters admit the following physical interpretation: (i) Pt possesses a large intrinsic spin-orbit interaction, B_so ∼ 2 T (in the perpendicular configuration, with an integer number of channels, N = 2); (ii) two independent conducting channels are present in the Pt thin film; and (iii) the variation of B_φ^1 with temperature is far more pronounced than that of B_φ^2, and B_φ^1 < B_φ^2. The pronounced variation of B_φ^1 with T indicates that the inelastic scattering in this channel is dominated by e-e scattering (Sec. III). However, the weakly temperature-dependent inelastic scattering of the other channel (in the low-temperature regime 2 K < T < 6 K) indicates that a significant amount of impurity scattering is present. To corroborate this further, we performed the same analysis for the 23 nm thick Pt film and found that it follows similar behavior; the fits and extracted parameters are shown in Figs. 5(a) and 5(b).
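The characteristic fields quoted above can be translated into real-space scattering lengths through B_i = ℏ/(4el_i²), as a quick plausibility check (the field values are the ones quoted in the text):

```python
# Converting characteristic fields to scattering lengths: l_i = sqrt(hbar / (4 e B_i)).
import math

HBAR = 1.054571817e-34  # reduced Planck constant (J s)
E = 1.602176634e-19     # elementary charge (C)

def length_nm(B_tesla):
    """Scattering length (nm) corresponding to a characteristic field (T)."""
    return math.sqrt(HBAR / (4 * E * B_tesla)) * 1e9

print(round(length_nm(0.004), 1))  # B_phi^1 at 2 K -> ~203 nm
print(round(length_nm(0.15), 1))   # B_phi^2 at 2 K -> ~33 nm
print(round(length_nm(2.0), 1))    # B_so ~ 2 T     -> ~9 nm
```

Both phase-coherence lengths (~203 nm and ~33 nm at 2 K) exceed the 16 nm film thickness, consistent with the quasi-2D assumption l_φ > t used above.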
B. Parallel Magnetic field (B || )
In a quasi-2D system, itinerant electrons can diffuse along the film thickness subject to quantum mechanical boundary conditions [59]. Hence, the quantum interference effect is influenced even in the parallel magnetic field configuration. The quantum-interference-originated magnetoconductance correction in the B_∥ configuration for N independent channels is given by Eq. (9) [44,59,65-67], where B_t is defined as B_t = 12ℏ/(et²) and t is the film thickness.
For simplicity, if we assume that all independent channels have equal characteristic magnetic fields, then Eq. (9) reduces to Eq. (10). The experimental data ∆σ(B_∥) are fitted well by Eq. (10) (Fig. 6(a)), but the fit yields a fractional value of N. The extracted parameters are illustrated in Fig. 6(b) for the 16 nm thick film. As discussed above, since the number of independent conducting channels cannot be fractional, the different channels must have different characteristic magnetic fields. Therefore, we analyzed ∆σ(B_∥) allowing the independent channels to have different sets of B_so^n, B_φ^n (Eq. (9)).

FIG. 9. Schematic diagram of the three theoretically predicted Fermi sheets, one derived from the s orbital (blue) and the other two from d orbitals (orange). The possible interactions among them are denoted by arrows, with forbidden interactions marked by ×. The two interconnected d-orbital sheets lead to one effective conducting channel; the other channel derives from the s orbital.

In the parallel magnetic field configuration, the extracted values of B_φ^1 (from fits with Eq. (9)) are found to be fairly close to the B_φ^1 obtained (from fits with Eq. (7)) in the perpendicular magnetic field configuration, and they follow a very similar temperature dependence. However, the obtained B_so differ considerably between the perpendicular and parallel configurations. This confirms the anisotropic nature of the spin-orbit scattering potential in Pt thin films.
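For the film thicknesses studied here, the thickness field B_t can be evaluated directly; the sketch below assumes B_t = 12ℏ/(et²), with the ℏ made explicit (it is implicit in the extracted expression):

```python
# Thickness field B_t = 12*hbar/(e*t^2) for the two film thicknesses,
# assuming the hbar that is implicit in the extracted expression.
HBAR = 1.054571817e-34  # reduced Planck constant (J s)
E = 1.602176634e-19     # elementary charge (C)

def B_t(t_m):
    """Characteristic thickness field (T) for film thickness t (m)."""
    return 12 * HBAR / (E * t_m**2)

print(round(B_t(16e-9), 1))  # ~31 T for the 16 nm film
print(round(B_t(23e-9), 1))  # ~15 T for the 23 nm film
```

Both values far exceed the phase-coherence fields of either channel, as expected for such thin films.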
VI. Summary and Conclusions
It was predicted from theoretical band-structure calculations that Pt metal (electronic configuration 5d⁹6s¹) has three Fermi-surface sheets, one derived from the s band and two from the d bands [34]. Our magnetotransport analysis, in both perpendicular and parallel magnetic field configurations, revealed the presence of only two independent conduction channels. This apparent discrepancy (three channels predicted theoretically, two observed experimentally) leads to insightful physics of Pt thin films in the presence of weak disorder scattering, which is unavoidable in real systems. The two d-band channels mix with each other owing to their similar orbital character, and in the presence of this intermixing scattering they act as a single equivalent channel. In contrast, intermixing scattering between the s- and d-band channels is very weak because of their different orbital symmetry. As a result, there are effectively two independent conducting channels, as illustrated in Fig. 9. Secondly, we found that the characteristic phase-breaking field B_φ^2 of one channel shows only a weak temperature dependence, an intriguing phenomenon that can arise in a system with significant impurity scattering. However, B_φ^1 of the other channel varies markedly with temperature, indicating the presence of effectively weak impurity scattering. The crucial point is that the same amount of disorder in a system can act differently on different conduction channels depending on their orbital symmetry. Bands derived from more anisotropic orbitals (e.g., d and p orbitals) are highly sensitive to even a minute amount of disorder [68,69], which can give rise to very high inelastic scattering compared with conducting channels made out of symmetric orbitals (e.g., s orbitals).
To conclude, our magnetoconductance study of Pt thin films reveals the symmetry of the orbitals involved in the conducting channels, the presence of EEI, and an anisotropic spin-orbit interaction. | 2022-09-15T01:16:28.097Z | 2022-09-14T00:00:00.000 | {
"year": 2022,
"sha1": "50acd2b20f1ec4af0d5bf6923f93403ae8bad882",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "50acd2b20f1ec4af0d5bf6923f93403ae8bad882",
"s2fieldsofstudy": [
"Physics",
"Materials Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
214073012 | pes2o/s2orc | v3-fos-license | Low-Temperature Quasi-Direct Copper–Copper Bonding with a Thin Platinum Intermediate Layer Prepared by Atomic Layer Deposition
A low-temperature Cu–Cu bonding technique using a thin metal intermediate layer deposited by atomic layer deposition (ALD) was developed. A thin Pt intermediate layer was selectively deposited on the Cu surface at the angstrom level by ALD without any mask under low vacuum conditions (24 Pa). To suppress the deterioration of bonding reliability caused by impurities at the bonding interface, quasi-direct bonding was realized by using a thin Pt intermediate layer. The Cu–Cu quasi-direct bonding with a thin Pt layer provided a bonding strength of 9.5 MPa, which was five times higher than that without the intermediate layer (1.9 MPa). These results will contribute to the development of low-temperature Cu–Cu bonding for three-dimensional integrated circuit chips.
Introduction
Rapid progress in information technologies for the internet of things has been promoted by the functionalization of single integrated circuit (IC) chips. Improvement of IC chips has been achieved by densification of transistors following Moore's law. In the past decade, three-dimensional (3D) ICs have become a research trend to overcome the limitations of Moore's law. [1,2] A 3D IC is constructed of vertically stacked and interconnected thinned IC chips. Such a configuration lowers power consumption because it decreases resistance by shortening the wire length, and it also shortens propagation delays while increasing the transistor density. [3,4] One important technology in 3D ICs is the vertical interconnections formed between the stacked layers using metal bonding techniques. Cu has been widely used as an interconnection material because it shows low electrical resistivity (1.68 × 10⁻⁸ Ω·m) [5,6] and higher electromigration resistance than Al. [7] Therefore, interconnection formation using an effective Cu bonding technique is desired. Cu-Cu bonding has advantages for 3D IC technology compared with Cu solder bonding, which is widely used in present IC chips. In particular, Cu-Cu bonding can be applied to fine-pitch bonding because it suppresses bump deformation, and Cu-Cu bonding provides bonds with improved electrical and mechanical performance compared with those produced by solder bonding. [8] Cu-Cu bonding has generally been demonstrated using thermocompression bonding, which is achieved by atomic interdiffusion and grain growth at the bonding interface induced via the application of heat and pressure. Thermocompression bonding is promising for Cu-Cu bonding because of its high reliability, although Cu-Cu thermocompression bonding requires a high temperature of 350-400°C.
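The resistance benefit of shortening wire length, noted above, follows from R = ρL/A with the quoted Cu resistivity; the wire dimensions below are hypothetical, chosen only for illustration:

```python
# Illustrative only: resistance R = rho * L / A of a Cu interconnect,
# using the Cu resistivity quoted in the text. Wire dimensions are hypothetical.
RHO_CU = 1.68e-8  # Cu resistivity (Ohm m)

def resistance(length_m, width_m, thickness_m):
    """Resistance of a rectangular wire of the given dimensions (Ohm)."""
    return RHO_CU * length_m / (width_m * thickness_m)

# A 2D wire routed across a 10 mm chip vs. a short 50 um vertical 3D interconnect
print(resistance(10e-3, 1e-6, 1e-6))  # ~168 Ohm
print(resistance(50e-6, 5e-6, 5e-6))  # ~0.034 Ohm
```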
[9,10] A high temperature is required because a native oxide layer forms readily on the Cu surface and impedes bonding. Suga's group [11] performed Cu-Cu direct bonding using a Cu surface that was cleaned and activated via Ar-ion beam treatment to remove the surface Cu oxide layer; this process was coined surface-activated bonding. Liu and Chen [12,13] used highly (111)-oriented nanotwinned Cu for Cu-Cu bonding because the Cu(111) surface has the fastest surface diffusivity of all the crystallographic planes and high oxidation resistance. Cu-Cu bonding with nanostructured Cu has also been proposed because size effects should enhance the surface melting of nanostructured Cu compared with that of bulk Cu. [8,14] These methods are promising for lowering the Cu-Cu bonding temperature, although ultra-high vacuum (UHV) or specific fabrication conditions are required. Other methods to lower the Cu-Cu bonding temperature have also been reported, including inserting a passivation layer (Ti, [15] Au, [16,17] Cu nitride, [18] or a self-assembled monolayer of alkanethiol [19]) on the Cu surface to suppress its oxidation. However, a metal intermediate layer of a certain thickness leads to the formation of Kirkendall voids, which lower the bonding strength, because of the difference between the atomic diffusivities of Cu and the inserted metal layer. [20,21] In addition, Kirkendall voids, inserted metal layers, and impurities may cause deterioration of the electrical properties and reliability of IC chips.
In this study, we report a Cu-Cu quasi-direct bonding method that uses a thin Pt intermediate layer deposited by ALD. A Si substrate with a 300-nm-thick planar Cu film was prepared in the same manner as a counter substrate (Fig. S2). In addition, Cu substrates without a Pt layer and with a Pt layer on one side only were also bonded as references.
Evaluation method
The shear strength of the bonded samples was evaluated [...]. The measured bump dimensions were [...], [...] μm, and 8.5 μm, respectively. A tapered edge structure was observed at the edge of the Cu bumps, as depicted in Fig. 2(b). These results indicated that the bilayer resist process allowed us to fabricate Cu bumps with an error of less than 6.3% from the designed values without any burrs, helping to prevent the formation of partial contacts.
The chemical compositions and surface morphologies of the samples were analyzed by EDS and SEM, respectively. Particle-like features were observed on the Cu surface after ALD (Fig. 4(b)), which were not observed on the Cu surface before ALD (Fig. 4(a)). These results indicated that the Pt layer was deposited in particulate form on the Cu surface, consistent with the previously reported Volmer-Weber island growth model for the initial growth stages. [29-31] We also evaluated Pt ALD on a Si surface (Figs. 3(c) and 4(c)). The Pt layer on the Si surface was not identified in the EDS and SEM analyses, consistent with previous reports. [25,26] This result further confirmed that a thin Pt layer can be selectively deposited on a Cu surface by ALD.

Fig. 2. (a) Low- and (b) high-magnification SEM images of the Cu bumps used for bonding tests.

(Transactions of The Japan Institute of Electronics Packaging, Vol. 13, 2020, E19-014-4)
Elam and Fang [24,25] reported that the deposition selectivity of Pt is caused by differences in the chemisorption ability of oxygen on the substrate surface, leading to differences in the reactivity of the precursor. Therefore, ALD can be used to deposit Pt selectively on the Cu surface. Figure 7 presents the Pt peaks detected for the samples exposed to different annealing temperatures.
The sample with the Pt layer deposited by ALD showed deeper diffusion of Pt than that of the sample with the Pt layer deposited by IBS. Furthermore, the Pt diffusion depth into the ALD sample, in which the Pt layer was deposited over the native oxide layer, was almost the same as that of the IBS sample without the native oxide layer.
This result suggests that Pt atoms slightly diffused into the Cu layer during the ALD process and that a Pt-Cu interface may exist after ALD. From these results, we assumed that the thin Pt layer deposited by ALD had the following effects on Cu-Cu bonding: (i) the surface Pt layers, which have high oxidation resistance, promote initial bonding, and (ii) the Pt-Cu interface, which appeared during the ALD process, accelerates atomic interdiffusion. Therefore, the thin Pt layer deposited by ALD realized Cu-Cu quasi-direct bonding at a temperature lower than 350°C. The effect of the shape of the Pt layer on the bonding mechanism is currently under study.
Conclusions
We proposed a bonding method with a thin metal intermediate layer using ALD, called Cu-Cu quasi-direct bonding. | 2020-03-05T10:54:33.303Z | 2020-01-01T00:00:00.000 | {
"year": 2020,
"sha1": "2b4902f873a34a2326f5f50acda58d2943de1734",
"oa_license": null,
"oa_url": "https://www.jstage.jst.go.jp/article/jiepeng/13/0/13_E19-014-1/_pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "bc127b0588fce33195e71f1908d79280ee9a575d",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Materials Science"
]
} |
53259961 | pes2o/s2orc | v3-fos-license | Chronic Ingestion of Sodium and Potassium Bicarbonate, with Potassium, Magnesium and Calcium Citrate Improves Anaerobic Performance in Elite Soccer Players
Anaerobic power and anaerobic capacity significantly influence performance in many sport disciplines. These include prolonged sprints in athletics, swimming, or cycling, and other high-intensity intermittent sports, such as soccer or basketball. Considering the association between exercise-induced acidosis and fatigue, the ingestion of potential buffering agents, such as sodium bicarbonate, has been suggested to attenuate metabolic acidosis and improve anaerobic performance. Since elite soccer players cover 200 to 350 m while sprinting, performing 40-60 all-out sprints during a game, repeated sprint ability appears to be among the key components of success in soccer. In our experiment, we evaluated the effectiveness of chronic supplementation with sodium and potassium bicarbonate, fortified with minerals, on speed and speed endurance in elite soccer players. Twenty-six soccer players participated in the study. The subjects were randomly divided into two groups. The experimental group was supplemented with sodium and potassium bicarbonate fortified with minerals, while the control group received a placebo. The athletes were tested at baseline and after nine days of supplementation. Anaerobic performance was evaluated by the Running-based Anaerobic Sprint Test (RAST) protocol, which involved 6 × 30 m maximal sprints separated by 10 s of active recovery. Resting, post-ingestion, and post-exercise concentrations of HCO3⁻ and blood pH were measured, as well as lactate concentration. The current investigation demonstrated a significant increase in RAST performance of elite soccer players supplemented with sodium and potassium bicarbonate, along with calcium phosphate, potassium citrate, and magnesium citrate, ingested twice a day over a nine-day training period. The improvements in anaerobic performance were influenced by increased resting blood pH and bicarbonate levels.
Introduction
There are numerous sport disciplines in which performance depends to a large extent on anaerobic capacity. These include single supramaximal efforts, such as prolonged sprints in athletics (200-400 m), swimming (100-200 m), cycling (1000 m) or speed skating (1000-1500 m). On the other hand, many sports are characterized by high-intensity intermittent exercise. These include team sports, in which repeated sprint ability is significant for performance, and combat sports, where repeated bouts of power are indispensable for success. Both types of exercise may cause disturbances in acid-base balance and fatigue of skeletal muscle. The factors determining fatigue are complex and include both central and peripheral components [1,2]. The decline in performance during exercise that is attributed to the CNS, which integrates input from various body parts, is known as central fatigue. Peripheral components of fatigue include the excessive accumulation of metabolites, among which hydrogen ions (H⁺), potassium ions (K⁺) and inorganic phosphate (Pi) seem to be of greatest significance, as well as the depletion of energy substrates [3,4]. Single all-out efforts lasting approximately 40-60 s cause substantial decreases in muscle (to 6.0-6.4) and blood (to 6.9-7.0) pH, lactate concentrations of 22-26 mmol/L, and a significant inhibition of glycolytic flux [5]. During team sport games, which may last from 40 min (basketball) to 120 min in an overtime soccer game, decreased repeated sprint ability is primarily attributed to the depletion of muscle glycogen, as acid-base disturbances are less evident and post-exercise lactate concentrations significantly lower [6,7].
Considering the association of exercise-induced acidosis and fatigue, the ingestion of potential buffering agents such as sodium bicarbonate, sodium citrate or potassium bicarbonate has been suggested to attenuate metabolic acidosis and improve anaerobic performance [8-10]. Some authors have also suggested the chronic use of highly alkalized water during periods of intense training and competition to improve hydration and to increase the rate of lactate utilization following anaerobic exercise [11,12].
The ergogenic effects of buffering agents such as sodium bicarbonate have been explored for many decades. Most empirical data support the benefits of sodium bicarbonate or related substances for exercise of different types, durations and intensities [2,13-15]; however, there are also reports suggesting no ergogenic effects of buffering supplements [16-18]. It has been suggested that the discrepancies in the results of empirical research with buffering agents are related to: the selection of subjects; the exercise protocols, especially the intensity, mode and duration of exercise; the dosage and timing of supplement ingestion; and the chemical composition of the buffering supplements. As mentioned above, both trained and untrained subjects have been included in research [19-22]; exercise protocols have included single bouts of supramaximal effort [23], intermittent high-intensity exercise [24] and skill-based protocols [25,26]. The dosage has usually ranged within 0.3-0.4 g·kg⁻¹ body mass, and ingestion time before performance has varied from 60 to 120 min, in single or split doses [20,27-31]. Most studies have used sodium bicarbonate as the only buffering agent in their supplement [28,32,33], while others have tested the combined effects of carbohydrates and sodium bicarbonate [19], creatine and sodium bicarbonate [34], β-alanine and sodium bicarbonate [35,36], and caffeine and sodium bicarbonate [37] on different sport-specific or general exercise modalities. Recently, there have been attempts to combine glucose and/or electrolytes with sodium or potassium bicarbonate to increase buffering capacity [7,11]. The majority of authors have tested the acute effects of buffering substances on exercise performance [13,38,39], while more recently the chronic effects of sodium bicarbonate and other buffering agents have also been evaluated with regard to anaerobic performance [15,22].
Considering team sport games, repeated sprint ability test protocols have confirmed positive effects of buffering supplements [11,38], while empirical research with sport-specific simulations, including football, soccer, rugby and water polo, has not confirmed ergogenic benefits of sodium bicarbonate [20,40,41].
Soccer is the most popular sport in the world. It is a team sport that involves speed, acceleration, changes of direction as well as numerous technical and tactical activities that require concentration and precision [42]. At the elite level, soccer players perform from 1300 to 1400 different motor activities during a 90 min game. Most specific and general motor activities are executed and repeated at high intensity, causing significant disturbances in acid-base equilibrium and gradual fatigue. A soccer game played at the elite level can elicit up to 85-90% of maximal heart rate, while blood lactate concentrations can reach 7-8 mmol/L at half time and decrease to 5-6 mmol/L after the game, because of glycogen depletion [42]. Depending on the position on the field, players cover from 10,500 to 12,000 m while walking, jogging, running backwards, striding and sprinting [43]. During a game, elite soccer players cover from 200 to 350 m while sprinting, performing 40-60 all out sprints at distances ranging from 5-8 m up to 25-33 m. Considering the above, it seems that repeated sprint abilities in soccer are among the key components determining success.
This study evaluated the effectiveness of chronic supplementation with sodium and potassium bicarbonate, fortified with potassium, magnesium and calcium citrate and phosphate on speed and repeated sprint ability in elite soccer players.
Subjects
Twenty-six well-trained soccer players, who compete in the elite Polish league, participated in the study. The experiment took place during an 11-day camp in Spain; thus training, living and feeding conditions were identical for all participants. The athletes constituted a homogenous group with regard to age, somatic characteristics, and aerobic and anaerobic performance (Table 1). The subjects (n = 26) were randomly divided into two groups: the experimental group (EG; n = 13), which received a complex of independent supplements (sodium bicarbonate, potassium bicarbonate, calcium phosphate, potassium citrate, magnesium citrate, and calcium citrate (Table 2)), and the control group (CG; n = 13), which received a placebo. All subjects had valid medical examinations and showed no contraindications to participation in the study. Subjects were informed verbally and in writing of the experimental protocol and the possibility to withdraw at any stage of the experiment, and gave their written consent for participation. The study was approved by the Research Ethics Committee at the Academy of Physical Education in Katowice, Poland.
Diet and Supplemental Protocol
Energy as well as macro- and micronutrient intake of all subjects was determined by 24 h nutrition recall 3 weeks before the study was initiated. The participants were placed on an isocaloric (3455 ± 436 kcal/day) mixed diet (55% carbohydrates, 20% protein, 25% fat) prior to and during the investigation. The pre-trial meals were standardized for energy intake (600 kcal) and consisted of carbohydrate (70%), fat (20%) and protein (10%). The participants did not take any medications or substances not prescribed by the supplementation protocol for 3 weeks before and during the study.
The players from the experimental group ingested a single dose of 3000 mg sodium bicarbonate, 3000 mg potassium bicarbonate (6 capsules containing 500 mg each), 1000 mg (600 mg + 400 mg) calcium phosphate and calcium citrate, 1000 mg potassium citrate, and 1000 mg magnesium citrate twice a day, 90 min before each practice session. The control group ingested identical capsules containing cornstarch. Supplements were taken with plenty of water (600 mL). The supplementation protocol included an additional dose of bicarbonates and minerals 90 min before the exercise test protocol and on the day before the test. The dose of bicarbonate was chosen according to the literature, in which amounts ranging from 5 to 9 g·day⁻¹ are suggested. Such doses have shown significant improvements in buffering capacity with no gastrointestinal distress.
Study Protocol
The experiment lasted 11 days, during which two series of laboratory analyses were performed.
The tests were carried out at baseline and after 9 days of supplementation. The study was conducted during the preparatory period of the annual training cycle, when a high volume of work dominated the daily training loads. The participants refrained from exercise for one day before testing to minimize the effect of fatigue.
The subjects underwent medical examinations and somatic measurements. Body composition was evaluated in the morning, between 08:00 and 08:30. The day before, the participants had their last meal at 20:00. They reported to the laboratory after an overnight fast, refraining from exercise for 24 h. The measurements of body mass were performed on a medical scale with a precision of 0.1 kg. Body composition was evaluated using the electrical impedance technique (Inbody 720, Biospace Co., Anaheim, Los Angeles, CA, USA).
Anaerobic performance was evaluated by the Running-Based Anaerobic Sprint Test (RAST) protocol which involved 6 × 30 m maximal sprint efforts, separated by 10 s of active recovery. Infrared photocell gates (Witty, Micro Gate System, Mahopac, New York, NY, USA) were placed precisely 30 m apart. Additionally, two gates were placed at the 5th and 25th m of the sprint distance. The photocell system was used to evaluate the sprint times at 5 and 30 m. The 5 m distance time was considered as starting speed, the 30 m distance evaluated absolute speed, while total time of the 6 × 30 m determined the level of speed endurance and anaerobic capacity. Participants were verbally informed about the time of the rest interval between particular sprints. Before testing, participants were required to complete a 15-min warm-up, which included jogging, dynamic stretching as well as several starts and accelerations. After a 5-min passive rest the participants reported to the starting line and began the RAST protocol on a command. The subjects were instructed to sprint the 30 m distance as fast as they could, decelerate after the finish line and jog back to the starting line for the next repetition. The procedure was repeated until 6 sprints were completed.
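Sprint-based anaerobic metrics of this kind are conventionally derived with the standard RAST formulas (sprint power = mass × distance²/time³; fatigue index = (P_max − P_min)/total time). The sketch below uses hypothetical sprint times and a hypothetical 75 kg body mass; neither is taken from the study's data:

```python
# Standard RAST-derived metrics. Sprint times and body mass below are
# hypothetical, for illustration only; they are not the study's data.

def rast_metrics(times_s, mass_kg=75.0, distance_m=30.0):
    """Peak/mean power, total time, and fatigue index from sprint times."""
    powers = [mass_kg * distance_m**2 / t**3 for t in times_s]
    total = sum(times_s)
    return {
        "peak_power_W": max(powers),
        "mean_power_W": sum(powers) / len(powers),
        "total_time_s": total,
        "fatigue_index_W_per_s": (max(powers) - min(powers)) / total,
    }

example_times = [4.10, 4.18, 4.25, 4.35, 4.48, 4.60]  # hypothetical 6 x 30 m
metrics = rast_metrics(example_times)
print({k: round(v, 1) for k, v in metrics.items()})
```

The total time corresponds to the speed-endurance measure described above, while the fatigue index quantifies the power drop-off across the six sprints.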
Statistical Analysis
The Shapiro-Wilk, Levene and Mauchly tests were used to verify the normality, homogeneity and sphericity of the sample data variances, respectively. Differences between the analyzed values before and after bicarbonate and mineral supplementation, and between rest and post-exercise conditions in the experimental and control groups, were verified using ANOVA with repeated measures. Effect sizes (Cohen's d) were reported where appropriate. According to Cohen's guidelines, the effect size was classified as follows: large effect ≥ 0.5, moderate effect < 0.5 and ≥ 0.3, and small effect < 0.3 and ≥ 0.1 [44-47]. Statistical significance was set at p < 0.05. All statistical analyses were performed using Statistica 9.1 (TIBCO Software Inc., Palo Alto, CA, USA) and Microsoft Office (Microsoft Corp., Redmond, WA, USA), and data are presented as means with standard deviations.
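A minimal sketch of the effect-size step described above: Cohen's d with a pooled standard deviation, classified using the thresholds quoted in the text. The group values are hypothetical (chosen so the means match the lactate means reported in the Results, 7.68 and 9.36 mmol/L); this is not the authors' analysis script:

```python
# Cohen's d with pooled SD, classified with the thresholds quoted in the text.
# The two groups below are hypothetical illustration data.
import statistics

def cohens_d(group_a, group_b):
    """Cohen's d for two independent groups using the pooled sample SD."""
    na, nb = len(group_a), len(group_b)
    va, vb = statistics.variance(group_a), statistics.variance(group_b)
    pooled_sd = (((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)) ** 0.5
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

def classify(d):
    """Thresholds from the text: >=0.5 large, >=0.3 moderate, >=0.1 small."""
    d = abs(d)
    if d >= 0.5: return "large"
    if d >= 0.3: return "moderate"
    if d >= 0.1: return "small"
    return "negligible"

exp = [9.1, 9.6, 9.4, 9.2, 9.5]  # hypothetical post-exercise lactate (mmol/L)
ctl = [7.5, 7.9, 7.6, 7.8, 7.6]
print(classify(cohens_d(exp, ctl)))  # prints: large
```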
Results
The repeated measures ANOVA between the experimental and control group, considering baseline values and the post-intervention period (supplementation) at rest and after exercise, revealed statistically significant results for three variables (Table 1).
Post-hoc tests revealed a statistically significant increase in mean post-exercise LA (from 7.68 to 9.36 mmol/L, p = 0.0001) between the control group and the experimental group supplemented with bicarbonate and minerals. Similar changes were observed for post-ingestion blood pH (from 7.35 to 7.47, p = 0.0001) and HCO3⁻ (from 24.3 to 28.8 mmol/L, p = 0.0001) between the control and experimental groups. Intragroup analysis with repeated measures ANOVA between the baseline and post-intervention period (bicarbonate and mineral ingestion) at rest, post ingestion and after exercise for the experimental group revealed statistically significant differences for six variables (Table 2 and Figures 1-4). The changes in the control group were not statistically significant.
Ergogenic Effects and Mechanism
The ergogenic effect of sodium bicarbonate and other buffering supplements on exercise performance stems from the reinforced extracellular bicarbonate buffer capacity to regulate acid-base balance during exercise. The oral intake of NaHCO3 elevates the concentration of bicarbonate ions (HCO3⁻), thus increasing the alkalinity of the extracellular fluid compartments [27,28].
The elevated HCO 3 − enlarges the gradient between extracellular and intracellular H + , which stimulates the lactate/H + cotransporter [48]. This leads to a greater efflux of H + from intramuscular regions into the extracellular fluid, allowing HCO 3 − and buffering compensatory systems to remove H + , thus, increasing pH. Several mechanisms have been proposed to explain how induced alkalosis evokes an ergogenic response to anaerobic exercise, yet there is no consensus among sport scientists. Numerous propositions surrounding both peripherally and centrally driven mediators of fatigue and exercise performance have been investigated [49]. Such mechanisms include the attenuation of exercise-induced arterial oxygen desaturation allowing for enhanced oxygen delivery [50], delayed impairment of muscular contractile properties [51], and augmented glycolytic flux [52]. More recently, research is indicative of an altered neuromuscular response to pre-exercise NaHCO 3 − administration [53,54].
The neuromuscular response that is characterized by a reduced rate of force production declines during isometric contractions after a bout of submaximal exercise [54] and repeated bouts of high intensity exercise [53]. The suggestion therefore is that NaHCO 3 − modifies peripheral indices of fatigue to improve exercise performance. In addition, evidence also has alluded to a central derived contribution to NaHCO 3 − ergogenic effect.
Anaerobic Performance
The current investigation demonstrated a significant increase in anaerobic performance of athletes in the experimental group supplemented with sodium bicarbonate and minerals. The improvements in anaerobic performance following sodium bicarbonate consumption were influenced by significant increases in resting blood pH and bicarbonate concentration.
Anaerobic glycolysis leads to an equal production of lactate and hydrogen ions [55]. Most of the released hydrogen ions are buffered, however, a portion (~0.001%) that stays in the cytosol results in a decrease in muscle pH and impairment of exercise. The rationale for the ergogenic effects of bicarbonate is that the increase in extracellular pH and bicarbonate will enhance the efflux of lactate and H + from the muscle cell [56]. Buffering of protons can attenuate changes in pH and enhance the muscle's buffering capacity, allowing for a greater amount of lactate to accumulate in the muscle. The results of the current study demonstrated a significant increase in resting blood pH (from 7.38 to 7.47), resting HCO 3 − concentration (from 23.21 to 28.81 mmol/L) and post exercise lactate concentration (from 7.94 to 9.36 mmol/L) in the experimental group supplemented with bicarbonate and minerals. The concentration of bicarbonate is much lower in the muscle than in the blood (10 vs. 25 mmol/L), and the low permeability of the charged bicarbonate ion precludes any immediate effects on the acid-base status of muscles [57]. These results are in agreement with the view that an appropriate mineral and hydration status is necessary for active bicarbonate ion transport.
Fatigue development during high-intensity intermittent exercise may be caused by a complex interplay between intra- and extracellular concentrations, as well as gradients, of ions such as K+, Na+, Cl−, H+ and Mg2+ [58,59]. In the present study, no differences were detected in K+ and Na+ ions, but a significant increase in Mg2+ was observed (from 2.17 mg/dL to 2.44 mg/dL). Supplementation with magnesium has been reported to increase muscle strength and power as well as hemoglobin levels [60]. Mg is a cofactor to over 325 enzymatic reactions, and a deficiency of the mineral therefore has many physiological and exercise performance implications. A transient shift of magnesium to the intracellular space during exercise is a probable explanation for a large proportion of the hypomagnesaemia. However, dissimilar findings are reported regarding magnesium variations with exercise in red blood cells. Magnesium levels in RBC were reported to increase after several types of exercise [61] and were related to increased metabolic activity during exercise, which would induce a shift of the cation from the plasmatic compartment. Ionized Mg concentration is supposed to be a more sensitive variable than total Mg, giving more reliable information about the status and regulation of the major mobilizable magnesium pools in the body. However, only limited information is available about the effects of exercise on the metabolically active and regulatory fraction of Mg2+ [62]. Mooren and co-workers [62] concluded that changes in the fraction of Mg2+ should be sufficient to influence intracellular signaling and metabolic processes. Although some explanations have been offered for the compartmental shifts of magnesium, the precise mechanism remains to be clarified. Anaerobic performance enhancement is associated with the physiological-regulatory functions of Mg2+ within muscle contraction and relaxation.
The potential effect has been attributed to the regulation of troponin expression via Ca2+ concentration gradients [63], MgATP complex formation optimizing energy metabolism, an increased protein synthesis rate, and a greater number of actin-myosin cross-bridges [64], all of which contribute to improved strength and anaerobic metabolism.
Different strategies used for improving the buffering capacity of tissues and blood do not allow for a direct comparison. Despite this, there appears to be an ergogenic effect in response to NaHCO3−, which may explain the large effect size noted by Tobias et al. [15]. Further work is required to elucidate the mechanism by which sodium bicarbonate and other buffering supplements improve anaerobic exercise performance, although most authors suggest an interplay of peripheral and central components [9,14]. The results of our experiment are in line with many other well-controlled research projects, which have used repeated high-intensity exercise protocols. However, there are some novelties to our study, which should be addressed. First, we used a chronic nine-day supplementation procedure, split into two daily ingestions of a complex containing 3000 mg of sodium di-carbonate, 3000 mg potassium di-carbonate (six caps containing 500 mg each), 1000 mg of calcium phosphate and citrate, 1000 mg potassium citrate, and 1000 mg magnesium citrate. The experimental group of players took the supplement twice a day, 90 min before each practice session. The control group received a placebo that was identical to the buffering supplement. The players were well conditioned before the start of the experiment and had identical living and training conditions during the study, as the experiment was conducted during a preseason camp in Spain. The diet and training loads of the players were controlled, and the testing conditions for the RAST were identical for baseline and post-intervention measurements. All biochemical evaluations were performed in duplicate in the same laboratory.
Conclusions
Chronic supplementation with sodium and potassium bicarbonate, fortified with potassium and magnesium citrate, as well as calcium phosphate and calcium citrate, improves repeated sprint ability in elite soccer players. The improvements in anaerobic performance are caused by increased resting and post-ingestion blood pH and bicarbonate levels. Although our study is restricted to bicarbonate ingestion combined with chosen minerals, and the statistical power suffers from a low sample size, its results indicate a significant role of magnesium ions in delaying fatigue during high-intensity exercise. The parallel use of minerals and bicarbonate is an innovative aspect of this study, and it requires further research. This experiment confirms both acute and chronic buffering effects of sodium and potassium bicarbonate fortified with minerals in elite athletes. Such supplementation protocols can be suggested for competitive athletes before competition or periods of high-intensity training to improve anaerobic performance.
Author Contributions: J.C. and A.G., the main authors, were responsible for creating the concept of the research as it was part of a bigger project which considered the effects of high fat, low carbohydrate diets on exercise metabolism and performance in different sport disciplines.
HistoTrust: tracing AI behavior with secure hardware and blockchain technology
In areas of activity where the notion of accountability is strong, the adoption of artificial intelligence (AI) is limited by the opacity and lack of understanding of its behavior, all the more so in the embedded domain where neural networks are compressed and executed on microcontrollers. While the NIST introduced in 2021 several principles allowing AI explainability, this paper introduces a novel scheme, HistoTrust, combining secure hardware and blockchain technology to bring trust in the traceability of AI behavior and allow its explainability. HistoTrust attests in an Ethereum ledger all the relevant data produced by a physical device, especially the heuristics inferred by AI. Thus, auditing the ledger allows security verifications and AI behavior analysis.
Introduction
From the perspective of the factory of the future, smart robots increasingly incorporate vision capabilities based on an on-board camera. From the pictures, embedded artificial intelligences (AI) make decisions impacting the tasks performed by the robot within the industrial process. The AI is previously trained to recognize learnt patterns in the image. The classifier built is a Neural Network (NN), which, given an image as input, infers a probability for the recognition of the learnt pattern. A high probability provides trust in the recognition of the learnt pattern. With AI, this trust is based on a probabilistic process.
The adoption of AI in the industry is being slowed down by the opacity of the decision making when an AI is involved in the decision process. That's why in September 2021, the NIST published the report [1] that promulgates four principles to enable the AI explainability. Among these principles, the transparency of the AI behavior is a key factor of trust along with accountability and resiliency.
When an anomaly is detected on a production line, the causes and accountabilities must be determined. When the production process involves AIs, implementing the means to trace events and audit the digital system is a requirement. The solution HistoTrust [2] aims to provide such a tool to ensure the protection of embedded AIs against malicious intentions and to enable the explainability of AI behavior. HistoTrust combines the probabilistic trust provided by AI with the deterministic trust provided by the blockchain. The notion of trust in the blockchain is based on a consensus protocol between the actors involved, enabling them to agree on the transactions recorded in the ledger [4]. Once recorded, the transactions form a history considered immutable: they can no longer be deleted, swapped or modified. Also, the integrity of the information recorded in the ledger is ensured by design, as well as the ordering of events and the authentication of issuers. The blockchain technology is relevant to trace, in a non-repudiable way, the activity of smart robots and embedded NNs.
HistoTrust introduces a device-centric [5] solution based on Ethereum technology that conciliates the need for security and privacy with the trust required between stakeholders. HistoTrust provides an architecture that ensures end-to-end security and privacy by design while enabling the traceability of embedded NN inferences. The authenticity of the issuer device is attested through secure hardware components such as Trusted Platform Module (TPM) and ARM TrustZone technology as Trusted Execution Environment (TEE). Hardware component serves as root-of-trust for the digital data processed by the embedded NN.
Thus, each of the smart robots operating on the production line sends to the ledger the attestations of the digital data it produces. An attestation includes the cryptographic fingerprint of a set of raw data, the authentication of the issuing embedded applications, and the timestamp of the record. The ledger maintains the history of transactions received from the smart robots distributed around the production line. In a context where several stakeholders cooperate in the manufacture of a product, each protecting its own interests, business and personal data, sharing attestations through the ledger brings trust between them. Meanwhile, each one keeps and protects its raw data and must be able to explain the behavior of its embedded AI if requested.
HistoTrust has several objectives: (1) to protect the embedded NN from logical and physical attacks by ensuring the cyber robustness of the AI, (2) to protect the data produced by the embedded applications and processed by the NN in order to allow the explainability of the AI behavior, (3) to attest and trace the data produced in a blockchain in order to provide authentic non-repudiable attestations shared between the different stakeholders.
The following section positions the work done in HistoTrust in relation to existing solutions. The use case is described in Section 3. Section 4 presents the embedded NN used in HistoTrust. Section 5 outlines the attestation process of the data produced to the ledger. The integration with the embedded NN and the deployment are discussed in Section 6. A security analysis is presented in Section 7, followed by the audit process in Section 8, before concluding this work.
Secure data history with trusted hardware
The added value of blockchain technology to meet the specific features of a smart manufacturing use case has been shown in [6]. Compared to a centralized solution based on digital certificates and PKI, the Ethereum-based solution offers a more refined management of security and privacy at the expense of performance. In [2], HistoTrust demonstrates that performance can meet the needs of a real-time usage when using a blockchain.
The EmLog framework [7] is presented as "the first attempt at preserving off-the-shelf ARM development board hosting OP-TEE". EmLog implements a secure logging system from end-to-end between embedded constrained devices and a remote database. HistoTrust introduces an architecture design and an on-board implementation design using off-the-shelf secure hardware components, such as OP-TEE and TPM 2.0 [8], that go beyond the EmLog solution and achieve the EmLog perspectives. While preserving forward security thanks to the one-way hash chain scheme introduced by Schneier and Kelsey [9], EmLog and SGX-Log [10] are not designed for multi-stakeholder contexts and may suffer data losses in case of power failure.
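For reference, the one-way hash chain idea of Schneier and Kelsey can be sketched as follows: each log entry is authenticated with the current key, which is then irreversibly evolved, so a later key compromise cannot forge earlier entries. This is a simplified sketch assuming SHA-256 and HMAC as primitives, not EmLog's exact construction:

```python
import hashlib
import hmac

def evolve(key: bytes) -> bytes:
    """One-way key evolution: k_{i+1} = H(k_i)."""
    return hashlib.sha256(key).digest()

def append_entry(key: bytes, entry: bytes):
    """MAC the entry with the current key, then evolve (and discard) it."""
    tag = hmac.new(key, entry, hashlib.sha256).digest()
    return tag, evolve(key)

k0 = hashlib.sha256(b"initial secret").digest()
tag1, k1 = append_entry(k0, b"log entry 1")
tag2, k2 = append_entry(k1, b"log entry 2")
# A verifier holding k0 can recompute k1, k2 and check both tags;
# an attacker who steals k2 cannot forge tags for earlier entries,
# because the hash cannot be inverted to recover k0 or k1.
```

The forward security comes entirely from the one-way key evolution: once a key is overwritten, past MACs can no longer be recomputed by an intruder.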
In the logging system EngraveChain detailed in the paper [11], the data history is ciphered and then registered in a Hyperledger Fabric ledger. This implementation lacks agility because a blockchain is designed to store neither large volumes of data nor confidential data, even encrypted. Moreover, ciphering the data recorded in a ledger implies complex key management. The blockchain technology provides by design the tamper-resistance of the recorded transaction history forming the ledger. HistoTrust provides an attestation scheme securing the history of data issued from distributed devices.
An Ethereum ledger maintains the history of cryptographic attestations of data produced by distributed devices owned by multiple stakeholders. The blockchain technology allows to share these cryptographic evidences between the stakeholders, ensuring mutual trust. In addition, the raw data is kept by its owner who ensures its persistence and confidentiality.
Based on an Ethereum blockchain, BlockPro [12] presents a decentralized architecture of IoT devices. The authenticity of the data-emitting devices is verified through a challenge submitted to the device's PUF (Physical Unclonable Function). However, it is not mentioned how the address of the account issuing the transactions is built, nor how it is linked to the PUF. The paper [13] shows that the dissociation between IoT devices and validation nodes is a powerful architecture that HistoTrust exploits.
Attestation scheme
Attestation schemes based on the use of a TPM offer standard solutions allowing the authentication of a platform by a remote device [14,15]. The authors of [16] highlight the question of the certification of sensor data, even by a trusted platform. The tension is tangible between privacy on the one hand and trust on the other. Privacy requires the protection of confidential data, while trust requires guarantees between the stakeholders working in a given ecosystem.
The principle of remote attestation is described in depth in [15]. The Trusted Platform Module (TPM) is the targeted device enabling the endorsement of attestation keys that the manufacturer, the vendor or the owner may own. The attestation scheme follows recommendations and standards provided by the Trusted Computing Group (TCG) [14]. Attestation aims at proving to a remote verifier the property of a target by supplying an evidence over a network. It consists in three stages: (1) key provisioning, (2) attestation process and (3) verification process.
Explainability of embedded artificial intelligence
The field of eXplainable AI (XAI) attracts major attention as an important concept that increases trust in AI-based systems and applications. The need for both interpretability and explanation methods has recently been highlighted by the NIST [1]. A large variety of approaches have been proposed to shed light on the black-box paradigm of deep NN models [17], even for modern architectures.
The purpose of our work is not to introduce a new methodology to explain the intrinsic behavior of a Machine Learning (ML) model, but to frame the implementation of an AI in an embedded device in such a way that confidential data, presented to a third party, can be trusted to explain the behavior of an embedded NN. Our contribution is rather in the area of cyber robustness of embedded AI in the presence of multiple distributed NNs.
Context
In a factory, many actuators participate in the assembly of a product on a production line (see Fig. 1). Physical devices that embed inference engines, i.e., a NN previously trained to recognize determined patterns in an image, generate the digital commands sent to the actuators. The device may integrate several sensors and a camera. A picture of the product is taken before acting. This picture is presented in input of the NN to request an inference that contains heuristics, i.e., probabilities that the pattern recognized in the image corresponds to the learned patterns. This inference will guide the decision about the next action the actuator should perform.
In the event of an incident creating a financial loss, it is necessary to find the causes and eventually to charge the costs to the accountable stakeholder. However, the presence of AI makes difficult the reproduction of the decisions. So, how to determine who is accountable for the damage? In particular, who is accountable for the decisions that command the actuators? If the NN recognizes the digit "2" instead of the digit "8", is the error attributable to the learning quality? A configuration and/or system integration fault? A lack of operator guidance? Noisy input data? A physical or logical attack on the electronic devices? A network attack?
Digit recognition
Smart robots are often equipped with cameras that allow them to photograph the part of the product on which they will operate. The image is then analyzed, potentially with a classifier, and depending on the patterns recognized, the action is determined. For this work, we use a classical digit recognition task with the MNIST dataset [18], as it represents one of the most popular benchmarks in the ML literature with which many architectures can be tested (from shallow fully-connected networks to deeper convolutional NNs). MNIST is composed of 60,000 training images of grayscale handwritten digits and 10,000 examples for test. Each sample is a grayscale 28x28 image (784 pixels) with an associated label from "0" to "9". This dataset offers a textbook case with a known and qualified open-source model. The integration made for the use case can be generalized to other computer vision tasks specific to the problem to solve.
For a given input image of the NN, the output inference is composed of 10 heuristics that correspond to the recognition probability of each digit from "0" to "9". An example is shown in Fig. 2 with the recognition of the digit "2" with probability 0.99 (99%).
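Such heuristics are typically produced by a final softmax layer that turns raw network outputs into probabilities. A minimal sketch, with illustrative logits that are not taken from the actual model:

```python
import math

def softmax(logits):
    """Convert raw NN outputs (logits) into probabilities summing to 1."""
    m = max(logits)                       # subtract max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Illustrative logits for digits 0..9; the entry for digit "2" dominates
logits = [0.1, 0.3, 9.0, 0.2, 0.1, 0.0, 0.4, 0.2, 0.1, 0.3]
heuristics = softmax(logits)
recognized = heuristics.index(max(heuristics))  # digit with highest probability
```

The recognized digit is simply the index of the largest heuristic, matching the decision rule described for the actuator commands.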
Formalism
In this work, we consider a deep NN model that performs a supervised classification task with the following formalism. Input-label pairs (x, y) ∈ X × Y are sampled from a distribution D. The NN model Mθ : X → Y, with parameters θ, classifies an input x ∈ X to a label Mθ(x) ∈ Y. The parameters θ are optimized during the training phase in order to minimize a loss function L(Mθ(x), y) (e.g., the cross-entropy loss) that evaluates the quality of a prediction compared to the ground-truth label. For the sake of readability, the model Mθ is simply noted M.
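As a concrete instance of the loss named above, the cross-entropy for a single example reduces to the negative log-probability the model assigns to the true class. A minimal sketch with illustrative probabilities:

```python
import math

def cross_entropy(probs, true_label):
    """Single-example cross-entropy: L(M(x), y) = -log p_y."""
    return -math.log(probs[true_label])

# A confident correct prediction gives a small loss...
low = cross_entropy([0.01, 0.01, 0.96, 0.01, 0.01], true_label=2)
# ...while a confident wrong prediction gives a large loss
high = cross_entropy([0.90, 0.02, 0.02, 0.02, 0.04], true_label=2)
```

Minimizing this quantity over the training set pushes the model to assign high probability to ground-truth labels.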
We distinguish a model M, as an abstract algorithm, from its physical implementations. One model M (e.g., a CNN trained on MNIST for digit recognition) can be implemented for inference purposes in a microcontroller or in an FPGA. Functionally, the embedded models rely on the same abstraction M but differ strongly in terms of implementation along with their respective hardware environments. Thus, there is no strict equivalence between M and its embedded variants.
Embedding deep NN models on a constrained platform such as a 32-bit microcontroller usually requires model compression techniques to fit the model complexity to the hardware requirements [19]. More particularly, memory footprint is usually an important challenge: for a typical Cortex-M MCU, the trained parameters are stored in Flash memory and, at inference time, the internal computations (mainly multiply-accumulates and non-linear activations) are processed in SRAM. Two classical approaches are used to fit state-of-the-art models: quantization and pruning. Although the learning process may require 32-bit floating-point computations, at inference time a low-bitwidth representation of the parameters is sufficient and does not alter the performance of the model. Thus, most of the tools that enable NN embedding on MCUs (such as STM32Cube.MX AI 1) propose an 8-bit quantization of the parameters. Pruning refers to techniques that cut useless connections in the network, relying on the fact that most models are over-parametrized. Both approaches can also help speed up the inference process.
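A simple affine 8-bit quantization can be sketched as follows. This is an illustrative per-tensor scheme with hypothetical weights, not the exact algorithm used by STM32Cube.MX AI:

```python
def quantize_int8(weights):
    """Affine per-tensor quantization of float weights to int8 (assumes hi > lo)."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255.0             # one int8 step in float units
    zero_point = round(-lo / scale) - 128  # maps lo close to -128
    q = [max(-128, min(127, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float weights from int8 values."""
    return [(v - zero_point) * scale for v in q]

w = [-0.51, -0.20, 0.0, 0.13, 0.49]
q, s, zp = quantize_int8(w)
w_hat = dequantize(q, s, zp)  # close to w, within one quantization step
```

Each float weight is thus stored in a single byte instead of four, which directly shrinks the Flash footprint mentioned above.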
Neural network
Two different model architectures working on the MNIST dataset have been used: an MLP and a CNN. Both needed to be small to fit the hardware limitations. The MLP is composed of an input layer (784 points, since the images must be flattened to be used) and an output layer (10 neurons corresponding to the number of labels). This model has only 7850 trainable parameters, which makes it quite small compared to others performing the same task with additional intermediate hidden layers. However, the model accuracy is just below 92%. Although state-of-the-art MLP models can reach higher accuracy on MNIST classification, this accuracy remains acceptable in light of the model's reduced architecture.
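The 7850 figure follows directly from the layer shapes, assuming a standard fully-connected layer with one bias per output neuron:

```python
def dense_params(n_in, n_out):
    """Trainable parameters of a fully-connected layer: weights plus biases."""
    return n_in * n_out + n_out

# 784 flattened pixels feeding 10 output neurons:
mlp_params = dense_params(784, 10)  # 784*10 weights + 10 biases = 7850
```
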
On the other hand, a CNN is also considered. This kind of model is divided in two parts with distinct goals: the first layers perform feature extraction (convolution and max-pooling layers), while the last layers perform the classification. CNNs are particularly efficient and well adapted to image recognition and classification, as shown in Fig. 3. Indeed, despite its reduced size, the model reaches an accuracy slightly over 96% for MNIST image classification.
Learning
In order to implement deep NN models on microcontrollers such as the STM32, we first generate the model with Google TensorFlow [20]. The model architecture (number of neurons, layers, activation functions used) is created according to the target specification, an ARM Cortex-M4. Then, the empty model is trained with labelled data corresponding to the task to perform, digit recognition, following a supervised learning paradigm. Validation and test subsets of the dataset complete the training. The validation adjusts the hyper-parameter values and detects overfitting. The test qualifies the model performance with examples that have not been seen during the training phase. This allows the simulation of real model behavior while having the ground-truth class for each example of the dataset. At the end, TensorFlow provides an accuracy score. The trained model characteristics (architecture, parameter and hyper-parameter values) compose the embedded NN in a ".h5" file.
Attestations to ledger
The attestation scheme follows the three phases depicted in Fig. 4:
1. The secrets and the trusted applications (apps) are provisioned in the embedded device by the device's owner in its private office. Once the secrets are protected by secure hardware, the device is delivered to the factory.
2. On the factory floor, during execution, the device is supervised by an operator. It produces data attested by a trusted app to a distributed ledger.
3. Any stakeholder may verify the authenticity of the involved devices thanks to the information registered in the shared ledger, available to all. An accredited and independent auditor may also verify the tamper-resistance of the data produced.
Provisioning of the secret keys
The goal is to provision the private key sk in the TPM2 vault, while enabling its secure access from the TrustZone for the attestation phase and the verification of its authenticity for the verification phase. The private key sk is created by the device's owner in a private location; sk should have high entropy and lie on the elliptic curve secp256k1. To endorse sk, the owner generates a certificate for sk signed with the owner's master key ok. Beforehand, the owner has created this master key ok, which may be supported by a PKI. Both the owner's master key (ok) certificate and the endorsed device key (sk) certificate are registered in the ledger and available to all the stakeholders.
To avoid the eavesdropping of sk when it is accessed from the TrustZone, sk is ciphered with a symmetric key noted symKey. Once ciphered, the key sk c is written in the TPM permanent memory. The symmetric key symKey is also hidden in TrustZone, in order to decipher sk c in a TEE when used.
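The wrapping of sk with symKey can be sketched as follows. This is an illustration only: the keystream is derived with SHA-256 and XORed over sk, whereas a real deployment would use an authenticated cipher such as AES-GCM; the function and variable names are ours, not HistoTrust's:

```python
import hashlib
import secrets

def keystream(sym_key: bytes, nonce: bytes, length: int) -> bytes:
    """Illustrative keystream from SHA-256(symKey || nonce || counter)."""
    out = b""
    ctr = 0
    while len(out) < length:
        out += hashlib.sha256(sym_key + nonce + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:length]

def wrap(sym_key: bytes, data: bytes, nonce: bytes) -> bytes:
    """XOR-cipher data for storage in TPM NV memory (illustration only).
    Applying wrap twice with the same key and nonce recovers the plaintext."""
    ks = keystream(sym_key, nonce, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

sym_key = secrets.token_bytes(32)   # hidden in the TrustZone
sk = secrets.token_bytes(32)        # device private key (secp256k1 scalar)
nonce = secrets.token_bytes(12)
sk_c = wrap(sym_key, sk, nonce)     # ciphered key stored in TPM permanent memory
sk_back = wrap(sym_key, sk_c, nonce)  # deciphered inside the TEE when needed
```
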
Provisioning of the trusted apps
The Ethereum technology requires that the incoming transactions are signed with a private key of the elliptic curve family secp256k1. However, this asymmetric cryptosystem is not supported by the TPM 2.0 standard and is not integrated in the TPM crypto-accelerator. That's why, for HistoTrust, the cryptographic functions, dedicated to the compliance with Ethereum technology, are implemented in TrustZone of an ARM microcontroller.
Two trusted apps are developed in HistoTrust:
• industrial app: this application is the "business" application, as it performs the required task. It produces digital data that may have a high value.
• attestation app: this application builds the cryptographic elements included in the transactions sent to the Ethereum blockchain to attest the data produced.
The attestation app is composed of a part executed in the normal world of the microprocessor, and another part protected during the execution in the TrustZone. In order to carry out the measurement process (Section 6.2), a fingerprint of the binary code of each app is computed and stored in the TPM Platform Configuration Registers (PCR).
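The measurement step follows the usual TPM extend pattern, PCR_new = H(PCR_old || measurement), so the final PCR value chains over every measured binary. A minimal sketch, assuming a SHA-256 PCR bank; the app payloads are placeholders:

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: the new PCR value chains over the old one."""
    return hashlib.sha256(pcr + measurement).digest()

pcr = bytes(32)  # PCRs start zeroed at boot
for app_binary in (b"industrial app code", b"attestation app code"):
    pcr = pcr_extend(pcr, hashlib.sha256(app_binary).digest())
# Any change in either binary yields a different final PCR value,
# which is how integrity violations are detected at measurement time.
```
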
Attestation of the data produced
During the production phase, the cryptographic attestations are registered in the Ethereum ledger through a smart contract. The attestation process, detailed in Fig. 5, consists in computing the fingerprint of the latest dataset produced, which is included in the data field of an Ethereum transaction (Fig. 12). This transaction is signed in the TrustZone with sk, which is also used to build the account address of the issuer device. To achieve the signature, the ciphered private key sk_c is accessed in the TPM permanent memory through the SPI bus and is deciphered in the TrustZone. The signed transaction is sent to the blockchain, and a receipt is returned if the registration in the ledger is confirmed. The implementation of this attestation process is tricky because it must respect both the temporal constraints and the real-time behavior of the industrial app that produces new data. No data should be lost due to the processing time of the attestation app, a power failure of the physical device, or the latency of recording in the remote blockchain. In fact, the use of secure hardware components, such as the TPM and TEE, adds an overhead to the computing time needed to generate the attestation. The paper [2] presents a detailed study of the performance of HistoTrust according to the security level of the private key sk. On the one hand, on the blockchain side, a large latency may be observed due to the time interval between two consecutive blocks, a delay that varies greatly from one blockchain to another. Ethereum implemented as a private blockchain with the Clique algorithm [3] as consensus protocol provides by default a time interval of around 12 s between two consecutive blocks; as a comparative example, two consecutive blocks are 10 min apart in the Bitcoin blockchain. On the other hand, the rate of data production by the real-time industrial app can be very high.
To circumvent this problem, HistoTrust uses the receipt that confirms the registration of an attestation in the ledger to trigger the reading of a new dataset from the industrial app.
Verification
The attestation history is available in the shared ledger and transparent to all stakeholders. It does not include confidential information, only cryptographic attestations enabling the verification. Each record is a transaction signed with sk, emitted from the account of the issuing device, and sent to the smart contract. It includes the fingerprint of the attested dataset.
Two types of verifiers are distinguished:
• involved stakeholder: any actor is able to access the information present in the shared ledger. The registered attestations make it possible to authenticate the acting devices and their owners in a given time interval.
• independent auditor: an independent auditor, such as an insurance expert or a bailiff, may be accredited to request the raw data from the authenticated device's owner, based on the information registered in the shared ledger.
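The attestation/verification round trip can be sketched as follows. SHA-256 is used here as the fingerprint function purely for illustration (an Ethereum-compatible scheme might use Keccak-256 instead), and the TrustZone signing step is omitted; the dataset bytes are placeholders:

```python
import hashlib

def fingerprint(dataset: bytes) -> str:
    """Fingerprint placed in the transaction data field (SHA-256 here;
    the actual scheme may use Keccak-256 for Ethereum compatibility)."""
    return hashlib.sha256(dataset).hexdigest()

# Device side: attest the latest dataset (e.g. image plus inferred heuristics),
# then sign the transaction in the TrustZone and send it to the ledger.
dataset = b"image-0042|heuristics:0.00,0.00,0.99,..."
attested = fingerprint(dataset)

# Auditor side: request the raw data from its owner and recompute the
# fingerprint; a match proves the raw data is authentic and untampered.
verified = fingerprint(dataset) == attested
```

Because only the fingerprint reaches the ledger, confidentiality of the raw data is preserved while any later tampering is detectable.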
6 Embedded design
The IoT device: a system-on-module
This section briefly presents the IoT platform design. A STM32MP157-EV1 evaluation board is associated with a STPM4RasPI TPM expansion board. The STM32MP157 is a single-board computer built around a dual-core ARM Cortex-A7 processor operating at 650 MHz, forming a System-on-Module (SoM). The processor also integrates an ARM Cortex-M4 coprocessor, which makes it suitable for real-time tasks. The dual-core ARM Cortex-A7 is a very low-power processor designed for smartphones or edge devices. It includes both a normal world operating with a Rich OS and a secure world with a TrustZone operating with OP-TEE OS. The transition between the normal world and the secure world is controlled by the NS bit in the SCR register (NS = 0 in the secure world). The code executed in the secure world remains confidential and is protected against logical attacks.
The ARM Cortex-M4 coprocessor offers a real-time environment accessible from the normal world of the ARM Cortex-A7, extending its computing capabilities and increasing its performance while preserving low power consumption. The functions embedded in the ARM Cortex-M4 are built upon the dedicated Hardware Abstraction Layer (HAL). STMicroelectronics provides the RPMsg protocol [21] to ensure communication between the ARM Cortex-A7 microprocessor and the ARM Cortex-M4 microcontroller.
The STPM4RasPI daughter board completes the STM32MP157 with a TPM 2.0 from STMicroelectronics. This board is connected through the GPIO header, making the TPM accessible from the OP-TEE environment via the SPI bus. An Ethernet connection and a serial link enable monitoring of the SoM, and a small screen displays information about the hardware configuration.
Secure boot and measurement
The ARM Cortex-A7 runs an open-source Trusted Execution Environment (OP-TEE) implementing the ARM TrustZone technology. At startup, a secure boot is performed according to the application note [22], relying on an elliptic-curve Brainpool-256 cryptosystem. At startup and during execution in production mode, the integrity of the two embedded trusted apps is checked through the measurement process. To enable this, the fingerprint of each app's binary code is provisioned beforehand in the TPM PCRs, as explained in Section 5.1.2.
Integration
The integration consists in making the industrial app and the attestation app work together in the SoM, as depicted in Fig. 5, while respecting the real-time constraint of the industrial app.
The industrial app is embedded in the normal world, running on a Linux kernel as the rich OS of the ARM Cortex-A7, with the part including the NN isolated in the ARM Cortex-M4. It handles the pictures coming from the attached camera in the ARM Cortex-A7. The pictures are transmitted to the NN in the ARM Cortex-M4 to request an inference. As output, the NN provides 10 heuristics, one per digit from "0" to "9". The heuristics are returned to the ARM Cortex-A7. Generally, the recognized digit corresponds to the highest probability.
The communication protocol between the ARM Cortex-A7 and the ARM Cortex-M4 suggested by STMicroelectronics in [21] implements a virtual interface, noted ttyRPMSG, that supports the exchange of small messages and low data flows. Transmitting even small images to the ARM Cortex-M4 with this protocol leads to a loss of information because the throughput is not sufficient. That is why HistoTrust implements a new communication scheme between the ARM Cortex-A7 and the ARM Cortex-M4 on the SoM: the virtual interface ttyRPMSG is only used to notify the presence of data in a shared memory, accessible to both cores, and the direction of the communication.
Several buffers are implemented in the shared memory in order to handle full-duplex communication without data loss. The data to attest composes the new entry written in file #1. For the use case considered, the format of each new entry is as follows:
[index timestamp url hash inference]
The field url is a pointer to the raw data presented at the input of the NN, while the field hash is the hash of that raw data. The field inference is composed of the 10 heuristic values, one for each digit from "0" to "9". Each heuristic is a floating-point value encoding a probability between 0 and 1.
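A possible serialization of one such entry is sketched below. The field names follow the paper, but the space-separated layout, the comma-joined heuristics, and SHA-256 as the hash are illustrative assumptions.

```python
import hashlib

def make_entry(index: int, timestamp: int, url: str, raw_image: bytes, heuristics) -> str:
    # Serialize one file #1 entry: [index timestamp url hash inference].
    # The exact textual layout is an assumption for illustration.
    assert len(heuristics) == 10  # one heuristic per digit "0" to "9"
    h = hashlib.sha256(raw_image).hexdigest()
    inference = ",".join(f"{p:.4f}" for p in heuristics)
    return f"{index} {timestamp} {url} {h} {inference}"

def recognized_digit(heuristics) -> int:
    # Generally, the recognized digit corresponds to the highest probability.
    return max(range(10), key=lambda d: heuristics[d])
```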
The industrial app writes in real time to file #1 all the produced data that needs to be attested. The size of this buffer is not limited, as it is stored on an SD card of several GB. Only the industrial app is authorized to write to this file, while the attestation app is authorized to read it. The receipt received from the blockchain confirms the registration of the attestation of the previous dataset in the ledger; this receipt triggers the reading of the next dataset from file #1. File #1 is stored in persistent memory: if a power failure occurs, the data is saved and the attestation process resumes where it left off when power returns. File #1 may also be exfiltrated by its owner.
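The receipt-triggered, power-failure-resilient reading of file #1 can be sketched as follows. This is a hypothetical sketch: the class name, the offset file, and the line-oriented format are assumptions, but the design point is the one from the text, namely that progress is persisted so the process resumes where it left off.

```python
import os

class AttestationReader:
    """Reads file #1 entries one at a time; the next read is triggered by the
    blockchain receipt for the previous attestation. The byte offset is persisted
    so that the process resumes where it left off after a power failure."""

    def __init__(self, log_path: str, offset_path: str):
        self.log_path = log_path
        self.offset_path = offset_path

    def _load_offset(self) -> int:
        if os.path.exists(self.offset_path):
            with open(self.offset_path) as f:
                return int(f.read() or 0)
        return 0

    def next_entry(self):
        offset = self._load_offset()
        with open(self.log_path) as f:
            f.seek(offset)
            line = f.readline()
            if not line:
                return None                 # nothing new to attest yet
            with open(self.offset_path, "w") as g:
                g.write(str(f.tell()))      # persist progress only after a full read
        return line.rstrip("\n")
```

Persisting the offset only after a complete read means that, at worst, the same entry is re-attested after a crash, but never skipped.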
The attestation app includes a part located in the normal world and another part located in the secure world of the ARM Cortex-A7. The TPM is only accessed from the secure world, thanks to the integration of the SYS layer of the TPM stack in the OP-TEE environment. The lightweight mbedTLS library is also integrated into the OP-TEE environment; it provides cryptographic primitives and allows building dedicated functions such as the Ethereum digital signature. In the normal world, low level commands
Deployment
All the devices are distributed on a local network following a star topology around an access point. A proxy allows communication with the outside to enable raw-data exfiltration. A consortium Ethereum blockchain is deployed locally. Each stakeholder of the use case owns a validator node with a complete copy of the ledger and has one vote in the consensus protocol. The validator nodes are depicted as computers in Fig. 6. Thus, the governance of the system is ensured with equity and fairness by all the stakeholders.
The devices acting in the production line are provided with the embedded apps, enabling them to send transactions to the validator nodes. Thus, each device is the root-of-trust of the data it produces, forming a distributed root-of-trust network. The provisioning is done independently by each device's owner, prior to the deployment of the hardware in the factory. The management of access rights and authorizations is done through smart contracts.
Performance
The blockchain is a time-stamping system consisting of a sequence of blocks spaced out in time. The recording of new transactions in the ledger is performed at a low rate. We want to show that, with our implementation of HistoTrust, the security and privacy properties brought by the use of the blockchain have no impact on the industrial process flow or on the rate of inferences of the embedded NN.
To carry out the performance measurements, we consider the processing time of a transaction and its recording in the ledger by using the Ethereum Ganache simulator configured in automining mode, i.e., a transaction is recorded as soon as it arrives, without any latency due to the consensus protocol between the network validator nodes. The processing time of a transaction, from the computation of the hash of the dataset to the receipt of the registration proof, is estimated at 156 ms.
We also considered the rate of inferences of the NN and determined the processing time of an inference at the output of the NN, given an image presented at the input. This is estimated at 12.3 ms.
So, the highest input rate of the NN is 1 picture every 12.3 ms. We then chose the following measurement points: 1 picture every 20 ms, 30 ms, 50 ms, 100 ms, 150 ms, 200 ms, and 300 ms. For each input data rate considered, we determined the number of inferences contained in a transaction recorded by the Ganache simulator.
The results are presented on the graph in Fig. 7. For each measurement point on the x-axis, we presented 5000 images as input to the NN, and averaged the number of inferences contained per transaction on the y-axis.
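As an order-of-magnitude check of these measurements, the average number of inferences batched into one attestation follows from a simple rate argument. This is a sketch of the expected trend only; the curve in Fig. 7 is measured, not derived.

```python
def inferences_per_transaction(tx_time_ms: float, input_period_ms: float) -> float:
    # While one transaction is processed and confirmed (tx_time_ms), one new
    # picture arrives every input_period_ms, so the next attestation covers
    # roughly tx_time_ms / input_period_ms inferences.
    return tx_time_ms / input_period_ms

# With the estimated 156 ms per transaction and the measurement points above:
rates = [20, 30, 50, 100, 150, 200, 300]
expected = {p: inferences_per_transaction(156, p) for p in rates}
```

For instance, at 1 picture every 20 ms one expects about 7.8 inferences per transaction, decreasing toward one inference (or fewer) per transaction as the input slows down.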
This graph illustrates that the security and privacy properties ensured by the blockchain technology, reviewed in the next section, are implemented without impacting the performance of the industrial application.
The security model
In 1990, Reason [23] introduced the Swiss cheese model to analyze the causality of incidents and manage risks. The physical device that embeds the NN integrates several security layers to protect against and detect attacks or malfunctions, as depicted in Fig. 8.
The first layer is a physical protection that prevents access to the components embedded in the smart robot and that is physically damaged in case of intrusion. In this way, succeeding in a physical attack on the electronic components that support the NN is difficult and leaves marks. The second layer is the cyber protection against logical attacks; the use of secure hardware components like the TPM and OP-TEE to protect the cryptographic keys and seeds is the foundation of this protection. The third layer is the detection of intrusions or tampering: at this layer, secure boot and measurement are deployed to monitor the integrity of the embedded firmware and software. The fourth layer concerns traceability, to be able to understand what happened when the previous layers were bypassed. A blockchain is used to register these traces as attestations of the logged data produced by the embedded apps.
The asset
The assets to protect are the business-relevant data of the stakeholders: the logged data, including all the relevant data produced by the physical devices, which contributes to the decisions behind the digital commands sent to the actuators. This includes the inferences produced by the embedded NN (see Fig. 9). Authenticity must be ensured, as well as integrity and completeness. Traceability is a valuable service for understanding the origin and sequence of events, while the raw data produced remains confidential to its owner. In order to reduce the attack surface on the electronic board, the different protection layers of Fig. 8 integrate several countermeasures. The goal is to fulfill the following security requirements:
• R1: AI explainability: the behavior of the embedded AI should be explainable.
• R2: forward integrity: the data attestation history must be immutable and transparent to the stakeholders. The raw data must be persistent and of integrity.
• R3: public authentication: any stakeholder should be able to authenticate the devices issuing data in a given time interval through the attestation history.
• R4: power failure: no raw data or attestations should be lost in the event of a power failure.
• R5: privacy-preserving data: the raw data shall not be exposed to the other devices.
• R6: verifiability: an accredited auditor must be able to verify the data attestations.
• R7: multiple stakeholders: the scheme shall support multiple stakeholders owning multiple devices issuing data concurrently.
The threat
The threat events are the tampering of the produced data, the production of fake or dysfunctional data, the spoofing of data or issuing devices, and the theft of data. The main sources of risk come from the following profiles:
• Negligence: this threat arises from an unintentional human error that nevertheless causes a failure.
• Ransacking: this threat corresponds to a malicious action with the intention to destroy, tamper with, spoof, or modify valuable data.
• Competition: this threat may seek to destroy data like the ransacker, but also to steal valuable data for analysis.
The main stakeholders involved in the smart manufacturing use case are:
• the provider of the smart robot, who by default is the owner of the logged raw data produced by its devices,
• the expert who trains the embedded AI,
• the manufacturer of the product (e.g., the car) for which the robot performs tasks,
• the operator of the smart robot during production,
• the maintenance agent who intervenes on the smart robot,
• the accredited and independent auditor mandated in case of litigation.
Table 1 shows the role that each stakeholder can play. The provider of the smart robot may be negligent in delivering an unreliable device, poorly configured or still containing bugs. In the event of litigation, he must guarantee the integrity of the data requested by the auditor; it is thus the provider's responsibility to maintain the tamper resistance and confidentiality of his data. As there are usually several suppliers of smart robots in a factory, they are potential competitors, which may be an incentive to obtain confidential data from a competitor for analysis in order to gain market share. The expert is responsible for the learning of the AI and the decisions of the embedded NN; he must be able to explain how the heuristics are derived. The manufacturer is physically present in the factory and has access to the smart robots.
He may take any profile of attacker in order to hide a problem for which he is responsible and pass the blame on to another stakeholder. An operator or a maintenance agent may make a human error, and possibly seek to cover it up by destroying elements.
The auditor's mandate is in the legal field, which gives him legal accreditation and independence from other stakeholders.
Security and privacy review
R1: AI explainability. Explaining the behavior of an AI requires measures implemented at the design stage. The blockchain technology provides, by design, the property of traceability; however, it does not manage the confidentiality of the traced data. This is why HistoTrust proposes a scheme combining the use of a blockchain, to transparently guarantee the properties of immutability, authenticity, and ordering, with the private storage of raw data under the responsibility of its owner.
R2: forward integrity. The blockchain ensures by design the forward integrity of the information recorded in the ledger. The ledger maintains the history of cryptographic attestations, each one being a pointer to a raw dataset stored outside the blockchain. Thus, any tampering or removal of raw data is detectable.
R3: public authentication. Each recorded attestation authenticates the issuing device, and all genuine devices are endorsed by their owner. Consulting the ledger allows any stakeholder to know the devices acting in a given time interval and the order of the performed actions.
R4: power failure. Resilience to a power failure implies that no raw data or cryptographic attestations are lost. The use of a file buffer stored in permanent memory ensures data persistence in case of power failure.
R5: privacy-preserving data. This requirement covers raw data both at rest and in transit. The physical protection of the device in the factory makes access to the board peripherals difficult and detectable. The exfiltration of the raw data is performed through a VPN.
R6: verifiability. HistoTrust distinguishes two roles of verifiers. All the stakeholders can play the first role, having access to the attestations recorded in the ledger. The second role is reserved to an accredited auditor, under a legal mandate, to request the raw data.
R7: multiple stakeholders. HistoTrust brings a solution where the number of stakeholders is not limited by using blockchain technology as a complement to existing technologies. The stakeholders ensure the governance together, each having a validator node.
Audit
The audit is launched when an incident occurs. The goal of the audit is to determine the cause and the accountabilities with the maximum of transparency for the involved stakeholders. The audit takes place in two phases: the first traces the events in a given time interval before the incident; the second analyzes the behavior of the AIs involved.
Traceability of the events
The blockchain provides an immutable history, shared among all stakeholders, of all past events; Figures 10 and 11 illustrate this recorded history. Each block includes a tree of recorded transactions, as shown in Fig. 12. The sender address authenticates the issuer device, while the contract address authenticates the recipient smart contract. The field data includes the fingerprint of the raw dataset produced by the issuing device at the given time, whereas the field gas indicates the computing power required to execute the targeted smart contract in the blockchain. This value is an indicator of the energy consumed to execute an instance of the smart contract.
Until personal data is requested, any stakeholder member of the ecosystem can perform the verification. The first step consists in retrieving from the shared ledger the attestations recorded in the considered time interval. Each attestation authenticates the issuer device, as well as the owner who has endorsed his devices.
Explainability of the AI
Each device's owner may be requested to provide the raw data associated with the recorded attestations. As this data is confidential, only an accredited and independent auditor is authorized to perform this legally regulated task. The provided data must be complete and of integrity; otherwise, the owner is held accountable, under suspicion of hiding a fraud. Each owner is responsible for keeping and protecting its logged raw data.
Once the completeness and the integrity of the attested data are established, the analysis of the raw data is conducted, in particular the analysis of the AI behavior. Each owner is responsible for providing an explanation of the behavior of its embedded NN.
At this stage, the analysis relies on the tools and methods the expert used to explain the behavior of the NN, and on human expertise. For example, the picture presented in Fig. 13 is labelled "9"; however, the embedded NN recognizes the digit "4" with a probability of 68%, the digit "7" at 16%, and the digit "9" at 5%. In a textbook case with a labelled image, one knows that it is a "9". But the pictures acquired by the smart robot's on-board cameras are not labelled, and only the explainability of the learning model and human expertise can remove the doubt about the most likely pattern. In a factory, the smart robots are supervised by human operators; thus, one can consider that if the inference does not return any heuristic above a certain threshold, e.g., 71%, the decision is the accountability of the human operator. On the other hand, even when the error is obvious, for example when the NN recognizes a "3" with 95% certainty while the digit is actually a "0", the human operator will not be solicited, which can potentially lead to an incident on the production line. This may be due to an adversarial attack, i.e., an attack on the NN affecting the cyber protection layer (see Fig. 13) and not detected by the embedded system. The traceability implemented with HistoTrust makes it possible to discover the cause.
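The supervision rule just discussed (defer to the operator when no heuristic is confident enough) can be sketched as follows; the 0.71 threshold is simply the example value from the text.

```python
def decision(heuristics, threshold=0.71):
    # Pick the digit with the highest heuristic; if it does not reach the
    # threshold, the decision is the accountability of the human operator.
    best = max(range(10), key=lambda d: heuristics[d])
    mode = "operator" if heuristics[best] < threshold else "automatic"
    return mode, best
```

Note that a confident-but-wrong inference (the 95% "3" above) passes this filter silently, which is precisely why the attestation trail matters for the post-incident audit.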
Conclusion
This paper introduces HistoTrust, a robust scheme using TEE and TPM secure components to trace the behavior of an embedded AI. It begins with the challenge of embedding a trained NN in an ARM Cortex-M4 microcontroller. Next, based on a scheme attesting data to an Ethereum ledger, an embedded design is proposed to secure the NN, ensure its robustness, and enable the explainability of its behavior. Then, several devices, following a distributed architecture, are deployed around a blockchain. The security analysis and the audit process provide verification tools that bring trust and fairness among the stakeholders involved in the use case. In future work, the preservation of data privacy will be deepened, and some cryptographic processing will be ported to the TPM.
Funding This work is a collaborative research action that is partially supported by (CEA-Leti) the European project ECSEL InSecTT 2 and by the French National Research Agency (ANR) in the framework of the Investissements d'avenir program (ANR-10-AIRT-05, irtnanoelec).
Conflict of interest Not applicable
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Unveiling the role of plasticity rules in reservoir computing
Reservoir Computing (RC) is an appealing approach in Machine Learning that combines the high computational capabilities of Recurrent Neural Networks with a fast and easy training method. Likewise, the successful implementation of neuro-inspired plasticity rules into RC artificial networks has boosted the performance of the original models. In this manuscript, we analyze the role that plasticity rules play in the changes that lead to a better performance of RC. To this end, we implement synaptic and non-synaptic plasticity rules in a paradigmatic example of an RC model: the Echo State Network. Testing on nonlinear time series prediction tasks, we show evidence that the improved performance in all plastic models is linked to a decrease of the pair-wise correlations in the reservoir, as well as a significant increase of individual neurons' ability to separate similar inputs in their activity space. Here we provide new insights into this observed improvement through the study of different stages of the plastic learning. From the perspective of the reservoir dynamics, optimal performance is found to occur close to the so-called edge of instability. Our results also show that it is possible to combine different forms of plasticity (namely synaptic and non-synaptic rules) to further improve the performance on prediction tasks, obtaining better results than those achieved with single-plasticity models.
Introduction
From the first bird-inspired "flying machines" of Leonardo da Vinci to the latest advances in artificial photosynthesis, humankind has constantly sought to mimic nature in order to solve complex problems. It is therefore not surprising that the dawn of Machine Learning (ML) and Artificial Neural Networks (ANN) was also characterized by the idea of emulating the functionalities and characteristics of the human brain. Within his book The Organization of Behavior, Donald Hebb proposed in 1949 a neurophysiological model of neuron interactions that attempted to explain the way associative learning takes place [1]. Theorizing on the basis of synaptic plasticity, Hebb suggested that the simultaneous activation of cells would lead to the reinforcement of the involved synapses, a hypothesis often summarized in the today's well-known statement: "neurons that fire together, wire together". Thus, Hebbian theory was swiftly taken by neurophysiologists and early brain modelers as the foundation upon which to build the first working artificial neural network. In 1950, Nat Rochester at the IBM research lab embarked in the project of modeling an artificial cell assembly following Hebb's rules [2]. However, he would soon be discouraged by an obvious flaw in Hebb's initial theory: as connection strength increases with the learning process, neural activity eventually spreads across the whole assembly, saturating the network.
It would not be until 1957 that Frank Rosenblatt, who had previously read The Organization of Behavior and sought a more "model-friendly" version of Hebb's assembly, came up with a solution: the Perceptron, the first example of a Feed Forward Neural Network (FFNN) [3]. Dismissing the idea of a homogeneous mass of cells, Rosenblatt introduced three different types of units within the network, corresponding today to what are usually known as the input, hidden and output layers of a FFNN. Mathematically, the output of the perceptron is computed as $f(\mathbf{x}) = H(\mathbf{w} \cdot \mathbf{x} + b)$, where $H$ is the Heaviside step function, $\mathbf{w} \cdot \mathbf{x}$ is the dot product of the input with the weight vector, and $b$ is a bias term that acts like a moving threshold. In modern FFNNs, the step function is usually substituted by a nonlinearity $\sigma(\mathbf{w} \cdot \mathbf{x} + b)$, which receives the name of activation function. Being computationally more applicable than the original ideas of Hebb, Rosenblatt paved the way that would progressively detach ML from its biological inspiration.
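The two variants of the unit can be written in a few lines; a minimal sketch, with illustrative weights:

```python
import numpy as np

def perceptron(x, w, b):
    # Rosenblatt's perceptron: Heaviside step applied to w.x + b.
    return 1 if np.dot(w, x) + b > 0 else 0

def unit(x, w, b, activation=np.tanh):
    # Modern variant: the step is replaced by a smooth activation function.
    return activation(np.dot(w, x) + b)
```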
Despite the initial excitement, in 1969 Marvin Minsky and Seymour Papert proved that perceptrons could only be trained to recognize linearly separable patterns [4]. The authors already foresaw the need for Multilayer Perceptrons (MLP) to tackle non-linear classification problems, but the lack of suitable learning algorithms led to the first of the AI winters [5], with neural network research stagnating for many years. The thaw would not arrive until 1974 with the advent of today's widely known backpropagation algorithms [6,7]. Understood as a supervised learning method in multilayer networks, backpropagation aims at adjusting the internal weights in each layer to minimize the error or loss function at the output using a gradient-descent approach. Despite their success in tasks as diverse as speech recognition, natural language processing, medical image analysis or board game programs, backpropagation methods lack a corresponding biological representation. Instead, ANNs that aim to resemble the biology behind the operation of the human brain ought to include neurons that send feedback signals to each other. This is the idea behind a Recurrent Neural Network (RNN). Whereas FFNNs are able to approximate a mathematical function, RNNs can approximate dynamical systems, i.e., functions with an added time component, so that the same input can result in a different output at different time steps [8].
It was within this context that two fundamentally new approaches to RNNs appeared independently: the Echo State Network (ESN) [9] and the Liquid State Machine (LSM) [10], both constituting trailblazing models of what is today known as the Reservoir Computing (RC) paradigm. These models are particularly fast and computationally much less expensive, since training happens only at the output layer through the adjustment of the readout weights. Although very flexible, this approach also leaves open the question of how to choose the reservoir connectivity to maximize performance. While most reservoir computing approaches consider a reservoir with fixed internal connection weights, plasticity was rediscovered as an unsupervised, biologically inspired adaptation to implement an adaptive reservoir. It appeared first as a type of Hebbian synaptic plasticity to modify the reservoir weights [11], but soon the ideas of nonsynaptic plasticity that inspired the first Intrinsic Plasticity (IP) rule [12] were also implemented in an Echo State Network [13]. Since then, many different models of plasticity rules have been implemented in RC networks with promising results [14,15,16]. Today, the fact that biologically meaningful learning algorithms have a place in these models, together with recent discoveries suggesting that biological neural networks display RC properties [17,18], makes reservoir computing a field of machine learning in continuous growth. Echo State Networks have been shown to perform successfully in a wide range of tasks, from speech recognition [19], channel equalization [20] and robot control [21] to stock data mining [22]. Here, we will focus on the challenging problem of chaotic time series forecasting. This type of task has been addressed for a large number of different time series [23,24,15,25], and ESNs implementing plasticity rules to improve time series forecasting have been treated before in [15,25,11].
Nevertheless, in this paper we will move away from the pursuit of peak performance, focusing instead on understanding how unsupervised learning through plasticity rules affects the ESN architecture in a way that boosts its performance.
The paper is structured as follows. The Methods section includes the standard definition of the ESN, the models considered for synaptic and intrinsic forms of plasticity, as well as the measures that will be employed for performance characterization. We consider the so-called anti-Hebbian types of learning rules for synaptic plasticity, which in our case means that the activity of neurons at subsequent times tends to become decorrelated. As for intrinsic plasticity, we modify the parameters of the response functions of individual neurons to accommodate a target Gaussian distribution function. In the Results section, we find that the best performance is usually obtained by employing a combination of both synaptic and intrinsic plasticity, thus revealing the emergence of synergistic effects. Finally, we discuss the influence of the plasticity rules on the dynamical response of the individual neurons as well as on the global activity of the reservoir.
The ESN model: architecture, training and testing.
The basic architecture of an ESN model consists of three layers: an input layer, a hidden layer or reservoir, and an output layer. Fig. 1 illustrates the ESN architecture, where we have already particularized the more general concept to two input units (one feeding a point of the series at each discrete time step and a second one acting as a bias) and one output neuron.
In Fig. 1, points $u(t) \in \mathbb{R}$ of the temporal series are fed as input after being multiplied by a weight matrix $W^{in} \in \mathbb{R}^{N \times 2}$. The internal connections between neurons in the reservoir are defined by $W \in \mathbb{R}^{N \times N}$, where $N$ is the number of neurons in the reservoir. The states of the neurons in the reservoir produce the final output after multiplication by an output weight matrix $W^{out} \in \mathbb{R}^{1 \times (1+N_u+N_x)}$. Thus, the network dynamics for the reservoir and readout states are given by

$$x(t+1) = \tanh\!\left( W x(t) + \varepsilon \, W^{in} [1; u(t+1)] \right),$$
$$y(t) = W^{out} [1; u(t); x(t)],$$

where $\varepsilon$ is the input scaling, and $W^{in}$ and $W$ are often randomly initialized. Here we chose the hyperbolic tangent as our activation function, but it could in general be any nonlinear function. Using a supervised learning scheme, the goal is to generate an output $y(t) \in \mathbb{R}$ that not only matches as closely as possible the desired target $\hat{y}(t) \in \mathbb{R}$ but can also generalize to unseen data. Because large output weights are commonly associated with overfitting of the training data [26], it is common practice to keep their values low by adding a regularization term to the error in the target reconstruction. Although several regularization methods have been proposed [9,27,28], here we use the Ridge regression method, for which the error is defined as

$$E = \frac{1}{T} \sum_{t=1}^{T} \left\| y(t) - \hat{y}(t) \right\|^2 + \beta \left\| W^{out} \right\|^2,$$

where $\|\cdot\|$ stands for the Euclidean norm, $\beta$ is the regularization coefficient, and $T$ is the total number of points in the training set. Notice that choosing $\beta = 0$ removes the regularization, turning the ridge regression into a generalized linear regression problem. After training, the expression for the optimal readout weights, obtained by minimizing the above error, is

$$W^{out} = \hat{Y} X^{T} \left( X X^{T} + \beta I \right)^{-1},$$

where $I$ is the identity matrix, $\hat{Y} \in \mathbb{R}^{1 \times T}$ contains all output targets $\hat{y}(t)$, and $X \in \mathbb{R}^{(1+N_u+N_x) \times T}$ consists of all concatenated vectors $[1; u(t); x(t)]$. It is worth noticing that the standard training in ESNs focuses on the optimization of the readout weights, $W^{out}$, but does not modify the initial reservoir, which is usually considered randomly connected.
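The update and training equations above condense into a short NumPy sketch. The reservoir size, toy sine input, spectral-radius rescaling, and hyperparameter values below are illustrative choices, not the settings used in this work.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, beta, eps = 50, 200, 1e-6, 0.5       # reservoir size, series length, ridge coeff, input scaling

W_in = rng.uniform(-1, 1, (N, 2))          # input weights for [1; u(t)]
W = rng.uniform(-0.5, 0.5, (N, N))         # random reservoir connections
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # keep the spectral radius below 1

u = np.sin(0.3 * np.arange(T + 1))         # toy input series
target = u[1:]                             # one-step-ahead prediction target

# Reservoir dynamics: x(t+1) = tanh(W x(t) + eps * W_in [1; u(t)])
x = np.zeros(N)
X = np.zeros((1 + 1 + N, T))               # columns are the concatenated [1; u(t); x(t)]
for t in range(T):
    x = np.tanh(W @ x + eps * (W_in @ np.array([1.0, u[t]])))
    X[:, t] = np.concatenate(([1.0, u[t]], x))

# Ridge-regression readout: W_out = Y X^T (X X^T + beta I)^(-1)
Y = target.reshape(1, -1)
W_out = Y @ X.T @ np.linalg.inv(X @ X.T + beta * np.eye(X.shape[0]))
mse = np.mean((W_out @ X - Y) ** 2)
```

Only `W_out` is trained; `W_in` and `W` remain as randomly initialized, which is exactly the property that plasticity rules later relax.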
A natural step forward is then to optimize the weights of the reservoir connections, $W$, or the excitability of the neurons according to the inputs reaching the reservoir, that is, to introduce rules of neuronal plasticity.
Neuronal plasticity: biological and artificial implementations.
The term plasticity has been used in brain science for well over a century to refer to the suspected changes in neural organization that may account for various forms of behavioral change, either short- or long-lasting [29]. From a biological point of view, mechanisms of plasticity in the brain can be grouped into two large categories: synaptic and nonsynaptic. Synaptic plasticity deals directly with the strength of the connection between neurons, which is linked to the amount of neurotransmitter released from the presynaptic neuron and the response generated in the postsynaptic channels [30,31,32]. Nonsynaptic plasticity, instead, involves modification of the intrinsic excitability of the neuron itself, operating through structural changes that usually affect voltage-dependent membrane conductances in the axon, dendrites or soma [33,34].
Likewise, but now from the perspective of RC, plasticity rules aim to modify either the weights of the connections (synaptic plasticity) or the excitability of the reservoir units (nonsynaptic plasticity) based on the activity stimulated by the input. In this manner, the information carried by the input signal is partly embedded in the reservoir. Although non-Hebbian forms of synaptic plasticity have been found empirically [35,36], most rules modifying the synaptic strength among neurons fall into the category of Hebbian learning. The Hebbian rule, as originally proposed by Hebb [1], can be described mathematically as a change in the synaptic strength between two neurons that is proportional to the product of the pre- and post-synaptic activities at time t:

Δw_kj(t) = η x_k(t+1) x_j(t),   (6)

where w_kj is the weight of a synapse connecting neurons k and j -with j triggering the activity of k-; x_j(t) and x_k(t+1) represent the activity of the pre- and post-synaptic neurons, η is a parameter accounting for the learning rate, and all weights in the reservoir are updated in parallel at each discrete time step. Notice that we refer to matrices using capital letters and denote with small letters their elements. The growth of the weights in the direction of the correlations between pre- and post-synaptic units has an obvious flaw: as the connections get stronger following Hebb's postulate, activity will eventually spread and increase uncontrollably throughout the network. To avoid this, one possibility is to normalize the weights arriving at each post-synaptic neuron k, so that √(Σ_j w_kj²) = 1. We can then rewrite the update rule in Eq. 6 as:

w_kj(t+1) = (w_kj(t) + η x_k(t+1) x_j(t)) / √(Σ_m (w_km(t) + η x_k(t+1) x_m(t))²).   (7)

Note that Eq. 7 is non-local (NL), meaning that a modification in a given weight w_kj also depends on other neurons in addition to the connected neurons k and j. Finally, assuming a small learning rate and linear activation functions in the absence of external inputs, Oja derived a local approximation to Eq.
7, known today as Oja's rule [37]:

Δw_kj(t) = η x_k(t+1) (x_j(t) − x_k(t+1) w_kj(t)).   (8)

It has been suggested that a change in the sign of Hebbian plasticity rules may be advantageous in making effective use of the dynamic range of cortical neurons [38], while also promoting decorrelation between the activity induced by different inputs. Therefore, in this paper, we will work with such so-called anti-Hebbian learning rules, which are obtained simply by changing the sign of the weight update in Eqs. 6, 7, and 8. The precise form of the anti-Hebbian learning rules used here and a complete derivation of the anti-Oja rule can be found in App. B. For the sake of clarity we stress that, from a practical point of view, the synaptic strengths updated with the plastic rules correspond to the reservoir weights of our ESN models. Although there are examples of the anti-Oja rule applied to ESNs with nonlinear activation functions [11,15], Eq. 8 is strictly valid only when the state of the post-synaptic neuron is a linear combination of the pre-synaptic states in the form x_k(t+1) = Σ_j w_kj x_j(t), which is no longer true in the presence of nonlinear neurons. In order to evaluate the influence of the local approximation derived by Oja, we will compare the performance obtained by using Eq. 7 (with the minus sign, see Eq. 16) and the one obtained by using Eq. 8 (with the minus sign, see Eq. 17). We will show in the Results section that the NL anti-Hebbian rule outperforms the anti-Oja rule in chaotic time series prediction tasks.
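The two sign-reversed synaptic updates can be sketched as follows. This is a hedged NumPy illustration, not the authors' implementation; the convention that the incoming weights of a post-synaptic neuron are stored in the rows of W is our own assumption.

```python
import numpy as np

def anti_oja_step(W, x_pre, x_post, eta=1e-4):
    """Anti-Oja update (Oja's rule with reversed sign):
    w_kj <- w_kj - eta * x_post[k] * (x_pre[j] - x_post[k] * w_kj)."""
    return W - eta * (np.outer(x_post, x_pre) - (x_post ** 2)[:, None] * W)

def nl_anti_hebbian_step(W, x_pre, x_post, eta=1e-4):
    """Non-local anti-Hebbian update: a plain anti-Hebbian step followed by
    normalizing the incoming weights of every post-synaptic neuron (rows of W)."""
    W = W - eta * np.outer(x_post, x_pre)
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    return W / np.where(norms > 0, norms, 1.0)
```

After the non-local step every row of W has unit Euclidean norm, which is the normalization constraint that makes the rule non-local.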
We now consider intrinsic plasticity (IP), which adjusts the neurons' internal excitability instead of the individual synapses. Based on the idea that every single neuron intends to maximize its information transmission while minimizing its energy consumption, Jochen Triesch proposed a mathematical learning rule that leads to maximum entropy distributions for the neuron outputs with certain fixed moments [12]. Although the original derivation of Triesch applied to Fermi activation functions and exponential desired distributions, Schrauwen et al. [13] soon extended the rule to account for neurons with hyperbolic tangent functions. In this case, each neuron updates its state through the following expression:

x_k(t+1) = tanh(a_k z_k(t) + b_k),

where a_k and b_k are the gain and bias of the post-synaptic neuron, and z_k(t) is the total input arriving at neuron k. The minimization of the Kullback-Leibler divergence with respect to a desired Gaussian output distribution with a given mean and variance leads to the following online learning rules for the gain and bias:

Δb_k = −η_ip [ −μ/σ² + (x_k(t+1)/σ²) (2σ² + 1 − x_k(t+1)² + μ x_k(t+1)) ],
Δa_k = η_ip/a_k + Δb_k z_k(t),

where η_ip is the learning rate, and μ and σ are the mean and standard deviation of the targeted distribution, respectively. Finally, we will also consider the combination of two of the above rules -the NL anti-Hebbian and IP algorithms- to assess the performance of an ESN when these two types of plasticity act in a synergistic manner. For this combination, there are three natural ways in which the training can be carried out: i) applying both rules simultaneously to update the intrinsic parameters and connection weights after each input; ii) modifying first the connections through the synaptic plasticity and then applying the IP rule; or iii) conversely, changing first the intrinsic plasticity of the neurons and then the synaptic strengths among them.
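A single IP step for one tanh neuron can be sketched as below. This follows the gradient rule in the form popularized by Schrauwen et al. for tanh units; the function name and default parameters are our own choices, not the authors' code.

```python
import numpy as np

def ip_step(a, b, net, eta=1e-3, mu=0.0, sigma=0.5):
    """One intrinsic-plasticity step for a tanh neuron y = tanh(a*net + b),
    pushing its output distribution toward a Gaussian N(mu, sigma^2)."""
    y = np.tanh(a * net + b)
    db = -eta * (-mu / sigma ** 2
                 + (y / sigma ** 2) * (2 * sigma ** 2 + 1 - y ** 2 + mu * y))
    da = eta / a + db * net            # gain update couples to the arriving input
    return a + da, b + db
```

Iterating this update over a stream of inputs drives the empirical standard deviation of the neuron's output toward the target sigma, shrinking the gain when the output saturates too often.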
From all the alternatives, the application of the NL anti-Hebbian rule through the whole training set followed by the application of the IP rule through the same training set yielded the best performance, and is therefore used in the forthcoming results section. Computational models combining the effect of synaptic and non-synaptic plasticity have been previously suggested in the literature for simple model neurons [39], FFNNs [40] and RNNs [40,41,42]. However, we find that a simple combination of two standard plasticity rules can ease the tractability of the results, while allowing fairer comparisons against the other plasticity models.
Prediction of a chaotic time series.
The task at hand consists of predicting the points continuing a Mackey-Glass series, a classical benchmark dataset generated from a time-delay differential equation (see App. A for details on the generation of the dataset).
Since this series exhibits chaotic behavior when the time delay τ > 16.8, we construct two different sets: one with τ = 17 (MG-17), often used as an example of a mildly chaotic series; and a second one with τ = 30 (MG-30) that presents stronger chaotic behavior. To assess its performance, we initially feed the ESN with the last input of the training set, u(T), and then run the network for a number of steps using the predicted output at time t as the next input at time t+1 (i.e. u(t+1) = y(t)). In this manner, the testing phase is done in the so-called autonomous or generative mode with output feedback. To quantify the error for this task, we use two different quantities:
• The root mean square error (RMSE) over the predicted continuation of the series.
• The furthest predicted point (FPP): this is the furthest point up to which the trained ESN is able to continue the series without significantly deviating from the original one. The tolerance for significant deviation is taken as ε = 0.02, which represents approximately 2% of the maximum distance between any two points in the original MG-17 and MG-30 series.
Memory capacity task.
The task of memory capacity (MC) is based on the network's ability to retrieve past information from the reservoir using linear combinations of reservoir unit activations. To assess the ability of each ESN model to restore previous inputs fed into the network, we compute the (short-term) MC as introduced by Jaeger in [43]:

MC = Σ_k MC_k,  with  MC_k = Cov²(u(t−k), y_k(t)) / (Var(u(t)) Var(y_k(t))),

where Cov and Var denote covariance and variance, respectively. In the above expression, u(t−k) is the input presented k steps before the current input u(t), and y_k(t) is its reconstruction at the output unit with trained output weights. A value MC_k ∼ 1 means that the system is able to accurately reconstruct the input fed to the network k steps ago. Thus, the sum of all MC_k represents an estimation of the number of past inputs the ESN is able to recall. Although the sum runs to infinity in the original definition -accounting for the complete past of the input- in practice the data fed is finite, and it suffices to set k_max = N_out, with N_out being the number of output units of the ESN. Each of the output units is independently trained to approximate past inputs with a different value of k. A theoretical limit for the memory capacity was derived in [43] to be MC ≈ N − 1, with N the number of reservoir neurons.
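Jaeger's short-term memory capacity can be sketched as a loop of independent ridge regressions, one per delay k. This is an illustrative NumPy version under our own conventions (feature rows as input, squared Pearson correlation as MC_k), not the authors' code.

```python
import numpy as np

def memory_capacity(X, u, k_max, beta=1e-7):
    """Sum over k of the squared correlation between the delayed input u(t-k)
    and its ridge-trained linear reconstruction from the feature rows X[t]."""
    mc = 0.0
    for k in range(1, k_max + 1):
        Xi, target = X[k:], u[:-k]                  # pair the state at t with u(t-k)
        A = Xi.T @ Xi + beta * np.eye(Xi.shape[1])
        w = np.linalg.solve(A, Xi.T @ target)
        c = np.corrcoef(Xi @ w, target)[0, 1]
        mc += c * c
    return mc
```

An idealized "reservoir" that simply stores the last few inputs makes the meaning of the score concrete: delays present in the features are recalled with MC_k ≈ 1, the rest contribute essentially nothing.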
Hyper-parameter optimization.
One of the biggest drawbacks of Echo State Networks is their high sensitivity to the choice of hyper-parameters (see [26] for a detailed review of their effects on network performance). In this work, we focus on tuning four hyper-parameters to improve the performance of each ESN model: the reservoir size or number of neurons in the reservoir N, the input scaling γ, the spectral radius ρ of the reservoir's weight matrix (i.e. the maximum absolute eigenvalue of W), and the regularization parameter β in the ridge regression. Weights in the reservoir and input layers are initialized randomly according to a uniform distribution between -1 and 1. Sparseness in the reservoir matrix is set to 90%, meaning that only 10% of all connections initially have a non-zero value. When incorporating plasticity rules, an extra tunable hyper-parameter η describing the learning rate in the update rules is included. When IP is implemented, we find that the best results are obtained when using 0 and 0.5 as the mean and variance of the targeted distribution for the neuron states. For the sake of comparison between different ESN models, we initially choose a common non-optimal, but generally well-performing set of hyper-parameters {ρ = 0.95, γ = 1, η = 10⁻⁷} for all of them, with N = 300, β = 10⁻⁶ for the MG-17 series prediction and N = 600, β = 10⁻⁷ for the MG-30.
Performance in prediction tasks.
In order to compare the influence of the different plasticity rules, we first estimate the number of plasticity training epochs that optimizes the performance of each model. We note that the neuronal plasticity rules are only active during the unsupervised learning procedure but not during the prediction, as detailed in App. A. The unsupervised learning can last for several epochs, with each epoch containing T = 4000 points of the time series for the MG-17 task. Once the plastic unsupervised learning has finished, W_out is computed in a supervised fashion after letting the reservoir evolve for an additional T = 4000 steps. Fig. 2 shows the evolution of the RMSE and FPP for the non-local anti-Hebbian and IP rules. In this figure, the optimal number of epochs can be easily found as the point at which the RMSE (FPP) presents a global minimum (maximum). As we can see, the performance worsens as the ESNs with plasticity are over-trained. We will focus on understanding the role of plasticity rules in Sec.
3.4.
In Table 1 we show the results obtained for the anti-Oja, NL anti-Hebbian and IP rules when each model is trained optimally (i.e. for the optimal number of epochs). The optimal numbers of epochs for the anti-Oja, NL anti-Hebbian, and IP rules are 10, 8, and 100, respectively. For the sake of comparison, we also include in Table 1 the results for a non-plastic ESN with the hyper-parameters mentioned at the end of Sec. 3.1. It can be observed that the implementation of plasticity rules reduces the average prediction error and its uncertainty, especially for the highly chaotic series MG-30, while keeping consecutively predicted points close to the original test set for longer. When comparing the different plastic rules in Table 1, we find that the NL anti-Hebbian and IP rules yield a better prediction than the anti-Oja one. In addition, we find that the combination of NL anti-Hebbian and IP reaches the lowest RMSE and largest FPP, thus providing "better and further" predictions.
Performance in memory capacity task.
We construct single-input-node ESNs with N = 150 reservoir neurons and N_out = 300 output nodes, such that MC_k is computed up to a delay k_max = N_out. For this task, we feed the network with a random time series of T = 4000 points, drawn from a uniform probability distribution in the interval [-1,1]. Fig. 3 shows the memory curves for an ESN before and after implementation of the different plasticity rules. Again, we notice how the models with plasticity outperform the original non-plastic ESN, with the memory decaying faster in the latter case. In Table 2, we present the estimated MC computed for the plastic and non-plastic versions of the ESN. Here, we find that the IP rule and the combination of NL anti-Hebbian and IP yield the largest memory capacities. These results are in agreement with the average values presented in [44], where the maximum memory was observed at the edge of stability for a random recurrent neural network.
In the next section, we are going to explore in more detail the properties of the plastic ESNs.
Influence of plasticity rules on the reservoir dynamics.
To analyze the effects of plasticity on ESN performance, we focus now on the MG-17 prediction task, turning our attention to the dependence of the performance on the number of training epochs. As mentioned above, Fig. 2 shows that the measures of performance exhibit absolute extrema (a minimum of the error, a maximum of the number of predicted points), which are followed by a worsening of the predictions as the number of epochs increases. In order to understand this behavior, we studied quantities related to the reservoir dynamics as the plasticity training advanced. In Fig. 4, we show the average absolute Pearson correlation coefficient among reservoir states at consecutive times, as defined in App. C. In addition, and for the case of synaptic plasticity only, we present the spectral radius of the reservoir matrix (which does not change under the IP rule) as the non-supervised plasticity training evolves.
Focusing first on the NL anti-Hebbian rule, we observe in Fig. 2a) that the prediction error increases significantly beyond 10 training epochs. This fact could be attributed in the first place to the associated increase of the reservoir matrix spectral radius, as seen in Fig. 4a). A maximum absolute eigenvalue exceeding unity has often been regarded as a source of instability in ESNs due to the loss of the "echo state property", a mathematical condition ensuring that the effect of the initial conditions dies out asymptotically with time [9,45,26]. Nevertheless, subsequent studies proved that the echo state property can actually be maintained above a unitary spectral radius, depending on the input fed to the reservoir [46,47], which could be the reason why we find optimal performance slightly above ρ = 1.
The results presented here seem to agree with those presented in [44], where it was suggested that information transfer and storage in ESNs are maximized at the edge between a stable and an unstable (chaotic) dynamical regime. In our case the ESN becomes unstable (periodic) for ρ ∼ 1.2, and we find an associated decrease in the memory capacity of the ESN. Chaotic dynamics inside the reservoir are not observed in our numerical simulations.
Additionally, we find that the increase in the prediction error coincides with a sharp decrease in the consecutive-time pair-wise absolute correlations, as shown in Figs. 2 and 4. This decrease in the correlations, which was to be expected for any anti-Hebbian type of rule -by its very definition- also occurs along the training of the IP rule. This remarkable common trend hints at the possibility that, to some extent, decorrelation inside the reservoir could indeed enhance the network's computational capability. However, over-training of the plasticity rules yields an error increase. To further evaluate the effects of plasticity on the dynamics of the reservoir, we also analyzed the distribution of the reservoir states x(t) before and after implementation of each rule. Keeping the values of all the states at each input of the training sequence, and then averaging the resulting matrix over 20 realizations with different training sets, an "average" histogram describing the distribution of the states is presented in Fig. 5. It can be observed that the application of the plasticity rules changes the distribution of the reservoir states from a rather uniform shape to a unimodal one. The initial uniform shape is given by our choice of uniform input weights and the given input scaling. As expected from the mathematical formulation of the IP rule, the distribution of the states after its implementation approaches that of a Gaussian centered around zero. Remarkably, the application of synaptic plasticity also shapes the initial distribution into a unimodal one peaking around zero. The observed distribution of reservoir states after the application of plasticity rules entails that the individual reservoir neurons tend to avoid operating in the saturation regime of the tanh non-linearity.
Influence of plasticity rules on the neuron dynamics.
So far, we have focused on understanding the effects of plasticity at the network level, but nothing has been said about the way each individual neuron "sees" or "reacts" to the input after implementation of the plastic rules. To shed some light on this question, we define the effective input ũ_k(t) of a neuron k at time t as the sum of the input and bias unit once filtered through the input mask, ũ_k(t) = w^in_{k,0} + w^in_{k,1} u(t). In Fig. 6, we plot the response of 4 different neurons to this effective input before (blue dots) and after (red and yellow dots) the implementation of the non-local anti-Hebbian and IP rules.

Figure 6: Activity of 4 different neurons as a function of the effective input in a non-plastic ESN (blue) and in the same reservoir after training it with the NL anti-Hebbian rule for 8 epochs (red) and the IP rule for 100 epochs (yellow). On the right side we zoom in on one of the neurons, plotting also the evolution of the effective input over a section of the training. We highlight in green the range of inputs for which the activity broadens most notably with respect to the non-plastic case, coinciding with one of the most variable parts of the input.

On the right side we have zoomed in on one of these neurons and also plotted 1000 points of the effective input ũ_k(t) that arrives to it. It can be clearly seen that plasticity has the effect of widening the activity range of the neurons, especially in those areas -highlighted in green- in which the same point may lead to very different continuations of the series depending on its past. To quantify this widening, we measured the average area of the reservoir neurons' activity phase space before and after implementation of the plasticity rules. The results for these average phase-space areas, presented in Table 3, back up the aforementioned expansion, which is especially significant in the case of the IP rule.
Note from Eq. 14 that if a neuron is mainly influenced by the external input at each time t, then x_k(t+1) ≈ tanh(ũ_k(t)) and the corresponding states are distributed in a narrow region around the hyperbolic tangent curve. This is what we see in Fig. 6 for the non-plastic case. Conversely, a broadened phase space (as found after plasticity implementation) suggests a greater role of the interactions among past values of the reservoir units in determining the neuron state. In the case of neurons where IP was implemented, a further displacement of their activity towards the center of the activation function is observed, which should come as no surprise since we chose a zero-mean Gaussian as our IP target distribution. The fact that mechanisms apparently so disparate exhibit similar effects at the neuron and network level motivates the idea of synergistic learning involving both synaptic and non-synaptic plasticity, which has also been extensively backed up in biological systems [48,49].
To finalize, we apply the same neuron-level framework to see whether we can understand the effects of over-trained plasticity. It was shown in Fig. 2 that once a certain number of epochs was exceeded, the prediction error increased, and

Table 3: Average neuron phase-space area for plastic and non-plastic implementations of a 300-neuron ESN. The error is given as the standard deviation over all neurons.
Non-Plastic: 0.02 ± 0.02 | Anti-Oja: 0.03 ± 0.02 | NL anti-Hebb: 0.04 ± 0.03 | IP: 0.07 ± 0.07 | NL anti-Hebb + IP: 0.07 ± 0.07

that this co-occurred with a sharp increase in the spectral radius of the weight matrix for the NL anti-Hebbian rule (as shown in Fig. 4). Is this observed transition from stable to unstable (periodic) dynamics reflected in any way in the activity of the neurons? Choosing the same initial ESN and training set as in Fig. 6, we now apply either the NL anti-Hebbian or the IP rule for a total of 25 and 175 epochs, respectively. From the resulting plot of the activity as a function of the effective input, shown in Fig. 7, two different paths leading to the reported worsening of the prediction performance can be identified. On the one hand, the IP rule leads to a seemingly blurred phase-space representation at each unit of the reservoir, in which each effective input value leads to a very spread network activity. We have identified that in this regime some neurons of the ESN lose their consistency (an important property that needs to be fulfilled in RC, as discussed in [50,51]), producing different responses to the same input when the initial conditions are changed. On the other hand, an excess of NL anti-Hebbian training produces a split of the original phase-space representation into two disjoint regions. We observed that the instability in this case is associated with self-sustained periodic dynamics of the reservoir states, leading to consecutive jumps from one phase-space region to the other. We have noticed that this transition, which results from the imposed decorrelation, is also followed by a decrease in the memory capacity of the network.
Discussion and Outlook
We showed that numerical implementation of plasticity rules can increase both the prediction capabilities on temporal tasks and the memory capacity of reservoir computers. In the case of Hebbian-type synaptic plasticity, we proved that a non-local anti-Hebbian rule outperforms the classically used anti-Oja approximation. We also found that the synergistic action of synaptic (non-local anti-Hebbian) and non-synaptic (intrinsic plasticity) rules leads to the best results.
At the network level, we analyzed different quantities that are modified by the plasticity rules. For the non-local anti-Hebbian rule, we showed in Fig. 4 how the sudden increase in the reservoir weight matrix spectral radius co-occurs with a sharp drop in the state correlations at consecutive times. More concretely, we observed that the optimal number of epochs occurred just before the transition to periodic self-sustained dynamics inside the reservoir. Similarly, continuous application of the IP rule also tends to decorrelate the states of the neurons within the reservoir. Over-training of the IP rule eventually results in a loss of consistency of the neural responses and a corresponding performance degradation.
From the distributions of the states depicted in Fig. 5, we found that both types of plasticity lead to unimodal distributions of states centered around zero. This seems to imply that, for this type of temporal task, optimal performance is achieved when most neurons distribute along the hyperbolic tangent while avoiding the saturation regions. Indeed, similar results observed both in vivo, in large monopolar cells of the fly [52], and in artificial single-neuron models [53], suggest that this form of state distribution helps to achieve optimal encoding of the inputs.
Interesting results also emerged from the one-to-one comparison of individual neurons before and after the implementation of plasticity rules. At the neuron level, we saw how plasticity rules expand each neuron's activity space -measured in terms of its area- adapting to the properties of the input and thus possibly enhancing the computational capability of the whole network. Within this same framework, we observed that the regime of performance degradation found when over-training the plastic parameters is of a different nature for the synaptic and non-synaptic rules. In the synaptic case, the phase-space region occupied by the activity of each single neuron splits into two disjoint regions, with the state jumping from one region to the other at consecutive time steps. Under the IP rule, on the other hand, we found that decorrelation of the states and expansion of their phase space continues progressively, with different inputs eventually leading to similarly broad projections of the reservoir states in the activity phase space.
Our findings also raise interesting questions that will hopefully stimulate future work. On the one hand, we observe in Fig. 6 that the resulting phase space after implementation of synaptic and non-synaptic plasticity is qualitatively similar for three out of the four neurons presented, while differing considerably from the non-plastic units. This result is fairly surprising given that the synaptic and non-synaptic rules employed have very little in common from an algorithmic point of view. Nevertheless, they drive the network toward similar optimal states. On the other hand, the instability arising from over-training of the plastic connections or intrinsic parameters proves to be of a fundamentally different nature for the NL anti-Hebbian and IP rules. A thorough characterization of these transitions and a deeper understanding of the underlying similarities between synaptic and non-synaptic plasticity rules will likely trigger interesting research avenues.
The computational paradigm of reservoir computing has been shown to be compatible with the implementation constraints of hardware systems [54,55]. The finding that a physical substrate with non-optimized conditions can be used for computation has been exploited in the context of electronic and photonic implementations of reservoir computing [56,57]. Although the physical implementation of plasticity rules is certainly challenging, the results presented in this manuscript anticipate a potential advantage of considering such plasticity rules also in physical systems.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
where we are already using the minus sign to account for the anti-Hebbian behavior. Notice that this form of the rule does not assume any particular form of the activation function x_k(t) = f(z⃗_k). This update rule can then be approximated by expanding the above expression in powers of η. Imposing normalization of the incoming weights then leads to the local form of the update, with √(Σ_j w_kj(t)²) = 1.
Finally, assuming linear activation functions and no external input, so that x_k(t+1) = Σ_j w_kj(t) x_j(t), we obtain the widely known anti-Oja rule:

Δw_kj(t) = −η x_k(t+1) (x_j(t) − x_k(t+1) w_kj(t)).   (17)

The more adequate use of Eq. 16 comes, of course, at a substantial computational cost compared to Eq. 17, but it is still feasible for the reservoir sizes we considered.
C. Measures of reservoir dynamics during plasticity training.
To evaluate the decorrelation between pre- and post-synaptic reservoir states, we employed the Pearson correlation coefficient between the activity of unit i at time t and that of unit j at time t+1,

r(x_i(t), x_j(t+1)) = Cov(x_i(t), x_j(t+1)) / (σ_{x_i} σ_{x_j}).

After each epoch of the plasticity training, the mean absolute correlation was computed as:

C = (1/(M N²)) Σ_{m=1}^{M} Σ_{i,j=1}^{N} | r(x_i(t), x_j(t+1)) |,   (19)

where N denotes the size of the reservoir and M the number of independent realizations over which the results were averaged.
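The consecutive-time correlation measure can be sketched compactly for a single realization. This is an illustrative NumPy version under our own array conventions (states stored as a T×N matrix), not the authors' code.

```python
import numpy as np

def mean_consecutive_correlation(states):
    """Average absolute Pearson correlation between x_i(t) and x_j(t+1) over all
    ordered unit pairs (i, j); `states` has shape (T, N), row t = states at time t."""
    a = states[:-1]                                  # x(t)
    b = states[1:]                                   # x(t+1)
    a = (a - a.mean(0)) / a.std(0)                   # standardize each unit
    b = (b - b.mean(0)) / b.std(0)
    corr = (a.T @ b) / len(a)                        # corr[i, j] = r(x_i(t), x_j(t+1))
    return np.abs(corr).mean()
```

Perfectly linearly related unit trajectories give a value of 1, while independent random states give a value near zero, which is the direction the anti-Hebbian and IP rules push the reservoir.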
Guillermo B. Morales is currently working toward his PhD at the University of Granada. Previously, he received an MSc degree in complex systems from the University of the Balearic Islands. His main research interests cover topics in neural network dynamics and epidemic spreading from a complex-systems perspective.
Claudio R. Mirasso is Full Professor at the Physics Department of the Universitat de les Illes Balears and a member of IFISC. He has co-authored over 160 publications included in the SCI, with more than 7500 citations. He was coordinator (and principal investigator) of the OCCULT project (IST-2000-29683) and the PHOCUS project (IST-2010-240763), and principal investigator of other national and European projects. His research interests include information processing in complex systems, synchronization, fundamentals and
"year": 2021,
"sha1": "056513abd12271bd6f1900e8bda974a2d5e62fb9",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.neucom.2020.05.127",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "895a30adb178597016b9692663dab142d02305df",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Physics",
"Computer Science"
]
} |
Pseudo-fermions in an electronic loss-gain circuit
In some recent papers a loss-gain electronic circuit has been introduced and analyzed within the context of PT-quantum mechanics. In this paper we show that this circuit can be analyzed using the formalism of the so-called pseudo-fermions. In particular we discuss the time behavior of the circuit, and we construct two biorthogonal bases associated to the Liouville matrix $\mathcal{L}$ used in the treatment of the dynamics. We relate these bases to $\mathcal{L}$ and $\mathcal{L}^\dagger$, and we also show that a self-adjoint Liouville-like operator can be introduced. Finally, we describe the time evolution of the circuit in a {\em Heisenberg-like} representation, driven by a non-self-adjoint hamiltonian.
I Introduction
In some recent papers, [1,2], one of us (FB) introduced the notion of pseudo-fermions, (PFs), arising from a deformed version of the canonical anti-commutation relations (CAR). These PFs have been shown to be quite useful, mainly in connection with some specific quantum mechanical systems, [2]. Moreover, PFs are intrinsically related to a very nice functional structure, so that they appear also mathematically appealing.
Here we show how the same algebraic construction proposed for PFs can be useful also in the analysis of a completely different, classical, system: an electronic circuit introduced in a series of recent papers, [3,4,5], in connection with PT-quantum mechanics. In particular, by adopting our strategy, biorthogonal bases of the Hilbert space where the system lives are generated, bases which are therefore, somehow, attached to the circuit. Also, intertwining operators can be defined, and two equivalent circuits, corresponding to the adjoint of the Liouvillian and to a third, similar, self-adjoint operator, can also be defined.
The paper is organized as follows: in the next section we introduce the electronic circuit and we derive the differential equations of motion. We also list some results on PFs. In Section III we apply the pseudo-fermionic structure to the analysis of the dynamical behavior of the circuit, adopting both the Schrödinger and an Heisenberg-like representation. We also consider other circuits which arise, because of the existence of similarity transformations, starting from the original Liouvillian. Section IV contains our conclusions, while a different approach to the dynamical behavior of the circuit is sketched in the Appendix.
II Stating the problem and first considerations
In [3,4,5] the authors, with the aim of discussing a suitable interplay between loss and gain in a two-component circuit, introduced a very simple model, see Figure 1, consisting of two different parts, interacting via a mutual inductance. The physical interest of this circuit is that it provides a concrete system which, apparently, seems to produce an arbitrarily fast dynamics. The reason is that the time evolution is not unitarily implemented, but is tuned by a suitably chosen non-hermitian hamiltonian.
Calling V_j(t) and I_j(t), j = 1, 2, the potential and the current for the j-th component of the circuit, the following equations are easily deduced: assuming that µ ≠ ±1, the following equations are deduced for V_1(τ) and V_2(τ): Here the prime denotes the derivative with respect to τ, which is clearly proportional to the ordinary time derivative. We will see in the Appendix that these equations can be rewritten as two uncoupled, fourth-order, differential equations. Here we are more interested in considering them from a different point of view. For that, we introduce the vector Ψ(τ) and the matrix L as follows: Then it is clear that (2.2) can be rewritten as Ψ′(τ) = L Ψ(τ), see (2.4), which could also be written as iΨ′ = H_eff Ψ, simply by introducing the 4 × 4 matrix H_eff = iL, [4]. This can be seen as a Schrödinger-like equation, with H_eff manifestly not self-adjoint. However, it should be stressed that the situation is not really so simple, since the four components of the vector Ψ(τ), contrary to what happens in a general quantum mechanical system, are related among themselves: the third component, V′_1(τ), is in fact the τ-derivative of the first one. It might be interesting to notice that going from (2.2) to (2.4) is nothing but doubling the number of variables to rewrite a second-order differential equation as a set of first-order differential equations, which is a standard procedure in the mathematical literature.
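The doubling-of-variables step can be sketched numerically. The matrix M below is purely illustrative (the actual circuit matrix depends on µ and the component values, which are not reproduced in this text); only the block structure of L is the point here:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative coupling matrix for a second-order system V'' = M V;
# the real entries depend on mu and the circuit parameters.
M = np.array([[-2.0, 0.5],
              [0.5, -2.0]])

# Doubling of variables: Psi = (V1, V2, V1', V2') obeys Psi' = L Psi
L = np.block([[np.zeros((2, 2)), np.eye(2)],
              [M, np.zeros((2, 2))]])

psi0 = np.array([1.0, 0.0, 0.0, 0.0])
sol = solve_ivp(lambda t, y: L @ y, (0.0, 5.0), psi0, rtol=1e-8)
psi = sol.y[:, -1]
```

By construction the third component of Ψ is the τ-derivative of the first, which is precisely the internal constraint among the components mentioned above.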
The analysis of the circuit in Figure 1 was used in [4] as a prototype model which, as the authors suggest, bypasses the lower bound imposed by the bandwidth theorem. This is not our main interest here: in fact, we are more interested in showing that PFs can be useful in the general treatment of equation (2.4), a treatment which will naturally produce, as we will show, more equivalent circuits.
Before beginning our analysis, we need to recall a few useful and interesting facts on PFs.
II.1 The pseudo-fermionic structure
We limit our analysis of PFs to one and two dimensions. The extension to higher dimensions is straightforward, and it will not be given here, since it will not be useful for us. We begin with d = 1. The starting point is a modification of the CAR {c, c†} = c c† + c† c = 𝟙, {c, c} = {c†, c†} = 0, between two operators, c and c†, acting on a two-dimensional Hilbert space H. The CAR are replaced here by the following rules: {a, b} = 𝟙, {a, a} = 0, {b, b} = 0, (2.5) where the interesting situation is when b ≠ a†. These rules automatically imply that a nonzero vector, ϕ_0, exists in H such that a ϕ_0 = 0, and that a second nonzero vector, Ψ_0, also exists in H such that b† Ψ_0 = 0, [1]. Let us now introduce the following nonzero vectors, as well as the non self-adjoint operators We further introduce the self-adjoint operators S_ϕ and S_Ψ via their action on a generic f ∈ H: Hence we get the following results, whose proofs are straightforward: N ϕ_n = n ϕ_n, N† Ψ_n = n Ψ_n, (2.10) for n = 0, 1.
The above formulas show that: (i) N and N† behave (almost) like fermionic number operators, having eigenvalues 0 and 1; (ii) their related eigenvectors are, respectively, the vectors of F_ϕ = {ϕ_0, ϕ_1} and F_Ψ = {Ψ_0, Ψ_1}; (iii) a and b† are lowering operators for F_ϕ and F_Ψ, respectively; (iv) b and a† are raising operators for F_ϕ and F_Ψ, respectively; (v) the two sets F_ϕ and F_Ψ are biorthonormal; (vi) the very well-behaved operators S_ϕ and S_Ψ map F_ϕ into F_Ψ and vice versa; (vii) S_ϕ and S_Ψ intertwine between operators which are not self-adjoint. Another interesting feature is the following: since the square roots of S_Ψ and S_ϕ surely exist, from the first equation in (2.14) we obtain a self-adjoint operator, similar to N (and to N†).
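The d = 1 relations are easy to verify directly with 2 × 2 matrices. The operator T below is an arbitrary invertible, non-unitary matrix chosen only for illustration; any such choice produces a genuinely pseudo-fermionic pair out of an ordinary fermionic one:

```python
import numpy as np

c = np.array([[0.0, 1.0],
              [0.0, 0.0]])          # fermionic lowering operator, {c, c†} = 1

T = np.array([[1.0, 0.3],           # arbitrary invertible, non-unitary matrix
              [0.2, 1.5]])
Tinv = np.linalg.inv(T)

a = T @ c @ Tinv                    # pseudo-fermionic pair a, b
b = T @ c.T @ Tinv

# {a, b} = 1, a^2 = b^2 = 0, with b != a† (genuinely pseudo-fermionic)
assert np.allclose(a @ b + b @ a, np.eye(2))
assert np.allclose(a @ a, 0) and np.allclose(b @ b, 0)
assert not np.allclose(b, a.T)

# phi_n = T e_n and Psi_n = (T^{-1})† e_n form biorthonormal families
phi = T.copy()                      # columns phi_0, phi_1
Psi = Tinv.T                        # columns Psi_0, Psi_1
assert np.allclose(Psi.T @ phi, np.eye(2))
```

Note that a annihilates ϕ_0 by construction, since a(T e_0) = T c e_0 = 0.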
II.1.1 A two-dimensional extension
Let (a_j, b_j) be two pairs of pseudo-fermionic operators, {a_j, b_j} = 𝟙, a_j² = b_j² = 0, j = 1, 2, satisfying also the following independence relation: Let ϕ_{0,0} be a vector annihilated by a_1 and a_2, and define the vectors ϕ_{k,l} out of it; then N_1 ϕ_{k,l} = k ϕ_{k,l} and N_2 ϕ_{k,l} = l ϕ_{k,l}. Results similar to those deduced in the one-dimensional case can be recovered here as well. For instance, a basis of H biorthogonal to F_ϕ, F_Ψ, can be found, and these new vectors are eigenstates of N_j†, j = 1, 2. Also, intertwining operators mapping F_Ψ into F_ϕ and vice versa can again be defined.
We refer to [1] for further remarks and consequences of these definitions. In particular, for instance, it is shown that F ϕ and F Ψ are automatically Riesz bases for H, and the relations between fermions and PFs are discussed.
III Pseudo-fermions from the circuit
In this section we will work under the following useful requirements: These conditions allow us to check that the eigenvalues of L are all different and real. In particular, calling l_j the eigenvalues of L and introducing L̃ = L − l_3 𝟙, its eigenvalues λ_j, j = 0, 1, 2, 3, are easily found, and the following hold: Let us introduce the matrices A_j: they satisfy the following CAR: A_j² = 0 and {A_j, A_k†} = δ_{j,k} 𝟙, j, k = 1, 2. We further introduce the self-adjoint operator H_0: its eigenvectors are orthonormal and satisfy the corresponding eigenvalue equation. It is possible to show that H_0 and L̃ are related by an intertwining operator T: in fact, we can deduce L̃ T = T H_0, where T is the following matrix: Consequences of (3.2) will be considered below. Here the following quantities have been introduced: it is clear that det(T) is always non-zero if the four t_{2,j}, j = 1, 2, 3, 4, are non-zero. In this case, T is invertible and the previous intertwining relation becomes L̃ = T H_0 T⁻¹: as a consequence, the non self-adjoint Liouvillian L = L̃ + l_3 𝟙 = T (H_0 + l_3 𝟙) T⁻¹ associated with the circuit in Figure 1 is similar to the self-adjoint Hamiltonian H_0 (plus l_3 𝟙), whose eigenvalues and eigenvectors are given above.
III.1 Consequences of the pseudo-fermionic settings
What was discussed in Section II suggests introducing now the operators a_j = T A_j T⁻¹ and b_j = T A_j† T⁻¹, j = 1, 2, since in this way L can be written as L = λ_1 N_1 + λ_2 N_2 + l_3 𝟙, where, as in Section II.1.1, we have introduced N_j = b_j a_j. It is obvious that (a_j, b_j) are pseudo-fermionic operators: a_j² = b_j² = 0, {a_j, b_k} = δ_{j,k} 𝟙, j, k = 1, 2. The eigenstates of L can be constructed from the vacuum of the a_j, ϕ_{0,0}, satisfying a_j ϕ_{0,0} = 0, j = 1, 2, for k, n = 0, 1. It is now easy to check that there exists a relation between the vectors ϕ_{k,n} and Φ_{k,n}: in fact, we have ϕ_{k,n} = T Φ_{k,n}, k, n = 0, 1. Needless to say, the set F_ϕ = {ϕ_{k,n}} is a basis for H. However, since T is not unitary, F_ϕ is not an o.n. basis. It is very easy now to find a second set of vectors, F_Ψ = {Ψ_{k,n}, k, n = 0, 1}, which is a new basis, biorthogonal to F_ϕ. For that it is sufficient to introduce the vectors Ψ_{k,n} = (T⁻¹)† Φ_{k,n}, k, n = 0, 1, which surely exist under our hypotheses, since T is invertible. We can check the following facts: 1. As already stated, F_ϕ and F_Ψ are biorthogonal: ⟨Ψ_{k,n}, ϕ_{l,m}⟩ = δ_{k,l} δ_{n,m}.
3. Defining an operator S_ϕ as S_ϕ f = Σ_{k,n} ⟨ϕ_{k,n}, f⟩ ϕ_{k,n}, this can be written as S_ϕ = T T†. Hence it is strictly positive and, clearly, self-adjoint. 4. Analogously, defining an operator S_Ψ as S_Ψ f = Σ_{k,n} ⟨Ψ_{k,n}, f⟩ Ψ_{k,n}, it turns out that S_Ψ = (T T†)⁻¹ = S_ϕ⁻¹. 5. The vectors Ψ_{k,n} are eigenstates of L̃† and, consequently, of L†:
k, n = 0, 1. Hence L and L† are isospectral, as expected. This is, in fact, a simple consequence of these two operators being related by an intertwining operator, as we will see in Section III.2.
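The facts listed above are easy to verify numerically for a generic invertible T. The matrix T below is random and purely illustrative (not the circuit's T), and Φ_{k,n} is taken as the canonical orthonormal basis:

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.normal(size=(4, 4)) + 2.0 * np.eye(4)   # invertible, non-unitary

Phi = np.eye(4)                      # o.n. basis, columns Phi_{k,n}
phi = T @ Phi                        # phi_{k,n} = T Phi_{k,n}
Psi = np.linalg.inv(T).T @ Phi       # Psi_{k,n} = (T^{-1})† Phi_{k,n}

# 1. biorthogonality: <Psi_j, phi_k> = delta_{jk}
assert np.allclose(Psi.T @ phi, np.eye(4))

# 3.-4. S_phi = sum |phi><phi| = T T†, and S_Psi = S_phi^{-1}
S_phi = phi @ phi.T
S_Psi = Psi @ Psi.T
assert np.allclose(S_phi, T @ T.T)
assert np.allclose(S_phi @ S_Psi, np.eye(4))

# phi_{k,n} = S_phi Psi_{k,n}
assert np.allclose(S_phi @ Psi, phi)
```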
A similar analysis can be carried out if we consider the energy of the two sub-circuits, as in [4]: E_n(τ) = ½ C V_n(τ)² + ½ L I_n(τ)², n = 1, 2. Using equations (2.1) and putting ω_0 = 1/√(LC), it is now possible, in principle, to analyze E_n(τ) for all τ. However, here we will limit ourselves to considering the asymptotic behavior for τ very large. Repeating the same steps as above, we deduce that E_1(τ) diverges to +∞ if two conditions hold; they are both satisfied if ω_0² − ω_p² < l_4 < ω_0² + ω_p², which is very similar to (3.6). The only difference is the appearance of both ω_0 and ω_p, which therefore both play a role in this analysis: the eigenvalue l_4 must belong to a suitable neighborhood of ω_0², with a width fixed by ω_p².
III.2 On L †
We have seen that, adopting our pseudo-fermionic strategy, a second natural operator, other than L, appears in the game. This operator, L†, can be directly related to L simply by recalling that L = L̃ + l_3 𝟙 and that L̃ = T H_0 T⁻¹. In fact, these simple equalities imply the following (3.7) Therefore, recalling that S_ϕ = T T†, we conclude that L = S_ϕ L† S_ϕ⁻¹ or, equivalently, that L S_ϕ = S_ϕ L†. This last equation is a typical intertwining relation, [7], relating L and L† by means of the intertwining operator S_ϕ. Among the other consequences of this relation, a crucial one is that the eigenvalues of L and L† must coincide, as indeed happens in our concrete model. Moreover, the related eigenvectors of L and L† should be somehow related by S_ϕ. Again, this is exactly what happens here. In fact, recalling that ϕ_{k,n} = T Φ_{k,n} and that Ψ_{k,n} = (T⁻¹)† Φ_{k,n}, k, n = 0, 1, we deduce that ϕ_{k,n} = S_ϕ Ψ_{k,n}, k, n = 0, 1, as expected. It is worth stressing that these results are not peculiar to the model we are considering here; they appear whenever pseudo-fermions (or pseudo-bosons, [8]) are involved.
Going back to L = S_ϕ L† S_ϕ⁻¹, this means, [8], that L is crypto-Hermitian with respect to S_ϕ⁻¹. This fact has many consequences, which are described in [8]. We should stress that the mathematical difficulties which we were forced to consider in [8] do not appear here, since we are working with intrinsically bounded operators (finite-dimensional matrices!).
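The chain of relations leading to L S_ϕ = S_ϕ L† and the resulting isospectrality can also be checked numerically. H_0 and T below are random, purely illustrative stand-ins for the operators defined in Section III:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))
H0 = (A + A.T) / 2.0                      # self-adjoint "hamiltonian"
T = rng.normal(size=(4, 4)) + 2.0 * np.eye(4)   # invertible, non-unitary
L = T @ H0 @ np.linalg.inv(T)             # non self-adjoint Liouvillian

S_phi = T @ T.T
# intertwining / crypto-hermiticity: L S_phi = S_phi L†
assert np.allclose(L @ S_phi, S_phi @ L.T)

# isospectrality of L and L† (both similar to the real spectrum of H0)
ev = np.sort(np.linalg.eigvals(L).real)
ev_dag = np.sort(np.linalg.eigvals(L.T).real)
assert np.allclose(ev, ev_dag)
```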
The above procedure does not clarify the electronic meaning of L†. It is then interesting to set up a different procedure. For this reason, we assume that the four-dimensional vector X(τ), with X^T(τ) = (x_1(τ), x_2(τ), x_3(τ), x_4(τ)), satisfies the differential equation X′(τ) = L† X(τ). After some minor manipulations, and recalling that α = 1/(1 − µ²), we get the following set of equations for the x_j(τ): This set of equations is analytically very close to that in (2.1). In particular, they even coincide if we make the following identifications: x_1(τ) ↔ I_1(τ), x_2(τ) ↔ I_2(τ), x_3(τ) ↔ −V_1(τ) and x_4(τ) ↔ −V_2(τ). The only price we have to pay is that we also need to fix L = C = 1. In other words, the electronic content of both L and L† is exactly the same, except for the fact that, in this second circuit, L and C are fixed, while R is not. Moreover, it is not difficult to extend these results in order to get rid of the constraint L = C = 1: the only difference is that we should identify x_3(τ) not with −V_1(τ) but with −L V_1(τ), and x_4(τ) with −L V_2(τ). We can understand this sort of electronic equivalence between L and L† simply by recalling that there exists a similarity transformation, implemented by the self-adjoint operator S_ϕ, which maps L into L† and vice versa.
III.3 Heisenberg-like dynamics
In [2] we briefly discussed the fact that, when dealing with the time evolution of a quantum system driven by a non self-adjoint Hamiltonian, the natural choice of the Heisenberg dynamics is not the standard X(t) = e^{iHt} X(0) e^{−iHt}, since this choice does not preserve the independence of the mean values of the observables with respect to the chosen representation. The choice we made, which also agrees with the choice made by other authors, see for instance [6] and references therein, is the following: since the wave function of a system, Φ(t), satisfies the equation iΦ′(t) = HΦ(t), where H could be self-adjoint or not, we put X(t) = e^{iH†t} X(0) e^{−iHt} for each observable X of the system. In this way we have that ⟨Φ(t), X(0)Φ(t)⟩ = ⟨Φ(0), X(t)Φ(0)⟩. We adopt here this same recipe, identifying H with iL, as suggested in Section II. Then, after a few computations, we deduce the explicit time evolution for each operator X of the circuit. In particular, if we look for the time evolution of the number operators N_1 and N_2, using the expansion e^{αN_j} = 𝟙 + (e^α − 1)N_j, j = 1, 2, α ∈ ℝ, and its adjoint, we find the following: since ‖N_j‖ = 1 and λ_j > 0, j = 1, 2, we can check that ‖N_j(τ)‖ ≤ e^{−2l_3 τ}, j = 1, 2.
Recalling now that l 3 < 0, this inequality can be used to give an upper bound on the possible growth of the operators N 1 (τ ) and N 2 (τ ). It could be worth noticing that N j (τ ) is not explicitly related to the j−th sub-circuit, so that we cannot use the above formulas to deduce the time evolutions of the two gain-loss parts of the original circuit.
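The representation-independence property that motivates this choice, ⟨Φ(t), X(0)Φ(t)⟩ = ⟨Φ(0), X(t)Φ(0)⟩, can be checked directly. The H, X, and Φ(0) below are random, purely illustrative choices, not the circuit's operators:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
H = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))  # non self-adjoint
X = rng.normal(size=(4, 4))                                  # an "observable"
phi0 = rng.normal(size=4) + 1j * rng.normal(size=4)
t = 0.7

phit = expm(-1j * H * t) @ phi0                          # Schrödinger picture
Xt = expm(1j * H.conj().T * t) @ X @ expm(-1j * H * t)   # Heisenberg-like

lhs = phit.conj() @ (X @ phit)    # <Phi(t), X(0) Phi(t)>
rhs = phi0.conj() @ (Xt @ phi0)   # <Phi(0), X(t) Phi(0)>
assert np.allclose(lhs, rhs)
```

Note that the adjoint H† appears on the left exponential; with the naive choice e^{iHt} X e^{−iHt} the two mean values would differ whenever H is not self-adjoint.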
IV Conclusions
We have shown how a general framework, originally proposed in a quantum mechanical setting, can be used in the analysis of an electronic circuit. In particular, we have shown that the dynamical behavior of a gain-loss circuit can be analyzed by means of two-dimensional pseudo-fermionic operators. In our opinion, this approach is interesting for at least two reasons: • first, for a purely mathematical reason: out of our simple circuit we have produced two biorthogonal bases of H = C⁴ with many nice properties. For instance, they are related by an intertwining operator, which is the same operator that can be used to make the Liouvillian of the circuit self-adjoint; • from an applicative point of view, we have seen how pseudo-fermions can be useful to solve the differential equations for the circuit, and we have also shown that other circuits can be constructed starting from the original one.
In our opinion, these results open new interesting research lines. In particular, a natural question concerns a general relation, if any, between other kinds of circuits and pseudo-fermion operators. Or, stated differently: for which kinds of circuits can a pseudo-fermionic structure be found? And, vice versa, given some pseudo-fermion operators and some non self-adjoint Hamiltonian constructed out of them, is there any electronic circuit which implements the dynamics? A deeper understanding of the relations, if any, between the two circuits in Figures 1 and 2 is also worthwhile, as is a comparison between our results and those in [3,4,5]. These, we believe, are interesting open questions which will be considered in the near future.
Supercapacitor Electrode Based on Activated Carbon Wool Felt
An electrical double-layer capacitor (EDLC) is based on the physical adsorption/desorption of electrolyte ions onto the surface of electrodes. Due to their high surface area and other properties, such as electrochemical stability and high electrical conductivity, carbon materials are the most widely used materials for EDLC electrodes. In this work, we study an activated carbon felt obtained from sheep wool felt (ACF'f) as a supercapacitor electrode. The ACF'f was characterized by elemental analysis, scanning electron microscopy (SEM), textural analysis, and X-ray photoelectron spectroscopy (XPS). The electrochemical behaviour of the ACF'f was tested in a two-electrode Swagelok®-type cell, using acidic and basic aqueous electrolytes. At low current densities, the maximum specific capacitances determined from the charge-discharge curves were 163 F·g−1 and 152 F·g−1 in basic and acidic electrolytes, respectively. The capacitance retention at higher current densities was better in the acidic electrolyte while, for both electrolytes, the voltammogram of the sample presents a typical capacitive behaviour, in accordance with the electrochemical results.
Introduction
In recent decades electrochemical capacitors, also known as supercapacitors, have received great scientific and technological attention because of their interesting possibilities as energy storage devices [1,2]. Although different commercial devices already exist, there are still many scientific and technological challenges in the supercapacitor research area, mainly with respect to increasing the amount of stored energy [2]. Given their suitable characteristics for supercapacitor electrode application, carbon materials have received the most attention and they are the most widely used in this application [3-12]. Within carbon materials, carbon fibres (CF) have special characteristics when compared with other carbon materials [13]. CF can also be transformed into fabric, woven, or yarn forms, which gives them self-sustaining characteristics. CF may have a high surface area with a well-defined pore structure, good electrical conductivity, and easy electrode formation and containment [14]. The study and production of CF, as well as of activated carbon fibres (ACF), are of particular interest among different research groups due to their several applications in a wide range of fields, from science to industry. These materials have chemical, electrical, and mechanical properties that make them unique; therefore, the demand for this kind of material is expected to increase in the future. The use of ACF has been extended to multiple processes based on adsorption and catalysis, such as gas separation, wastewater purification, advanced oxidation processes, and supercapacitors [15-19].
Nowadays the production of CF, as well as ACF, is based on the use of petroleum derivatives as precursor materials, which implies high energy demands and a major contribution to the carbon footprint [20,21]. These problems could be minimized by using a precursor material from a renewable source; many studies have already used biomass as an ACF precursor, and in other cases natural fibres, such as silk, jute, cotton, or bamboo, have been used [22-26]. Among natural fibre sources, wool is one of the main examples: a large quantity of waste is generated in the major wool-producing countries, creating a final disposal problem because of its quite slow biodegradability. Therefore, the use of this waste as an ACF precursor is quite interesting because of its availability and low cost, besides being a renewable and environmentally friendly material. In addition, what makes wool more suitable in comparison with other bio-based sources is its high carbon content (47%) and the presence of sulphur (3.4%); these two factors combined enhance the carbon fibre yield during the thermal treatment.
Until now, Marcuzzo et al. [27] have reported the only previous experience of obtaining ACF from wool felt. There are previous works with wool: Chen et al. produced activated carbon powder by chemical activation, obtaining a low specific surface area for an activated carbon material, while Hassan et al. studied the best conditions to obtain CF from wool.
In this work, we report the preparation and characterization of activated carbon wool felt (ACF'f) as a supercapacitor electrode. This activated carbon material was characterized by textural and chemical analysis, and its performance as a supercapacitor electrode was evaluated through galvanostatic charge-discharge curves, cyclic voltammetry, and electrochemical impedance spectroscopy using basic and acidic electrolytes.
Activated Carbon Felt Preparation
A temperature-controlled Carbolite furnace (Carbolite Furnaces, Sheffield, UK) was used to convert the wool felt into ACF'f. The samples were prepared according to Marcuzzo et al. [27]: the commercial sheep wool felt was cut into regular pieces (100 × 30 mm). To achieve fibre stabilization, the felt was heated at 10 °C·min−1 under a constant airflow (100 mL·min−1) up to 300 °C and kept under these conditions for 120 min. Then, the samples were heated at 10 °C·min−1 under a constant nitrogen gas flow (100 mL·min−1) up to a temperature of 800 °C and kept there for 30 min; at this stage, the pyrolysis took place. Finally, the samples were heated at 10 °C·min−1 up to 1000 °C, where activation took place by injecting water in a 1:1 (carbon felt:water) mass ratio. After cooling the furnace under a constant nitrogen flow (100 mL·min−1), the ACF'f was removed and placed in a desiccator. All reactants employed were analytical grade (Lynde Group). The samples were named according to the thermal process to which they were exposed: "Felt" for the raw material, "Oxidized" for the stabilized sample, "Ox/Carbonized" for the stabilized and carbonized sample, and "ACF'f" for the activated carbon felt.
Scanning Electron Microscopy (SEM)
The surface and structural characteristics of the wool felt and the ACF'f were assessed by scanning electron microscopy. Electron images of the samples were obtained with a JEOL JSM 5900L microscope (JEOL, Peabody, MA, USA). The SEM images of the ACF'f were acquired without any conductive coating, as the samples were conductive enough to prevent electrical discharge.
Chemical and Electrochemical Characterizations
The ACF'f and the intermediate products were characterized by elemental analysis in a Thermo Scientific Flash 2000 Elemental Analyser (Thermo Fisher Scientific Inc., Waltham, MA, USA). Textural analysis was carried out in a Beckman Coulter analyser (Beckman Coulter, Brea, CA, USA) at 77 K, after degassing the samples at 100 °C for 10 h. The Brunauer-Emmett-Teller (BET) area, Dubinin-Radushkevich micropore volume, and total pore volume (measured at a relative pressure of 0.995) were obtained, as well as the pore size distribution through the non-local density functional theory (NLDFT) method. X-ray photoelectron spectroscopy (XPS) is a powerful method for the investigation of surface chemistry. All the XPS measurements were carried out with a Kratos Axis Ultra XPS spectrometer (Kratos Analytical Ltd., Manchester, UK) using monochromatic Al-Kα (1486.5 eV) X-ray radiation at 15 kV and 150 W. The emitted photoelectrons were detected using a hemispherical analyser (Kratos Analytical Ltd., Manchester, UK) with 15 µm spatial resolution. The vacuum system was maintained at approximately 10−9 Torr during all the experiments. Survey scans were collected from zero to 1200 eV with 160 eV pass energy and a step size of 1 eV, in order to identify the elements present on the surface. High-resolution detection of specific elements was performed with a pass energy of 40 eV. Peak deconvolution analyses were performed with the CasaXPS software (Casa Software Ltd., Teignmouth, UK).
For the electrochemical analysis, two-electrode Swagelok®-type cells with two tantalum rods as current collectors were used for galvanostatic charge/discharge and cyclic voltammetry measurements. A glassy microfiber paper (Whatman 934 AH) was chosen as a separator. The samples used as electrodes had a cross-section area of 0.5 cm2 and a thickness of 0.10 cm. The weight of the electrodes was between 6.4 mg and 7.6 mg. The gravimetric specific capacitance (Cs) was determined from galvanostatic charge/discharge measurements in the voltage range of 0-1.0 V at current densities in the range of 1-100 mA·cm−2. Cs was determined at each current according to Equation (1), in which I is the applied current, td is the discharge time, E2 is the voltage range during the discharge, and me is the mass of one electrode. Cyclic voltammograms were obtained at room temperature in the range 0-1.0 V at different scan rates (10, 20, and 50 mV·s−1). Electrochemical impedance spectroscopy (EIS) measurements were carried out in the frequency range from 10−4 to 105 Hz with a sinusoidal perturbation of 30 mV amplitude (rms) and 10 points per frequency.
All measurements were carried out at room temperature with a PGSTAT 302N Autolab potentiostat/galvanostat (Utrecht, The Netherlands), using 2.0 mol·L−1 H2SO4 and 6.0 mol·L−1 KOH aqueous solutions as electrolytes. In order to improve the electrolyte infiltration, the electrodes were soaked in the electrolyte for 24 h before the cell assembly.
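A minimal sketch of the discharge analysis follows. Since the body of Equation (1) is not reproduced in this text, the prefactor below assumes the common symmetric two-electrode convention Cs = 2·I·td/(me·E2); the numerical values are illustrative, of the same order as those reported here, not measured data:

```python
def specific_capacitance(I, t_d, dE, m_e):
    """Gravimetric single-electrode capacitance of a symmetric
    two-electrode cell (assumed convention: C_s = 2*I*t_d/(m_e*dE)).
    I in A, t_d in s, dE in V, m_e in g; result in F/g."""
    return 2.0 * I * t_d / (m_e * dE)

# Illustrative values: 20 mA held for 25 s over a 0.9 V discharge
# window, 7.0 mg per electrode.
Cs = specific_capacitance(I=0.020, t_d=25.0, dE=0.9, m_e=7.0e-3)
print(round(Cs))   # ~159 F/g, the order of magnitude reported in this work
```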
Sample Obtention
For this work a commercial felt was used as precursor (Figure 1a). After the thermal treatment, an ACF'f was obtained (Figure 1b); the material retains its structural integrity in terms of handling, although it is less resistant than the original material. It is presumed that the sulphur present in the disulphide bonds is responsible for the high mechanical strength, which allows the fibre to keep its morphology during the thermal treatment. However, the ACF'f shows poor tensile strength and elongation at break compared to any commercial carbon fibre.
SEM Microscopy
The SEM image of the wool felt (Figure 2a) clearly shows the external cuticle layer, with overlapping scales. The cuticle layer represents about 10% of the fibre, while the cortex forms the rest of the structure. It can be observed that the scaly structure of the cells is not as well defined for the stabilized sample (Figure 2b), and part of the cortex was lost. Finally, the ACF'f (Figure 3a) presents a tubular structure without the presence of the cortex, while the surface has lost its characteristic scaly appearance (Figure 3b).
Elemental Analysis
Elemental analysis (Table 1), carried out on the different samples, showed that carbon is the major component in all cases. Although the carbon content of wool is lower than that of other precursor materials, like lignin (63.4%) or polyacrylonitrile (PAN, 67.91%), this material presents the advantage of being rich in sulphur, responsible for the structural integrity of the fibre. The carbon content increases after stabilization and carbonization due to the selective volatilization of non-carbon components. This behaviour is expected, since carbonization involves thermal decomposition, eliminating non-carbon species and producing a fixed carbon mass with a rudimentary pore structure; this is the reason why the carbon percentage increases, decreasing later during activation because of the removal of the disorganized carbon. After the stabilization, the sulphur content does not change, which allows hypothesising that the sulphur-containing functional groups remain stable during the rest of the thermal treatment.
Textural Analysis
The ACF'f N2 adsorption-desorption isotherms (Figure 4) present type IV behaviour according to the BDDT classification [28], indicating a strong presence of micropores and a proportion of mesopores. The BET area, determined in the relative pressure range of 0.03-0.3, was 1140 m2·g−1, the micropore volume was 0.37 cm3·g−1, and the total pore volume was 0.64 cm3·g−1, values similar to those reported by Marcuzzo et al. [27]. The pore size assessment obtained through the NLDFT method demonstrates that most of the pores have diameters thinner than 2.0 nm, while the presence of hysteresis indicates filling and emptying of the mesopores by capillary condensation [29]. The ACF'f presents a higher specific surface area compared with the area obtained by Chen et al. (438 m2·g−1); therefore, physical activation is demonstrated to be more effective for pore development than chemical activation. The material consists of microporous, hollow activated carbon fibres with different contents of oxygenated functional groups on their surface.
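The BET analysis over the 0.03-0.3 relative-pressure window can be sketched as follows. The isotherm below is synthetic (generated from the BET equation itself, with a monolayer capacity chosen so that the area lands near the 1140 m2·g−1 reported here), not the measured data:

```python
import numpy as np

SIGMA_N2 = 0.162e-18   # m^2, cross-sectional area of adsorbed N2
N_A = 6.022e23         # 1/mol
V_MOLAR = 22414.0      # cm^3 STP per mol of gas

def bet_area(p_rel, v_ads):
    """BET linearization: 1/(v((p0/p)-1)) vs p/p0 is a line with
    slope (c-1)/(vm*c) and intercept 1/(vm*c); vm = 1/(slope+intercept)."""
    y = 1.0 / (v_ads * (1.0 / p_rel - 1.0))
    slope, intercept = np.polyfit(p_rel, y, 1)
    v_m = 1.0 / (slope + intercept)            # monolayer capacity, cm^3/g
    return v_m * N_A * SIGMA_N2 / V_MOLAR      # specific surface, m^2/g

# Synthetic isotherm from the BET equation with vm = 262 cm^3/g, c = 100
p = np.linspace(0.03, 0.30, 10)
c, vm = 100.0, 262.0
v = vm * c * p / ((1 - p) * (1 + (c - 1) * p))
print(round(bet_area(p, v)))   # ~1140 m^2/g
```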
X-ray Photoelectron Spectroscopy
In order to obtain information about the chemical composition of the ACF'f and the binding characteristics of the elements on the material surface, XPS measurements were analysed (Figure 5). Quantitative analysis shows a surface composition of 88.75 at % C 1s and 7.99 at % O 1s, as expected, and small peaks of Na 1s, S 2p, N 1s, and Si 2p can be found. On the other hand, the presence of Si 2p was detected despite being an element not common for sheep wool; therefore, it can be concluded that the source responsible for this element is the quartz furnace tube. More information on the nature of the functional groups may be obtained from high-resolution XPS analysis. High-resolution C 1s spectra (Figure 6) indicate different types of chemical bonding for C 1s atoms, especially the oxygen groups usually reported in the literature. The binding energy from 284.6 to 285.1 eV is associated with sp2 carbon; 286.3 to 287.0 eV is related to ether or alcohol groups (C-O); 287.5 to 288.1 eV is related to quinone and/or carbonyl groups (C=O); and, finally, carboxyl groups (COO) appear with a binding energy between 291.2 and 292.1 eV [30]. The high-resolution N 1s spectra show a main peak near 401.0 eV that can be associated with the N-H bond (Figure 8) [32]. As presented, the XPS analyses show the presence of several types of oxygen groups, as well as nitrogenous bases; this kind of surface chemistry is normally connected to the pseudocapacitive behaviour of some carbon materials [11].
Charge-Discharge Curves
Figure 9a shows the charge-discharge curves obtained at a constant current of 20 mA in acidic and basic electrolytes. The curves have the typical triangular shape of capacitive materials. The equivalent series resistance (ESR), which represents the sum of the resistances of the cell, can be determined from the voltage drop (E1) at the beginning of the discharge as ESR = E1/2I [1,33]. At a current of 20 mA the determined ESR values were ~3 Ω and ~9 Ω in acidic and basic electrolytes, respectively. This represents a weakness of the basic electrolyte, since the higher the ESR, the lower the capacitance retention at high current and the power capability of the cell [1,33,34]. Cs vs. the specific current is shown in Figure 9b. The capacitance retention at higher current densities was better in the acidic electrolyte, which is in agreement with what was already discussed for the ESR values. The ACF'f presents a moderate Cs in both electrolytes, with maximum Cs values of 163 F·g−1 and 152 F·g−1 obtained at 0.15 A·g−1 in basic and acidic electrolytes, respectively. These values are higher than or comparable with some other previously reported Cs values for carbon fibre materials [19,35,36], biomass-derived carbon materials [9,10], and other types of carbonaceous materials [3]. However, the determined Cs for the ACF'f are lower than those reported for several metal oxides [37,38] and conducting-polymer-based materials [39,40]. Notwithstanding, the results achieved are in accordance with the high specific surface area of the ACF'f sample, which would determine a high double-layer capacitance for this carbon material. On the other hand, the presence of oxygenated and nitrogenated surface functional groups in the sample, which were confirmed in the elemental and XPS analyses, can enhance the observed Cs through reversible redox reactions (pseudocapacitance contribution). This phenomenon could explain the fact that the observed Cs was slightly higher than
expected for this material if it were purely capacitive. For carbon materials without a pseudocapacitive contribution, the observed Cs is only related to the double-layer capacitance, which can be estimated according to Equation (2): Cdl = 0.1 × SSA, where Cdl is the expected double-layer capacitance in F·g−1, 0.1 is the normalized area capacitance for a carbon material in aqueous electrolyte expressed in F·m−2, and SSA is the specific surface area of the electrode material in m2·g−1. Thus, in accordance with the BET SSA of the ACF'f sample, the value of Cdl was expected to be ~115 F·g−1, which, in fact, is lower than the Cs values determined in acidic (152 F·g−1) and basic electrolytes (163 F·g−1). Therefore, taking into account everything discussed so far, it is reasonable to think that the sample presents some pseudocapacitive contribution to the observed Cs.
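The two back-of-the-envelope quantities used in this section, the ohmic-drop ESR and the Equation (2) estimate, can be reproduced directly. The 0.12 V drop below is an illustrative value consistent with the ~3 Ω quoted for the acidic electrolyte, not a measured datum:

```python
# ESR from the ohmic drop at the start of discharge: ESR = E1 / (2 I).
I = 0.020                 # A (20 mA discharge current)
E1 = 0.12                 # V, illustrative drop consistent with ~3 Ohm
ESR = E1 / (2.0 * I)      # -> 3.0 Ohm

# Expected double-layer capacitance, Eq. (2): C_dl = 0.1 F/m^2 * SSA
SSA = 1140.0              # m^2/g, BET area of the ACF'f
C_dl = 0.1 * SSA          # -> 114 F/g, close to the ~115 F/g quoted above
```

Since the measured Cs (152-163 F·g−1) exceeds this purely double-layer estimate, the excess is attributed to the pseudocapacitive contribution discussed in the text.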
Voltammograms
The cyclic voltammograms (Figure 10) demonstrate that, in both electrolytes, the ACF'f shows a typical capacitive behaviour, in agreement with the galvanostatic analysis. Peaks related to reversible redox reactions are not clearly visible in the voltammograms, but a pseudocapacitive contribution cannot be discarded. For a suitable visualization of this type of contribution, it would be more convenient to perform the study in a three-electrode configuration [1,33]. Furthermore, it can also be clearly seen that the voltammogram obtained in the basic electrolyte presents a lower slope at the beginning of the charge or discharge compared to that obtained with the acidic electrolyte. This behaviour may be due to the greater ESR of the cell using the basic electrolyte. Figure 11 shows the Nyquist plot obtained from the EIS experiments. The sample analysed using the acidic electrolyte shows a highly capacitive behaviour, which is evidenced by the vertical line at low frequencies [41]. In contrast, the sample analysed using the basic electrolyte shows a more inclined line at low frequencies. The series resistance (Rs) found at high frequency (see the inset of Figure 11), related to the bulk solution resistance and the electronic resistance of the electrode, is low and very similar in both electrolytes: 0.23 and 0.34 Ω for the basic and acidic electrolyte, respectively. This is consistent with a high electrical conductivity of the ACF'f. The charge transfer resistance (Rct), associated with the charge transfer across the electrode-electrolyte interface, can be determined from the width of the semi-circle that appears at high frequencies [1,33,34] (see the inset of Figure 11). Rct was higher in the basic electrolyte than in the acidic one, which can be related to a higher pseudocapacitive contribution (already discussed above) in the basic electrolyte. In both electrolytes, the total resistance (Rs + Rct) value is in good agreement with the ESR value determined from
the galvanostatic experiments.These values are similar to other carbon materials used as supercapacitor electrodes [34].
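As a rough illustration of how these quantities relate, the following Python sketch computes the ESR and the gravimetric specific capacitance from an idealized galvanostatic discharge of a symmetric two-electrode cell. All numerical values here are hypothetical placeholders, not data from this study, and the factor-of-2 (ESR) and factor-of-4 (single-electrode capacitance) conventions are one common choice among several used in the literature.

```python
# Illustrative sketch (hypothetical numbers, not data from this study) of how
# ESR and gravimetric specific capacitance are typically extracted from a
# galvanostatic discharge of a symmetric two-electrode supercapacitor cell.

I = 0.020            # discharge current in A (20 mA, as in Figure 9)
m_total = 0.135      # assumed total active mass of both electrodes (g)
V_max, V_min = 1.0, 0.0   # assumed cell voltage window (V)
iR_drop = 0.012      # assumed instantaneous voltage drop at start of discharge (V)
t_dis = 250.0        # assumed discharge time (s)

# ESR from the instantaneous iR drop; the factor 2 follows one common
# convention in which the observed drop spans the full current reversal.
esr = iR_drop / (2 * I)

# Cell capacitance from an assumed-linear discharge slope.
C_cell = I * t_dis / (V_max - iR_drop - V_min)

# Single-electrode gravimetric capacitance: in a symmetric cell the two
# electrode capacitances act in series, giving a factor of 4 when
# normalising to the total active mass of both electrodes.
C_s = 4 * C_cell / m_total

print(f"ESR ~ {esr:.2f} ohm, C_cell ~ {C_cell:.2f} F, C_s ~ {C_s:.0f} F/g")
```

With these placeholder inputs the sketch yields an ESR of about 0.3 Ω and a specific capacitance of roughly 150 F·g−1, i.e., the same order of magnitude as the values reported in the text, which is the consistency check the authors describe between the EIS and galvanostatic results.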
Conclusions
A wool-derived ACF'f was successfully prepared and characterized as a supercapacitor electrode. The material presents a good BET specific surface area, consistent with a hollow fibre, and a heterogeneous surface chemical composition. This heterogeneity is supported by the elemental and XPS analysis results, which suggest the presence of nitrogenated and oxygenated functional groups. The ACF'f has a moderate C_s value in both electrolytes, with a maximum of 163 F·g−1 in the KOH-based electrolyte and 152 F·g−1 in the H2SO4-based electrolyte, both determined at 0.15 A·g−1. On the other hand, the ACF felt has higher capacitance retention and lower ESR when the electrolyte used was the sulphuric acid aqueous solution rather than the basic one. In sum, this work shows that wool-derived ACF'f can be a promising low-cost and environmentally friendly material for supercapacitor electrode applications.
Future work will focus on completing the electrochemical characterization, mainly by carrying out cyclability studies and assessing the electrochemical performance in organic electrolytes. Furthermore, with a possible commercial application in mind, it could be interesting to modify the fibre structure in order to decrease the electrical resistance.
Figure 2.
Figure 2. A 1000× magnification SEM image of the wool felt (a) and the stabilized felt (b).
Figure 3.
Figure 3. A 5000× magnification SEM image of the ACF'f: front view of the hollow material (a) and a view of the surface (b).
Figure 5.
Figure 5. XPS survey spectrum and the surface element composition (at%) for the ACF'f.
Figure 9.
Figure 9. (a) Galvanostatic charge-discharge curves measured at 20 mA; and (b) specific capacitance vs. specific current measured in acidic and basic electrolytes.
Figure 10.
Figure 10. Cyclic voltammograms recorded at 20 mV·s−1. The electrolyte used is indicated for each curve in the graph.
Figure 11.
Figure 11. Nyquist diagram obtained from the EIS experiments performed in acidic (black squares) and basic (red squares) electrolytes. The inset shows a magnification of the high-frequency range and indicates the R_s and R_ct of the cell using the basic electrolyte.
Table 1.
Elemental analysis of the material at every stage of the treatment (mass percentages).

Medically Unexplained Symptoms and Attachment Theory: The BodyMind Approach®
This article discusses how The BodyMind Approach® (TBMA) addresses insecure attachment styles in medically unexplained symptoms (MUS). Insecure attachment styles are associated with adverse childhood experiences (ACEs) and MUS (Adshead and Guthrie, 2015) and affect sufferers' capacity to self-manage. The article proposes a new hypothesis to account for TBMA's effectiveness (Payne and Brooks, 2017): that it addresses insecure attachment styles, which may be present in some MUS sufferers, thereby building their capacity to self-manage. Three insecure attachment styles (dismissing, pre-occupied and fearful) associated with MUS are discussed. TBMA is described, and explanations are provided of how TBMA has been specifically designed to support people's insecure attachment styles. Three key concepts involved in the content of TBMA that support insecure attachment styles are identified and debated: (a) emotional regulation; (b) safety; and (c) bodymindfulness. A rationale is given for the design of TBMA as opposed to psychological interventions for this population. The programme's structure, facilitation and content take account of the three insecure attachment styles above. Examples of how TBMA works with their specific characteristics are presented. TBMA has been tested and found to be effective during delivery in the United Kingdom National Health Service (NHS). Improved self-management has the potential to reduce NHS costs and demands on General Practitioner time and resources.
INTRODUCTION
This article builds on attachment theory (Bowlby, 1969; Holmes, 1993, 1994; Main, 2000; Holmes and Slade, 2018) and draws on the links made between it and medically unexplained symptoms (MUS) by Adshead and Guthrie (2015). Its contribution to knowledge lies in describing how a novel group model with a biopsychosocial perspective, called The BodyMind Approach® (TBMA) (Payne, 2009a,b, 2015), supports people with MUS who also have insecure attachment. The rationale for the use of TBMA, as opposed to psychological interventions, is that the characteristics of insecure attachment are seen in some people with MUS, so TBMA has been specifically designed in content and structure to work with these characteristics. In research it has been shown to be effective at reducing participants' symptoms, anxiety and depression and at increasing wellbeing, activity levels and overall functioning (Payne and Stott, 2010; Payne and Brooks, 2016, 2017; Payne, 2017a). The research also employed qualitative data (participants' comments; Payne and Brooks, unpublished) to assess the outcomes in an NHS community setting (Payne, 2014, 2017b). The concept here is that the effectiveness seen in the empirical research derives from the design (explained in detail below) of this novel approach, which specifically addresses attachment-related issues for people suffering MUS. TBMA uses a learning treatment methodology with the aim of self-managing symptoms (Payne and Brooks, unpublished) rather than offering psychological treatment. We interpret self-management as an outcome because participants report seeking less external help for symptoms, such as visiting General Practitioners (GPs), hospital and/or accident and emergency (A&E) departments. Therefore, TBMA provides a new, different and acceptable pathway for people with MUS and adds to the discourse and understanding of the condition and its management.
ATTACHMENT

Attachment is the social connection that a child forms with a primary caregiver for emotional support/regulation (Munsell et al., 2012). Attachment happens during a "critical period" between six and twenty-four months, enabling the child to create a working blueprint for future relationships. This forms an attachment style for the adult, dependent on those from whom they seek and receive care (Bowlby, 1969), particularly relevant for people suffering MUS and seeking repeated care from the health service. Attachment style is embodied and to a large extent stored in implicit memory (Schachner et al., 2005; Bentzen, 2015).
When there is a perceived threat (real or imagined) to survival, wellbeing or safety, attachment behavior kicks in to reduce distress, for example by increasing proximity to, and receiving soothing comfort/reassurance from, an identified attachment figure. Thereafter, in the long term, the adult has self-soothing behaviors for comfort when in distress, with healthy self-care and trust in the adequacy of caregivers.
MEDICALLY UNEXPLAINED SYMPTOMS
Medically unexplained symptoms are common world-wide, affecting mostly women (Verhaak et al., 2006; Steinbrecher et al., 2011), young people and non-native speakers (Steinbrecher et al., 2011). Illness is the context from which their experience is constructed; hence people with MUS tend to over-identify with their symptoms. Research has found people with MUS have increased social isolation (Dirkzwager and Verhaak, 2007), more functional impairments (Katon and Walker, 1998), poorer quality of life (Smith et al., 1986), and associated depression (Malhi et al., 2013) and anxiety (Lowe et al., 2008) when compared with non-MUS populations. Although moderate and severe MUS appear comorbidly with common mental disorders, a direct psychological causality of symptoms is too crude an explanation for most MUS (Henningsen et al., 2007).
One definition of MUS is chronic, persistent bodily symptoms for which no medical explanation has been found. MUS can also be termed "somatic symptom disorder" (SSD) (American Psychiatric Association, 2013) within the mental health field. It is defined by the total number of somatic symptoms and the degree to which the patient is concerned about them, both of which are predictors of health outcome and service use.
Of the ten most common symptoms (fatigue, chest pain, headache, dizziness, swelling, back pain, insomnia, shortness of breath, abdominal pain, and numbness) GPs cannot find a medical explanation for 75% (Kroenke and Mangelsdorff, 1989). One in five GP consultations and 18% of consecutive attenders are for MUS (Taylor et al., 2012). Edwards et al. (2010) found studies from around the world showing MUS totals 26-35% in primary care and 50% in secondary care (Barsky and Borus, 1995).
Treatment studies have been varied, with mixed outcomes. Most have been based on a single condition such as fibromyalgia, which has associated symptoms, although in practice patients have more than one additional condition. TBMA is different in that it can include all types of symptoms in one group. Schröder et al. (2012) describe the only other approach, group cognitive behavior therapy (CBT), found to be effective with generic MUS conditions. TBMA is a group approach similar to those for specific symptoms in CBT (Arnold et al., 2004; Zonneveld et al., 2012) and group psychotherapy (for example, Selders et al., 2015). Treatments are normally found in specialized clinics and mental health centers, limiting accessibility as patients refuse mental health referrals (Raine et al., 2002; Allen and Woolfolk, 2010). Approaches derived from individual CBT reduce the strength and occurrence of symptoms and improve functioning (den Boeft et al., 2014). Short-term intensive dynamic psychotherapy reduces symptoms and visits to A&E settings (Abbass et al., 2009). Mindfulness-based CBT may also be effective (van Ravesteijn et al., 2014). Training of GPs in reattribution therapy has had little success (Gask et al., 2011); however, graded physical exercise and yoga have promising outcomes (Aamland et al., 2013; Yoshihara et al., 2014). None of the above mention insecure attachment styles.
ATTACHMENT ISSUES
Not everyone has a secure attachment. Insecure attachment can derive from adverse childhood experiences (ACEs) such as neglect, emotional/physical/sexual abuse, separation and loss, creating insecure future relationships (Murphy et al., 2014) into adulthood. The result is a reduced capacity to manage stress, a tendency to suppress negative feelings and poor self-care. Trust in caregivers' competence is eroded, leading to withdrawal from help-seeking behavior (Ciechanowski et al., 2002), which may be true for some MUS sufferers, dependent on the insecure attachment style involved.
LINKS BETWEEN ATTACHMENT STYLE AND MEDICALLY UNEXPLAINED SYMPTOMS
Bodily symptoms may be felt as a threat to survival, wellbeing and safety, creating a susceptibility to an insecure attachment style. Adshead and Guthrie (2015) reviewed the evidence that insecure attachment is common in people with MUS and with some long-term conditions. They found three studies relevant to insecure attachment style and MUS. Among women in a health maintenance organization (Ciechanowski et al., 2002), only 34% had secure attachment, half the expected number for a non-clinical sample. The women exhibited fearful (21%), pre-occupied (22%) and dismissing (23%) insecure attachment styles. Furthermore, the number of symptoms reported was significantly associated with these styles: a greater number of somatic symptoms was reported for the pre-occupied and fearful styles compared with secure. Attendance costs/call-outs were higher for people with insecure attachment styles compared with secure. Patients presenting with MUS were 2.47 times more likely to have insecure attachment according to Taylor et al. (2000), and Taylor et al. (2012) showed frequent attendance at GPs was related to insecure attachment style. Waller et al. (2004) assessed attachment security in 37 patients with ICD-10 somatoform disorder (without severe physical or mental illness) compared with 20 healthy matched controls. Compared with 60% of controls, only 26% of patients rated as securely attached. The healthy controls demonstrated the expected incidence of insecure attachment, that is, 25% were dismissing and 15% were pre-occupied. Patients, in sharp contrast, had high levels of dismissing (48.6%) and pre-occupied (25.7%) attachment styles. Other studies showed how early insecure attachment styles are more common in patients with MUS (Taylor et al., 2000; Ciechanowski et al., 2002; Noyes et al., 2003; Spertus et al., 2003). It is proposed here that symptoms could be related to threats to attachment and thus to the self, resulting in fragility.
Using the natural stress adaptations, e.g., flight or fight (mobilization) and freeze, fold or faint (defensive immobilization), does not appear to resolve the internal perceived threat to wellbeing, survival and safety presented by MUS, because the threat is in the body and not the environment. There is a correlation between female survivors of sexual abuse and pre-occupied or insecure attachment (Stalker and Davies, 1995). Additionally, ACEs and somatization are linked (Waldinger et al., 2006), as are ACEs and attachment issues (Sansone et al., 2001). Insecure attachment has also been linked to somatization (Stuart and Noyes, 1999). Hence, ACEs are linked with both somatization (of which MUS is a subset) and attachment issues.
We know from research that MUS is associated with cumulative ACEs, including attachment issues (Elbers et al., 2017). We know also that insecure attachment creates stress, and stress can result in mental health conditions and/or MUS. Thus, it could be concluded that having unexplained bodily symptoms might be a way for people with some insecure attachment styles to legitimately seek help to meet their physical needs from those expected to be unresponsive to emotional needs. Some insecure attachment styles result in the perception that health professionals are inadequate in reducing arousal levels to relieve stress. That is, the professional is experienced as the mirror of the early inadequate caregiver (i.e., the child's primary caregiver).
HYPOTHESIS
Not everyone with MUS will experience insecure attachment. However, Adshead and Guthrie (2015) showed three insecure attachment styles are associated with MUS: dismissing, preoccupied and fearful.
It has been demonstrated that TBMA is effective (Payne and Brooks, 2017) in promoting the self-management of symptoms. Building on the work of Adshead and Guthrie (2015), which demonstrates the link between MUS and some insecure attachment styles, TBMA has been specifically designed to take account of different insecure attachment styles. MUS presents as many and various symptoms. TBMA groups reflect this as they are heterogeneous. As a result, there will be some participants with insecure attachment as an underlying issue within these groups. At every stage, therefore, TBMA addresses issues of insecure attachment in the structure of the program, facilitation, group content/practices and mind-set of the population. Rather than one-to-one models, or non-interactive class-based methods, such as dance, Tai Chi or yoga, TBMA is a group interactive model. It supports people with MUS to take the risk of interacting with others (facilitator and other group members) within a safe, regulated environment. It may be that this interaction is the element of TBMA which helps address insecure attachment patterns.
We hypothesize, to account for the effectiveness of TBMA, that it can address insecure attachment styles, which may be present in some MUS sufferers, leading to their capacity to self-manage. There are considerable benefits from TBMA as a specific type of bodymind approach; it differs in that it is a group approach that avoids the stigma of, or aversion to, psychological therapies. In TBMA people learn to live well by self-managing symptoms. All this makes TBMA different from somatic therapies such as somatic experiencing (Levine, 2015), sensorimotor therapy (Ogden, 2006) and contemporary bodymind approaches. Whilst people report TBMA has helped them with their symptoms, TBMA does not aim to transform trauma, relieve symptoms, help clients discover the emotional and physical sources of their trauma, discharge the consequences of that trauma from the nervous system, and then support their ability to self-regulate. Therefore, TBMA is unlike these models or any other psychological intervention.
The design of the model is apt for people with MUS because it is accessible and acceptable as a learning treatment methodology rather than a psychological treatment intervention. This population often do not accept or understand psychological methods/therapies due to their physical experiences and explanation for them. Consequently, TBMA can engage this hard-to-reach population.
THE THREE INSECURE ATTACHMENT STYLES
Consequently, the three insecure attachment styles linked to MUS to which TBMA attends are: dismissing and preoccupied (Bartholomew and Horowitz, 1991;Main, 2000); and fearful (Bartholomew and Horowitz, 1991). Not all participants attending TBMA groups will necessarily be insecurely attached, however, the program supports this population specifically and can be helpful to all.
Dismissing
Individuals with a "dismissing" attachment style may expect to receive inadequate attention or care from others. There may be anxiety about their symptoms and fear they will not be believed or taken seriously by health professionals. There may also be anxiety that health professionals will assume there is a mental health condition. Therefore, any form of mental health referral is often rejected, and generally the health service is seen as unhelpful. The GP and other health care providers may become, to the patient, "the inadequate clinician" as they attract the patient's dismissive attitude.
Pre-occupied
In contrast, an individual with a "pre-occupied" attachment style could become more concerned about losing the relationship with a health care professional after tests and scans, etc., are over and/or treatment is not indicated. There may be anxiety that this relationship will need to end; they may become overly needy and dependent, pre-occupied with the relationship through their symptoms, and so return to the GP frequently. Bodily symptoms engage both parties: the patient visits the GP with more and more symptoms, becoming emotionally needy of attention; the GP tries to find a resolution, so sends them again for more tests and scans, etc., thus feeding their anxiety. These patients may be referred to by GPs as "frequent flyers."

Fearful

Waldinger et al. (2006) showed that a fearful insecure attachment style is correlated with childhood ACEs and adult somatization in women. When a child is abused/neglected by a significant yet unreliable adult caregiver, fearful attachment ensues. In this style a self-image may develop whereby the child feels unworthy of support from others and sees caregivers as unreliable or damaging. The combination of caregiver/GP and patient experience in the consultation may develop frustration and misunderstandings. Consequently, there may be a poor GP-patient relationship and reduced care. The patient may feel they might drive others away and/or trigger inadequate outcomes due to their emotional neediness. Furthermore, this may develop into a compensatory emphasis on care-seeking for unexplained symptoms, due to an increased attention to bodily sensations.
THE BODYMIND APPROACH®
We propose that the insecure attachment styles above affect sufferers' ability to self-manage, hence the need to develop a more secure attachment as part of learning to self-manage. TBMA appears to be effective for supporting people with MUS (Payne and Stott, 2010; Payne and Brooks, 2016, 2017), and we suggest this is a result of increasing secure attachment, in some participants, enabling the development of self-management. Due to TBMA's purpose-built design (discussed in detail below), insecure attachment may be reworked. In our experience, working with the symptoms through the body, using improvisation, movement play, clay modeling, collage, mark-making, bodymindfulness, creativity and bodymind-emotion connections, enables participants to explore and access meaning (Kossak, 2009). Using the imagination and creativity in movement, for example, can tap into sensory-emotional connections, allowing embodied tacit knowledge of the symptom (which may otherwise be inaccessible) to surface. In contrast to CBT, TBMA uses the notion of the embodied unconscious (van der Kolk, 2014) by accessing the sensory experience in the body acquired through lived experience of the symptom. Accessing meaning explicitly invites people to make their own interpretations of their symptoms, for example when making marks or moving hands to describe how they feel about/experience their symptom. This symbolizes for themselves the unconscious meaning of the symptom, which helps to make their previously unconscious experience explicit, similar to how the arts therapies work. However, the authors are unaware of any arts therapies being employed to support people with MUS to self-manage. Establishing meaning helps the participant to validate the symptom. This is liberating because many MUS sufferers have been disbelieved.
The embodied style of attachment will be symbolized by the relationship to the symptom. Cognitive behavior therapy comes at the world from thinking about thinking (meta-cognition), i.e., content. TBMA, in contrast, when employing bodymindfulness, comes at the world from the awareness of awareness (a meta-witnessing of the experience of sensation and process). The ability to have awareness of awareness enables people to recognize the possibility of non-attachment to the symptom (Wallin, 2017). Adshead and Guthrie (2015) propose mindfulness-based practices may help with MUS by improving regulation of negative affect and altering the awareness of, and relationship to, pain and bodily experience. Additionally, they suggest approaches offering "here and now" bodily experience connecting with images, whereby links can emerge between physical sensations, emotions and relationships. They go on to recommend that "clinicians need to develop interventions that 'fit' the attachment narratives of individual patients, rather than forcing patients into one size fits all psychological therapeutic techniques" (Adshead and Guthrie, 2015: 8). TBMA satisfies this recommendation because it has been specifically designed to fit the attachment narratives of individuals, additionally in a group setting. Furthermore, TBMA works with the imagination, bodily experiences and somatic mindfulness practices to help people make connections between emotions, sensations and relationships. TBMA works in the "present moment" to raise and change awareness of the bodily sensation and the individual's relationship to it (Payne, 2019). TBMA is framed as experiential learning (Kolb, 1984; Payne and Brooks, unpublished) as well as transformative adult learning. The exercises enable access to perceptions of symptoms through the facilitator coaching enactive embodied mindful practices. They aim to shift the experience of the symptom, changing the relationship, perception and mind-set toward the symptom.
This leads to the cultivation of self-management of symptoms, thereby encouraging wellbeing.
Unlike in psychological interventions, in TBMA the body is emphasized first and foremost: hence "bodymind," joined together, rather than "mind-body," with "mind" written first and separated from "body" by a hyphen. TBMA works from the subjective body experience to the mind and back again. It privileges the interactive relationship between the body and mind, which is so emphasized in MUS. TBMA focuses holistically on the whole person rather than relying solely on language, with more of a focus on the right side of the brain (the creative side). In TBMA there is no explicit discussion of a psychological or causal relationship with the symptoms unless the participant makes such connections themselves.
The BodyMind Approach® transforms seeing symptoms or the body as the "enemy," as in a dismissive attachment style, to embracing them as an "ally" flagging up the need for self-care and compassionate acceptance of symptoms/self (Payne and Brooks, unpublished). Caring for the self (self-soothing, normally developed from early attachment experiences) is initially modeled by the facilitator as a proxy caregiver, e.g., how to sit, breathe, use bodymindfulness and listen to the body for signs of stress. Practices compare symptom sensations with other areas of the body as functioning and positive, to create a balance between health and "dys-ease." Rather than immobility, as often found in mindfulness, TBMA encourages mindful mobility/mindful movement, which favors agency, and somatic mindfulness, for example "being in the movement moment," as in walking around the space together with a focus on what is happening in the body and to the symptom in action.
Group interaction is important to aid different styles of attachment with peers rather than solely with the facilitator, who for some may be a health professional toward whom they have a corresponding negative attitude (dismissive style). This attitude may not be so prominent with the group members. The group gives the opportunity for shared resources; a sense of belonging helps engagement, reduces isolation and promotes the release of hormones, for example dopamine, oxytocin, serotonin and endorphins (Porges, 2003; van der Kolk, 2014).
THREE KEY CONCEPTS
The BodyMind Approach® is designed to support people with MUS and insecure attachment to learn to self-manage through three key concepts pragmatically built into the program.
Emotional Regulation
Emotional regulation is how a person manages feelings, with associated cognitive, physiological and behavioral processes. It is the process that raises or lowers the degree of emotions (Parrott, 1993) to enhance wellbeing. This emotional self-regulation framework provides for vitality but also for reduced arousal and calmness. It is developed through attunement with a reliable caregiver. Attachment is therefore a significant aspect of emotional self-regulation. More securely attached children rate higher in emotional regulation and empathy (Panfile and Laible, 2012). TBMA appears to overcome the powerful blueprint of early insecure attachment, using the relationship with the facilitator and the group to cultivate a more secure relationship, enabling the development of resilience by drawing on neuroplasticity.
Holmes (1993), reporting on Bowlby, indicates that attachment is a primary motivational system related to a spatial environment in association with a loved one. When an individual feels safe and securely attached to the loved one, they can begin to pursue exploration. When they feel unsafe, dysregulated signs of distress appear in behavior. TBMA engages with individuals to explore their symptoms by providing a safe environment. The facilitator models unconditional positive regard and a non-judgmental attitude. When this is combined with stable closed group membership (few withdrawals), a constant space, predictable dates/times for meetings and a consistent facilitator, safety ensues, making for regulated behavior.
The Importance of Safety in Groups
Participants were requested to commit for the first six sessions and thereafter for the following six. The opportunity to withdraw after the first six sessions appeared to add to the safety element for some people but was never used. Paradoxically it seems likely that this structure was less threatening for individuals with a fearful or dismissive style enabling them to complete the 12 sessions. Participants with a pre-occupied style would feel compelled to complete anyway.
In Maslow's (1943) hierarchy of needs for self-actualization, the first need is physiological, then come safety needs, followed by the need for a sense of belonging. Insecure attachment means that a sense of belonging is missing, maybe because social engagement is too difficult. We know reliable safety is crucial to allow social engagement to occur. When safety and wellbeing are threatened, as in MUS, there is a greater need for safety to reduce the activation of the stress adaptation response of mobilization (Porges, 2018). In people with both MUS and insecure attachment the need for safety is even more critical. Hence the group needs to be a safe place, non-threatening and social, to give a sense of belonging through the shared purpose. Another aspect of safety in TBMA sessions is that no one need disclose their symptom/s, which helps enable experimentation and exploration of symptoms.
Bodymindfulness
Depression and/or anxiety often accompany MUS (Rosmalen and de Jonge, 2010; Burton et al., 2011). Mindfulness reduces depression and anxiety (Hofmann et al., 2010) and has a moderate effect on some MUS, such as pain (Grossman et al., 2004). Segal et al. (2002) found an association between a lack of mindful self-awareness and depression, resulting in poor recognition of, and reflection on, bodily cues or signals such as tension, pain and fatigue. A "mindful attitude" can be defined as a state of presence moment to moment, realized through intentionally directed attention. At the same time, both internal body sensations and external stimuli can enter and leave awareness without judgment. For example, kindly attending to the symptom sensation interoceptively can, ironically, reduce the distress experienced. A mindful state results from participating in this state as though one were an empathic witness "benignly regarding the self." "Bodymindfulness" incorporates body awareness practices and movement in the present moment ("kinesthetic mindfulness"). It can help with dis-identification from bodily symptoms, which are so often tied up with identity for the individual with MUS (Sanders et al., 2018).
THE DESIGN OF TBMA TO SUPPORT INSECURE ATTACHMENT
The intervention is referred to as "learning groups," "symptoms groups" and "workshops," with a focus on the lived body experience of the symptoms rather than any mental health or psychological title. People are referred to as "participants" rather than "patients," which may help a sense of agency since it reduces dependency and any expectation that the facilitator will be unsatisfactory. The program normalizes the symptoms, i.e., non-medicalizes them, which helps acceptance of the condition and promotes feelings of agency where previously there may have been none. For all insecure attachment styles this sense of agency can be helpful for engagement.
The group workshops are held twice a week for the first 2 weeks. This intensity at the outset helps to promote cohesiveness in the group. Bonds can be forged with each other and the facilitator, promoting engagement and reducing dropout. The twelve 2-hour sessions are optimal for change (Lambert, 2013), with enough time for engagement. The individual consultation with the facilitator, conducted before the group commences and the week it ends, is held in the same venue as the group sessions, which can add reassurance for individuals with pre-occupied insecure attachment styles. Participants are aware they will be contacted by text, email and letter by the facilitator every 6 weeks for a further 6 months, i.e., they are not dropped after the group ends. A participant with a pre-occupied insecure attachment style will be reassured by this level of ongoing contact; initially the fearfully attached may be frightened, but they can opt in or out after six sessions. A participant with a dismissive insecure attachment style may disengage and sabotage the group. However, a facilitator with a very high level of psychological skill can "hold" the group and provide enough safety to prevent disintegration occurring.
The sessions are carefully structured to cultivate interaction, with rituals and predictable events for safety, which substantially supports participants with fearful or pre-occupied insecure attachment styles. The predictable ongoing contact between participants and facilitator, via text, email, and letters even after face-to-face contact has concluded, seems to reduce concerns whether participants have a fearful, dismissing, or pre-occupied attachment style.
The Power of the Group
For people with MUS who are insecurely attached, the group can act as a support and a pathway toward learning to make healthy attachments in a safe setting. The group acts as a source of peer support, rather than support coming from one health professional (i.e., only the therapist/teacher, as in one-to-one approaches). Friendships test out and strengthen the ability to form more secure attachments. Group solidarity and approbation develop as members encourage each other toward improvement. The group shares goals, for example improving health and wellbeing, and the belief in hope for change. These shared goals and beliefs help form the group identity, the rationale for the sense of belonging, the protection offered, and the group's continuous existence through the bond created (Bar-Tal, 2000). For this population, which has tended to experience isolation, such a group can be a welcome "comfort blanket," bridging members into a different world of experimentation and exploration.
The group gives permission to share intimate personal stories. As participants discover common experiences shared in the group, they feel less isolated, make friends, and often meet up after the group ends. The fact that people wish to meet after the group is in line with group identification and group attachment. Smith et al. (1999) explain that the subsystems and functions regulating one-to-one attachment are the same as those regulating attachment to social groups. These include seeking support, responsiveness, and emotional disclosure, all of which are affected by personal history, which in turn affects future relationships. Bearing this in mind, careful preparation is given to the beginning and ending of sessions and of the whole program. For example, cohesion is strongly encouraged, and safety promoted, from the outset. Additionally, there are individual consultations with the facilitator, an action plan for going forward post-group, and non-face-to-face contact every 6 weeks for 6 months. The group's capacity to act as an attachment object and provider of security can affect neural integration. The group may help to down-regulate participants' emotions by being a regular, steady influence in their lives. Porges' Polyvagal Theory (Porges, 2003) concludes that human social interaction, combined with interventions that take the psychological mindset into account, turns off the sympathetic fight/flight response. The calming of the sympathetic nervous system, combined with feeling listened to, enables people to feel safe enough to engage in play. This enables the work of creativity, imagination, self-reflection, self-regulation, and self-management (Porges, 2003).
It is possible that the group may be self-selecting, since people who tend to avoid attachment or who are anxiously attached may filter themselves out before committing. Anxiously attached participants may be frightened of rejection and so might be overly positive about their experiences.
The Facilitator as a Catalyst
Bowlby (1982: 207) suggests "the link between leader and group is a facilitating, rather than a necessary element of the individual's attachment to the group." Sochos (2015) claims there can be an attachment to the group via an image which symbolizes the group. A sense of security and protection is usually derived from the leader, a powerful other; in TBMA, however, the attachment is with the facilitator and group members, and it is the symptom which can be symbolized.
The facilitator initially holds the hope for the group, and the belief that change is possible, which helps transform the group mindset into a more positive one. Facilitators have a passion for the approach, which influences engagement from the group. They are all trained and certified in TBMA, have over 5 years' experience leading groups of adults in mental health settings, and have a background in embodied, enactive approaches. Furthermore, facilitators are selected based on their qualities of warmth, empathy, and genuineness (Rogers, 1961). The facilitator's training and attitudes are specifically geared toward supporting individuals with insecure attachment.
The individual consultation with the group facilitator at the outset sets the tone for the group workshops, building early rapport with the facilitator to provide safety. An insecurely attached participant will have opportunities to see and experience securely attached relationships, and to transform the relationship with the facilitator over time. This early relationship set-up may help calm anxieties and helps to ensure future participation and relationship formation.
The individual consultation with the facilitator at the end of the group helps with reflection, closure, and clarification of the participant's action plan, and with arranging support for the following 6 months. This session, which prepares for the ending of face-to-face group contact, is especially important for pre-occupied insecurely attached participants, who will not have had many experiences of good-enough endings. The subsequent 6 months of non-face-to-face contact with the facilitator supports continuity, a sense of agency to self-manage, and the embedding of new habits promoted via their action plan.
Each insecure attachment style has its own characteristics, and below we speculate on how these are addressed through the design of the structure, facilitation, and practices of the TBMA intervention.
Dismissive
In this style there is a positive view of self (I am OK) and a negative view of others (you are not OK). A dismissing attachment style may bring the expectation that inadequate attention or care will be received from others; those who care for them, such as GPs, are "not OK." In TBMA people are in a group with shared experiences of the health service, which may, perhaps, reinforce their lived experience of inadequate care. However, the other participants are not their health professionals (not authority figures), and this is an important advantage for their sources of support. People share their experience, strengths, and hopes for change, which is empowering. Participants are encouraged to consider ways to care for themselves (self-soothe), manage stress levels, and re-interpret their symptom distress. These individuals usually reject any form of mental health referral and generally see the health service as unhelpful for their MUS. To facilitate acceptance and access for this style, TBMA is framed as "workshops" for "self-management" rather than a medical intervention or mental health treatment.
People with a dismissive style deny and minimize the impact of their own experience and their feelings. They tend to lack confidence both in the helper and in their ability to help themselves. They may have poor self-reflection and tend to be critical of practices and helpers to date (e.g., the GP). To accommodate this, the facilitator accepts and welcomes their stance nonjudgmentally and reflects it back to the participant to support and validate it, avoiding criticism of the helper. Other group members then act as models for reflection, again taking attention away from the facilitator. The facilitator encourages mobilization to generate more experience on which to reflect and to think about the meaning of the symptom.
Pre-occupied
In the pre-occupied style, people tend to feel overwhelmed by their symptoms. The stance taken by the facilitator is that many people have unexplained symptoms which she can work with, thus normalizing the condition and reducing fear. There is also the threat of what will happen if they lose their symptoms, i.e., a leap of faith into the unknown. Eventually, when trust has been established, this can be addressed by exploring the pros and cons of having the symptom. The facilitator forms a stable attachment figure, as does the group, thus engendering trust. The non-verbal communication of the body is a route to access what is, as yet, unknown regarding the meaning of the symptom. So practices employing movement, such as gestures and postures to represent the sensation of the symptom, may bring meaning to the forefront and an in-depth knowing which cannot be arrived at in any other way.
In the pre-occupied attachment style there is a negative model of self and a positive model of others: "I am not OK, others are OK." The pre-determined frequency and nature of the contact post-group is reassuring for people with a pre-occupied insecure attachment style. The facilitator models self-acceptance and compassion, enabling people to develop a more solid, coherent sense of self and to acknowledge their own vulnerabilities resulting from their experiences.
Additionally, since the attachment style becomes more secure as a result of the TBMA program, participants may become less dependent on the GP, as the monitoring of the 6-month follow-up data showed. These participants may find the ending of the group problematic and experience it as loss. The closing meeting with the facilitator mitigates some of this, and groups do tend to go on meeting up voluntarily after the ending. Another strategy to support the participant with a pre-occupied insecure attachment style is the ongoing non-face-to-face contact every 6 weeks post-group. The shared decision-making (with the facilitator) of their tailor-made action plan (derived from experiences in the group to support new habits of self-management) also helps with the ending process and sustainability.
The efficacy of TBMA in promoting self-management enables participants with a pre-occupied attachment style to accept their condition, obviating the need for further tests and scans. TBMA promotes the belief that they can live well and thrive despite their symptoms. Their symptom distress and anxiety decrease as they let go of the need for a medical explanation.
Fearful
Individuals with a fearful insecure attachment style have a negative model of self and others: neither is OK. They may present as angry, frustrated, and difficult, prone to developing a self-image as unworthy of support from others and a view of caregivers as unreliable or even dangerous. TBMA promotes a sense of agency and self-care, i.e., being deserving of care for themselves. The facilitator understands the importance of always being present for the group, demonstrating reliability, which in turn offers safety. Both participants with fearful insecure attachments and the facilitator may experience misunderstanding and frustration; however, regular supervision supports the facilitator to contain any frustrations and to ensure best practice when working with these participants.
People with a fearful insecure attachment style may worry about not being believed and/or taken seriously by health care providers, who may assume they have a mental health condition. In TBMA the participant's lived body experience is believed and the symptoms honored. They also worry about their symptoms, which defy diagnosis despite numerous tests and scans, and this can lead to catastrophizing. The embodied, preverbal feelings, thoughts, relationships, and impulses form an attachment style in childhood which is repeated symbolically in the adult's relationship to their symptom. TBMA helps people change their stance toward the experience of the symptom through a shift in the view of self: the relationship between the symptom and the self becomes dynamic, and the view of self becomes much more than simply the symptom, thus reducing the tendency to catastrophize.
The participant may sense that their emotional neediness could drive others away. Emotional needs are welcome in the group, although the facilitator ensures shared attention is available to each member. People who are fearfully attached may avoid long-term care situations because of concerns about greater intimacy with providers and an assumption that they will be given insufficient care. Hence TBMA is short term: there are 12 sessions overall, each of 2 hours' duration, with the first four held in the first 2 weeks (i.e., two sessions per week) and an opt-out after session six. Twelve sessions are the optimum for engagement in group psychotherapy according to Lambert (2013).
Fears about caregiver dependability promote GP-shopping, i.e., visiting each GP in a practice and/or changing practices frequently, and a fragmentation of care. TBMA groups have a number of participants to offer resources and care. The caregiver may experience people who are fearfully attached as difficult to reassure, inadequate, needy, and fragile. Facilitators are trained to expect such participants and have strategies to support them, e.g., offering alternatives to practices and treating the practices as experiments to try out, reducing risk and stakes and lessening exposure. Individual consultations with the facilitator before the group sessions provide an opportunity for these participants to ask questions and gain reassurance, leading to feelings of safety. This mediates the initial stress of attending a group of unknown people.
The outreach of 6 months of non-face-to-face contact after the group ends can feel safer than being in the group, whilst maintaining an ongoing relationship with the group facilitator. This can replace seeking care in settings such as A&E. TBMA is designed to support participants over a 9-month period from acceptance of the referral. It has been found that the 12 face-to-face sessions over 10 weeks in the first 3 months are just about manageable and bearable for the participant who is fearfully insecurely attached.
CONCLUSION
The research conducted previously supports the hypothesis that TBMA can support people with insecure attachment styles and MUS to self-manage. This article has illustrated how the design of TBMA is built around the three insecure attachment styles associated with MUS, and has explained how TBMA helps people with MUS and insecure attachment styles learn to self-manage. Its contribution to knowledge lies in describing a novel group model, TBMA, designed specifically as a new alternative pathway for supporting people with MUS, some of whom may be insecurely attached. TBMA is particularly suited as an intervention for people with MUS because symptoms are experienced first and foremost in the body. TBMA honors those symptoms, using them as a gateway to the mind and to subsequent self-management, in contrast to CBT, which tends to marginalize the body. TBMA is also different because it is a groupwork model including people with all sorts of conditions in a generic group.
Early attachment is first experienced through the body via touch from the primary caregiver (White, 2004). Body memory (Giuseppe, 2018) of early attachment is reflected in future relationships, including the relationship with the symptom, which can become a metaphor for the individual's insecure attachment. TBMA works with the symptom and its meaning, employing the body-felt sensation of the symptom as the basis for learning self-management. It seems likely that the pain from adverse childhood experiences (ACEs) is transported into the body unconsciously and held there as a bodily memory, only to be triggered in response to stressful situations and form a MUS. By learning to address the stress, MUS suffering can be self-managed.
The BodyMind Approach® is innovative in that all elements involved have been designed to compensate for insecure attachment issues. This includes the program structure, qualities of facilitation, group methods, and content, which take account of safety, self-regulation, and bodymindfulness. The group and facilitator are crucial to outcomes, helping participants to prevent the repetition of a dysfunctional attachment style, which affects the maintenance of self-management needed to sustain recovery. TBMA enables a re-sculpting of the self, the symptom, and their relationship to each other. The improved self-management participants exhibited when tested for effectiveness through practice-based evidence resulted in reduced symptom distress, depression, and anxiety, and increased wellbeing, activity, and overall functioning. It is proposed that the behavior changes noted have become conscious, which is essential for self-management. Importantly, there are also potential cost reductions for the health service and in GP time and resources (Payne, 2014).
The hypothesis that TBMA can address insecure attachment in people with MUS can be tested within the framework of current knowledge by conducting an adult attachment assessment (Bartholomew and Shaver, 1998) pre- and post-intervention with participants suffering from MUS who undergo TBMA.
AUTHOR CONTRIBUTIONS
HP researched the literature and developed the TBMA program. SB contributed equally to the writing of the manuscript.
FUNDING
This work was supported by funding received from the East of England Development Fund, the Department of Health, and the National Health Service (NHS).
"year": 2019,
"sha1": "f6a6c32c0f0d5026deb42d0acc67b96484944d85",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fpsyg.2019.01818/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "760ddea49e64841cde7b78e48d567f53d6c90295",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Psychology"
]
} |
Observation of parallel intersystem crossing and charge transfer-state dynamics in [Fe(bpy)3]2+ from ultrafast 2D electronic spectroscopy
Transition metal-based charge-transfer complexes represent a broad class of inorganic compounds with diverse photochemical applications. Charge-transfer complexes based on earth-abundant elements have been of increasing interest, particularly the canonical [Fe(bpy)3]2+. Photoexcitation into the singlet metal–ligand charge transfer (1MLCT) state is followed by relaxation first to the ligand-field manifold and then to the ground state. While these dynamics have been well-studied, processes within the MLCT manifold that facilitate and/or compete with relaxation have been more elusive. We applied ultrafast two-dimensional electronic spectroscopy (2DES) to disentangle the dynamics immediately following MLCT excitation of this compound. First, dynamics ascribed to relaxation out of the initially formed 1MLCT state was found to correlate with the inertial response time of the solvent. Second, the additional dimension of the 2D spectra revealed a peak consistent with a ∼20 fs 1MLCT → 3MLCT intersystem crossing process. These two observations indicate that the complex simultaneously undergoes intersystem crossing and direct conversion to ligand-field state(s). Resolution of these parallel pathways in this prototypical earth-abundant complex highlights the ability of 2DES to deconvolve the otherwise obscured excited-state dynamics of charge-transfer complexes.
Introduction
Amongst this class of complexes, the prototypical chromophore is tris(2,2′-bipyridine)ruthenium(II), i.e., [Ru(bpy)3]2+. The photophysical properties of [Ru(bpy)3]2+, specifically the existence of a metal-to-ligand charge-transfer (MLCT) excited state that stores ∼2 V of energy9 and persists for ∼1 μs in deoxygenated solution,10 lie at the heart of its utility in such a wide range of settings. Despite its advantages, the elemental scarcity of ruthenium (as well as of related chromophores containing iridium, rhenium, osmium, etc.) raises important questions about the cost and scalability of processes built on these materials.11-17 Accordingly, there has been a rapid expansion of research into the synthesis and photophysical properties of chromophores based on elements of the first transition series (e.g., iron, cobalt, nickel, chromium).
Many of these efforts have focused on ions possessing a d6 configuration due to their valence isoelectronic relationship with Ru(II). The canonical example of this class of compounds, [Fe(bpy)3]2+, exhibits steady-state optical properties similar to those of its second- and third-row transition metal analogs, namely a strong MLCT absorption in the mid-visible region, yet its excited-state properties bear little resemblance to its heavier group 8 congeners.18 Specifically, the absence of a spectroscopic signature associated with the bipyridyl radical anion (i.e., bpy•−) within 10 ps following MLCT excitation was an early indication of an excited-state lifetime many orders of magnitude shorter than that of its Ru(II) counterpart.4 The sub-100 fs lifetime of the MLCT manifold for an Fe(II) polypyridyl complex was first quantified in 2000 using ultrafast time-resolved absorption spectroscopy in conjunction with spectrochemical identification of an optical signature for the MLCT excited state.19 This was later observed specifically in [Fe(bpy)3]2+ using XANES20 and ultraviolet transient absorption spectroscopy.21 Relaxation out of the charge-transfer manifold is understood to proceed through lower-lying ligand-field states.22-27 These ligand-field states are characterized by large geometric distortions relative to both the ground and MLCT excited states, thereby facilitating rapid non-radiative decay out of the charge transfer-state manifold and the eventual formation of the high-spin 5T2 excited state on a timescale of ∼200 fs. Following conversion from the MLCT excited-state manifold to the lowest-energy ligand-field excited state, specifically the 5T2 state, ground-state recovery (i.e., 5T2 → 1A1 relaxation) occurs on a timescale of ∼1 ns. Recently, Miller and McCusker identified solvent-dependent kinetics for this ground-state recovery.28
The dependence was attributed to solvent reorganization in response to the large decrease in molecular volume associated with the conversion from a high-spin to a low-spin configuration. Although subtle, this solvent dependence of electronic-state evolution localized on the metal center, which is relatively insulated from the solvent environment, raises questions about the solvent dependence of dynamics in the charge-transfer manifold. Here, the transfer of an electron from the metal to the ligand places negative charge density on the periphery of the molecule and therefore in direct contact with the surrounding solvent. Despite ample evidence from studies on complexes possessing long-lived charge-transfer states that ultrafast solvent-coupled processes can influence their initial evolution,22,28-30 the effect of solvent at early timescales and its coupling to intersystem crossing processes in [Fe(bpy)3]2+ have not been investigated.
Although relaxation from the MLCT manifold into the ligand-field 5T2 state has been established for [Fe(bpy)3]2+, the pathway involved in this relaxation is still under debate. Direct relaxation from the MLCT band into the 5T2 state is formally a two-electron process, making a direct transition highly improbable.31 It has therefore been proposed that the 1MLCT → 5T2 conversion likely occurs via intermediate metal-centered states. While progress has clearly been made with regard to bringing processes localized on the metal center into better focus, details are sparse when it comes to dynamics occurring within the initially formed charge-transfer state(s). Transient absorption spectroscopy conducted by Auböck and Chergui was interpreted in terms of a 1MLCT → 3MLCT intersystem crossing event followed by direct 3MLCT → 5T2 relaxation with an overall timescale of <50 fs,32 whereas X-ray fluorescence spectroscopy data were modeled without invoking an intersystem crossing event within the charge-transfer manifold.33
Because the photoexcited MLCT state relaxes into the high-spin 5T2 state within a few hundred femtoseconds, fast time resolution33-36 is required to properly resolve the early-time dynamics within the MLCT manifold. A range of time-resolved methodologies can access this regime, but the issue is compounded by the broad and overlapping spectroscopic features associated with the relevant processes in [Fe(bpy)3]2+. These temporal and spectral requirements present significant challenges for determining what mediates the excited-state dynamics. Two-dimensional electronic spectroscopy (2DES) is an advanced spectroscopic technique that combines the ability of transient absorption spectroscopy to probe ultrafast dynamics with direct excitation and detection frequency correlation. The additional dimension attained through this correlation allows for energetic deconvolution of different contributions to the excited-state dynamics of systems, providing information about the energy landscape that would be difficult, if not impossible, to divine from transient absorption spectroscopy alone. Although 2DES has been commonly used to study light-harvesting systems,34,37-39 inorganic nanomaterials,40-43 and organic molecular chromophores,35,44,45 amongst other systems, it has been underutilized as a tool to understand ultrafast dynamics in molecular, transition-metal based chromophores.
In this report, we show that the challenges associated with characterizing early-time dynamics within the MLCT manifold of [Fe(bpy)3]2+ can be overcome using 2DES. Here, the additional spectral separation afforded by this technique uncovered a previously hidden 1MLCT → 3MLCT cross peak while simultaneously resolving the sub-100 fs dynamics of intersystem crossing and transfer out of the MLCT manifold. Collectively, these observations revealed parallel pathways of triplet-mediated and direct relaxation to the metal-centered states. These results demonstrate that 2DES can be a particularly effective tool for elucidating the early-timescale excited-state dynamics of transition metal-based chromophores (like [Fe(bpy)3]2+), providing new insights into the ultrafast processes underlying their functionality.46
Steady-state absorption features
Fig. 1A shows the steady-state absorption spectrum of [Fe(bpy)3]2+ in methanol. In this frequency range, the dominant peak at ∼19 200 cm−1 is the 1A1 → 1MLCT transition, with a tail on the red edge associated with the formally spin-forbidden 1A1 → 3MLCT transition.47 Consistent with this assignment, TD-DFT calculations (Fig. 1B) showed that the 3MLCT states primarily contribute to the lower-energy range of the absorption spectrum, while the higher-energy range is dominated by a 1MLCT transition with a large oscillator strength. The dominant calculated excitation at 21 832 cm−1 seen in Fig. 1B corresponds to a doubly degenerate 1MLCT state (Table S9†). Only minor solvatochromic effects were observed in the absorption spectra of [Fe(bpy)3]2+ (Fig. S1A and S17†).
2DES spectral features
To investigate the dynamics of the charge-transfer transitions, 2DES was used to measure a series of spectra that map out the excited-state evolution. Correlation plots of excitation (ωs) and detection (ωt) energies were created as a function of the delay time between excitation and detection events, known as the waiting time (T).34,39 The spectra were measured with ∼10 fs temporal resolution. The nonresonant response (coherent artifact) of the pulse was also characterized spectrally (Fig. S2†). To minimize contributions from the nonresonant response, the 2D data were analyzed only for T > 47 fs. Representative 2D spectra of [Fe(bpy)3]2+ in methanol are shown in Fig. 1C. For 2DES experiments performed in the BOXCARS geometry, positive intensity corresponds to ground-state bleach/stimulated emission and negative intensity corresponds to excited-state absorption.
The 2D spectra contain three primary features. First, the spectra are dominated by a positive peak on the diagonal at ωs = 18 500 cm−1, ωt = 18 000 cm−1 (Fig. 1C, red arrow). Second, a positive peak grows in below the dominant peak at approximately ωs = 18 250 cm−1 and ωt = 16 500 cm−1 at T = 200 fs (Fig. 1C, right, blue arrow; Fig. S13†). Third, a negative peak is also present, particularly at later waiting times, at approximately ωs = 16 500 cm−1 and ωt = 17 000 cm−1 (Fig. 1C, right, purple arrow).
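A common way to follow features like these is to integrate the 2D intensity over a fixed (ωs, ωt) window at each waiting time, producing a one-dimensional trace for that peak. The sketch below illustrates this bookkeeping with a synthetic stack of spectra; all array names, sizes, and the random data are illustrative and not taken from the actual dataset.

```python
import numpy as np

# Hypothetical 2DES dataset: S[i_T, i_ws, i_wt] with its axis vectors.
w_s = np.linspace(15000, 21000, 120)   # excitation axis (cm^-1)
w_t = np.linspace(15000, 21000, 140)   # detection axis (cm^-1)
T = np.arange(47, 1000, 10)            # waiting times (fs), after the artifact
S = np.random.default_rng(1).normal(size=(T.size, w_s.size, w_t.size))

def peak_trace(S, w_s, w_t, ws_lo, ws_hi, wt_lo, wt_hi):
    """Integrate 2D peak intensity over an (ws, wt) window at each waiting
    time, then normalize to the time point with maximum |intensity|."""
    ms = (w_s >= ws_lo) & (w_s <= ws_hi)   # excitation-axis window mask
    mt = (w_t >= wt_lo) & (w_t <= wt_hi)   # detection-axis window mask
    trace = S[:, ms][:, :, mt].sum(axis=(1, 2))
    return trace / np.abs(trace).max()

# Window quoted later in the text for the 1MLCT GSB/SE diagonal peak
trace = peak_trace(S, w_s, w_t, 18500, 20000, 18000, 20000)
```

Asymmetric windows (as used in the analysis below) simply correspond to different bounds on the two axes; the masking logic is unchanged.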
Previous studies of [Fe(bpy)3]2+ using more traditional spectroscopic methods allow us to orient our understanding of these three features. First, the initially formed (<200 fs) excited state is a 1MLCT state that can be described in terms of oxidation of the metal center (i.e., Fe(II) to Fe(III)) and the creation of a radical anion associated with the bipyridyl ligand (bpy•−). This formulation allows for the use of spectroelectrochemistry to approximate the optical signatures that will characterize this initial state (Fig. S4†).18 These data indicate that the 1MLCT excited state will consist of two overlapping contributions: first, a loss of absorption (and stimulated emission at early times, <100 fs) due to ground-state depletion and the concomitant change in oxidation state of the metal,52 which contributes positively to the 2D signal; and second, a new absorption feature associated with the bpy radical anion (bpy•−), which contributes negatively to the 2D signal. The former appears at the steady-state absorption of the MLCT states (Fig. 1A), whereas the latter manifests as a broad feature starting at 16 000 cm−1 and extending into the UV regime. Owing to the large oscillator strength associated with the charge-transfer band, the overall spectrum will be dominated by the former. Consistent with this picture, the dominant positive feature on the diagonal lies approximately at the 1MLCT absorption in the steady-state spectrum, although the maximum is slightly red-shifted due to the spectral profile of the ultrafast laser pulse (Fig. S1B†). Therefore, the dominant positive diagonal feature is denoted as the 1MLCT ground-state bleach/stimulated emission (1MLCT GSB/SE) peak. Any ESA contribution from the bpy radical anion at T < 200 fs at lower energies (ωs < 17 000 cm−1) is obscured by the contribution from the nonresonant response signal (Fig. 1C, left).
Second, the positive cross peak below the diagonal corresponds energetically to excitation into the 1MLCT state and detection of population in the 3MLCT state at early waiting times (T < 200 fs). It is important to note that the contribution of stimulated emission to these features is tied to the persistence of the 1MLCT state. Further details about this assignment will be discussed in Section 2.3.4.
After initial photoexcitation into the MLCT manifold, the molecule relaxes into the ligand-field excited-state manifold within 200 fs.20,21 This relaxation corresponds to the electron in the ligand-based π* orbital transferring back to the metal. Formation of these ligand-field excited state(s) has two consequences for the absorptive properties of the complex: loss of the absorption associated with the bpy radical anion, and the eventual creation of an MLCT excited-state absorption feature associated with the lowest-energy ligand-field excited state of the molecule. These new net absorptive contributions to the spectrum can be expected to arise from MLCT transitions associated with the excited ligand-field states, in particular a 5T2 → 5MLCT transition that will persist until ground-state recovery (∼1 ns). The intensity of this band is expected to be roughly an order of magnitude less than that associated with the ground state.53 Its contribution to the overall signal depends on the nature of its overlap with the ground-state bleach. Thus, the observed ESA feature is assigned to the 5T2 → 5MLCT transition, supported by TD-DFT calculations (discussed in more detail in Section 2.3.3) and a nanosecond decay consistent with ground-state recovery (Fig. S5D and Table S3†). Although previous studies of similar complexes have shown an ESA signature in this region as a result of multi-photon excitation,54 the intensity of the laser pulse used in this study is tenfold below the onset of these multi-photon features. By T = 200 fs, the 1MLCT peak, which by this time is comprised solely of the ground-state bleach, also shifts slightly below the diagonal. Given the broad structure of the 5T2 ESA peak, the red-shift in detection frequency of the 1MLCT GSB/SE peak is therefore most likely due to partial cancellation by the rising 5T2 ESA.
2.3. Kinetic analysis of 2DES spectra

2.3.1. Early-time evolution of the 1 MLCT state. To investigate the kinetics, a waiting time trace from T = 47-1000 fs for the 1 MLCT peak was constructed by integrating the peak intensity within ω s = 18 500-20 000 cm−1 and ω t = 18 000-20 000 cm−1 (i.e., the region indicated by the red arrow in Fig. 2A, B, S6, and S18†) and normalized to the time point with maximum intensity. The asymmetric ranges were selected to minimize the contribution from the nonresonant response at early waiting times, and thus should best capture the dynamics associated with [Fe(bpy) 3 ] 2+ (Fig. S2†). The waiting time trace was fit to a biexponential function (Fig. 2B, solid line) where the first term (which has negative amplitude) tracks the rapid rise (with its time constant called the "rise time") and the second term tracks the slow ground-state recovery (Eqn S1, Table S1†). A biexponential function was used because additional terms did not lead to a significant improvement in fit quality, consistent with previous experiments that reported a monoexponential decay 28 and a monoexponential rise. 32 The initial rise in peak intensity occurred on a ∼30 fs timescale. The lower intensity at early times is consistent with spectrally overlapped 1 MLCT GSB/SE and bpy˙− ESA signatures generated upon photoexcitation into the MLCT manifold. 18 Excited-state evolution from the MLCT manifold to the lower-lying ligand-field manifold 50,51 results in the disappearance of the bpy˙− (and therefore loss of the bpy˙− ESA signature), leaving only the contribution from the underlying ground-state bleach signal. The increase in magnitude of the ground-state bleach can only be rationalized through the removal of a partial cancellation from an overlapping negative signal. For this reason, the intensity rise in the bleach signal can be assigned to conversion from the charge-transfer to the ligand-field manifold of the compound due to loss of the partial cancellation from the ESA, as opposed to relaxation within the charge-transfer band, where no such change in partial cancellation would occur.

Fig. 2 (A) Reproduction of the positive, on-diagonal region of the T = 200 fs 2DES spectrum in Fig. 1C with the corresponding linear absorption spectrum on the right. The red arrow indicates the GSB signal of the 1 MLCT (and, at early times, the SE signal). (B) Intensity trace (dashed line) of the 1 MLCT diagonal peak over waiting time T with its respective biexponential fit (solid lines) in methanol. See text for details. (C) Fourier transform of the residuals from the exponential fit depicted in (B); see also Fig. S12.
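As a concrete illustration, the biexponential fit described above (a negative-amplitude term tracking the rise plus a slow ground-state-recovery term) can be sketched with SciPy on synthetic data. The amplitudes, time constants, and noise level below are hypothetical placeholders, not the experimental values.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(T, a1, tau1, a2, tau2):
    """Biexponential model: a1 < 0 gives the fast rise, tau2 the slow decay."""
    return a1 * np.exp(-T / tau1) + a2 * np.exp(-T / tau2)

# Synthetic waiting-time trace over the T = 47-1000 fs window (placeholder values)
T = np.arange(47.0, 1000.0, 6.67)                      # fs
true_params = (-0.30, 30.0, 1.00, 1.0e6)               # ~30 fs rise, ~1 ns recovery
rng = np.random.default_rng(0)
trace = biexp(T, *true_params) + 0.005 * rng.standard_normal(T.size)

popt, _ = curve_fit(biexp, T, trace, p0=(-0.2, 50.0, 1.0, 5.0e5),
                    bounds=([-1.0, 5.0, 0.0, 1e4], [0.0, 500.0, 2.0, 1e8]))
rise_time = popt[1]                                    # the "rise time" constant, fs
print(f"fitted rise time: {rise_time:.1f} fs")
```

Note that because the slow decay (∼1 ns) is far longer than the fitted window, that constant is poorly constrained by the early-time trace alone; only the rise time is meaningfully recovered here.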
The waiting time traces also exhibit rapid oscillations. [50,51] Fourier filtering and subsequent biexponential fitting revealed rise timescales similar to those reflected in the biexponential fit of the unfiltered data (Fig. S11†); in order to minimize assumptions made in the kinetic analysis, the unfiltered data were used. In addition, global kinetic analysis was performed on the region of the MLCT peaks using the method illustrated in Volpato et al. 55 The ESA peak was not included in the analysis, as the region is dominated by the nonresonant response signal at early timescales. Consistent with the results from the analysis described above, growth of both the 1 MLCT GSB/SE peak and the 1 MLCT → 3 MLCT cross peak was observed with a sub-100 fs timescale (Section 2.3.4).
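The Fourier analysis of the fit residuals, used to inspect the rapid oscillations, amounts to an FFT of the residual trace and a conversion of the peak frequency to wavenumbers. A minimal sketch on a synthetic residual follows; the ∼670 cm−1 oscillation frequency is an arbitrary placeholder, not a mode assignment from this work.

```python
import numpy as np

dt = 6.67                               # waiting-time step, fs
T = np.arange(47.0, 1000.0, dt)
c_cm_per_fs = 2.9979e-5                 # speed of light in cm/fs

nu0 = 670.0                             # placeholder oscillation frequency, cm^-1
residual = 0.02 * np.cos(2 * np.pi * c_cm_per_fs * nu0 * T)

# FFT of the residuals; drop the DC bin before locating the peak
spec = np.abs(np.fft.rfft(residual))
freqs_cm = np.fft.rfftfreq(T.size, d=dt) / c_cm_per_fs   # axis in cm^-1
peak_cm = freqs_cm[1:][np.argmax(spec[1:])]
print(f"dominant oscillation near {peak_cm:.0f} cm^-1 "
      f"(grid resolution ~{freqs_cm[1]:.0f} cm^-1)")
```

The recovered peak falls on the nearest FFT bin, so the ∼35 cm−1 grid spacing of this short window sets the frequency resolution.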
2.3.2. Solvent-dependent evolution of the 1 MLCT state. As a dicationic species, [Fe(bpy) 3 ] 2+ is expected to be strongly solvated in polar solvents in the ground state. Upon photoexcitation into the MLCT manifold, the solvent must respond to the formation of the bpy˙−. DFT calculations predict the excited electron in one of these initially-excited 1 MLCT states to be delocalized over two bipyridyl ligands (Fig. S24B†) with a dipole moment of 3.94 Debye, while the second transition is localized on a single bipyridyl ligand with a dipole moment of 3.97 Debye (Fig. 3A). Any initially-excited delocalized MLCT state is expected to quickly localize on a single bipyridyl ligand, further increasing the dipole moment. 56,57 The fully optimized structures of the lowest-energy 1 MLCT and 3 MLCT states predict stabilization of the state that localizes the electron in a π* orbital of a single bipyridyl ligand (Fig. 3B), with overall dipole moments of 9.0 Debye and 6.7 Debye, respectively (Fig. S25†). These calculations are consistent with those obtained through Stark spectroscopy. 56 The electron placed in the π* orbital of the bpy ligand dramatically alters the nature of the charge density with which the solvent interacts. In the ground state, the solvent organizes around an overall dicationic state wherein the charge is buried on the metal ion, whereas in the excited 1 MLCT state the charge is localized on the periphery of the complex. Alcohols can respond by simply rotating about the C-O single bond, whereas the rigid-rod nature of nitriles requires at least a partial rotation of the entire solvent molecule. The timing of this molecular rotation is therefore dependent on the moment of inertia of the molecule, 22 which can be as fast as 25 fs. 58,59 2D spectra were measured in these two classes of solvents, alcohols and nitriles, to examine the effect of these different mechanisms of reorganization in response to the creation of the MLCT excited state.
The kinetics from 2D spectra of [Fe(bpy) 3 ] 2+ in methanol, 1-propanol, acetonitrile, and butyronitrile are compared in Fig. 3C, with values from the biexponential fit reported in Table 1. The rise times, shown as gray triangles in Fig. 3C, were ∼30 fs for both methanol (carbon chain length R = 1) and 1-propanol (R = 3). The similarity observed can be attributed to the fact that an extension of the aliphatic chain from methanol to 1-propanol should have little effect on the dynamics of rotation about the C-O bond. 28 On the other hand, the time constant for the same signal in the nitrile solvents was observed to increase from ∼30 fs for acetonitrile (R = 1) to ∼70 fs for butyronitrile (R = 3). This solvent-dependent evolution observed in the nitrile solvents likely originates from the nature of the anticipated solvent response, a rotation of the entire molecule.
To further investigate the nature of this solvent response, 2DES studies were performed in commercially-available nitriles with longer carbon chains, namely pentanenitrile (R = 5) and hexanenitrile (R = 6). The dynamics in propionitrile were not measured because [Fe(bpy) 3 ] 2+ was observed to interact with impurities in the solvent and degrade too quickly for 2DES experiments (ESI Section 6.1†). The bpy˙− ESA decay lifetime in all nitrile solvents is plotted as a function of carbon chain length (R) in Fig. 3C (black circles).
The reported lifetimes of the bpy˙− decay increased to ∼140 fs and ∼180 fs for pentanenitrile and hexanenitrile, respectively. The overall trend follows closely the trend of the moment of inertia (I) of the solvent, plotted as a light-red dashed curve in Fig. 3C (see also ESI Section 4†). This clear scaling reflects the ability of the surrounding solvent to stabilize the change in charge density upon photoexcitation.
These data represent the first observation of solvent dynamics coupled to MLCT-state evolution in an Fe(II) polypyridyl complex and, moreover, suggest that the conversion from the charge-transfer to ligand-field manifolds may indeed be gated by the solvent response.
2.3.3. Properties of the 5 T 2 ESA feature. The early-time kinetics of the negative peak could not be well characterized because the spectral region contains a significant nonresonant response at T < 100 fs (Fig. S2†). Instead, the magnitude of the peak intensity was compared to the magnitude of the positive 1 MLCT peak intensity (Fig. 4A and B) to quantify its relative contribution, or effective oscillator strength. The relative magnitude of the ESA peak was averaged for each triplicate data set in each solvent from T = 200-3000 fs to minimize contributions from both the initial photophysics and the nonresonant response. The intensity of the negative peak was integrated over ω s = 16 000-17 000 cm−1 and ω t = 16 800-17 800 cm−1, and the intensity of the positive peak was integrated over ω s = 17 500-19 500 cm−1 and ω t = 17 000-19 000 cm−1. These limits were selected to span the contour lines that denote this feature (Fig. 1C), as no other overlapping contributions are present in this spectral region. The magnitude of the negative peak was ∼10% of the positive peak (Fig. 4D and S16†), consistent with the order-of-magnitude reduction in intensity expected for a 5 T 2 → 5 MLCT absorption relative to the corresponding transition in the low-spin ground state. To further investigate the relevant states, TD-DFT calculations were performed on the 5 T 2 states (Fig. S23†). Analysis of the transitions showed a 5 T 2 → 5 MLCT transition with a similar energy gap (Fig. 4C), supporting the assignment. Along the waiting times sampled, the system undergoes nuclear equilibration within the 5 T 2 state, primarily assigned to an expansion of the Fe-N bond distance. 28 Therefore, the differences in intensity of the ESA peak between the solvents studied (Fig. 4D) are likely reflective of differences in the nature of the nuclear equilibration due to each type of solvent interaction. The relative intensities in the alcohols were within error of each other, whereas the relative intensity in acetonitrile was over double that in the longer nitriles. This observation is consistent with previous studies where solvent-dependent, outer-sphere effects influenced the dynamics of the ligand-field 5 T 2 state. 28 Specifically, the solvent reorganization energy is coupled to the change in the volume of the complex as the system moves between high-spin and low-spin configurations, which in turn affects the oscillator strength of the transition. The effect of solvent on the relative oscillator strength of the ligand-field excited state can be difficult to quantify using traditional transient absorption spectroscopy experiments, as the magnitude of the effect often falls below the noise threshold. In this experiment, the 2D apparatus utilizes a fully non-collinear, BOXCARS configuration for background-free detection, which vastly improves the signal-to-noise ratio 40,60-62 by almost two orders of magnitude. 63 The improved sensitivity was required to resolve the small changes in oscillator strength due to solvent effects. This result both establishes solvent-coupled behavior of [Fe(bpy) 3 ] 2+ in the lower-lying ligand-field states and highlights the power of 2DES as a tool for understanding excited-state dynamics in transition metal complexes.
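The relative-intensity comparison described above reduces to integrating the 2D spectrum over two rectangular (ω s, ω t) windows and taking the ratio. A minimal NumPy sketch on a synthetic spectrum is below; the Gaussian peak positions, widths, and 10:1 amplitude ratio are placeholders chosen to mimic the reported ∼10% figure, not the measured data.

```python
import numpy as np

# Frequency axes (cm^-1) and a synthetic 2D spectrum: a positive GSB/SE peak
# plus a weaker negative ESA peak with a 10:1 amplitude ratio (placeholders)
ws = np.linspace(15500, 20500, 501)        # excitation axis
wt = np.linspace(15500, 20500, 501)        # detection axis
WS, WT = np.meshgrid(ws, wt, indexing="ij")

def gauss2d(A, s0, t0, sigma):
    return A * np.exp(-((WS - s0)**2 + (WT - t0)**2) / (2 * sigma**2))

spectrum = gauss2d(1.0, 18500, 18000, 250) + gauss2d(-0.1, 16500, 17300, 250)

def box_integral(spec, s_lo, s_hi, t_lo, t_hi):
    """Integrate the spectrum over a rectangular (ws, wt) window."""
    mask = (WS >= s_lo) & (WS <= s_hi) & (WT >= t_lo) & (WT <= t_hi)
    return spec[mask].sum()

neg = box_integral(spectrum, 16000, 17000, 16800, 17800)   # ESA window
pos = box_integral(spectrum, 17500, 19500, 17000, 19000)   # GSB/SE window
ratio = abs(neg) / pos
print(f"ESA/GSB intensity ratio: {ratio:.3f}")
```

Note that a finite integration window clips a few percent of a peak's tails, so the recovered ratio sits slightly below the underlying amplitude ratio.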
2.3.4. Evolution of the 1 MLCT → 3 MLCT cross peak. A cross peak is also present in the spectrum at energetic coordinates corresponding to excitation of the 1 MLCT state and detection of population in the 3 MLCT state (Fig. 5A, purple arrow). Based on these coordinates, along with its near-zero intensity at T = 0, we assign this peak to a 1 MLCT → 3 MLCT intersystem crossing event. At later times (T > 200 fs), the region is dominated by the GSB of the triplet charge-transfer state, as evidenced by both the red-shift in excitation energy (Fig. S14†) and the nanosecond timescale of the signal decay (Fig. S5C and Table S3†). The ESA peak is not expected to influence these features or dynamics, as it only exhibits the nanosecond decay of the overall signal (Fig. S5D†) outside of the nonresonant response. Although the triplet charge-transfer state has low oscillator strength, features in 2D spectra depend on the oscillator strengths of both the excitation and detection transitions, and so the cross peak is visible in part due to the oscillator strength of the singlet state. Furthermore, the decongestion afforded by a second frequency dimension allows otherwise obscured features to be observed. 36,64 The background-free nature of the 2D apparatus, as noted above, also makes possible the resolution of relatively weak signals, 63 such as the 3 MLCT evolution. These features represent some of the advantages that 2DES brings to the study of the ultrafast excited-state dynamics of this class of chromophores. Specifically, direct observation of 1 MLCT → 3 MLCT intersystem crossing via a time-resolved absorption measurement had not been achieved previously.
To characterize the evolution of the 1 MLCT → 3 MLCT cross peak, waiting time traces were constructed by integrating the peak intensity within ω s = 17 500-20 000 cm−1 and ω t = 15 600-16 600 cm−1 (Fig. 5A, purple arrow; Fig. 5B). The waiting time traces exhibited an intensity increase, which was fit to an exponential rise function (Fig. 5B, solid line, Eqn S1 and Table S2†). A rise was also observed for the diagonal peak, as discussed above, but the associated intensities were different. While the diagonal peak had ∼75% of its final intensity upon photoexcitation, the cross peak initially had near-zero intensity, which increased to ∼40% of the diagonal peak intensity at later times (T > 200 fs, Fig. S16†). The timescales extracted from the fits captured the initial rise and were on the 20-40 fs timescale for all solvents. These values are both faster than the timescales extracted from the diagonal peak (Table 1) and lack the solvent dependence observed for that feature.
The cross peak rise time is expected to contain some contribution from intersystem crossing as well as a rise of the overlapping 3 MLCT GSB signal. The 3 MLCT GSB signal, which is on the diagonal, rises with the loss of the bpy˙− absorption, similar to that of the 1 MLCT GSB signal. The fast (<50 fs) relaxation from the 3 MLCT to the ligand-field states likely limits the population accumulated in the 3 MLCT states, 33 which may be the reason this feature does not become dominant in the 2D spectra. The extremely fast timescales and overlapping spectral signatures, however, mean that the intersystem crossing and relaxation to the ligand-field states cannot be fully isolated. As a result, the global kinetic analysis of the region extracted a timescale that is a mixture of the appearance of both the 1 MLCT GSB/SE peak and the cross peak (Fig. S22†). The timescale of the cross peak rise alone gives a better approximation of the intersystem crossing rate. The rise timescales extracted through the biexponential fit were predominantly ∼20 fs (Fig. 5C), particularly in the solvents with slower relaxation to the ligand-field manifold, where the intersystem crossing is expected to be better isolated. These values are consistent with expectations for an intersystem crossing event in this complex. 65 Previous work proposed a sub-30 fs (ref. 32) intersystem crossing timescale based on fluorescence up-conversion measurements with an instrument response of ∼120 fs. 47 The ∼10 fs temporal response of our 2D apparatus enabled quantification of this extremely fast process, revealing that intersystem crossing occurs on a timescale similar to that reported for the same process in [Ru(bpy) 3 ] 2+ . This suggests that spin-orbit coupling is a necessary but not sufficient condition for describing intersystem crossing dynamics in transition metal complexes. Moreover, our data clearly reveal that intersystem crossing within the charge-transfer manifold occurs in competition with direct conversion from the initially formed 1 MLCT state to ligand-field excited states localized on the metal center, as illustrated in Fig. 6.
Concluding remarks
[67][68][69] Their smaller ligand-field splitting, however, leads to distinct photodynamics that cannot be interpreted within the framework of their second- and third-row transition metal counterparts. 70 [Fe(bpy) 3 ] 2+ is the prototypical example of a d 6 photocatalyst with an earth-abundant metal center. Similar to others in its class, it has rapid and complex dynamics within the MLCT manifold. Disentangling this complexity to understand why these complexes differ from their second- and third-row counterparts is a key step in their development for photochemical applications. For [Fe(bpy) 3 ] 2+ , uncovering the relaxation mechanisms at early timescales can shed light on how the complex can undergo a formally two-electron relaxation process into the high-spin 5 T 2 state within 200 fs, which is not typically observed in other complexes.
In this study, we observed a 1 MLCT → 3 MLCT intersystem crossing process nearly contemporaneous with direct relaxation from the 1 MLCT state into lower-lying ligand-field states, indicating parallel relaxation mechanisms. These parallel mechanisms indicate that in [Fe(bpy) 3 ] 2+ , electrons undergo both 1 MLCT → LF and 1 MLCT → 3 MLCT → LF relaxation out of the MLCT manifold. Therefore, the previously competing models of energy relaxation may not be mutually exclusive, and in fact may be occurring simultaneously. Furthermore, the correlation between solvent response and relaxation from the MLCT manifold indicates that the solvent interacts with the relative charge associated with these states and may even control the pathway of relaxation. On the timescales of relaxation from the MLCT manifold, solvent dynamics are largely governed by the inertial response, 58,59 which can be as fast as 25 fs and so allows the solvent to mediate these ultrafast processes. The high spectral and temporal resolution of 2DES revealed dynamics previously obscured in data measured with more traditional techniques. In particular, the GSB/SE and intersystem crossing peaks in [Fe(bpy) 3 ] 2+ became spectrally separated by simply resolving the excitation dimension, allowing for greater insight into the crowded excited-state landscape of first-row transition-metal photocatalysts.
In conclusion, the early-time excited-state dynamics of [Fe(bpy) 3 ] 2+ were measured using 2DES in a series of nitrile and alcohol solvents. The ultrafast pulse used in this experiment allowed for the resolution of early-time dynamics, making it possible to observe the effect of solvent dependence on the relaxation of the bipyridyl radical anion. Simultaneously, resolution along the excitation frequency axis allowed for direct observation of the intersystem crossing dynamics, whose timescale was determined to be ∼20 fs. The direct resolution of previously unobserved features in [Fe(bpy) 3 ] 2+ shows the power of 2DES to provide new information on the excited-state dynamics in this class of photocatalysts.
For the 2DES experiments, the sample solutions were prepared by dissolving [Fe(bpy) 3 ] 2+ powder in spectroscopic grade solvents purchased from Millipore Sigma.
Two-dimensional electronic spectroscopy
The 2D measurements were performed in a fully non-collinear, BOXCARS phase-matching geometry. Full details on the setup used can be found in Son et al. 62 The laser spectrum (Fig. S1A†) has a spectral bandwidth (FWHM) of 106 nm (3300 cm−1) centered at 540 nm (18 510 cm−1). The pulse was compressed with two pairs of chirped mirrors (Ultrafast Innovations) and characterized by transient-grating frequency-resolved optical gating (TG-FROG) at the sample position using a 0.1 mm thick quartz cuvette (Starna) filled with acetone. 73 The FROG trace revealed a pulse duration of 12 fs (Fig. S1B†). The samples were measured in a 0.1 mm path length quartz cuvette. The optical density of the sample in each solvent was measured to be 0.21 (acetonitrile), 0.27 (butyronitrile), 0.28 (pentanenitrile), 0.25 (hexanenitrile), 0.27 (methanol), and 0.21 (1-propanol) per 0.1 mm at 535 nm. An excitation pulse energy of 68 nJ was utilized, which corresponds to 1.9 × 10 14 photons per cm 2 per pulse. The coherence time (τ), the time delay between the first two pulses, was sampled from −80 to 80 fs in 0.4 fs steps. The waiting time (T), the time delay between the second and third pulses, was sampled every 6.67 fs for T = 0-100 fs, every 25 fs for T = 100-2500 fs, every 500 fs for T = 2500-10 000 fs, every 5000 fs for T = 10 000-100 000 fs, and every 50 000 fs for T = 100 000-700 000 fs. The absolute-value 2D spectra were phased using the projection-slice theorem. 64 After collection of each dataset, the linear absorption spectrum of the sample was measured and compared with the one measured before the 2D experiment to confirm the absence of photodegradation. Each sample was then measured an additional three (for the nitrile series) to four (for the alcohol series) times from T = 0-3000 fs to ensure reproducibility of the data.
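The nonuniform waiting-time schedule quoted above (fine steps at early times, progressively coarser later) can be reproduced programmatically; a minimal sketch:

```python
import numpy as np

# Waiting-time grid matching the sampling schedule in the text (values in fs):
# (start, stop, step) for each segment, plus the final 700 000 fs point
segments = [
    (0.0,      100.0,    6.67),
    (100.0,    2500.0,   25.0),
    (2500.0,   10000.0,  500.0),
    (10000.0,  100000.0, 5000.0),
    (100000.0, 700000.0, 50000.0),
]
T = np.concatenate([np.arange(lo, hi, step) for lo, hi, step in segments]
                   + [np.array([700000.0])])

print(f"{T.size} waiting times, from {T[0]:.0f} fs to {T[-1]:.0f} fs")
```

This kind of logarithmic-in-spirit grid keeps the point count manageable while still resolving both the sub-100 fs rise and the sub-microsecond recovery.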
For the pentanenitrile and hexanenitrile experiments, an ND filter (0.2-0.5 OD) was added to the beam path after the sample to avoid detector saturation.
DFT calculations
Theoretical calculations on the [Fe(bpy) 3 ] 2+ complex were carried out with the Gaussian 16, Revision A.03 software package. 74 Geometry optimizations for the singlet and quintet states were performed with the TPSSh functional. 75 The 6-311G* basis set was employed for all atoms (C, H, N), 76,77 except for Fe, where the SDD basis set and its accompanying pseudopotential 78 were used. Solvent effects (acetonitrile) were included in the calculations via the polarizable continuum model (PCM). 79 Vibrational frequency analysis was performed to ensure that all optimized structures are true minima with no imaginary frequencies. Natural orbital (NO) analysis was carried out to confirm the metal-centered character of the optimized quintet state. 80 The absorption spectra were calculated with linear-response time-dependent DFT (TD-DFT) 81-83 at the same level of theory as described for optimization. The UV-vis spectra were computed at the optimized singlet ground-state structure utilizing the singlet reference state (the 30 lowest-energy singlet and 30 triplet excited states were calculated), as well as at the optimized quintet geometry utilizing the quintet reference state (30 lowest-energy excited states). The stick spectra were broadened using Lorentzian functions with a half-width-at-half-maximum (HWHM) of 0.12 eV for the singlet and quintet states.
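Broadening a TD-DFT stick spectrum with Lorentzians of fixed HWHM, as described above, can be sketched as follows. The stick energies and oscillator strengths below are placeholders, and each Lorentzian is peak-normalized here so a stick of strength f contributes height f at its center (other conventions normalize by area instead).

```python
import numpy as np

def broaden_lorentzian(energies_ev, strengths, grid_ev, hwhm_ev=0.12):
    """Sum peak-normalized Lorentzians of half-width hwhm_ev over an energy grid."""
    spectrum = np.zeros_like(grid_ev)
    for e0, f in zip(energies_ev, strengths):
        spectrum += f * hwhm_ev**2 / ((grid_ev - e0)**2 + hwhm_ev**2)
    return spectrum

# Placeholder sticks: (energy in eV, oscillator strength)
sticks_e = np.array([2.30, 2.75, 3.10])
sticks_f = np.array([0.05, 0.20, 0.08])
grid = np.linspace(1.5, 4.0, 2501)
spec = broaden_lorentzian(sticks_e, sticks_f, grid)
print(f"max of broadened spectrum: {spec.max():.3f}")
```

The broadened maximum slightly exceeds the largest stick because overlapping Lorentzian tails add constructively between nearby transitions.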
Fig. 1 (A) Absorption spectrum of [Fe(bpy) 3 ] 2+ in methanol with singlet and triplet metal-to-ligand charge-transfer bands ( 1 A 1 → 1 MLCT, red; 1 A 1 → 3 MLCT, purple). The molecular structure of [Fe(bpy) 3 ] 2+ is shown in the inset. (B) Calculated stick spectrum (top, black sticks) and broadened line spectrum (top, black line) obtained from TD-DFT calculations of [Fe(bpy) 3 ] 2+ in acetonitrile. Energy spectra of the singlet metal-centered (dashed red lines) and 1 MLCT (solid red lines) and triplet metal-centered (dashed purple lines) and 3 MLCT (solid purple lines) transitions, including those with zero oscillator strength, are shown below the plotted spectrum (see Tables S9 and S10† for full information on the calculated singlet and triplet states, respectively). (C) Phased 2D spectra of [Fe(bpy) 3 ] 2+ in methanol at T = 66 fs (left) and T = 200 fs (right). Positive intensity corresponds to ground-state bleach or stimulated emission signals and negative intensity corresponds to excited-state absorption signals. Plots are normalized to the maximum and minimum intensities of the T = 200 fs spectrum. Contour lines are drawn at 20% intervals. Arrows denote predominant peaks.
Fig. 3 (A) Electron density difference surface between the ground-state and initially-excited 1 MLCT-state densities (isovalue = 0.0004 electrons per a.u. 3 ). The 1 MLCT state depicted here corresponds to one of the doubly degenerate transitions at 21 832 cm−1 shown in Fig. 1B (see Fig. S24† for both states). Red regions indicate an increase in the excited-state electron density relative to the ground state (particle), while blue regions indicate a decrease (hole). The excited-state dipole moment (3.97 Debye) is depicted by an arrow pointing in the positive direction. (B) Molecular orbitals associated with excitation of the 1 MLCT state (see Fig. S25† for depictions of both the singlet and triplet MLCT transitions). (C) Plot of the bpy˙− decay lifetime in nitriles (black circles) and alcohols (gray triangles) as a function of carbon chain length (R). Error bars reflect the standard error from three replicates. The moment of inertia (I) of the nitrile solvents is also plotted (light-red dashed line).
Fig. 4 (A) Region of a representative [Fe(bpy) 3 ] 2+ 2DES spectrum in methanol centered on the ESA peak (blue arrow) at T = 200 fs. (B) Horizontal slice in detection frequency at ω t = 17 000 cm−1 (shown as a blue line in Fig. 4A) that shows the presence of the negative ESA peak. (C) Absorption spectrum obtained from TD-DFT calculations in acetonitrile utilizing the fully-optimized lowest-energy quintet state ( 5 T 2 ) of [Fe(bpy) 3 ] 2+ as a reference. The calculated stick spectrum (black sticks) along with the broadened line spectrum (half-width at half-maximum, HWHM = 968 cm−1 (0.12 eV); black line) is shown at the top. The energy spectrum of all calculated transitions (even those with zero oscillator strength) is displayed on the bottom. Blue lines represent transitions from the 5 T 2 state, while red lines represent singlet transitions from the 1 A 1 state. MLCT transitions are represented by solid lines while ligand-field transitions are represented by dashed lines. See Tables S11 and S9† for full information about the transitions for the 5 T 2 and 1 A 1 states, respectively. (D) Relative intensity of the negative peak compared to the positive peak in methanol, 1-propanol, acetonitrile, butyronitrile, pentanenitrile, and hexanenitrile. Error bars are standard errors from three replicates.
Fig. 5 (A) Reproduction of the positive region of the T = 200 fs 2DES spectrum in Fig. 1C. The purple arrow indicates the contribution from 1 MLCT → 3 MLCT intersystem crossing. The corresponding linear absorption spectrum is reproduced on the top and right of the 2DES spectrum for clarity. (B) Intensity trace of the 1 MLCT → 3 MLCT cross peak (dashed lines) over waiting time T along with a biexponential fit (solid lines) in methanol. See text for details. (C) Rise times of the 1 MLCT → 3 MLCT cross peak (purple) in methanol, 1-propanol, acetonitrile, butyronitrile, pentanenitrile, and hexanenitrile extracted from the fits of the intensity traces (Fig. S9 and S20†). Error bars are the standard error from three replicates.
Fig. 6 Proposed energy relaxation diagram derived from the 2DES experiments. The initially-excited MLCT states relax into a lower-lying excited ligand-field (LF) state on a solvent-dependent timescale. The timescales denoted represent the range reported in this study. Singlet-to-triplet intersystem crossing within the MLCT manifold occurs simultaneously on a timescale of ∼10-20 fs.
Table 1
Kinetics associated with the disappearance of the bpy radical anion
"year": 2023,
"sha1": "c6c407b1cc6b1f3ec9c218aff5dfd75b13664ab0",
"oa_license": "CCBYNC",
"oa_url": "https://pubs.rsc.org/en/content/articlepdf/2023/sc/d3sc02613b",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "79c75b1e43ffbab9b85d50203cb9c156e6e06bac",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Coupling-induced nonunitary and unitary scattering in anti-PT-symmetric non-Hermitian systems
We investigate the properties of two anti-parity-time (anti-PT)-symmetric four-site scattering centers. The anti-PT-symmetric scattering center may have imaginary couplings, real couplings, and real on-site potentials. The only difference between the two scattering centers is the coupling between the two central sites of the scattering center, which plays a crucial role in determining the parity of the anti-PT symmetry and significantly affects the scattering properties. For the imaginary coupling, the even-parity anti-PT-symmetric scattering center possesses nonunitary scattering and the difference between the reflection and transmission is unity; for the real coupling, the odd-parity anti-PT-symmetric scattering center possesses unitary scattering and the sum of the reflection and transmission is unity. The coupling-induced different scattering behaviors are verified in numerical simulations. Our findings reveal that a significant difference in the dynamics can be caused by a slight difference between two similar anti-PT-symmetric non-Hermitian scattering centers.
Quantum transport and wave propagation in non-Hermitian systems exhibit intriguing scattering behaviors; for example, coherent perfect absorption [38-41], unidirectional invisibility [12,42], unidirectional reflectionlessness [43], and unidirectional lasing at the spectral singularity [44,45]. In a non-Hermitian scattering center, scattering is usually nonunitary, and the non-Hermiticity can lead to different reflection and/or transmission for the wave injected in opposite directions. It is worth mentioning that PT symmetry plays an important role. The reflection-PT symmetry protects the transmission to be identical and the axial-PT symmetry protects the reflection to be identical for the wave injected in opposite directions [46].

* jinliang@nankai.edu.cn
The scattering properties of a system with PT symmetry are explicit, but anti-PT symmetry, as a counterpart of PT symmetry, has rarely been investigated [47-57]. Recently, imaginary coupling has been demonstrated in atomic vapors [48], electrical circuits [58], and optical waveguides [59]. In a coupled resonator array, a linking resonator with dissipation can be adiabatically eliminated to realize an imaginary coupling between two adjacent resonators [56,60]. Anti-PT symmetry ensures that the real part of the energy spectrum is either zero or opposite in pairs; in contrast, PT symmetry ensures that the imaginary part of the energy spectrum is either zero or opposite in pairs. The dissipation induces nontrivial topology in anti-PT-symmetric systems [56,57], which greatly differs from the unaffected topology in PT-symmetric systems [61].
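The spectral pairing implied by anti-PT symmetry, eigenvalues coming in (E, −E*) pairs so real parts are zero or opposite, can be checked numerically. The construction H = M − P M* P below is a generic way to generate an anti-PT-symmetric matrix for illustration; it is an assumption of this sketch, not the paper's H_c.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
P = np.fliplr(np.eye(n))                 # parity: spatial inversion (antidiagonal)

# Generic construction (illustrative assumption, not the paper's H_c):
# H = M - P conj(M) P satisfies P conj(H) P = -H for any complex M
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = M - P @ np.conj(M) @ P
assert np.allclose(P @ np.conj(H) @ P, -H)   # anti-PT symmetry holds

# If H v = E v, then H (P conj(v)) = -E* (P conj(v)): pairs (E, -E*)
evals = np.linalg.eigvals(H)
pair_err = max(np.min(np.abs(evals + np.conj(e))) for e in evals)
print(f"max pairing error |E_i - (-E_j*)|: {pair_err:.2e}")
```

The pairing error is at machine precision, confirming that the real parts of the spectrum appear in opposite-sign pairs while the imaginary parts are shared within each pair.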
In this paper, we investigate the properties of two anti-PT-symmetric four-site scattering centers. The non-Hermitian scattering centers are reciprocal and their Hamiltonians are transpose invariant. The two scattering centers possess different parities of anti-PT symmetry. The parities are fully determined by the central coupling between the two central sites of the scattering centers. If the central coupling is imaginary, the four-site scattering center has even-parity anti-PT symmetry, the scattering is nonunitary, and the reflection (R) and transmission (T) satisfy R − T = 1. If the central coupling is real, the four-site scattering center has odd-parity anti-PT symmetry, the scattering is unitary, and the reflection and transmission satisfy R + T = 1 even though the scattering center is non-Hermitian.
The remainder of this paper is organized as follows. In Sec. II, we present the even-parity anti-PT-symmetric four-site scattering center with all couplings imaginary and demonstrate the nonunitary scattering dynamics. In Sec. III, we demonstrate the unitary scattering in the odd-parity anti-PT-symmetric four-site scattering center obtained by replacing the imaginary coupling between the two central sites with a real coupling. In Sec. IV, we analyze the influence of the coupling between the two central sites, which determines the parity of the anti-PT symmetry. We perform numerical simulations to elucidate our findings and exhibit the dynamic features of nonunitary scattering at the spectral singularity. In Sec. V, we discuss the experimental implementation of the proposed anti-PT-symmetric systems in coupled resonator optical waveguides (CROWs). Finally, we summarize the results and conclude in Sec. VI.
II. NONUNITARY SCATTERING
The discrete lattice model can characterize continuum systems with a periodic potential in the Wannier representation under the tight-binding approximation [62]. In this section, we study the properties of an anti-PT-symmetric non-Hermitian scattering center, which is a four-site scattering center with imaginary coupling. The imaginary coupling can be realized through dissipation in non-Hermitian systems [56]. The schematic of the anti-PT-symmetric non-Hermitian system is shown in Fig. 1. The Hamiltonian reads
H lead denotes the input and output leads, which consist of two semi-infinite chains with uniform coupling strength J, where | j l is the basis of the lead sites and represents the single-excitation subspace. The connection Hamiltonian couples the leads to |1 c and |4 c , the sites of the scattering center H c that are connected to the input and output leads. The scattering center is embedded in a uniform one-dimensional chain and consists of four sites; its Hamiltonian contains the real on-site potential V and the coupling strength κ, where | j c ( j = 1, 2, 3, 4) is the basis of the center sites, and H c can be written in matrix form. The parity operator P is defined as the operation of spatial inversion in this model. The time-reversal operator T is the complex-conjugation operation defined by T iT −1 = −i. Under these definitions, the scattering center H c possesses anti-PT symmetry, satisfying (PT )H c (PT ) −1 = −H c . Now we calculate the reflection and transmission coefficients of H c for the left and right inputs, respectively. We denote the wave function for the left input as ψ k L and that for the right input as ψ k R on the lead sites | j l , where k is the dimensionless wave vector of the input; the incoming plane wave is reflected and transmitted by the scattering center, with r L (t L ) and r R (t R ) the reflection (transmission) coefficients for the left and right inputs, respectively. The lead of the model is a uniformly coupled tight-binding chain, so we obtain the dispersion relation E = −2J cos k from the Schrödinger equations for the lead Hamiltonian H lead . Therefore, the Schrödinger equations for the scattering center H c follow. For the left input, we set the wave functions as ψ k L (−1) = e −2ik + r L e 2ik and ψ k L (1) = t L e 2ik in Eq.
(7); from the Schrödinger equations at the sites | − 1 l and |1 l we can obtain ψ k c (1) = e −ik + r L e ik and ψ k c (4) = t L e ik . For the right input, we set the wave functions ψ k R (−1) = t R e 2ik and ψ k R (1) = e −2ik + r R e 2ik , which give ψ k c (1) = t R e ik and ψ k c (4) = e −ik + r R e ik . Substituting the wave functions into the Schrödinger equations, we obtain the reflection coefficients (14) and the transmission coefficients. Notably, the reflection and transmission are both reciprocal, i.e., t L = t R = t, |r L | = |r R | = |r|. Furthermore, we obtain |r| 2 − |t| 2 = R − T = 1: the scattering is nonunitary and the excitation intensity is not conserved. We plot the reflection and transmission in Fig. 2, which confirms the conclusion R − T = 1.
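Though the closed-form coefficients are omitted here, the left-input scattering problem reduces to a small linear system that can be solved numerically. The sketch below is our own reconstruction under stated assumptions: the lead dispersion is taken as E = 2J cos k (a sign convention opposite to the text's E = −2J cos k, which does not affect |r| or |t|), the leads attach to sites |1 c and |4 c with the same imaginary strength iκ as the internal bonds, and the detunings +V and −V sit on sites |2 c and |3 c .

```python
import numpy as np

def scatter_left(k, J=1.0, kappa=1.5, V=0.4, central=None):
    """Left-input scattering off the four-site anti-PT center.

    Ansatz (our conventions): lead j <= -1: psi_j = e^{ikj} + r e^{-ikj};
    lead j >= +1: psi_j = t e^{ikj}; E = 2 J cos k.
    Unknowns x = [r, t, p1, p2, p3, p4] with p1..p4 the center amplitudes.
    """
    E = 2 * J * np.cos(k)
    ik = 1j * kappa
    g = ik if central is None else central   # central |2>-|3> bond
    e = np.exp(1j * k)

    A = np.array([
        # lead j=-1: E(e^{-ik}+r e^{ik}) = J(e^{-2ik}+r e^{2ik}) + ik*p1
        [E*e - J*e**2, 0,            -ik, 0,     0,     0  ],
        # center 1:  E p1 = ik*(e^{-ik}+r e^{ik}) + ik*p2
        [-ik*e,        0,             E,  -ik,   0,     0  ],
        # center 2:  E p2 = ik*p1 + V p2 + g p3
        [0,            0,            -ik, E - V, -g,    0  ],
        # center 3:  E p3 = g p2 - V p3 + ik*p4
        [0,            0,             0,  -g,    E + V, -ik],
        # center 4:  E p4 = ik*p3 + ik*t e^{ik}
        [0,           -ik*e,          0,  0,    -ik,    E  ],
        # lead j=+1: E t e^{ik} = ik*p4 + J t e^{2ik}
        [0,            E*e - J*e**2,  0,  0,     0,    -ik ],
    ], dtype=complex)
    b = np.array([-E/e + J/e**2, ik/e, 0, 0, 0, 0], dtype=complex)
    r, t = np.linalg.solve(A, b)[:2]
    return r, t

# Even-parity center (imaginary central bond): nonunitary, R - T = 1.
r, t = scatter_left(np.pi / 2)
print(abs(r)**2 - abs(t)**2)   # ~ 1.0
```

Under these assumptions the solver reproduces the nonunitary identity R − T = 1; passing a real value for `central` turns the same system unitary, as discussed for the odd-parity model.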
III. UNITARY SCATTERING
In the previous section, we elaborated on a four-site scattering center with imaginary couplings, anti-PT symmetry, and nonunitary scattering behavior. In this section, we investigate an alternative anti-PT-symmetric system, which is similar to the previous model. The only difference is that the coupling between the central two sites is replaced by a real coupling, as shown in Fig. 3. In the following, we show that the real coupling leads to unitary scattering. The Hamiltonian H of this model differs only slightly from the previous one: the two scattering centers differ solely in the coupling between sites |2 c and |3 c , which is a non-Hermitian imaginary coupling in the first model and a Hermitian real coupling here. Even though the coupling between sites |2 c and |3 c is changed, the scattering center still possesses anti-PT symmetry, satisfying (P T )H c (P T ) −1 = −H c , where the parity operator P is redefined accordingly. Notably, P is a generalized parity operator, and a phase difference e iπ exists between |2 c and |3 c (as well as between |1 c and |4 c ) after the spatial inversion operation P .
To reveal the influence of the central coupling, we calculate the reflection and transmission coefficients of this scattering center. We still use the wave functions in the form of Eqs. (7) and (8). The Schrödinger equations for this scattering center are identical to those in Eqs. (9) and (12), while Eqs. (10) and (11) are modified accordingly. We then obtain the reflection coefficients (24) and the transmission coefficients. Notably, the reflection and transmission in this case are reciprocal, i.e., t L = t R = t, |r L | = |r R | = |r|. However, the reflection and transmission satisfy R + T = 1, which differs from the scattering coefficients of the scattering center in the previous section. The scattering dynamics in this case is unitary, similar to the dynamics in a Hermitian scattering center [63]. The reflection and transmission are plotted in Fig. 4; the scattering is reciprocal and the excitation intensity is conserved.
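At the band center k = π/2 (where E = 0) the algebra collapses to compact closed forms. The expressions below are our own derivation under our model conventions (leads attached to sites |1 c and |4 c with strength iκ, detunings ±V on sites |2 c and |3 c ); they are a sketch, not reproduced from the text's equations.

```python
import numpy as np

def rt_odd_k90(J=1.0, kappa=1.5, V=0.4):
    """r, t at k = pi/2 (E = 0) for the odd-parity center (real central
    bond kappa), from closed forms we obtain with our conventions."""
    X = J**2 + V**2 + kappa**2
    t = -2j * J * kappa / X
    r = (kappa**2 - (J + 1j * V)**2) / X
    return r, t

r, t = rt_odd_k90()
print(abs(r)**2 + abs(t)**2)   # ~ 1.0 (unitary despite non-Hermiticity)
```

The unit sum R + T = 1 holds identically in these expressions: the denominator X² equals |κ² − (J + iV)²|² + 4J²κ² term by term.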
Both anti-PT-symmetric scattering systems under consideration are transpose invariant and time-reversal symmetric; these properties protect the symmetric transmission and the symmetric reflection, respectively [62,64,65].
IV. THE EFFECT OF COUPLING
From the scattering properties of the two models above, we notice that the coupling between the central two sites of the four-site scattering center plays a crucial role. First, the system remains anti-PT symmetric after altering the coupling between the central two sites. For the non-Hermitian imaginary coupling of the structure in Fig. 1, the center supports nonunitary scattering with R − T = 1. If the non-Hermitian imaginary coupling iκ between |2 c and |3 c is replaced by the Hermitian real coupling κ, as illustrated in Fig. 3, the scattering center supports unitary scattering with R + T = 1, similar to Hermitian systems, even though the scattering center is still non-Hermitian. This indicates that non-Hermitian and Hermitian couplings between |2 c and |3 c significantly affect the scattering properties of anti-PT-symmetric structures and induce nonunitary and unitary scattering, respectively.
We emphasize that the essential point behind the different scattering behaviors exhibited in the two models is the parity of the anti-PT symmetry, set by the central coupling iκ or κ. The two types of anti-PT symmetry are distinguished by the parities of the PT operators. From Eq. (6), we notice (PT ) 2 = I, where I is the identity matrix. Thus, the scattering center H c shown in Fig. 1 has even-parity anti-PT symmetry because (PT )H c (PT ) −1 = −H c . From Eq. (19), we note (P T ) 2 = −I, so the scattering center shown in Fig. 3 has odd-parity anti-PT symmetry because (P T )H c (P T ) −1 = −H c . The even-parity anti-PT-symmetric scattering center has nonunitary scattering, and the odd-parity anti-PT-symmetric scattering center has unitary scattering. Now we perform numerical simulations with a Gaussian wave packet injection. The initial excitation is a normalized Gaussian wave packet centered at the site N c , where the normalization factor is Ω 0 = Σ j e −( j−N c ) 2 /σ 2 , k c is the wave vector of the Gaussian wave packet, and the half width of the Gaussian wave packet is 2 √ ln 2 σ and characterizes its size. The time evolution of the Gaussian wave packet is |φ(t )⟩ = e −iHt |φ(0)⟩, where H is the Hamiltonian of a 100-site lattice including the four-site scattering center and two finite uniformly coupled leads connected to the scattering center. The wave propagation velocity v = dE /dk is 2J sin k c . As both Hamiltonians are non-Hermitian, the evolution of the Gaussian wave packet under e −iHt is nonunitary. In Fig. 5, a Gaussian wave packet is initially centered at the site N c = −25. The wave vector of the Gaussian wave packet is k c = π/2, and the velocity of the wave propagation is v = 2J. After the incident wave packet is scattered by the scattering center embedded around the site j = 0, it splits into a reflected wave and a transmitted wave that propagate in opposite directions.
The intensities |φ(t, j)| 2 of the reflected and transmitted waves represent the reflection R and the transmission T , respectively. For the scattering center shown in Fig. 5(a), the intensity difference between the reflected and transmitted waves is unity; for the scattering center shown in Fig. 5(b), the intensities of the reflected and transmitted waves add up to unity. Therefore, the numerical simulations verify our analytical results R − T = 1 for the even-parity anti-PT-symmetric scattering center and R + T = 1 for the odd-parity anti-PT-symmetric scattering center.
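The free propagation speed quoted above (v = 2J sin k c ) is easy to check numerically before adding any scattering center. The sketch below is illustrative (with our sign convention E = 2J cos k, a rightward packet carries k c = −π/2; all parameter values are arbitrary choices):

```python
import numpy as np
from scipy.linalg import expm

J, N, Nc, sigma = 1.0, 201, 60, 8.0
kc = -np.pi / 2          # band center: v = |2 J sin kc| = 2J, minimal spreading
j = np.arange(N)
phi0 = np.exp(-(j - Nc)**2 / sigma**2) * np.exp(1j * kc * j)
phi0 /= np.linalg.norm(phi0)          # normalized Gaussian wave packet

H = np.zeros((N, N))
for m in range(N - 1):
    H[m, m + 1] = H[m + 1, m] = J     # uniform (Hermitian) lead, no center

t_ev = 30.0
phi = expm(-1j * H * t_ev) @ phi0     # unitary free evolution
center = np.sum(j * np.abs(phi)**2)   # mean position after evolution
print((center - Nc) / t_ev)           # ~ 2.0 = 2 J sin|kc|
```

At the band center the group-velocity dispersion vanishes, so the packet travels ballistically at 2J with negligible spreading, matching the v = 2J used in the Fig. 5 simulation.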
In the following, we explore the dynamics of nonunitary scattering and show the dynamic features at the spectral singularity. Notably, the spectral singularity cannot exist in the unitary scattering case, where the scattering exhibits Hermitian behavior. According to Eqs. (13)-(15), the divergence of R and T indicates the existence of the spectral singularity. We note that it occurs only at the point κ 2 − V 2 = J 2 for states with k = ±π/2; κ/J > 1 is a necessary condition for the existence of the spectral singularity. The physics of the two corresponding wave functions is clear, representing self-sustained emission and complete absorption of two oppositely propagating waves [23,66]. The intriguing feature of the spectral singularity is that two degenerate states for the incidences from the left and right merge into one state. We perform numerical simulations of the scattering dynamics at the spectral singularity. First, we consider a general solution for the wave injected in the left lead, with 1 < j ⩽ N. The reflection and transmission go to infinity for the momentum k = π/2. The solution corresponds to the wave emission dynamics of the initial state; the divergence of r and t should manifest in the dynamics of the wave packet. Second, the solution ψ −π/2 L,R corresponds to the dynamics of two counter-propagating wave packets initially centered at ±N c . The profiles of the evolved states |φ(t )⟩ are plotted in Fig. 6.
In Fig. 6(a), an incident wave packet stimulates two counter-propagating emission waves with the amplitude ratio 1 : (iV − J )/κ. As |(iV − J )/κ| 2 = 1 at the spectral singularity, the emission waves have the same amplitude. The intensities of the reflected and transmitted waves increase linearly in time, which is a dynamical demonstration of the infinite reflection and transmission coefficients. Notably, the intensity difference between the reflected and transmitted waves remains unity at the spectral singularity. In Fig. 6(b), the complete absorption dynamics is shown: two incident waves with matching amplitudes and relative phases are fully absorbed after scattering.
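The divergence at the spectral singularity can be made concrete with band-center closed forms we obtain for this model (our conventions: leads attached with strength iκ, detunings ±V on the central sites; these expressions are an illustration, not the text's exact Eqs. (13)-(15)):

```python
import numpy as np

def rt_even_k90(J=1.0, kappa=1.5, V=0.4):
    """r, t at k = pi/2 for the even-parity (imaginary central bond) center,
    from closed forms we obtain with our conventions; the denominator
    vanishes at the spectral singularity kappa^2 - V^2 = J^2."""
    D = kappa**2 - V**2 - J**2
    t = -2 * J * kappa / D
    r = (kappa**2 + (J + 1j * V)**2) / D
    return r, t

V = 0.4
k_star = np.sqrt(1.0 + V**2)          # singular coupling for J = 1
for kap in (1.5, k_star + 1e-4):
    r, t = rt_even_k90(kappa=kap, V=V)
    print(abs(t), abs(r)**2 - abs(t)**2)   # |t| blows up near k_star; R - T stays 1
```

Even as R and T individually diverge approaching the singularity, their difference remains pinned at unity, which is the closed-form counterpart of the linear intensity growth seen in Fig. 6(a).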
V. EXPERIMENTAL IMPLEMENTATION
CROWs are composed of primary resonators and linking resonators. All the resonators are evanescently coupled together, and the primary resonators are indirectly coupled through the linking resonators. The primary resonators have an identical resonant frequency ω 0 except for the resonators |2 c and |3 c , which have frequencies ω 0 + V and ω 0 − V , respectively. The dynamics in the proposed CROWs are described by the scattering Hamiltonians discussed in the previous sections, as schematically illustrated in Figs. 1 and 3. Notably, each dot in the schematics represents a primary resonator, and each line between two neighboring primary resonators represents the effective coupling induced by the linking resonator after adiabatic elimination of the linking resonator. The effective coupling induced by an off-resonance passive linking resonator without dissipation/gain is real and Hermitian [67]; in contrast, the effective coupling induced by an on-resonance linking resonator with dissipation/gain is imaginary and non-Hermitian [56,60]. In the two situations, the effective coupling strength is approximately equal to the product of the two couplings between the linking resonator and its two adjacent primary resonators divided by the off-resonant frequency or the dissipation/gain rate of the linking resonator. The dynamics in the CROWs are, respectively, described by the Hamiltonians of the two anti-PT-symmetric scattering systems.
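The adiabatic-elimination estimate — effective coupling of magnitude g²/|Δ|, real for an off-resonant link and imaginary for a dissipative one — can be checked on a minimal three-resonator model. The sketch below is illustrative; the coupling g and the detuning/loss values are arbitrary choices:

```python
import numpy as np

g = 0.1   # coupling between each primary resonator and the link (illustrative)
for label, delta in [("off-resonant link (real J_eff)", 5.0),
                     ("dissipative link (imaginary J_eff)", -5.0j)]:
    # primary - link - primary; `delta` is the link's detuning or -i*(loss rate)
    H = np.array([[0, g, 0],
                  [g, delta, g],
                  [0, g, 0]], dtype=complex)
    ev = np.linalg.eigvals(H)
    ev = ev[np.argsort(np.abs(ev))]      # the two primary-like modes come first
    J_eff = (ev[1] - ev[0]) / 2          # half-splitting of the primary doublet
    print(label, J_eff, "| g^2/|delta| =", g**2 / abs(delta))
```

In both cases the half-splitting of the two primary-like modes has magnitude g²/|Δ| to second order; the detuned link leaves it (nearly) real, while the lossy link makes it (nearly) purely imaginary, which is exactly the mechanism used to engineer the iκ bonds of the scattering centers.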
VI. CONCLUSION
We have investigated the scattering properties of two anti-PT-symmetric non-Hermitian four-site scattering centers. The even-parity anti-PT-symmetric scattering center possesses nonunitary scattering with R − T = 1; in contrast, the odd-parity anti-PT-symmetric scattering center possesses unitary scattering with R + T = 1. The significantly different scattering dynamics is solely induced by the coupling between the central two sites of the scattering center, which determines the parity of the anti-PT symmetry. We emphasize that our conclusions remain valid for scattering centers of even larger size as long as the structures of the two scattering centers are kept unchanged, with their difference being the couplings iκ and κ between the central two sites. Our findings deepen the understanding of anti-PT symmetry and its application in non-Hermitian physics, and the proposed systems can be verified on many experimental platforms, including coupled waveguides, photonic crystals, and electronic circuits.
Estimated prevalence of Hepatitis C Virus infection in Canada, 2011
Background: Prevalence estimates contribute to our understanding of the magnitude of a particular health condition and to planning appropriate public health interventions. Objective: To estimate the prevalence of chronic Hepatitis C virus (HCV) infection, anti-HCV-positive status (anti-HCV) and the proportion of undiagnosed HCV infections in Canada. Methods: A combination of back-calculation and workbook methods was used. The back-calculation method estimated prevalent chronic HCV infection and the proportion undiagnosed using the Canadian Cancer Registry's data on hepatocellular carcinoma reported between 1992 and 2008 and the Canadian Notifiable Disease Surveillance System's data on HCV cases reported between 1991 and 2009 in a Markov multistate disease progression model with parameters adjusted to Canada. The workbook method divided the total population of Canada into population subsets and developed estimates of population size and anti-HCV prevalence for each. Sub-population size estimates were multiplied by anti-HCV prevalence measures to calculate the prevalence of anti-HCV by sub-population. A measure of spontaneous clearance was used to estimate the number of persons with chronic HCV from estimates of the number of anti-HCV-positive persons. Results: The back-calculation method estimated the prevalence of chronic HCV infection at 0.64% and the proportion of undiagnosed chronic HCV infection at 44% in 2011. The workbook method estimated the anti-HCV prevalence at 0.96% (plausibility range: 0.61% to 1.34%) and chronic HCV infection at 0.71% (0.45%–0.99%). Interpretation: By combining mid-point estimates from both methods, it is estimated that between 0.64% and 0.71% of the overall Canadian population was living with chronic HCV infection in 2011 and that 44% of these individuals were undiagnosed.
Introduction
Chronic Hepatitis C virus (HCV) infection affects an estimated 3% of the world's population (1). Approximately three out of four persons with acute HCV infection will not clear the virus spontaneously within six months and will develop chronic HCV infection with an array of long-term sequelae (2). The diagnosis of HCV infection is usually based on identifying antibodies to HCV (anti-HCV) and/or the viral material (i.e., HCV ribonucleic acid) (3) alongside certain liver function enzyme tests (4). A positive anti-HCV test result indicates past or current HCV infection since the HCV antibodies may remain after the virus has cleared. A positive HCV-RNA test suggests a current infection, which may be acute or chronic, with chronic HCV infection being defined as a positive HCV-RNA test for more than six months since the presumed infection date.
National estimates of prevalence contribute to our understanding of the magnitude of a particular condition and can help in planning appropriate public health interventions (5). The prevalence of anti-HCV-positive persons in Canada was estimated at 0.78% of the total Canadian population in 2007, of whom 21% were considered not diagnosed at the time (6). Based on the data from Cycles 1 and 2 of the Canadian Health Measures Survey (2007–2011), Rotermann and colleagues estimate anti-HCV seroprevalence in the range of 0.3% to 0.9% with a mid-estimate of 0.5%. Approximately 70% of persons who tested anti-HCV-positive reported that they did not have Hepatitis C (7).
However, the Canadian Health Measures Survey did not cover non-household populations with a higher HCV burden (8) (e.g., prison inmates, homeless persons and residents of health care facilities) and, with a response rate of just above 52% (7), the Survey may have under-sampled household populations that were highly affected by HCV (e.g., people who use injection drugs (IDU), chronically ill persons on haemodialysis and immigrants who do not speak English or French). Therefore, the analysis by Rotermann and colleagues (7) likely underestimated the true anti-HCV seroprevalence in Canada.
Given the length of time since the last HCV prevalence estimates were developed in Canada (6) and the potential limitations of the analysis by Rotermann and colleagues (7), this review sought to update estimates of the prevalence of chronic HCV infection, anti-HCV-positive persons and the proportion of undiagnosed cases of chronic HCV infection in Canada.
Methods
Estimates of the prevalence of chronic HCV infection and anti-HCV-positive persons and the proportion of undiagnosed cases of chronic HCV infection in Canada were developed using a combination of back-calculation (9) and workbook (10) methods.
Back-calculation uses the observed occurrence of subsequent events to make inferences about the incidence of the initiating events in the past that led to them. This method was recently adopted to estimate the incidence of HCV infection in France (11) and England (12), where reported data on HCV-associated hepatocellular carcinoma and a Markov multi-state disease progression model were used to back-calculate the historical HCV incidence. We used a back-calculation method with the Canadian Cancer Registry's data on hepatocellular carcinoma reported between 1992 and 2008 and a Markov multi-state disease progression model with parameters adjusted to Canada (13,14) to estimate chronic HCV infection prevalence and the proportion of undiagnosed HCV infections in 2011. The prevalence of chronic Hepatitis C per 100 population was estimated with data stratified by 5-year birth cohort according to the date of birth. The overall prevalence was estimated using the same model with all birth cohorts combined. Another back-calculation process with data from the Canadian Notifiable Disease Surveillance System (CNDSS) on HCV cases reported between 1991 and 2009 was run in parallel to ensure the reliability of the estimates of the former. As record-level HCV data from the CNDSS were only available for six Canadian provinces and territories that account for 88% of the Canadian population, estimates from the back-calculation were extrapolated to the whole Canadian population.
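The back-calculation principle can be illustrated in miniature: if the delay distribution from infection to hepatocellular carcinoma is known, observed HCC counts are a causal convolution of past incidence, so incidence can be recovered by deconvolution. The sketch below is purely schematic — the incidence curve and delay kernel are invented placeholders, not the Markov model or Canadian parameters used in this analysis:

```python
import numpy as np

years = 40

# Placeholder "true" HCV incidence per year (unknown in practice).
true_inc = 1000 * np.exp(-0.5 * ((np.arange(years) - 15) / 8.0) ** 2)

# Placeholder delay kernel: P(HCC diagnosed d years after infection).
d = np.arange(years)
kernel = 0.01 * np.exp(-0.5 * ((d - 25) / 6.0) ** 2)
kernel[0] = 0.05   # arbitrary floor so the toy triangular system is invertible

# Expected HCC counts are the causal convolution of incidence with the kernel.
A = np.zeros((years, years))
for i in range(years):
    for j in range(i + 1):
        A[i, j] = kernel[i - j]
observed_hcc = A @ true_inc

# Back-calculation: deconvolve the observed series to recover incidence.
recovered = np.linalg.solve(A, observed_hcc)
print(np.max(np.abs(recovered - true_inc)))   # ~ 0 in this noise-free toy
```

Real back-calculation must additionally handle noisy counts, right-censoring and uncertainty in the progression model, which is why the paper embeds the idea in a calibrated Markov multistate model rather than a direct matrix inversion.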
A workbook method was used to estimate the number of prevalent and undiagnosed anti-HCV-positive persons in Canada in 2011. Using this method, population size estimates were multiplied by anti-HCV seroprevalence measures (anti-HCV) to produce estimates of prevalent anti-HCV-positive persons. A value of 26% was used to describe the population's spontaneous HCV clearance and to estimate the chronic HCV infection prevalence from an estimate of anti-HCV-positive persons (15). Then, estimates of prevalent anti-HCV-positive persons were multiplied by the proportion of undiagnosed chronic HCV infection from the back-calculation method to produce the numbers of potentially undiagnosed persons.
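The workbook arithmetic itself is simple multiplication and summation. The subgroup names, sizes and prevalence values below are invented placeholders used only to show the mechanics (the actual inputs are in Table 1); the 26% clearance and 44% undiagnosed factors are the ones quoted in the text:

```python
# Hypothetical workbook: (population size, anti-HCV prevalence) per subgroup.
subgroups = {
    "general household 14-79": (20_000_000, 0.002),
    "foreign-born":            (6_000_000, 0.019),
    "current/former IDU":      (300_000, 0.50),
    "inmates":                 (40_000, 0.24),
}

anti_hcv = sum(n * p for n, p in subgroups.values())   # anti-HCV-positive persons
chronic = anti_hcv * (1 - 0.26)    # 26% spontaneous clearance (from the text)
undiagnosed = chronic * 0.44       # 44% undiagnosed (back-calculation result)
print(round(anti_hcv), round(chronic), round(undiagnosed))
# -> 313600 232064 102108
```

Plausibility ranges are produced the same way, by running the multiplication with the lower- and upper-bound size and prevalence inputs for each subgroup.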
The total population of Canada was divided into population subsets, and size and anti-HCV prevalence estimates were developed for each subset population. Population size estimates were developed using data from published literature and a custom tabulation of the Cycle 1 and 2 data from the Canadian Health Measures Survey (Unpublished data. Public Health Agency of Canada, available from the author upon request). Sources are referenced in Table 1.
The MEDLINE, EMBASE, GLOBAL HEALTH, SCOPUS and PROQUEST PUBLIC HEALTH databases were searched for anti-HCV prevalence measures in the populations of interest in Canada and other developed countries through relevant papers published from 2000–2013 in English or French. Bibliographies of identified studies were also searched for relevant articles in addition to the electronic resources of Statistics Canada, Citizenship and Immigration Canada, Correctional Service Canada, the Public Health Agency of Canada (PHAC) and the Internet. Requests for information were sent to Canadian experts working in the fields of migration health, prison studies, substance abuse, mathematical modelling and Hepatitis C epidemiology.
During the review, anti-HCV seroprevalence measures were ranked as "under-estimates", "over-estimates" or "appropriate estimates" based on a subjective assessment of how representative the study sample was of the population of interest from the description of the study design in the methods section of the reviewed paper. The study outcomes assessed as likely over- or under-estimates were used to bound plausibility ranges of the appropriate estimates (10).
A number of population groups were assessed to have appropriately representative studies of anti-HCV prevalence, including foreign-born persons aged 14–79 years old, current and former injection drug users, homeless persons who do not use injection drugs, federal and provincial inmates and residents of long-term healthcare facilities. For these groups, prevalence estimates from the group of studies ranked as "appropriate estimates" were chosen if they were from cohort studies or systematic reviews or, in the absence of such, from studies with more accurate geographical representation. For foreign-born persons, a range of anti-HCV prevalence measures at 1.90% (95% CI: 1.30–2.60) (suggested by Greenaway and colleagues) was used (16). For current injection drug users, including those of Aboriginal origin and homeless people who use injection drugs, a range of anti-HCV seroprevalence at 63%–69% was used (Unpublished data. I-Track: Enhanced Surveillance of Risk Behaviours among People who Inject Drugs, Phases 1-3. PHAC. 2013). For former injection drug users, a range of anti-HCV seroprevalence at 28.5% (95% CI: 10.8–46.3) from a custom tabulation of the Cycle 1 and 2 data from the Canadian Health Measures Survey was used (Unpublished data, PHAC 2013). For homeless people who do not use injection drugs, anti-HCV seroprevalence measures in the range of 0.8% (Unpublished data. E-SYS: Enhanced street youth surveillance system, Phase 6 (2009-2011). PHAC, 2013) to 3.70% were used (17). For inmates of the federal penitentiaries, a point estimate of anti-HCV prevalence at 24.0% provided by the Correctional Service Canada for 2011 (Unpublished data. Correctional Service Canada, 2013) and a range of measures from a publication by De and colleagues at 18.10%–37.10% were used (18). For inmates of the provincial penitentiaries, a range of published measures of anti-HCV seroprevalence between 18.5% (19) and 28.0% was used (20). For residents of long-term care facilities, a range of published measures of anti-HCV seroprevalence between 1.4% (21) and 4.5% was used (20).
For the remaining population groups (including Aboriginal people who do not use injection drugs and Canadian-born persons of non-Aboriginal ancestry aged 0–13 and 80+ years old who do not use injection drugs), mid-point estimates and plausibility ranges were derived using indirect evidence on anti-HCV prevalence measures as compared to the anti-HCV prevalence estimates in populations with reliable estimates, such as rate ratios or higher or lower position in relation to anti-HCV prevalence rates measured in the comparison populations. Thus, the lower bound of the prevalence estimate (0.03%) for 14–44 year olds from a custom tabulation of the Cycle 1 and 2 data from the Canadian Health Measures Survey (Unpublished data. PHAC, 2013) was used as the upper bound for children 0 to 13 years old, while the lower bound was assigned as 0.01% and the mid-point estimated as the average of the two. For senior residents aged 80+ years old, the mid-point and range of prevalence in 14–44 year olds from a custom tabulation of the Cycle 1 and 2 data from the Canadian Health Measures Survey (Unpublished data. PHAC, 2013) (0.16%; range 0.03% to 0.29%) were assigned, with the understanding that the value should be lower than the prevalence in 45–79 year olds (0.93%; range 0.33%–1.53%) but higher than that in the age group of 0–13 years old (0.02%; range 0.01%–0.03%), as is evidenced from a custom tabulation of the Cycle 1 and 2 data from the Canadian Health Measures Survey (Unpublished data. PHAC, 2013).
For Canadian-born persons of non-Aboriginal ancestry who do not use injection drugs aged 14–79 years old, anti-HCV prevalence measures from a custom tabulation of the Cycle 1 and 2 data from the Canadian Health Measures Survey (Unpublished data. PHAC, 2013) (0.20%; 95% CI: 0.10–0.30%) were used. For Aboriginal persons who do not use injection drugs, a multiple of 2.5 (a coefficient found in the study of Uhanova and colleagues (23)) times the seroprevalence rate from a custom tabulation of the Cycle 1 and 2 data from the Canadian Health Measures Survey (Unpublished data, PHAC, 2013) in Canadian-born persons of non-Aboriginal ancestry who do not use injection drugs aged 14–79 years old was used. Due to very limited data on anti-HCV-positive status awareness, a point estimate of undiagnosed chronic HCV infection from the back-calculation method was applied to the point estimates of persons with chronic HCV infection from the workbook and back-calculation methods to calculate the range of undiagnosed persons with chronic HCV infection.
Results
The overall prevalence of chronic HCV infection (as estimated from the back-calculation) was 0.64%, or 220,697 persons, in 2011. In the previous 20 years, the country's prevalence of chronic HCV infection had remained in the range of 0.6% to 0.7% (Figure 1). The highest prevalence of chronic HCV infection occurred in the birth cohort 1955–59 (1.5%), followed by the birth cohorts 1950–54 (1.25%), 1960–64 (1.2%), 1965–69 (1.1%) and 1970–74 (0.8%). The prevalence of chronic HCV infection among those born before 1949 has declined from approximately 1% to below the overall prevalence rate in the past 20 years. The prevalence of chronic HCV infection among those born after 1965 has increased from below the overall prevalence rate to above it. The prevalence of chronic HCV infection in those born between 1950 and 1964 has remained above the overall prevalence rate throughout the 20-year period. The back-calculation method also estimated that 44% of those with chronic HCV infection were not diagnosed in 2011.
Figure 1: Estimated prevalence of chronic HCV infection (per 100 population) in Canada from a back-calculation model. Notes: HCV = Hepatitis C virus; CHC = chronic Hepatitis C. Solid lines reflect the birth-cohort-specific prevalence when it was above the overall prevalence; dotted lines are used when the prevalence was below the overall estimate.
The workbook method estimated the anti-HCV prevalence in Canada in 2011 at 0.96%, with a plausibility range of 0.61% to 1.34% (Table 1). This range translates into an estimated 332,414 persons (plausibility range: 210,753 to 461,517 persons) who were anti-HCV-positive in 2011 (Table 1). After adjusting for an HCV clearance rate of 26%, the workbook method estimated that 0.71% (plausibility range: 0.45%–0.99%), or 245,987 persons (plausibility range: 155,957 to 341,522 persons), had not cleared the virus and were considered to be living with chronic HCV infection in 2011.
Discussion
By combining mid-points from both methods, between 0.64% and 0.71% of the overall Canadian population (from 220,697 to 245,987 persons) were living with chronic HCV infection in Canada in 2011, and 44% of these individuals (ranging from 97,107 to 108,234) were likely undiagnosed. The estimated number of anti-HCV-positive persons was 332,414 (about 1% of the Canadian population), with a plausibility range from 210,753 to 461,517.
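As an arithmetic consistency check, the quoted clearance and undiagnosed factors reproduce these headline figures to within rounding:

```python
anti_hcv = 332_414            # workbook estimate of anti-HCV-positive persons
clearance = 0.26              # spontaneous clearance used in the paper
undiag_frac = 0.44            # proportion undiagnosed from back-calculation

chronic_workbook = anti_hcv * (1 - clearance)
print(round(chronic_workbook))          # 245986 -> reported as 245,987
print(round(220_697 * undiag_frac))     # 97107  (reported 97,107)
print(round(245_987 * undiag_frac))     # 108234 (reported 108,234)
```

The one-person discrepancy in the chronic estimate reflects rounding of the subgroup-level workbook inputs before summation.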
"Hidden" populations such as former and current injection drug users and homeless people (approximately 1% of the total Canadian population) account for almost 44% of total anti-HCV-positive persons. Foreign-born populations comprise an additional 35% of estimated anti-HCV-positive persons in Canada in 2011.
When compared with an estimated prevalence from a modelling exercise by Remis (6), which, like the back-calculation method, used a Markov multi-state disease progression model, mid-estimates of the prevalence of anti-HCV and chronic HCV infection changed from 0.8% (6) to 1.0% and from 0.6% (6) to 0.7%, respectively. This suggests that between 2007 and 2011, changes in anti-HCV and chronic HCV infection prevalence (if any) occurred in a narrow range and that the majority of anti-HCV-positive persons and those with chronic HCV infection were within a few key populations in Canada.
These prevalence estimates are comparable with estimates from an analysis of US data (24). In addition, the estimate of hidden populations accounting for 44% of total anti-HCV-positive persons is generally comparable with the estimate of 34% for comparable populations in the US (24).
Other important findings of this analysis include that the birth group of 1950–1970 currently encompasses the bulk of chronic HCV infection in Canada and that the new estimated proportion of persons with undiagnosed chronic HCV infection in 2011 was 44%. The two back-calculation processes used together provided an opportunity to internally calibrate the model outputs to improve the fit with the reported data on HCV cases and cases of hepatocellular carcinoma. The CNDSS data allowed for a more accurate estimate of recent trends, younger birth cohorts and the overall magnitude of the epidemic. The hepatocellular carcinoma data allowed for a more effective model of historical trends, older birth cohorts and disease progression, in a manner similar to that used by other researchers in the field (14). The use of the two data sets through an iterative process improved the overall model and made it less dependent on the limitations of any one data set. We also cross-validated the annual HCV prevalence predicted by the model with the prevalence from independent data sources, including the Canadian Health Measures Survey 2007–2011 data (7) and the reported HCV infections among healthcare patients from the CIHI Discharge Abstract Database (29). While absolute measures of HCV prevalence differed between the above data sources (possibly due to the differences in methodology and in how outcomes and geographic representation were defined), there was general agreement in the distribution of predicted/estimated HCV prevalent cases by year of report and birth cohort as well as for temporal trends.
These estimates may be affected by both data and methodological limitations, such as under-reporting of outcomes; combining anti-HCV and HCV-RNA test results into a single outcome measure; and using record-level data from six Canadian jurisdictions to make inferences about HCV prevalence for all of Canada. Other limitations are due to value judgements in grading and choosing outcome measures for specific populations; the largely English-language focus of the review; not adjusting the back-calculation model for the effect of HCV treatment; and the many assumptions used in the estimation process. The methods used to develop the HCV prevalence estimates described in this paper make maximum use of available data, are based on independent data sources and, when used jointly and iteratively, may compensate for individual deficiencies. Nonetheless, we anticipate the prevalence estimates will change as new and improved data on HCV prevalence in Canadian populations become available.
Table 1: Estimated prevalence and the number of anti-HCV-positive persons with associated plausibility ranges by key population in Canada in 2011
The new estimate is approximately twice as high as the one estimated by Remis for 2007 at 21% (6), and it falls between the estimates for populations expected to have high rates of HCV testing, such as injection drug users at 20%-43% undiagnosed (Unpublished data. I-Track: Enhanced Surveillance of Risk Behaviours among People who Inject Drugs, Phases 1-3. PHAC. 2013), and populations expected to have lower rates of HCV testing, such as hospital patients at 56% undiagnosed (25) and the weighted estimate for the Canadian household population aged 14-79 years at 69.5% undiagnosed (7). The estimate of 44% undiagnosed is also within the ranges of the proportion undiagnosed found in Canada in inmates (28%-50%) (26), first-time blood donors (42%-58% undiagnosed) (27) and men who have sex with men (44%-75% undiagnosed) (Unpublished data. M-Track: Enhanced Surveillance of Risk Behaviours among Men who Have Sex with Men, Phases 1-2. PHAC. 2013). It is also comparable with the US estimate of the proportion undiagnosed among those with anti-HCV-positive status at 50.3% (28).
Normal Serum Aminotransferase Levels and the Metabolic Syndrome: Korean National Health and Nutrition Examination Surveys
Increasing evidence suggests an association between elevated serum aminotransferase level and the metabolic syndrome. However, the significance of relatively low levels of aminotransferase in relation to the metabolic syndrome has not been fully investigated in the general population. We investigated the association between serum aminotransferase level and the metabolic syndrome using data from a nationwide survey in Korea. We measured serum aspartate aminotransferase (AST) and alanine aminotransferase (ALT) levels and metabolic conditions among 9771 participants aged 20 years or more in the 1998 and 2001 Korean National Health and Nutrition Examination Surveys. Metabolic syndrome was defined according to NCEP-ATP III criteria with a modified waist circumference cutoff (men > 90 cm; women > 80 cm). Serum aminotransferase level, even within the normal range, was associated with the metabolic syndrome independent of age, body mass index, waist circumference, smoking, and alcohol intake. Compared with the lowest level (< 20 IU/L), the adjusted odds ratios (95% CI) for an AST level of 20-29, 30-39, 40-49 and ≥ 50 IU/L were 1.10 (0.85-1.42), 1.37 (1.02-1.83), 1.62 (1.08-2.43), and 2.25 (1.47-3.44) in men, and 1.18 (0.99-1.41), 1.43 (1.29-1.83), 1.71 (1.09-2.68), and 2.14 (1.20-3.80) in women, respectively. Corresponding odds ratios for ALT levels were 1.27 (0.99-1.63), 1.69 (1.28-2.23), 2.17 (1.58-2.99), and 2.65 (1.96-3.58) in men, and 1.44 (1.22-1.70), 1.65 (1.26-2.15), 2.94 (1.93-4.47), and 2.25 (1.54-3.30) in women, respectively. In conclusion, elevated serum aminotransferase levels, even in the normal to near normal range, are associated with features of the metabolic syndrome.
INTRODUCTION
Non-alcoholic fatty liver disease (NAFLD) is an increasingly recognized condition that has accompanied the recent increase in obesity [1][2][3] and is also known to be associated with various cardiovascular risk factors including central obesity, type 2 diabetes, dyslipidemia, and high blood pressure. 2,[4][5][6][7][8][9] Moreover, it has been suggested that NAFLD can be considered a hepatic consequence of the metabolic syndrome. [10][11][12][13][14] Patients with fatty liver disease usually have elevated serum aminotransferase activity, and so aminotransferase assays are widely used to monitor liver function in people with the metabolic syndrome. 1,2 It has been suggested, however, that the use of current definitions of normal serum aminotransferase levels may underestimate the prevalence of liver disease, because normal aminotransferase levels cannot rule out the existence of liver disease. [15][16][17] The significance of serum aminotransferase level, including within the normal to near normal range, therefore needs to be reviewed in relation to the metabolic syndrome, and a full investigation in the general population is still needed. Accordingly, we investigated the independent association between serum aminotransferase level and the metabolic syndrome in a representative Korean population.
Study population
The Korean National Health and Nutrition Examination Surveys were conducted in non-institutionalized Korean civilians in 1998 and 2001. A stratified multistage probability sampling design was used, with selection made from sampling units based on geographical area, sex, and age groups using household registries. A total of 13,451 individuals (7945 in 1998 and 5506 in 2001) aged 20 years or more completed the health examinations. Among them, 11,282 people fasted for at least eight hours before their blood samples were obtained. We excluded 549 people who tested positive for Hepatitis B (HBsAg), and 95 pregnant women. We also excluded 389 people who reported consuming at least 50 g of alcohol per day. In the end, 9771 people (4019 men and 5752 women) were eligible for our analyses.
Data collection
Anthropometric measurements including height, weight, and waist and hip circumference were conducted by well-trained examiners on individuals wearing light clothing. Waist circumference was measured to the nearest 0.1 cm at the midpoint between the lower borders of the rib cage and the iliac crest. A standard mercury sphygmomanometer was used to measure the blood pressure of each individual in a sitting position after a 10-minute rest period. Systolic and diastolic blood pressures were measured at phase I and V Korotkoff sounds, respectively. Two readings of systolic and diastolic blood pressure were recorded and the average was used for data analysis. Blood samples were obtained after an overnight fasting period and subsequently analyzed at a central certified laboratory. Serum glucose, total cholesterol, triglyceride, high density lipoprotein (HDL)-cholesterol, aspartate aminotransferase (AST), and alanine aminotransferase (ALT) levels were measured by an autoanalyzer (Hitachi 747, Tokyo, Japan). Smoking and alcohol consumption habits and current medication status were determined by a self-administered questionnaire. Non-drinkers were people who reported that they never or almost never (less than once a month) consumed alcoholic beverages. For drinkers, daily alcohol intake was calculated by multiplying the frequency of drinking and the amount consumed in one sitting.
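The daily-intake calculation described above is straightforward arithmetic; the sketch below illustrates it (the function name and the 30-day averaging window are assumptions for illustration, not taken from the survey protocol):

```python
def daily_alcohol_grams(sittings_per_month: float, grams_per_sitting: float) -> float:
    """Daily alcohol intake: drinking frequency multiplied by the amount
    consumed in one sitting, averaged over an assumed 30-day month."""
    return sittings_per_month * grams_per_sitting / 30.0

# Someone drinking 8 times a month, ~45 g of alcohol per sitting,
# averages 12 g/day, well below the study's 50 g/day exclusion cutoff.
intake = daily_alcohol_grams(8, 45)
```

Under a scheme like this, the 389 excluded participants would be those whose computed value reached 50 g or more per day.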
Definition of the metabolic syndrome
The metabolic syndrome was defined according to the National Cholesterol Education Program Adult Treatment Panel III (NCEP-ATP III), except for abdominal obesity by waist circumference. 18 We used a modified waist circumference cutoff of > 90 cm in men and > 80 cm in women. 19 Hence, in the present study, individuals having three or more of the five following criteria were defined as having the metabolic syndrome: 1) high blood pressure (≥ 130/85 mmHg) or anti-hypertensive medication, 2) elevated fasting blood glucose (≥ 6.1 mmol/L) or anti-diabetic medication, 3) hypertriglyceridemia (≥ 1.7 mmol/L), 4) low HDL-cholesterol (men, < 1.0 mmol/L; women, < 1.3 mmol/L), and 5) abdominal obesity by waist circumference (men, > 90 cm; women, > 80 cm).
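The rule above is a simple count of criteria; a minimal Python sketch (field names and the exact function signature are illustrative assumptions) flags three or more of the five:

```python
def has_metabolic_syndrome(sex, sbp, dbp, on_bp_meds,
                           glucose_mmol, on_dm_meds,
                           tg_mmol, hdl_mmol, waist_cm):
    """Modified NCEP-ATP III definition used in this study: three or
    more of the five criteria, with waist cutoffs of > 90 cm (men)
    and > 80 cm (women)."""
    criteria = [
        sbp >= 130 or dbp >= 85 or on_bp_meds,      # 1) high blood pressure
        glucose_mmol >= 6.1 or on_dm_meds,          # 2) elevated fasting glucose
        tg_mmol >= 1.7,                             # 3) hypertriglyceridemia
        hdl_mmol < (1.0 if sex == "M" else 1.3),    # 4) low HDL-cholesterol
        waist_cm > (90 if sex == "M" else 80),      # 5) abdominal obesity
    ]
    return sum(criteria) >= 3

# A man with borderline-high blood pressure, fasting glucose and
# triglycerides meets three criteria and is classified as positive.
flag = has_metabolic_syndrome("M", 135, 80, False, 6.5, False, 1.8, 1.2, 85)
```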
Statistical analysis
We compared clinical and biochemical characteristics between men and women using Student's t-test or the χ² test. Spearman correlation coefficients among the metabolic risk factors and aminotransferase levels were calculated in men and women separately. The sex-specific prevalence of abnormal metabolic conditions and the metabolic syndrome were calculated by the level of AST and ALT. The independent association between serum aminotransferase level and the metabolic syndrome was investigated using serial logistic regression models. First, we estimated the age-adjusted odds ratio for the metabolic syndrome by serum aminotransferase levels. Second, we adjusted for age, body mass index, cigarette smoking and alcohol intake. Finally, waist circumference was adjusted for in addition to all of the variables in the second model. In order to assess the possible confounding effects of alcohol intake, we investigated the association between ALT and the metabolic syndrome by drinking status.
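The serial-adjustment strategy can be sketched with statsmodels on synthetic data. Everything below (the simulated ALT distribution, the outcome model, and all variable names) is an illustrative assumption, not the survey data, and the paper's smoking, alcohol and waist-circumference covariates are omitted for brevity:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "age": rng.uniform(20, 80, n),
    "bmi": rng.normal(24.0, 3.0, n),
    "alt": rng.gamma(4.0, 6.0, n),  # synthetic ALT levels, IU/L
})
# Synthetic binary outcome loosely tied to ALT and age, for illustration only.
logit_p = -4.0 + 0.03 * df["alt"] + 0.02 * df["age"]
df["mets"] = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit_p))).astype(int)

# ALT bands mirroring the paper's categories (reference band: < 20 IU/L).
df["alt_cat"] = pd.cut(df["alt"], [0, 20, 30, 40, 50, np.inf],
                       labels=["lt20", "20-29", "30-39", "40-49", "ge50"])

# Model 1: age-adjusted; Model 2: additionally adjusted for BMI.
m1 = smf.logit("mets ~ C(alt_cat) + age", data=df).fit(disp=0)
m2 = smf.logit("mets ~ C(alt_cat) + age + bmi", data=df).fit(disp=0)
odds_ratios = np.exp(m1.params)  # exponentiated coefficients are odds ratios
```

Comparing the exponentiated category coefficients across the two models shows how adjustment attenuates, but need not eliminate, the ALT-syndrome association.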
RESULTS
Clinical characteristics and laboratory data of the 4019 men and 5752 women are shown in Table 1. Mean age was 44.7 years for men and 45.6 years for women (p = 0.009), but mean body mass index was the same for both sexes. Compared with women, men had higher waist circumference, blood pressure, fasting glucose, triglyceride, AST and ALT levels, but lower HDL-cholesterol levels. Among the study participants, 6.6% were taking anti-hypertensive medications, and 2.6% were taking anti-diabetic medications. Of men, 61.5% were current smokers, as were 5.7% of women. Regular alcohol drinkers comprised 64.8% of men and 28.1% of women.
Serum aminotransferase levels were correlated with most metabolic risk factors, except that there was no association (p = 0.982) between AST level and HDL-cholesterol level in women. Compared to AST level, serum ALT level was more closely associated with body mass index, waist circumference, and fasting blood glucose level (Table 2). The prevalence of central obesity, high blood pressure, high fasting blood glucose, and hypertriglyceridemia increased with serum AST level. The prevalence of low serum HDL-cholesterol was negatively associated with AST level in men, but not in women. In contrast, serum ALT level was positively associated with all five metabolic abnormalities in both sexes (Table 3). Age-adjusted odds ratios for the metabolic syndrome increased progressively with serum aminotransferase levels. The positive association between aminotransferase level and the metabolic syndrome was observed even within the normal range of aminotransferase levels. When adjusted for age, body mass index, smoking and alcohol intake, the association between aminotransferase level and the metabolic syndrome was attenuated but still highly significant. Even after additional adjustment for waist circumference, the association between aminotransferase level and the metabolic syndrome still remained (Table 4). We assessed the association between ALT and the metabolic syndrome in non-drinkers and drinkers separately. The positive association between serum ALT level and the metabolic syndrome did not differ by alcohol intake (Fig. 1).
In a further analysis, we compared the relative significance of individual liver enzymes by including AST and ALT in the same model. In that model, the metabolic syndrome was significantly associated only with ALT level (p < 0.001 for both sexes), but not with AST level (p = 0.272 for men and 0.240 for women).
DISCUSSION
We found a positive association between serum aminotransferase level and the metabolic syndrome in Korean adults. Recent epidemiologic studies have also reported an association between aminotransferase elevation and the metabolic syndrome. [20][21][22][23][24][25][26][27][28][29][30] This association has been observed in various populations, including obese people, 26,27 elderly men, 28 postmenopausal women, 29 and even adolescents. 30 Although similar findings have already been reported in some Korean populations, 22,29,30 the present study showed a significant association between aminotransferase level and the metabolic syndrome in a nationally representative Korean population. Another more important finding of this study is that the association was observed even in the normal to near normal range of aminotransferase levels, in a dose-related manner.
The most probable explanation for the association between serum aminotransferase level and the metabolic syndrome is NAFLD. NAFLD is the most common cause of unexplained aminotransferase elevations in Western populations, and even in some Asian populations.
2,31 A growing body of evidence supports an association between NAFLD and the metabolic syndrome. 3,7,[10][11][12][13][14][30][31][32][33][34][35] Recent studies have added evidence that insulin resistance, a key component of the metabolic syndrome, may contribute to the development of NAFLD. 7,13,36,37 Central obesity may be an underlying cause of insulin resistance and can also contribute to the development of NAFLD. 11,[37][38][39] Compared with adipose tissue in other sites, visceral adipose tissue is more resistant to insulin, and the associated relative hyperinsulinemia promotes lipogenesis in the liver. 40,41 Visceral adipose tissue is also known to be a potent modulator of insulin action on hepatic glucose production and gene expression. 42 In the present study, serum aminotransferase levels had a dose-related association with the metabolic syndrome even after adjustment for body mass index and waist circumference. The significance of central obesity in the connection between liver dysfunction and the metabolic syndrome may differ by ethnicity, as differences in the adverse effects of obesity between different ethnicities have been reported. 43,44 Further studies are required to investigate ethnic and regional differences in the relationship between liver disease and the metabolic syndrome.
Other conditions which can increase aminotransferase activity may also be associated with the metabolic syndrome. 31 The serum aminotransferase assay is a sensitive screening tool for the detection of liver disease, but it cannot provide information on the underlying causes of liver damage. Alcohol-related liver disease can be associated with the metabolic syndrome. Heavy drinkers tend to have an adverse cardiovascular risk profile, including high blood pressure, central obesity, and an unhealthy lifestyle. We did not include current heavy drinkers in the analysis, and past history of alcohol consumption was also not considered. We could not exclude other causes of aminotransferase elevation, such as hepatitis C and iron overload. However, it is unlikely that other causes of liver disease severely affected the relationship between aminotransferase levels and the metabolic syndrome. A recent study found that the association between metabolic risk factors and ALT elevation was similar in subjects with and without identifiable causes of chronic liver disease, including viral hepatitis, excessive alcohol use, and increased iron saturation. 45 Some systemic diseases and medications may also elevate serum aminotransferase levels. Accordingly, we performed a further analysis after excluding 963 people who were receiving hormone replacement therapy or medications for hypertension, diabetes mellitus, liver disease, and/or renal disease during the month before the examinations. We still found a strong association between serum aminotransferase level and the metabolic syndrome, even in the normal to near normal range of aminotransferase. Compared with levels below 20 IU/L, the adjusted odds ratios for an ALT level of 20-29, 30-39, 40-49 and ≥ 50 IU/L were 1.38, 1.76, 2.24 and 3.01 in men and 1.49, 1.73, 3.05 and 2.40 in women, respectively (p < 0.05 for all). These results suggest that other causes of aminotransferase elevation did not alter the outcomes of this study.
In the present study, serum ALT level was more closely associated with the metabolic syndrome than AST level. In addition to that, only ALT level was significantly associated with the metabolic syndrome when both enzymes were simultaneously investigated in a single model. This finding is in agreement with previous studies, and can be explained by higher specificity of ALT to liver disease.
Serum aminotransferase assays have been widely used to identify NAFLD and other liver diseases, but the cutoff level that discriminates between healthy and diseased livers has not been clearly defined. The upper normal limit of serum aminotransferase is set on average at 40 IU/L, ranging from 30-50 IU/L. 17,46 However, this normal range was calculated from a supposedly healthy reference population, which probably included people with mild to moderate chronic liver disease. Several previous studies have demonstrated that serum aminotransferase level, even within the normal range, may be associated with morbidity and mortality. [15][16][17]22,29,47 The progressive linear association between serum aminotransferase level and the metabolic syndrome suggests that people with high normal aminotransferase levels may need further investigation for the presence of fatty liver disease. Adjustment of the normal limit of serum aminotransferase should be considered for the monitoring of liver function in people with the metabolic syndrome. In most Western regions, the aminotransferase assay has been less useful as a mass-screening tool, because the prevalence of liver disease is lower than in East Asian or African countries. However, considering the increasing prevalence of the metabolic syndrome and its association with liver dysfunction, the significance of the serum aminotransferase assay needs to be reevaluated.
The present study also has several possible limitations. First, we had only one measurement of aminotransferase. Aminotransferase activity is a sensitive marker of liver dysfunction, but its specificity in detection of liver diseases is low because it can be elevated temporarily in various conditions. This limitation would serve to attenuate the magnitude of the association toward the null; thus, our results can probably be considered as conservative estimates. Second, there is the possibility of confounding effects by viral and alcoholic liver diseases. We excluded people who were positive for HBsAg, but could not rule out the effects of other types of viral hepatitis. We did not include heavy drinkers in the analysis and statistically controlled the effects of alcohol consumption, but the possibility of residual confounding influence still exists. Finally, this study could not investigate the causal relationship between serum aminotransferase levels and the metabolic syndrome because of its cross-sectional design. Further prospective studies are needed to establish the temporal relationship between aminotransferase level and the metabolic syndrome, and to discover the underlying mechanism of the relationship.
In conclusion, serum aminotransferase levels even in the normal or near normal range are associated with features of the metabolic syndrome in Korean men and women.
The Development Process of the Ecological Education in Independent Kazakhstan
Today, the world community understands that one of the main reasons for the emergence of the global ecological crisis is a low level of education, including ecological education. On the threshold of the 21st century, the individual's ecological development is becoming a priority and a meaning-forming factor in state education policy. In many ways, it acts as a means of preserving not only nature but also human civilization as a whole. This study recommends theorizing and establishing ecological education among learners. The future teacher needs to connect diverse thoughts about ecology, attempt to understand effective practices to cultivate a space of reflection for students, and devise effective ways and methodologies to foster ecological thinking skills among student learners. The purpose of the present study was to collect and analyze environmental education research undertaken with various subjects. For the systematic analysis, selected databases and journals were analyzed against pre-determined criteria. The close examination resulted in 11 studies reporting the effects of interventions (e.g., hands-on practices, field trip activities) and 4 studies reporting participants' views on the effects of the interventions in general. These studies were then subjected to content analysis to present the trends and to synthesize the common findings of the selected studies. The techniques and instructions used as interventions in these selected studies were observed to contribute to the development of participants' gains associated with knowledge of the environment and nature, perception of nature, environmental effect, responsible environmental behaviors, and conception and understanding of science.
INTRODUCTION
The new strategy for the development of civilization is associated with the eco-humanitarian paradigm, the humanization of public consciousness, the priority of spiritual values, and an increase in the ecological education of society.
Ecological education, in the narrow sense, is defined as rational nature management. In a broad sense, this concept constitutes a new content of universal human education, based on the priority of universal values: Earth, Nature, Life, Man, Health. Ecological education, manifesting itself in a system of value orientations and motivating ecologically sound behavior, determines the nature and quality of relations between a person and the socio-natural environment [1].
The specificity of universities in Kazakhstan that incorporate ecological education consists in the following: they implement an ecological module as a mandatory course, and their program of educational work with students contains a mandatory block, "Ecological education of the individual", which involves students in practical ecological activities (ecological actions, ecological conferences, ecological festivals, and work with schools to promote ecological education), as well as the wide involvement of the pedagogical community and public organizations. Future teachers, participating in ecological education activities, on the one hand form the personal qualities in which ecological education manifests itself; on the other hand, they learn in practice the methodological work of organizing and conducting ecological education.

*Address correspondence to this author at the Kazakh State National University of Al-Farabi, Almaty, Kazakhstan; Tel: +77073851878; E-mail: zhanat_2006@mail.ru
The most important social institution that forms ecological education and improves its inheritance mechanisms is the system of education for children. Institutions promoting ecology education integrate basic ecological education with education in a speciality; this has significant potential for the ecological development of an individual capable of perceiving and implementing in their life the ideas of the co-evolution of nature and society, focused on continuous creative self-development, capable not only of adapting to rapidly changing civilizational conditions but also of giving priority to eco-humanistic values in the "man-society-nature" system.
When it comes to the importance of environmental education, students often remain unsure because they do not actually know how a degree in environmental science can benefit them. Students currently studying biology, chemistry, or a similar subject may develop an interest in environmental science, so it is worth setting out its core advantages.

One of the major benefits of environmental education is that it helps children develop critical thinking skills, because they are tasked with different activities daily, weekly, or monthly. Students interested in this field complete a number of projects and thereby strengthen their critical thinking, learning, and writing capabilities, preparing them for a challenging yet prosperous future.

Research shows that this education is important: it not only leads to well-paying work but also gives a sense of responsibility and prepares students to take greater care of the people, animals, plants, and other living things around them.

Students can also learn the value, significance, and importance of STEM, studying several subjects at a time (science, technology, engineering, and mathematics) and gaining more and more skills, which makes them multiskilled.
LITERATURE REVIEW
In the present-day situation, when the number of classes decreases, the most accessible approaches remain the infusion and the modular one. The focus is laid more on the informative character of ecological education during the educational process. Including the messages of ecological ethics in the final stages of ecological education is a means to make it more accessible, and it can be applied from the earliest ages, with the major condition that the educated should have a philosophical culture in this respect.
The recognition of the ecological problems humankind has faced dates to the second half of the 20th century, but to this very day the problems have not been solved; moreover, they continue to worsen. The development of mankind in its current form leads to an exacerbation of the ecological crisis, which is accompanied by an imbalance between society and nature. Muravjeva considers the low cultural level of a modern technocratic society to be one of the most important causes of the ecological crisis: technological infrastructure is valued more highly than harmonious coexistence with nature. The author also believes that the solution of ecological problems is of great importance, since these problems affect the bases of civilizational processes on which the survival of mankind directly depends [1].
Environmental education is a conservation strategy that creates such synergistic spaces, facilitating opportunities for scientists, decision-makers, community members, and other stakeholders to converge. Environmental education foregrounds local knowledge, experience, values, and practices, often in place-based settings; in this way, it encourages numerous groups, including those that may be marginalized, to interface productively with research [2].
By definition, environmental education encompasses approaches, tools, and programs that develop and support environmentally related attitudes, values, awareness, knowledge, and skills that prepare people to take informed action on behalf of the environment [3].
It focuses on outcomes at various scales, including at the individual level (e.g., an individual's environmental attitudes or behavior), societal level (e.g., community capacity-building), and ecosystem level (e.g., number of an endangered species). Based on a growing body of research foregrounding behavioral complexity, environmental education has moved away from suggesting a linear path from environmental attitudes to knowledge to action, now emphasizing a dynamic, complex ecosystem of relationships that influence behavior rather than earlier ideas derived from an information-deficit perspective [4].
Despite the achievements of pedagogical science in the development of the methodology and theory of environmental education, as well as the fact that the problem of preparing a teacher for environmental education is currently recognized as relevant, there are very few studies concerning the content and methods of forming the preparedness we have indicated.
The formulation and solution of this problem are associated with the enrichment of education goals and the development of a new methodology, structure, content, technologies, methods, and techniques of teaching and upbringing. The new methodology of ecological education is largely associated with its humanization and humanitarization, a strengthening of the upbringing function, an orientation towards the development and self-development of the individual, and the transition from a reproductive educational model to an activity-transforming one, from the extensive nature of education to the intensive one [2].
It has already become quite obvious that the ecological education of society, having exhausted its capabilities, has led to the destruction of the fragile balance of the human environment and is unable, in some respects, to maintain the level of relations necessary to ensure the stability of civilizational development.
Overcoming the current situation is possible under the conditions of a transition to new civilizational models of relations in the "man-nature-society" system, based on a co-evolutionary strategy that allows achieving the synchronous, sustainable development of nature, society, education, and human consciousness. The implementation of a co-evolutionary strategy is associated with the formation of civilizational-ecological thinking, which determines the relationship of human society with nature and the surrounding world based on the requirements of the ecological imperative. Fulfillment by each person of imperative ecological prescriptions is possible if the principle of the co-evolutionary unity of nature and society is adopted as the meaning-forming basis for building their activities [3].
The ecological education of the future student is presented as an integrative quality of the personality, which manifests its attitude to nature, society, man, and oneself through:
- ecological literacy (inclusion in the ecological, political, and cultural life of society, both independently and through participation in the work of educational organizations, showing ecological knowledge, skills, and cognitive activity);
- ecological upbringing (the aspiration of the individual to develop ecological education, following the law and showing responsibility, kindness, and love for nature, the environment, people, and oneself);
- ecological awareness (an active life position, initiative in organizing and conducting socially significant ecological events, and self-improvement based on the formation of value orientations and attitudes, motivation, meaning, and conviction in the value of nature and a respectful attitude towards it);
- ecological activity (a focus on participation in socially valuable ecological activities to improve coexistence in the "man-nature" system, showing reflection, proactivity, and an available arsenal for achieving goals) [4].
The purpose of this research is mainly to improve ecological education and to give a methodological basis for its enhancement. As mentioned before, ecological education accomplishes several tasks, being useful both for general knowledge and for specific aspects such as improving health, the environment, and so on.
METHODS
The methodology for the formation of the ecological education of future teachers (using the example of universities in Kazakhstan) includes: content (a set of academic disciplines, activities carried out in the process of teaching and upbringing at the university and in the Ecological Center, aimed at the formation of a teacher as a subject of ecological education and upbringing, translating the values of ecological literacy, ecological education, ecological awareness, ecological activity), forms and methods of professional training and education of future teachers, taking into account factors affecting ecological safety, regional features, as well as diagnostic tools, providing control over the development of this process.
In this research, we used different statistical methods to analyze the process of ecological education, as well as comparative methods. Students were provided with brief online material that gave some details of ecology and were advised to review the previous lecture material before the session. Of the 85 registered students, 50 participated in the activity. Students worked in groups of 4-6 individuals and were provided with materials and a guidance worksheet. After a brief 5-minute introduction in which students were given the details of their task and offered the opportunity to ask any questions, they were allotted 40 minutes to undertake the tasks outlined below in a self-guided walk with intermittent supervision. Students were asked to undertake the following tasks:
1) Describe the ecology.
2) What evidence for human influence do you observe?
3) What makes this an urban rather than a rural or natural environment?
4) What ecosystems are present?
5) What niches are present?
6) Are there any ecosystems or niches that are novel to this environment?
After the self-guided session, with staff support, students returned to the seminar room for a 20-minute discussion of the walk. Students discussed the ecosystems and niches that they had identified and which key features of the environment were natural, sharing ideas and photos. A week after the exercise, students were asked to complete a short questionnaire to assess the perceived usefulness of the methodology. Answers were given on a five-level scale from 1 (not at all useful) to 5 (very useful).
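As a minimal sketch of how such five-level responses can be summarised (the ratings below are hypothetical, not the study's raw data), the mean score and the share of positive answers can be computed as follows:

```python
# Hypothetical worked example: summarising responses on the five-level
# usefulness scale described above (1 = not at all useful ... 5 = very useful).
ratings = [5, 4, 4, 5, 3, 4, 5, 2, 4, 5, 3, 4]

mean_rating = sum(ratings) / len(ratings)
# Share of respondents rating the activity 4 or 5
share_positive = sum(1 for r in ratings if r >= 4) / len(ratings)

print(f"mean rating: {mean_rating:.2f}")           # average usefulness score
print(f"share rating >= 4: {share_positive:.0%}")  # proportion finding it useful
```

This is how percentages such as "over 60% agreed" are typically derived from Likert-style questionnaire data.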
RESULTS AND DISCUSSION
Thirty-two students returned completed questionnaires. The exercise was very well received, with all students recommending that the activity be run again next year. Over 60% of students agreed that the activity had helped them better understand some of the key concepts introduced in lectures, and over 50% agreed that it had helped them develop their understanding of ecology. At the same time, in the current traditional system of ecological education, which usually transfers from the school educational process not only the content but also the forms, methods and technologies of working with children, such integration is not possible.
In addition, it should be noted that the ecological development of students in the education system, despite ongoing innovative pedagogical efforts, continues to be technocratic in nature: the new methodology and philosophy remain largely declarative, and the content of natural science education is still developed within the framework of a technocratic approach, forming fragmentary, disparate and often unrelated ideas about the world and failing to provide the proper conditions for the development of a personality with a high level of ecological education.
Overall, the students enjoyed the activity, and one student commented that there was 'good use of outside space for additional learning.' In addition, students agreed that it helped them identify links between the ecosystem and the theoretical information from the lectures. Indeed, one student commented on 'learning the theory behind processes in lectures and then understood them in practice in the lab and field.' Staff also found that they had a greater opportunity to speak to small groups of students and to explain or address misunderstandings of the material. Talking with small groups of students in sessions such as these is likely to have a knock-on effect, with students finding staff more approachable and asking more questions in typical teaching sessions. In fact, the module's final session several weeks later is a revision session, and the lecturers felt that students were more willing to ask questions this year than previously. This could simply reflect the cohort in question, but the more informal session may also have helped to make students more comfortable asking questions.
Analysis of the monitoring data from the approbation of new-generation regional programs in ecological education and of design and creative learning technologies in various types of institutions for the ecological education of learners suggests that their use in the eco humanistic model of ecological education has a positive effect on the development of the personality, on the harmonization of all its spheres and, in general, on the success of the development of ecological education.
An integrative approach based on introducing a culturological component into their content contributes to the development of the emotional-sensory sphere, higher needs (cognitive, aesthetic, self-knowledge, and self-realization) and is a motivator of ecological creative activity and behavior of students in a socio-natural environment [5].
Thanks to the innovative content and the design and creative technologies of the educational process, students gain a deeper understanding than with traditional approaches of their role and place in the system of the Universe, of their attitude to the world around them, of the intrinsic value and uniqueness of all living things, and of the need to comply with moral and ecological imperatives [6].
It is worth mentioning that other scientists have produced work on ecology education; in comparison with their research, we paid more attention to the application of ecology education in the educational process.
The introduction of a humanitarian component into natural science knowledge, with an appeal to history, philosophy, art, religion and traditions, contributes to the synchronization of the processes of teaching and upbringing, which in turn helps students form their own system of values and motives for creative ecological activity. This tendency contributes to the formation of such personality traits as humanity, mercy, responsiveness, frugality, responsibility, activity, citizenship, and a culture of communication and behavior.
The effectiveness of the influence of the content of ecological education on students' ecological education indicators significantly increases if it is supported by design and constructive technologies focused on the harmonious development of the student and his practical activities in the socio-natural environment [7].
The combination of the content of ecological education and the corresponding innovative teaching technologies makes the learning process interesting and personally significant for the student. This condition has a huge impact on the development of all basic indicators of the individual's ecological education and the formation of an eco humanistic worldview [8].
We consider that the programs and innovative technologies we have developed are an important means of developing students' ecological education. When they are implemented within the eco humanistic model of ecological education, the diagnosed indicators are formed more efficiently than under traditional programs and technologies of ecological education for children [9].
The results of monitoring provide convincing evidence of this statement.
The content of ecological education, which has developed within the framework of the technocratic paradigm, needs a conceptual restructuring under modern sociocultural conditions. It is quite relevant to study the problems of the educational and methodological support of ecological education for children and the specifics of training teachers who meet the requirements of the eco humanistic imperative [10].
There is a need for the formation of a new field of scientific knowledge, "pedagogical ecology", which would coordinate pedagogical research on solving the problems of the meaningful renewal of ecological education and on identifying the role of nature in the spiritual enrichment and moral and ethical development of the individual under the conditions of the ecologization of the life of society [11].
From these positions, ecological education acts as the new content of the universal human education of the third millennium, based on humanistic orientations and universal values. Humanistic orientations, in the context of the actualization of ecocentric intentions, determine the common interests of man and nature as equal partners in co-evolutionary development and are defined in this study as eco humanistic [12].
The structure of the ecological education of the individual includes: an eco humanistic worldview (the core of ecological education); personal experience of interaction with the outside world; and ecological knowledge, skills and abilities, which together form a holistic nature-oriented picture of the world and determine the creative nature of the ecological activity of students (the ability to model ecological situations and predict their development, and the ability to make ecologically responsible decisions). Mastering eco humanistic knowledge under the conditions of design-creative pedagogical technologies actualizes the perceptive capabilities and cognitive abilities of students, thanks to which natural curiosity develops faster than usual into a sustainable interest and into motives and needs for ecologically sound activities [13].
Ecology of Education is one of the trends of human ecology and has its own history. As an interdisciplinary research trend, educational ecology offers a wide range of research opportunities in the field of education. There are several conceptual approaches to research in the field of educational ecology: the interdisciplinary approach, the ecological approach and the humanistic approach, which complement one another and together ensure a holistic, first of all systems-based, approach in educational research [14].
CONCLUSION
Ecology of Education as a research trend has a scientifically substantiated philosophical-methodological basis. It is important in educational research to move from the ecological paradigm to the modern educational process and research practice, with a substantiation based on scientific conceptions and theories, as well as on the environmental models developed by scientists.
It is essential to base the development of environmental models on the systems approach, respecting the principle of environmental taxonomy as well as the classifications of environmental components/contexts. Ecology of Education offers several research perspectives, the substantiation of which can be found in two main trends:
1) Ecology of human development;
2) Ecology of systems development.
One of the research perspectives is related to studies of sustainable development.
"year": 2020,
"sha1": "ce59365756e0e569bfa8f9c1bb0a01ac0f6ac141",
"oa_license": "CCBYNC",
"oa_url": "https://lifescienceglobal.com/pms/index.php/jiddt/article/download/6818/3623",
"oa_status": "GREEN",
"pdf_src": "Adhoc",
"pdf_hash": "28d5f223403daccde7d9472f729de7cc5639afab",
"s2fieldsofstudy": [
"Environmental Science",
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
} |
Cost–Benefit Analysis of Measures to Reduce Windstorm Impact in Pure Norway Spruce (Picea abies L. Karst.) Stands in Latvia
Wind is one of the major natural forest disturbances in Europe, and reduces the total economic (including carbon sequestration) value of forests. The aim of this study was to assess the financial benefit of silvicultural measures in young, pure, planted Norway spruce stands through a reduction in the impact of wind damage over the rotation period. The analyzed measures are promptly applied precommercial thinning and low-density planting with improved plant material. Spatial information on factors affecting wind damage (wind climate and soil) was gathered and combined with the local growth model and empirical data from tree pulling experiments in Latvia to assess the economic value loss due to wind damage over a rotation period. Timely precommercial thinning and lower-density planting with improved plant material would ensure a positive net present value with an interest rate of 3%, using conservative estimates. The financial benefit is highest in windier (coastal) regions and for the planting measure, followed by moderate thinning. The results demonstrate that, even without changing the dominant tree species, a considerable reduction in wind-damage risk can be achieved.
Introduction
Wind damage is a major disturbance in managed and natural forests in Europe, resulting in a drastic reduction in tree biomass and consequent loss of other forest ecosystem services, including economic and carbon sequestration value [1,2]. An increase in extreme weather events, directly or indirectly, causing large-scale damage in Europe's forests, has been observed in recent decades [2][3][4][5][6][7][8]. This trend is predicted to continue in the future [9], with windstorms being the primary cause for most stand damages with varying severity [10]. The impact of storms, especially in terms of damaged timber, is likely to rise in the future due to changes in storm tracks and the frequency and/or intensity of storms, and also due to changes in forest characteristics [11,12] and additional impacts of climate change. The supplementary impacts will vary depending on the geographical location, local soil and topographic conditions [13]. One of the impacts in hemiboreal and boreal forests, especially for forest stands growing on peat soil, is the reduction in frozen soil conditions during winter [14], leading to increased stand vulnerability to wind. Such an effect is especially significant for tree species that are more susceptible to wind damage (uprooting or breaking), like Norway spruce (Picea abies L. Karst.), because of its shallow root system and susceptibility to various other hazards (insects, diseases, drought) [15][16][17]. If no specific measures are taken, the current forest composition and proportion of this tree species is predicted to decrease notably by the end of this century [18].
Despite the availability of forest insurance for forest owners in some European countries [19], a change in tree species composition towards more wind-resistant species has been suggested as a preventive measure and sometimes even actively promoted by governments. For example, in Sweden, economic subsidies to change tree species (to species other than Norway spruce) have been provided after major storms [19][20][21][22]. In Latvia, support is also available for the restoration of stands affected (damaged) by natural disturbances [19,23]. However, after a decade, this support policy has not led to the desired result (a change of tree species to more resilient ones). Even in the afforestation of windblown areas (where the impact of the storm is recent and clearly visible to the forest owner), no trend towards the use of more wind-resistant species (e.g., birch, pine) has been observed [20,21,24]. Such a result is partly attributed to the fact that owners focus on other, more immediate threats to young stands, like browsing damage, and partly to the estimated higher profitability of less wind-resistant alternatives [16,25]. Thus, it is essential to find and apply silvicultural measures that simultaneously address the immediate risks (like browsing), provide a desired financial outcome and increase stand wind resistance. One such measure can be the thinning of young stands (precommercial thinning) [1,16,[26][27][28][29][30][31][32].
Precommercial thinning has a positive effect on stand stability and increment in the long term [1,16,31,32]. It also reduces the need for commercial thinning (when the stand height noticeably exceeds 10 m), which in itself is a significant factor, increasing the risk of wind damage for the following three-to-five years [16,33]. This is especially the case if heavy thinning (in terms of removed share from the total standing volume) is applied [18]. Trees that are suddenly open to the influence of the wind need time to adapt, in particular to develop root systems to ensure their stability [34]. The use of selected (improved by tree breeding) planting material ensures a gain in the volume of growth (increment) of around 10%-25%, or even higher in comparison to naturally regenerated stands [35,36]. A combination of improved planting material and lower initial spacing has a positive cumulative effect on the radial increment, and thus on the time when the target diameter, defined for the final harvest, can be reached [29,30]. Such a combination could reduce the cumulative probability of a windstorm striking the stand during the period when it is more prone to wind damage.
The economic implications of wind damage have been analyzed from different perspectives, such as the economic evaluation of a change in dominant species [37], the management of pure and mixed stands [28], logging productivity and costs [38], and the impact on other forest ecosystem services (recreation, hunting) [39]. However, information on the potential effects of adaptation measures from an economic perspective in the context of forest policy decisions is still limited. Such information is needed to make efficient decisions on the allocation of public funds or the use of other tools to increase overall forest adaptation (i.e., to reduce the impact of windstorms) over a long period on a national scale. The aim of this study was to assess the financial benefit of silvicultural measures in young, pure, planted Norway spruce stands through a reduction in the impact of wind damage over the rotation period. Specifically, we tested whether additional precommercial thinning (leading to two different stand densities) or low-density stand establishment with selected (improved) planting material ensured financial profitability (indicated by net present value), given the influence on the reduction in wind damage probability, over 50- and 80-year timespans.
Materials and Methods
Methods to reduce the financial impact of wind damage were modelled on an area (per ha) basis for the hemiboreal vegetation zone [40], based on the example of Latvia (55.60°-58.10° N, 20.70°-28.50° E). This country, like other territories of the Baltic Sea region within the same vegetation zone, is characterized by flat relief [41] and a notable proportion of land (52%) covered by forests [42]. Norway spruce is an economically significant, widespread tree species in this region (e.g., in Latvia its stands cover 19% of the forest area [43]), regenerated almost exclusively by planting, typically on good quality (fertile, fresh or drained) soils. Data on the spatial allocation of different forest types from the State Forest Service (year 2005) [44] and historic agricultural soil inventories [45] were combined and converted to soil types relevant for the wind stability of trees [46]. The results demonstrate that soils suitable for Norway spruce, with different properties in relation to wind stability, are located across Latvia (Figure 1). The financial impact of wind damage depends on the probability of its occurrence and the amount of damaged timber. The probability of occurrence in the mechanistic model is determined by soil, wind climate, forest (landscape and stand) and tree parameters [16]. Freely drained mineral soils and site index (SI) 32 are used as the basis for the study, since most spruce stands have SI values of 36 or 32 (34% and 39% of all spruce-dominated stands, respectively). The wind climate was characterized by Weibull distribution A parameter values, as required by the conceptual framework of tree wind resistance assessment by Quine [47]. A is the Weibull distribution scale parameter in m s−1, a measure of the characteristic wind speed of the distribution; it is proportional to the mean wind speed.
The values of this parameter, calculated on the basis of fundamental wind speed and data from long-term meteorological observations of the Latvian Environment, Geology and Meteorology Centre, range from 3.3 to 6.7 (Figure 2). The highest values are primarily in the western part of Latvia, near the Baltic Sea. For this region (hereafter referred to as "coastal"), a Weibull A parameter value of 5.0 was used, and for the rest of the territory (hereafter referred to as "inland") a value of 4.2 was used. Neither new open stand edges (e.g., due to the clear cutting of neighbouring stands) nor the short-term impact of commercial thinning were included in the calculations, thus representing a conservative estimate of the probability of the occurrence of wind damage [16]. Tree and stand parameters at a certain age, depending on the selected management regime, were calculated by local growth models [48]. Empirical data obtained in tree pulling experiments in Latvia [49,50], including the volume of the root-soil plate [51], relative crown height and slenderness, together with the conceptual framework defined by Quine [47], were used to determine the wind speed in gusts (critical wind speed) needed for a tree of a certain dimension to be snapped or uprooted. The five-year cumulative probability of the occurrence of the critical wind speed (for the average tree in a stand with a certain management regime) was assumed to be equal to the proportion of the damaged area where salvage logging is required. For example, if the cumulative probability of the occurrence of the critical wind speed in a 5-year period is 4%, then it was assumed that, in this 5-year period, salvage logging would be carried out in 4% of the area. In salvage logging, 10% of the most valuable assortments were assumed to be damaged and classified as firewood; salvage logging also resulted in higher logging costs (Table 1) and smaller dimensions of harvested trees.
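A numerical sketch of the wind-climate assumptions above can make them concrete. Note the hedges: the study reports only the Weibull scale parameter A, so the shape parameter k and the critical wind speed used below are assumptions for illustration (k = 2 corresponds to a Rayleigh distribution, commonly used for wind speeds), as is the annual damage probability:

```python
import math

def exceedance_probability(v_crit, A, k=2.0):
    """P(V > v_crit) for Weibull-distributed wind speed with scale A (m/s)
    and shape k. k = 2 is an assumption; the study reports only A."""
    return math.exp(-((v_crit / A) ** k))

def cumulative_probability(p_annual, years=5):
    """Probability of at least one critical-wind-speed event over the
    period, assuming independent years."""
    return 1.0 - (1.0 - p_annual) ** years

# Scale values used in the study: coastal A = 5.0, inland A = 4.2.
# The critical wind speed of 15 m/s is purely illustrative.
p_coastal = exceedance_probability(15.0, 5.0)
p_inland = exceedance_probability(15.0, 4.2)
print(p_coastal > p_inland)  # windier coastal climate -> higher damage risk

# Per the study's assumption, the 5-year cumulative probability equals
# the area share that is salvage-logged; e.g. with an assumed 0.8%
# annual probability of the critical wind speed:
share_damaged = cumulative_probability(0.008)
print(round(share_damaged, 3))
```

The same exceedance calculation, applied to the species- and management-specific critical wind speed, is what links the wind climate maps to the damaged-area shares used in the financial model.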
The volume of the wood assortment, obtained in any type of harvesting, was determined based solely on the dimensions of the trees in the stand in accordance with the equation developed by Ozoliņš [52] and modified by Donis [53].
The costs in the financial calculations were based on Central Statistical Bureau (CSB) [54] information (Table 1). Additional costs included a real estate tax of 5 EUR ha−1 year−1 and management (administration) costs of 10 EUR ha−1 year−1. Income from the harvest (planned and salvage logging) was based on the volumes and prices of the assortments (Table 2). Possibilities to reduce the financial impact of wind damage in planted pure Norway spruce stands were modelled on an area (per ha) basis, evaluating two potential silvicultural measures: I. Additional (second) precommercial thinning of young stands (height 4-6 m) to a low density (600-900 spruces ha−1) or a moderate density (1000-1300 spruces ha−1). To evaluate the influence of this precommercial thinning, a comparison was made with un-thinned stands (density 1400-1700 spruces ha−1; control). The financial value of these stands over 80 years was modelled. To obtain the range of potential outcomes, and thus a mean result and a measure of variance (±95% confidence interval) for each of the alternatives, we used four densities after precommercial thinning (i.e., for low density: 600, 700, 800 and 900 spruces ha−1) and, for each of these densities, three different commercial thinning regimes (applied when the stand age was 20-55 years): 1) thinning is carried out when the relative stand density reaches 0.95 and it is then reduced to 0.7; 2) thinning is carried out when the relative stand density reaches 0.95 and it is then reduced to 0.45; and 3) thinning is carried out when the relative stand density reaches 0.95, and in the first commercial thinning it is reduced to 0.45 and in the second thinning to 0.7. Thus, the result of each of the three precommercial thinning alternatives (low density, moderate density and un-thinned) was a mean of twelve model runs (four densities × three commercial thinning regimes).
Commercial thinning times and the amounts of harvested assortments in each of the model runs are summarised in Supplementary Table S1. In all alternatives, the initial planting density was 2000 spruces ha−1, followed by two tendings (weed controls) and one precommercial thinning. Thus, the differences in costs arise from a single (second) precommercial thinning, which was either carried out (to low or moderate density) or not (un-thinned). II. The establishment of lower-density (1000 spruces ha−1) stands with selected (improved) planting material. To evaluate the influence of this measure, it was compared to a standard-density (2000 spruces ha−1) plantation established using unimproved plant material (control). The same initial costs were used for both alternatives, assuming that improved plants (progenies of second-generation seed orchards, demonstrating 20% higher volume increment [36]) would be more expensive (and would thus compensate for the extra costs of the 1000 additional unimproved plants ha−1 needed in the control). In both alternatives, two tendings (weed controls) and one precommercial thinning were planned, leading to 600-900 spruces ha−1 (assuming some natural mortality) for the low-density plantation and, with extra precommercial thinning (no costs assessed), to 1000-1300 spruces ha−1 in the control. To model further stand development, the three abovementioned (I) commercial thinning regimes were applied. This leads to twelve model runs (four different densities at the time of the second precommercial thinning in the control alternative × three commercial thinning regimes) for each alternative to obtain the mean and the variance of the outcomes.
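The factorial design described above (four densities × three commercial thinning regimes = twelve model runs per alternative, averaged to give the reported result) can be sketched as a simple enumeration; the density values follow the text, while the regime labels are shorthand and the NPV values are purely hypothetical:

```python
from itertools import product
from statistics import mean

# Post-thinning densities for the low-density alternative (spruces per ha)
densities = [600, 700, 800, 900]
# Shorthand for the three commercial thinning regimes in the text
regimes = ["0.95->0.70", "0.95->0.45", "0.95->0.45 then 0.95->0.70"]

runs = list(product(densities, regimes))  # twelve (density, regime) model runs
print(len(runs))  # 12

# The reported result for an alternative is the mean over the twelve runs;
# the NPVs below are illustrative placeholders, not model output.
npvs = [700, 720, 690, 710, 705, 715, 698, 702, 711, 707, 695, 719]  # EUR per ha
print(mean(npvs))
```

Each (density, regime) pair would be fed to the growth and damage model, and the mean plus a ±95% confidence interval over the twelve outcomes gives the values reported in the Results.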
The financial value of each measure was expressed as the net present value (NPV) with a 3% discount rate:

NPV = Σ_t R_t / (1 + i)^t,

where R_t = net cash flow in period t, i = discount rate (3%), and t = number of time periods.
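A minimal sketch of this NPV calculation, with an illustrative cash-flow profile (the planting, thinning and harvest amounts below are assumed values, not the study's figures):

```python
def net_present_value(cash_flows, rate=0.03):
    """NPV = sum over t of R_t / (1 + i)^t, where cash_flows[t] is the
    net cash flow in year t and rate is the discount rate i (3% here)."""
    return sum(r / (1.0 + rate) ** t for t, r in enumerate(cash_flows))

# Toy cash-flow profile (EUR per ha): planting cost in year 0, thinning
# income in year 30, final-harvest income in year 50. Values are assumed.
flows = [0.0] * 51
flows[0] = -1500.0
flows[30] = 800.0
flows[50] = 9000.0
npv = net_present_value(flows)
print(npv > 0)  # positive NPV at a 3% discount rate
```

Discounting is what makes late-rotation income weigh so little: at 3%, income received in year 50 is worth less than a quarter of its nominal value, which is why shortening the rotation or reducing late damage risk has a strong effect on NPV.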
The results of the model runs and calculated NPVs were used to determine the statistical significance of the differences between the alternatives, using a single-factor analysis of variance.
Results
Net present values (r = 3%) of planted Norway spruce stands without consideration of wind damage, independent of precommercial thinning alternatives (to low or moderate density or un-thinned), reached their peak on average at the age of 65 years, when the highest NPV was 846 ± 94.2 EUR ha−1 (Figure 3). As the stands aged, all analysed precommercial thinning alternatives followed the same trend of gradually decreasing financial value. The highest value was for un-thinned (density 1400-1700 spruces ha−1) and moderately thinned (1000-1300 spruces ha−1) stands. The differences between these alternatives and the one with the lowest density after precommercial thinning (600-900 spruces ha−1) were significant most of the time, except at the age of 50-65 years. The average difference in NPV between an undamaged stand and a stand that needs to be salvage-logged due to wind damage increased with time, reaching its peak at the age of 50-55 years. NPVs of the planted Norway spruce stands with the consideration of wind damage were higher for stands where precommercial thinning (to low or moderate density) was carried out, due to faster diameter growth and lower damage probability. The differences were higher in coastal regions with higher wind speeds (Weibull distribution A parameter values) than inland. However, this was mostly dependent on the precommercial thinning intensity: at the age of 50-60 years, stands with moderate thinning had higher NPVs by 98.1 ± 3.9 EUR ha−1 and 122.4 ± 20.3 EUR ha−1 than un-thinned stands (in coastal and inland wind climates, respectively), though the difference between stands thinned to low density and un-thinned stands was notably smaller: 17.7 ± 5.5 EUR ha−1 and 37.8 ± 7.5 EUR ha−1, respectively (Figure 4).
Similar NPV differences between precommercial thinning alternatives and wind climates were also found at the age of 70-80 years: moderate thinning ensured a value 109 ± 3.2 EUR ha−1 to 119 ± 5.4 EUR ha−1 higher than in un-thinned stands, and 20 ± 4.2 EUR ha−1 to 68 ± 24.4 EUR ha−1 higher than in stands thinned to a low density, in inland and coastal wind climates, respectively.
The influence of different initial planting densities and tree improvements on the NPV of the planted Norway spruce stands with the consideration of wind damage was even more pronounced than the influence of precommercial thinning ( Figure 5). Low-density stands (1000 trees ha −1 , regenerated with improved planting material) at the age of 50-60 years had a 166 ± 40 EUR ha −1 to 297 ± 39 EUR ha −1 (in inland and coastal wind climate, respectively) higher NPV than the control stand (2000 trees ha −1 , regenerated with unimproved planting material). At the age of 70-80 years, the values were slightly lower, 124 ± 33 EUR ha −1 and 276 ± 32 EUR ha −1 , respectively, but the differences were similar.
Discussion
Quantitative information on differences in wind climate is relevant for strategic planning and policy decisions at the country scale, providing incentives to reduce the overall economic impact of damage. Such information is also important for forest owners when planning forest management (from the selection of tree species to the thinning schedule and the length of the rotation period) in order to reduce the wind damage probability to an acceptable level [1,16,27]. Thus, at both the countrywide and the property scale, it is necessary to balance an economic assessment of value at risk against the potential gain from timber production. The spatial data from this study (Figures 1 and 2) provide the basic information for such an assessment.
Strategic planning for reduced tree damage is no longer of importance solely for the forest sector. Since forests and peatlands are, at present, the only two true large-scale carbon sinks, policies related to climate change mitigation (and EU targets in this respect) also need to consider the damage risk, which is predicted to have an increasing negative effect on the carbon balance [2,55,56]. Thus, practically applicable options for risk reduction are needed. Such options (low-density stands and young stand thinning) were assessed in our study for stands growing on freely drained mineral soils with the highest site indices (32), without considering a change in the dominant tree species.
Although the area subjected to precommercial thinning in private forests has almost doubled during the last decade in Latvia, there is still a considerable amount of un-thinned young stands on fertile soils [57]. A similar situation can be observed in numerous countries in Europe [58], due to the decreasing interest and profitability of forestry as an economic activity for numerous reasons. To increase the area of thinning of young stands, considerable investments are required ( Table 1). The study results showed that in undamaged stands where three different thinning intensities are applied, the highest NPV is found for un-thinned stands. However, in un-thinned stands, the wind damage probability is the highest and it can cause major financial losses in Norway spruce stands ( Figure 3). Wind damage probability increases with forest stand age and our study results are in accordance with previous studies [26,59]. The calculations showed that different thinning intensities increase the financial value of the stand in comparison to un-thinned stands and the most productive thinning regime is moderate thinning (Figure 4). Thinning to low density (heavy thinning) increases the risk of wind damage in comparison to moderate thinning, because the stands are more exposed to a higher wind load, to which they have not been adapted [22,60]. Income at the end of the rotation period is significantly higher in coastal wind regions than inland, even with wind damage. A final harvest at 80 years of age can achieve slightly higher values than a final harvest at 50 years of age; however, a significant difference was only seen in the inland wind region with a moderate thinning intensity.
Forest stand replacement also requires high (and increasing) investment, and therefore self-regeneration is used in most areas in Europe [61]. There is a similar tendency in Latvia; even so, the annual planting area in private forests has increased over the last four years [57], and seedling demand currently exceeds availability [62]. The implementation of low-density stands would require changes in the legislation; the associated costs are not included in the assessment, since it is considered that reduced plant material costs would compensate for them. The calculations of our study show that, in all cases, originally established low-density stands reach a significantly higher NPV (Figures 4 and 5) in comparison to the other silvicultural measures applied. Even with windstorm damage, the financial benefit is higher than in originally established high-density stands (2000 trees ha−1) with different precommercial thinning intensities. Low-density stands reach larger tree dimensions faster and tend to have better individual tree stability than high-density stands [1,29,63]. Similarly to thinned stands, higher NPVs for low-density stands were found in coastal regions (Figure 5). A final harvest at 50 years is more profitable than at the age of 80 years; however, the differences are not statistically significant. Additionally, in such stands, an admixture of other tree species can be expected, which may reduce (at least temporarily) the risk of wind damage [22,64,65].
The wind damage-related expenses in our analysis are the higher costs of salvage logging (10-30% higher than logging of undamaged areas [38]) and a 10% reduction in the amount of the most valuable assortment due to timber damage such as snapping and stem cracks. These estimates are rather conservative; more damage to assortments can be predicted [66] and, in the case of large-scale storms, a lack of available logging teams can raise the price of this operation even more. Moreover, storm damage creates a peak in timber supply, thus reducing the price [11]. However, this effect would depend on the scale of the storm and the overall economic situation and is difficult to predict. In our study, additional positive effects of the proposed measures on the wind stability of the stands (e.g., from early adaptation of trees to stronger winds or a reduced need for commercial thinning) were not considered, nor was the potential to use only optimal soils (replacing other species). Thus, our estimate is conservative. Further research could address different soil types, tree species or even the retention of trees after regeneration cutting.
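The cost adjustments described above can be folded into a simple expected-value NPV comparison. The sketch below is illustrative only: the cash flows, the 15% cumulative damage probability and the 50% value share of the top assortment are hypothetical placeholders, not values from this study; only the 3% interest rate, the ~20% salvage-logging markup (the text cites 10-30%) and the 10% assortment loss follow the text.

```python
def npv(cash_flows, rate=0.03):
    """Net present value of (year, amount) cash flows at a fixed interest rate."""
    return sum(amount / (1.0 + rate) ** year for year, amount in cash_flows)

def expected_final_income(timber_value, logging_cost, damage_prob,
                          salvage_markup=0.2, top_share=0.5, top_loss=0.1):
    """Probability-weighted net income at final harvest.

    If the stand is wind-damaged, salvage logging costs ~20% more and 10% of
    the most valuable assortment is lost to snapping and stem cracks;
    `top_share` (the value share of that assortment) is a hypothetical placeholder.
    """
    undamaged = timber_value - logging_cost
    damaged = (timber_value * (1.0 - top_share * top_loss)
               - logging_cost * (1.0 + salvage_markup))
    return damage_prob * damaged + (1.0 - damage_prob) * undamaged

# Hypothetical stand: planting cost at year 0, thinning income at year 30,
# final felling at year 50 with a 15% cumulative wind-damage probability.
flows = [(0, -1000.0), (30, 800.0),
         (50, expected_final_income(12000.0, 3000.0, 0.15))]
print(round(npv(flows), 1))
```

With numbers like these, the damage-adjusted harvest income enters the NPV as a single discounted term, so scenarios (thinning regimes, rotation lengths, wind regions) differ only in their cash-flow lists and damage probabilities.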
Conclusions
The study was carried out to provide up-to-date information on, and a financial evaluation of, silvicultural measures to reduce wind damage risk. Wind climate and soil type maps demonstrate noticeable differences in vulnerability, even within a relatively small and flat area such as Latvia, and are important tools for forest policy and strategic planning.
Even without changing the dominant tree species, a considerable reduction in risk can be achieved through forest management. Timely precommercial thinning and lower-density planting, using conservative estimates, ensure positive NPVs at an interest rate of 3%. Low-density stands reach higher NPVs in all wind regions in comparison to precommercial thinning, and final felling at 50 years is more profitable than at the currently defined 80-year rotation period. The negative financial impact of wind damage can be reduced by establishing stands with wider initial spacing (lower density) and shortening the rotation period (setting the time of the final harvest by target diameter).
Supplementary Materials:
The following are available online at www.mdpi.com/1999-4907/11/5/576/s1, Table S1: Commercial thinning schedule (assortment volume, m³ ha−1) in stands with different initial densities (trees ha−1) and thinning criteria: KKC1, thinning is done when the relative stand density reaches 0.95 and is then reduced to 0.7; KKC2, thinning is done when the relative stand density reaches 0.95 and is then reduced to 0.45; KKC3, thinning is done when the relative stand density reaches 0.95, and in the first commercial thinning it is reduced to 0.45 and in the second thinning to 0.7. Funding: This study was funded by the European Regional Development Fund project Development of decision support tool for prognosis of storm damages in forest stands on peat soils (No. 1.1.1.1/16/A/260).
Limb lengthening history, evolution, complications and current concepts
Limb lengthening continues to be a real challenge to both the patient and the orthopaedic surgeon. Although it is not a difficult operative problem, there is a long and exhausting postoperative commitment that can jeopardize early good results. I aim to review the history, evolution, biology, complications and current concepts of limb lengthening. Ilizarov's innovative procedure using distraction histogenesis is the mainstay of all newly developing methods of treatment. The method of fixation is evolving rapidly from the unilateral external fixator to the ring fixator, computer-assisted frames and, finally, lengthening intramedullary nails. The newly manufactured nails avoid many of the drawbacks of external fixation, but they have their own complications. In general, the indications for limb lengthening are controversial. They have been extended from lower limb length inequality to upper extremity lengthening, including humeral, forearm and phalangeal lengthening. A wide range in the frequency of complications is recorded in the English literature, which may reach up to 100% of cases treated. With developing experience, cosmetic lengthening has become possible using external or internal lengthening devices with an acceptable rate of problems. Level of evidence: V.
Introduction and history
Alessandro Codivilla of Bologna was the first to apply skeletal traction for bone lengthening. He used acute forced lengthening for short distances under narcotics. For larger distances, he described another technique of continuous extension, using distraction with a calcaneal pin and an oblique osteotomy, followed by traction of 25-30 kg. Further lengthening could then be achieved by applying more traction in stages [1]. One-stage lengthening was developed by Fassett using an osteotomy, insertion of a bone graft and fixation with a plate. However, this procedure was followed by many serious complications [2].
In 1932, Abbot presented his experience with lower limb lengthening of 73 patients (45 tibial lengthenings) at the Shriners' Hospital for Crippled Children in St. Louis.
The basic principles stated in this paper were traction and counter-traction through the bone, slow continuous traction to overcome the resistance of the soft tissues, and accurate contact and alignment of the bone ends. He described in detail the basic principles of tibial lengthening, including the application of two pins above and below the osteotomy, connected to a special apparatus. The drill pins were made of stainless steel, not ordinary steel, as it is less irritating to the soft tissues. The operative steps were: lengthening of the Achilles tendon, osteotomy of the fibula, insertion of the pins, application of the apparatus, osteotomy of the tibia and closure of the wound with drainage. The tibial osteotomy had to be performed with minimal soft tissue dissection to preserve the blood supply to the bone and guard against infection. The surgeon had to wait for 1 week until the swelling had gone down before distraction; this was the first description of the waiting period before the Ilizarov era. The average distraction rate was 1.6 mm per day and the period of traction was 4 to 5 weeks. The apparatus was kept in place for 8 to 10 weeks, followed by removal and application of a plaster cast. Follow-up X-rays were taken every 2 to 3 weeks to check the bone formation. The age of the patients ranged from 8 to 19 years, and the magnitude of tibial lengthening from 3.81 to 8.89 cm. He reported excellent results with tibial lengthening but less favourable results with the femur and a higher rate of complications [3]. Dickson and Diveley then reported on an apparatus that used Kirschner wires rather than larger-diameter pins to minimize soft tissue damage [4]. The method developed by Wagner gained popularity in Europe and the US; it consisted of 3 operations. The first operation was the application of a unilateral external fixator and a diaphyseal osteotomy.
There was no waiting period, so acute operative lengthening of 5 mm was performed, followed by daily distraction of about 1.5 mm. The second operation was plating and bone grafting. The third was plate removal and casting. However, a high rate of complications was recorded [5,6].
Most of our contemporary knowledge of bone lengthening comes from the Ilizarov method. Ilizarov started his work in 1951 by treating a patient with a bone defect using a circular frame and transfixing tensioned wires. He then discovered the biological law of tension stress, or distraction histogenesis, and applied this principle to treat a wide variety of conditions such as nonunion, osteomyelitis, dwarfism, congenital deformities, some bone tumours, bone defects, fractures and bone shortening [7]. Recently, hexapodal computer-assisted circular frames such as the Taylor Spatial Frame have gained in popularity. The next step in development was the application of self-distracting motorized nails (magnetically driven titanium intramedullary nails) to avoid the complications of external fixation and allow rapid rehabilitation. However, Ilizarov's principles are still the cornerstones of all bone lengthening procedures.
Biology of limb lengthening
The current basic principles of bone lengthening are derived from the general biological law of tension stress. Gradual traction on living tissues creates stresses that can stimulate and maintain the regeneration and active growth of certain tissues. With an adequate blood supply, steady gradual traction of the tissues activates proliferative and biosynthetic functions, and the regenerate develops along the axis of the applied traction (Fig. 1). Experimental studies revealed the importance of soft tissue preservation during corticotomy and of fixator stability; the osteogenic power in the regeneration area depends upon the degree of damage to the bone marrow, periosteum and nutrient vessels. With distraction, new blood vessels develop in the transverse or longitudinal direction according to the tension vector. Under the tension-stress effect, neovascularization occurs not only in bone but also in the soft tissue [8][9][10].
The biology of bone lengthening includes 3 stages: the latency phase, the distraction phase and the consolidation phase (Fig. 2). The process starts with a corticotomy, which is similar to a closed low-energy fissure fracture, and secure fixation of the two ends. Distraction can be of the callus or physis according to the site of application of the tension-stress effect [11].
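These three phases can be laid out as a simple timeline. The sketch below assumes a 1 mm/day distraction rate and a consolidation phase lasting roughly twice the distraction phase — common rules of thumb rather than figures from this review (the historical series cited above used 1.5-1.6 mm/day), so the defaults are illustrative, not clinical recommendations.

```python
def lengthening_schedule(target_mm, latency_days=7, rate_mm_per_day=1.0,
                         consolidation_factor=2.0):
    """Rough three-phase timeline for callus distraction (all values in days).

    latency: waiting period after corticotomy before distraction starts;
    distraction: target length divided by the daily rate;
    consolidation: here taken as a multiple of the distraction phase.
    """
    distraction = target_mm / rate_mm_per_day
    consolidation = consolidation_factor * distraction
    total = latency_days + distraction + consolidation
    return {"latency": latency_days, "distraction": distraction,
            "consolidation": consolidation, "total_days": total}

# e.g. a hypothetical 50 mm lengthening at 1 mm/day
print(lengthening_schedule(50))
```

The point of such a calculation is that total treatment time is dominated by the consolidation phase, which is exactly what the research on stimulating the regenerate (discussed below) aims to shorten.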
The action of all types of external fixators, whether unilateral or circular, and of internal medullary lengtheners, is based upon the law of tension stress [12,13].
Stimulation of the regeneration area by systemic or local measures has been the mainstay of experimental and clinical research to enhance callus formation and decrease the time the fixator needs to remain in place. Bisphosphonates, high-dose alendronate, calcitonin and nerve growth factor have been administered systemically in experimental trials with variable degrees of success. Local augmentation using a wide range of cells and growth factors, such as BMP-2, BMP-7, TGF-β, platelet-rich plasma and stem cells, is being researched [14].
Evolution of bone lengthening devices
Limb lengthening devices have evolved over the last 100 years. The first trials simply used skeletal traction. The unilateral fixator was the standard method of fixation for a long time; advances in fixator design included the application of half pins in more than one plane and the addition of hinges, which allowed joint movement during distraction [15][16][17][18]. Then, with the advent of Ilizarov's revolutionary ideas, the principle of the circular fixator spread all over the world. The invention of the hexapodal frame achieved similar results and introduced the ability to lengthen and manage all deformities simultaneously without the need to change the frame. Computer-assisted correction with the Taylor Spatial Frame, which is formed of two rings and six struts (each connected with two universal hinges), was a real step forward in improving the accuracy of lengthening and deformity correction [19]. In order to shorten the external fixation period, other methods were developed, such as lengthening over a small-diameter nail and lengthening followed by nailing or plating. In children, flexible intramedullary nails were used to avoid physeal injuries. Over time, the incidence of fracture in the regenerated bone after removal of external fixation was reduced [20][21][22][23]. Finally, in the last two decades, internal bone lengthening nails without the need for external fixation have become popular. The Albizzia nail was designed by Guichet; it has a ratchet assembly and limb rotation is required to induce distraction [24]. In the United States, the ISKD (Intramedullary Skeletal Kinetic Distractor) was cleared for marketing in 2001; however, follow-up revealed a high rate of complications due to uncontrolled distraction and it was withdrawn from the market [15,25,26]. Currently, the motorized lengthening nails Fitbone and PRECICE, which do not require rotation for distraction, are becoming popular [27][28][29][30][31].
Indications for limb lengthening
In general, the indications for limb lengthening are controversial. Classic teaching classifies shortening into 3 categories: less than 2 cm, which can be ignored; 2-4 cm, with the possibility of lengthening; and more than 4 cm where lengthening is needed to avoid possible complications of lower limb length inequality such as pelvic obliquity and scoliosis. Also, a discrepancy of about 5 cm between leg lengths can be treated by epiphysiodesis in growing legs, or shortening of the longer leg at an appropriate time. However, this classification did not take into consideration the patient's height, heel size, tolerability of the shoe lift, family opinion and psychological aspects. With growing experience of the new advances in limb lengthening, these factors usually play an important role in decision making [32,33]. The aetiology of bone shortening and associated deformities is important for planning. The cause may be congenital deficiencies such as fibular hemimelia (Fig. 3), tibial hemimelia or congenital short femur, old poliomyelitis, bone tumours such as hereditary multiple exostosis [34] or past trauma. Bilateral lengthening may be indicated in cases of dwarfism caused by achondroplasia, especially if it is accompanied by deformities such as genu varum.
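The classic thresholds above amount to a small decision rule. The following sketch encodes only those cut-offs; as the text stresses, real decisions also weigh the patient's height, heel size, shoe-lift tolerance, family opinion and psychological aspects, none of which a simple rule captures.

```python
def classic_discrepancy_category(discrepancy_cm):
    """Classic teaching categories for lower limb length discrepancy (cm)."""
    if discrepancy_cm < 2.0:
        return "may be ignored"
    if discrepancy_cm <= 4.0:
        return "lengthening possible"
    return "lengthening indicated to avoid pelvic obliquity/scoliosis"

print(classic_discrepancy_category(3.0))  # → lengthening possible
```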
Complications
Results of limb lengthening are significantly affected by the clinical experience of the operating surgeon [35]. Most teaching courses and programmes only teach frame application, which is just the start of a long course of treatment.
The most common complication of external fixation is pin track infection, with a variable incidence which may reach 100% of treated patients. There are many variables which affect the frequency of this complication, such as duration of fixation, material of the wires or half pins, surgical procedure and wound care. Many pin site care programmes are designed to prevent the development of infection but are not supported by reliable evidence. Treatment usually starts with oral antibiotics and increasing the frequency of pin site cleaning in mild cases and ending up with removal of the pin in severe cases. The use of hydroxyapatite coated pins can reduce the incidence of pin site infection significantly [44].
Poor regeneration is a serious problem during limb lengthening and results from many systemic or local causes. It is important to modify the rate and frequency of distraction according to regeneration. Once delayed regeneration has been diagnosed, alternate cycles of compression distraction can solve the problem [45].
Axial malalignment can develop during distraction, as the different muscles surrounding the limb offer variable resistance (Fig. 4). This can be corrected by changing the connecting rods of the construct in the outpatient clinic, using hinges. Joint subluxation or dislocation is a serious complication, increasingly likely with unstable joints such as in congenital shortening. Management of joint abnormality or instability has to precede lower limb lengthening, and sometimes extending the frame to cross the joint can guard against this complication; however, this increases the possibility of stiffness. Premature consolidation of the regenerated bone has been reported due to irregular distraction of the osteotomy, especially in children. The treatment may be re-osteotomy, or continuation of distraction until the accumulating force exceeds the resistance of the consolidated bone and the osteotomy opens once again, with severe pain. The resultant gap has to be closed and distraction restarted after a few days.
The incidence of complications is affected by the aetiology of the shortening and magnitude of the regenerated area. Extensive bone lengthening can adversely affect growth in children and increase the possibility of joint contractures [46,47].
Achondroplasia
Achondroplasia is the most common skeletal dysplasia, characterized by disproportionate dwarfism. The strategy of lengthening may be transverse, including both tibias, or both femurs, in each stage [48,49]. Most authorities adopt the transverse strategy, as the patient can stop lengthening at any stage of treatment. The deformities seen in achondroplasia can be corrected simultaneously: lumbar hyperlordosis with an extension osteotomy of the femur, varus deformity of the leg, and a disproportionately long fibula may be reduced to normal length during the lengthening process.
The soft tissues in achondroplasia are usually redundantly long [50]. The magnitude of lengthening usually ranges between 25 and 30 cm, which can be gained in stages or by extensive lengthening. In cases with knee and ankle deformities, bifocal tibial lengthening (Fig. 3) can restore the normal mechanical axis and achieve more lengthening with less time in the fixator [51][52][53]. In our institution, the protocol starts early, between 4 and 6 years of age, with differential tibial lengthening for a short distance to correct the varus deformity and bring the fibular head into position with indirect tightening of the lateral ligament. Femoral lengthening is performed between 9 and 11 years. Another tibial lengthening (mostly bifocal) starts at age 13-14. Finally, humeral lengthening is done at age 15 or 16. However, many patients are referred to our institution late and consequently we cannot follow this protocol (Fig. 5).
Cosmetic lengthening
Bone lengthening for aesthetic reasons in people of normal or short stature has been reported recently. Ethical principles and psychological factors have to be taken into consideration. Psychiatric evaluation is mandatory for all patients to exclude body dysmorphic disorder [54]. A detailed preoperative psychological analysis is required to rule out any psychiatric illness that might affect the patient's ability to make a sensible decision; a single counselling session of limited duration may not be enough for a fair appraisal of the patient's mental state. It would be wise to arrange several meetings between the patient and previously treated patients as part of the preoperative preparation programme, to give them a real example of the difficulties to be expected before reaching their goal [55]. The first method used for cosmetic limb lengthening was the Ilizarov method, with a high rate of self-satisfaction and an improved level of social activities (96.7% of patients) [56]. Bilateral tibial lengthening, monofocal or bifocal, was the most common procedure, with a few cases having femoral lengthening as well. Trunk-limb proportions may limit the magnitude of lengthening to 5-7 cm. In 2014, Novikov et al. published the largest series of cosmetic lower limb lengthening treated with the Ilizarov apparatus at the Ilizarov institute, including 131 patients. The ages of the patients ranged from 16 to 67 years, with a mean lengthening of 6.9 cm. At last follow-up there was one poor result (0.77%), with a complication rate of about 37% [42]. The authors were able to manage most of the complications successfully without affecting the final results. However, the patients were kept in the hospital for the whole period of treatment, allowing close monitoring and early management, which is not available in other institutions [55]. The time in the fixator was reduced by using the lengthening-over-nail technique, with a rather moderate rate of complications [57,58].
Intramedullary limb lengthening has developed as an alternative to external fixation that is quite attractive to patients; it has a lower rate of complications but higher costs [30]. Recently, there has been considerable demand for cosmetic lengthening surgery around the world. Even with the extensive experience of the treating surgeon, many soft tissue and bone problems can be expected. The safety of the patient has to be more important than gaining more length [42]. For example, if a weak regeneration zone develops that is not responsive to cycles of compression-distraction, the surgeon has to reduce the planned lengthening by gradual compression to improve the regenerate and avoid nonunion. In our opinion, this procedure has to be undertaken by a surgeon with great experience in the field to handle the potential complications.
Upper extremity lengthening
There are few indications for upper extremity lengthening, but they include achondroplasia, hereditary multiple exostosis with shortening of the forearm bones, physeal growth arrest, amputation, infection and shortening from trauma. The reason upper extremity operations are not attempted as often as lower extremity operations might be the reports of a high rate of complications and the possibility of functional deterioration [59,60]. However, with developing experience, we think that bone lengthening has no harmful effect on the upper extremity. Hybrid fixation minimizes the incidence of neurovascular injury (Fig. 6). Increasing the magnitude of lengthening in the lower limb to more than 20% of the original bone length generally raises the incidence of complications; however, we did not face this problem with lengthening of up to 100% of the limb length in the upper extremities. Preoperatively, there was some abnormality of the shoulder joint, such as dysplasia of the articular surfaces in unilateral cases, which did not affect the final outcome. Bracing for 1 month after fixator removal was advised to guard against fracture of the regenerated bone [61,62]. Intramedullary lengthening nails were successfully applied for humeral lengthening in 6 cases [63]. The primary indications for forearm lengthening are discrepancy between the radius and ulna, congenital longitudinal deficiency and trauma. The rate of distraction has to be modified according to the degree of regeneration to avoid the reported complication of delayed bone formation [64,65]. There are a few papers in the English literature reporting a small number of cases of short bone lengthening. Distraction histogenesis (callotasis) applied in a single stage, and gradual lengthening for congenital and traumatic phalangeal shortening or amputation, achieved excellent outcomes [66]. Two-stage treatment was also used, including osteotomy and gradual distraction followed by bone grafting [67].
Conclusions
Limb lengthening is a rapidly developing field of orthopaedic surgery. Currently it is a standard procedure with predictable results, and indications have been extended to include the upper extremities and cosmetic lengthening. I think experience has a great impact on the results of the different procedures, because follow-up and management of expected complications are cornerstones of the treatment strategy. Unfortunately, the English literature has many papers with relatively small numbers of patients operated on by many surgeons over a long period. This means that the experience of the individual surgeon is based on only one or a few cases per year, and it is sometimes difficult to draw valid conclusions from the reported mixed data. In spite of the introduction of promising intramedullary lengthening nails and computer-assisted external fixation, we still rely on Ilizarov's biologic laws. Advances through research to stimulate regeneration and reduce the period of treatment will be the real revolution in limb lengthening surgery.
Spatially resolved XRF, XAFS, XRD, STXM and IR investigation of a natural U-rich clay
Spatially resolved hard X-ray μ-XRF and μ-XAFS studies using an X-ray beam with micrometer dimensions at the INE-Beamline for actinide research at ANKA and at Beamline L at HASYLAB are combined with scanning transmission soft X-ray microscopy (STXM) and synchrotron-based Fourier transform infrared microspectroscopy (μ-FTIR) recorded with beam spots in the nanometer range to study a U-rich clay originating from Autunian shales in the Permian Lodève Basin (France). This argillaceous formation is a natural U deposit associated with organic matter (bitumen). The results allow us to differentiate between possible mechanisms leading to U enrichment: most likely U immobilization via reaction with organic material associated with clay minerals. Such investigations support the development of reliable long-term radiological safety assessments for proposed nuclear waste disposal sites.
Introduction
Investigations of actinide geological transport in the context of nuclear waste disposal are especially challenging; an accurate prognosis demands process understanding over a geological time domain and a wide range of spatial dimensions. One strategy for meeting this challenge is through investigations of natural analogues, i.e., geological formations with characteristics considered similar to those of proposed nuclear waste disposal repositories. The sedimentary sequence found in the uranium ore deposit (Lodève Basin, Massif Central, southern France) investigated in this study is similar to that of proposed spent fuel repositories in claystone formations, making it potentially suitable as a natural analogue. Samples originating from this formation are the subject of this investigation. The goal of these studies is to identify and characterize the determinant processes leading to uranium immobilization at this site.
The sample studied originates from the Mas d'Alary faulted zone of the Lodève Basin, exhibiting elevated uranium content. More information on the geology of the area can be found in [1][2][3][4]. The uranium is closely associated with organic matter (mostly marine type III), mainly present in the form of bitumen occurring in the reservoir facies [5][6][7][8]. In the particular case of the Lodève basin, the conceptual model is that oxidized U(VI)-bearing fluids reacted within fault zones with the strongly reducing environment of the oil reservoir rocks to form/precipitate bitumen-type material and uranium ore [5]. Oils acting as reductants to precipitate U(IV) phases from U-bearing hydrothermal fluids circulating in oil reservoirs has also been proposed, based on organic chemistry as well as mineralogy, e.g., for the Oklo natural reactors [8].
The goal of this study is to characterize the oxidation state and speciation of uranium found in U-enriched regions of a sediment sample from the Lodève Basin and to elucidate whether organic matter was directly responsible for uranium reduction or whether mineral phases formed by anaerobic degradation of the organic matter may have been responsible for uranium immobilization and accumulation. For this purpose, investigations with micro- to nanoscale resolution are applied: X-ray fluorescence and X-ray absorption fine structure with micro-focused hard X-ray beams (µ-XRF and µ-XAFS), scanning transmission X-ray microscopy (STXM) in the soft X-ray regime and synchrotron-based Fourier transform infrared microspectroscopy (µ-FTIR).
Experimental
The uranium ore sample is provided by CREGU (Nancy, France) and originates from the contact zone adjacent to the fault zone of the Autunian shales, collected in the breccia facies at the Mas d'Alary uranium mine located near Lodève. The clay fraction (<2 µm) is shown by X-ray diffraction to be composed mainly of illite and chlorite minerals. A photograph and autoradiographic image of the sample are shown in figure 1. The autoradiograph is recorded with a Cyclone Phosphor Scanner (Packard BioScience, Dreieich, Germany). Quantification of the U-rich hot spots, visible as dark grey areas in the autoradiograph, shows they contain ~25 mg 238U/g material.
µ-XRF and micro-X-ray absorption near edge spectroscopy (µ-XANES) measurements are recorded at Beamline L at the Hamburger Synchrotronstrahlungslabor (HASYLAB). A confocal irradiation-detection geometry is used, providing added depth information and allowing probing of sample volumes below the surface, thereby avoiding any surface oxidation artifacts caused by cutting and polishing of the clay sample. The principle of this technique is described elsewhere [9,10]. Poly-capillary half lenses are used for both the focusing and collimating optics. The focal spot diameter is approximately 16 µm. µ-XRF measurements are recorded using a band pass of wavelengths with an average weighted energy of 17.6 keV using a Mo/Si multilayer pair (AXO Dresden GmbH, Germany) and a Si drift detector (Vortex, SII NanoTechnology USA Inc., Northridge, CA). U L3 µ-XANES are recorded using monochromatic X-rays at selected sample volumes of high U concentration identified in the µ-XRF maps at Beamline L. Both µ-XANES and extended XAFS (EXAFS) spectra are registered with a high-purity Ge detector (Canberra) at the INE-Beamline at the Ångströmquelle Karlsruhe, ANKA [11]. U L3-edge EXAFS are measured at positions of high U concentration identified by line scans of windowed U Lα counts using the primary beam coming from the focusing mirror. The measured primary beam spot diameter for EXAFS is 300 µm. A focused secondary beam of around 30 μm is obtained using a poly-capillary lens for the µ-XANES measurements. Si(111) and Ge(422) crystals are used in the double crystal monochromator (DCM) at HASYLAB and ANKA, respectively. The DCM energy is calibrated relative to the first inflection point in the K XANES of a Y foil (defined as 17.038 keV) at both beamlines. For the µ-XRD measurements, the incident beam energy, experimental geometry and detector parameters are calibrated against the pattern measured for LaB6 (lattice parameter = 0.415690 nm, ICSD data sheet 340427). Experimental details and analysis are found in [12].
Scanning transmission X-ray microscopy investigations are conducted on the X1A1 undulator beamline at the National Synchrotron Light Source (NSLS), operated by the State University of New York at Stony Brook. The principle of this technique is described in detail elsewhere [13]. For the STXM and µ-FTIR measurements, ~100 nm thick sulfur-embedded ultramicrotome sections mounted on copper TEM grids are prepared for transmission mode (MVA, Inc., Norcross, GA, USA). Carbon K and potassium L-edge spectra are recorded at an undulator gap of 36.8 mm. The Fresnel zone plate used at X1A1 has a diameter of 160 µm and an outermost zone width of 45 nm. The spherical grating monochromator energy is calibrated using the CO2 gas absorption band at 290.74 eV [14].
µ-FTIR measurements are performed at beamline U10B (NSLS) using a Nicolet Magna 860 Step-Scan FTIR instrument coupled to a Spectra-Tech Continuum IR microscope, which is equipped with a 32x Schwarzschild objective and a dual remote masking aperture [15]. Data acquisition is controlled with the Atlµs software (Thermo Nicolet Instruments), using an 8 µm x 8 µm aperture and 1024 scans per point in the mid-IR range (600 to 4000 cm−1), in transmission mode with 4 cm−1 spectral resolution. The background signal is measured in sample-free regions of the TEM grid. Step size = 5 µm x 5 µm.
µ-XRF, µ-XAFS and µ-XRD results
The elemental distributions extracted from the measured Kα intensities of K, Ca, Ti, Fe and Zr and the Lα signal of U in the µ-XRF data recorded in areas of high radioactivity (dark regions in the autoradiogram, figure 1) are displayed in figure 2. Inspection of these images reveals two general observations: 1) the U distribution is often correlated with lighter-element distributions, especially notable in the round features in the upper right corner of the maps, and 2) the U distribution seems to be inversely correlated with areas rich in Fe. These observations are corroborated by the correlation maps displayed in figure 3; there is no correlation between Fe and U, but a large number of pixels exhibit a linear correlation between K and U. In order to determine the U valence state, U L3 µ-XANES (figure 4) and µ-EXAFS (figure 5) are recorded at volumes and areas with high U Lα intensity. The energy positions of the most prominent absorption peak in the XANES (the white line, WL) measured for different sample regions at two different beamlines all lie below that of the U(VI) reference. Furthermore, no multiple scattering feature around 10 eV above the WL indicative of U(VI) [16] is found. We conclude that U is likely present in the sample in the tetravalent state. The U L3 EXAFS for the sample also indicate the presence of U(IV) (figure 5), as no short U-O distance expected for the U(VI) uranyl moiety is observed [16]. The EXAFS data are well fit with a structural model similar to uraninite, UO2 [17]. Best fit results are obtained with 4-5 O atoms at 2.29 Å with σ² = 0.013 Å² and 2-3 U atoms at 3.78 Å with σ² = 0.008 Å². The distances are 2-3% smaller than expected for UO2, and the intensities lead to much smaller coordination numbers than expected (N(O) = 8; N(U) = 12). This may indicate that the UO2-like phase is present as a nanoparticulate material with large surface area having relaxed distances at the surface.
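A pixel-wise correlation of this kind can be computed directly from co-registered element maps. A minimal sketch with synthetic maps (array names, sizes, and the synthetic relationship between the maps are illustrative assumptions, not beamline data):

```python
import numpy as np

def element_correlation(map_a, map_b):
    """Pearson correlation between two co-registered element intensity maps.

    map_a, map_b: 2D arrays of fluorescence counts (e.g. K Kalpha and U Lalpha)
    on the same pixel grid. Returns the correlation coefficient over all pixels.
    """
    a = np.asarray(map_a, dtype=float).ravel()
    b = np.asarray(map_b, dtype=float).ravel()
    a -= a.mean()
    b -= b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

# Synthetic demonstration mimicking the observation: U follows K, not Fe.
rng = np.random.default_rng(0)
k_map = rng.random((64, 64))
u_map = 0.8 * k_map + 0.2 * rng.random((64, 64))   # U tracks the K map
fe_map = rng.random((64, 64))                       # Fe unrelated to U

r_ku = element_correlation(k_map, u_map)            # strongly positive
r_feu = element_correlation(fe_map, u_map)          # near zero
```

On the real maps, a scatter of U Lα counts against K Kα counts per pixel would show the linear branch visible in figure 3.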
The decrease in interatomic distances is also reflected in the µ-XRD patterns measured for a thin section sample area where µ-XRF spectra indicate high U concentration. The 1D diffractogram extracted from a powder ring pattern from such a U-rich area is shown in figure 6. The expected 2Θ positions for uraninite are also indicated. The (111), (002) and (022) peaks for the UO2 in the clay sample are shifted to higher 2Θ values, indicating approximately a 2% shortening of the lattice parameter and an associated shortening of interatomic distances to values similar to those determined in the EXAFS analysis.
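The reasoning from peak shift to lattice shortening follows directly from Bragg's law for a cubic cell. A minimal sketch (the wavelength and bulk uraninite lattice parameter below are illustrative assumptions, not values taken from the experiment):

```python
import math

def two_theta(a, hkl, wavelength):
    """Bragg angle 2*theta (degrees) for a cubic lattice.

    d = a / sqrt(h^2 + k^2 + l^2); lambda = 2 d sin(theta).
    a and wavelength in nm.
    """
    h, k, l = hkl
    d = a / math.sqrt(h * h + k * k + l * l)
    return 2.0 * math.degrees(math.asin(wavelength / (2.0 * d)))

# Assumed illustration values: bulk uraninite a = 0.5471 nm; a Mo K-alpha
# wavelength of 0.0709 nm (~17.5 keV, near the excitation energy used here).
a_bulk = 0.5471
a_nano = 0.98 * a_bulk   # the ~2% lattice shortening inferred for the sample
wl = 0.0709

# A smaller lattice parameter shifts every reflection to higher 2*theta:
shifts = [two_theta(a_nano, hkl, wl) - two_theta(a_bulk, hkl, wl)
          for hkl in [(1, 1, 1), (0, 0, 2), (0, 2, 2)]]
```

All three shifts come out positive, which is the direction of the peak displacement seen in figure 6.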
STXM results
The results of principal component analysis (PCA) and cluster analysis of the STXM data (figure 7) reveal two areas, marked yellow and red, differing in their optical density (OD) and K content. The red areas exhibit a significantly higher OD below the carbon K-edge and absorption bands at K L2,3-edge energies. This indicates that this organic material is associated with clay minerals, possibly of illite type. These areas also show a relatively large carbon absorption edge, indicating that the clay is associated with a rather large amount of organic material. The yellow areas are additional organic material not directly associated with these mineral phases. They have low OD below the carbon edge, which might indicate that they are of almost pure organic nature. The average cluster C(1s)-edge spectra extracted from both regions are generally similar, with the yellow areas appearing to have a higher aromatic content (absorption at 285 eV) and the illite-clay-associated organics in the red areas a higher aliphatic character or metal complexation (absorption in the 287 eV region [18]).
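The PCA-plus-clustering workflow applied to the STXM stack can be sketched in a NumPy-only form. The two synthetic spectral end-members below (an aromatic 285 eV component and an aliphatic 287 eV component with higher baseline OD) are invented stand-ins for the measured per-pixel spectra:

```python
import numpy as np

def pca_scores(spectra, n_components=2):
    """Project per-pixel spectra onto their leading principal components."""
    X = spectra - spectra.mean(axis=0)          # center over pixels
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[:n_components].T

def two_means(points, iters=20):
    """Minimal 2-cluster k-means on PCA scores.

    Centers are seeded at the extremes of the first component, which
    guarantees they start in different spectral populations here.
    """
    centers = points[[np.argmin(points[:, 0]), np.argmax(points[:, 0])]]
    for _ in range(iters):
        labels = np.argmin(((points[:, None, :] - centers) ** 2).sum(-1), axis=1)
        for c in range(2):
            if np.any(labels == c):
                centers[c] = points[labels == c].mean(axis=0)
    return labels

# Synthetic stack: 200 pixels x 50 energies, two spectral classes
# ("pure organic" vs "clay-associated organic" regions).
rng = np.random.default_rng(1)
e = np.linspace(280.0, 300.0, 50)
organic = np.exp(-((e - 285.0) / 0.8) ** 2)          # aromatic peak at 285 eV
clay_org = np.exp(-((e - 287.0) / 0.8) ** 2) + 0.3   # aliphatic peak, higher OD
truth = rng.integers(0, 2, 200)
stack = np.where(truth[:, None] == 0, organic, clay_org)
stack += 0.02 * rng.standard_normal(stack.shape)

labels = two_means(pca_scores(stack))   # recovers the two regions
```

On real data the cluster labels, mapped back to pixel positions, give exactly the red/yellow region maps of figure 7, and averaging the spectra within each cluster gives the cluster C(1s) spectra discussed above.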
Conclusions
In summary, correlations between elements obtained using different techniques (µ-XRF, STXM and µ-FTIR) reveal that the U distribution in our sample is positively correlated with the distribution of lighter elements, notably K. We deduce a correlation between clay minerals of illite type and organic matter in the sample based on both the STXM observation that K (as indicator element) is found associated with organic carbon and the observed spatial coincidence between µ-FTIR clay 'OH' vibrations and vibrations of organic functional groups. From the µ-XAFS analysis, we find that the U is present in its tetravalent form, likely as a nanoparticulate oxide. From these observations we put forward a tentative hypothesis for the mechanism of uranium immobilization. Due to the lack of correlation between uranium and Fe, we exclude Fe minerals as the dominant reductant during immobilization of groundwater-dissolved U(VI) to less soluble U(IV). Combining the knowledge that uranium is found associated with potassium and that clay minerals of illite type are associated with organic matter, we conclude that organic material associated with clay minerals might have been the reducing agent. This hypothesis remains to be scrutinized, and a number of open questions remain. For example, what role did the clay play? Did it act as a catalyst [19] or was it merely an anchor for the organic material? To help refine our understanding of the redox partner involved in the reduction of U(VI) in these sediments, further combined µ-XRD/µ-XRF studies are planned.
We note a critical aspect of investigation with micro- and nano-focused beams: the small areas and volumes actually probed may not be representative of the macro-scale system of interest. In this study the area probed with µ-XRF is on the order of mm²; the STXM measurements are three orders of magnitude smaller. With this in mind, the correlations made in this study are surprisingly consistent with one another.
Empirical cumulative distribution function studies of the dynamics of the state of the gaseous medium upon the occurrence of fire heat sources
A technique for studying the empirical cumulative distribution function of the dynamics of increments of the state vector of the gaseous medium is substantiated. The technique is based on the non-recurrence of increments of the state vector. The function is explored on intervals before and after the appearance of a fire heat source.
Introduction
Fires are one of the factors that undermine the sustainable development of civilization. Fires cause significant damage not only to people but also to the environment. The number of fires in the world shows a hazardous trend of annual growth, and about 90 thousand people die in fires each year [1]. Therefore, the prevention of fires is one of the urgent problems of any state. It is known that hazardous acid rain and pollution of aquifers may result from a fire. Therefore, the identification of hot spots before their transition into fires is one of the important problems of modern times.
I. Literature review and problem statement
From the analysis of the modern literature, it follows that the dynamics of hazardous gaseous environment parameters (GEP) in the event of a fire from various heat sources have a complex, nonlinear, and chaotic character, depending on many uncertain random factors. Currently, various methods are known for identifying heat sources. Most of these methods are complex and have limited sensitivity and efficiency in identifying fire heat sources. Methods of nonlinear dynamics [2] should be considered the most constructive of those known for identifying heat sources based on the dynamics of hazardous GEP. However, at present, there are no articles that consider methods for identifying fire heat sources based on the sample cumulative distribution function of increments in the state of the gaseous environment of premises. Such methods, being non-parametric, make it possible to identify fire heat sources under conditions of great uncertainty. Therefore, an important and unsolved part of the problem of identifying fire heat sources is the lack of studies on the sample cumulative distribution function of the dynamics of increments in the GEP state when such sources appear.
II. The aim and objectives of the study
The aim of the work is to study the empirical cumulative distribution function of the dynamics of increments in the state of the GEP when fire heat sources appear in a room. The results of the study will make it possible in practice to quickly detect the appearance of fire heat sources and to prevent the occurrence of a fire in real conditions.
To achieve the goal of the work, the following tasks were set: to theoretically substantiate the methodology for studying the empirical cumulative distribution function of the dynamics of increments in the state of the GEP when fire heat sources appear in a room; and to investigate the empirical cumulative distribution function of the dynamics of increments of the state of the GEP on two fixed time intervals, before and after the appearance of test fire heat sources in the laboratory chamber.
III. The study materials and methods
The research materials include measurements of dangerous parameters of the state of the gaseous medium in a laboratory chamber for various fire heat sources: alcohol, wood, cellulose, and textiles [3]. Smoke density, mean volume temperature, and carbon monoxide concentration were measured [4]. The measurements were carried out at discrete times i = 0, 1, 2, ..., 400, with an interval of 0.1 s between discrete measurements. The set of dangerous parameters at moment i determined the state vector xi of the gaseous medium at that moment. The smoke density was measured with a TGS2442 sensor (Japan), the average volume temperature with a DS18B20 (Germany), and the carbon monoxide concentration with an MQ-2 (China). Ignition of the test heat sources in the chamber was carried out 20-25 seconds after the start of measurement. The research methods were based on representing the gaseous medium as a complex dynamic system whose state depends on many unknown parameters and factors, for example, the parameters of the heat source and the room, as well as various interfering factors. The probabilistic properties of state increments of the gaseous medium are studied by the method of the sample cumulative distribution function [5].
IV. Substantiation of the methodology for studying the empirical cumulative distribution function of the dynamics of increments in the state of the gaseous medium
The technique is based on representing the state of the gaseous medium as a random event associated with the appearance of a fire heat source. This random event is the result of many random causes. The laws governing these causes are usually unknown. Therefore, it is impossible to predict in advance whether a given event will or will not occur. However, when the state of the gaseous medium is measured, the event is associated with a real random variable, for example, the state vector of the gaseous medium itself or its increments [3]. From a probabilistic point of view, any random event is completely described by the integral probability distribution function of the random variables associated with this event. In the English-language literature, the integral probability distribution function is usually called the cumulative distribution function. The technique includes the sequential execution of seven procedures. The first procedure is to measure the hazardous parameters of the GEP. This procedure is carried out using measuring sensors. Based on the measurement results, each sensor generates the current values of the GEP state vector xi, where i = 0, 1, 2, ..., Ns - 1 and Ns is the maximum number of discrete measurements performed by each of the sensors. The second procedure includes the generation of the increments of the state vector xi. The subsequent procedures identify pairs of similar (recurrent) elements of the increment space, for which the metric di,j is less than a given value ε; this similarity is expressed through a characteristic (indicator) function. The fifth procedure consists in calculating, for each moment i and a given value ε, function (1), which determines the sample probability of recurrence (similarity to within the value ε) for the increments of the state vector of the gaseous medium up to the moment i inclusive.
The sixth procedure is to determine the current probability of the opposite event, the probability of non-recurrence (dissimilarity) of the increments of the state vector of the gaseous medium. This probability is calculated from (1), following expression (2). The seventh procedure is to calculate the sample cumulative distribution function (3), based on (2), for an arbitrary interval of measurement moments. Calculation of the sample cumulative distribution function (3) allows one to study the features of the dynamics of the probability of non-recurrence (dissimilarity) of increments of the state vector of the gaseous medium. Thus, the described technique comprises the sequence of calculations (1)-(3) based on measurements of dangerous parameters of the gaseous medium and allows the sample cumulative distribution function of the dynamics of increments of the state vector to be studied for arbitrary time intervals. This allows the sample cumulative distribution function to be used to detect fire heat sources in a room in real time.
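The core of procedures two through seven can be sketched as follows. Since expressions (1)-(3) are not reproduced here, the recurrence functional below (Euclidean metric on increments with threshold ε) is an assumed but representative form, not the authors' exact formula:

```python
import numpy as np

def nonrecurrence_probability(states, eps):
    """Sample probability of non-recurrence of state-vector increments.

    states: array (N, m) of gaseous-medium state vectors x_i.
    For each moment i, returns the fraction of increments up to i whose
    Euclidean distance from the current increment is >= eps (dissimilarity),
    i.e. one minus the sample recurrence probability.
    """
    inc = np.diff(states, axis=0)                  # increments of x_i
    q = np.empty(len(inc))
    for i in range(len(inc)):
        d = np.linalg.norm(inc[: i + 1] - inc[i], axis=1)
        q[i] = np.mean(d >= eps)
    return q

def empirical_cdf(values):
    """Sample cumulative distribution function over an interval of moments."""
    xs = np.sort(values)
    ys = np.arange(1, len(xs) + 1) / len(xs)
    return xs, ys

# Synthetic demonstration: a quiet interval vs an interval with a heat source.
rng = np.random.default_rng(2)
quiet = rng.normal(0.0, 0.002, (100, 3)).cumsum(axis=0)   # small increments
fire = rng.normal(0.05, 0.05, (100, 3)).cumsum(axis=0)    # large, varied increments

q_quiet = nonrecurrence_probability(quiet, eps=0.01)      # mostly recurrent
q_fire = nonrecurrence_probability(fire, eps=0.01)        # mostly non-recurrent
```

The ECDFs of `q_quiet` and `q_fire` computed over the two intervals play the role of the before/after functions studied in the next section.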
V. Results of the study of the empirical cumulative distribution function of the dynamics of increments of the state vector of the gaseous medium
The studies were carried out on the basis of experimental measurements of dangerous parameters of the gaseous medium in a laboratory chamber for test fire heat sources. The measurements were performed on two different time intervals, before and after the appearance of the test fire heat sources in the laboratory chamber. The duration of each interval was 100 discrete measurements. The beginning of the first interval was the 100th discrete measurement; the beginning of the second interval was the 200th discrete measurement. These intervals correspond to the reliable absence and presence of a fire heat source in the chamber. Empirical cumulative distribution functions of the probability dynamics of non-recurrence (dissimilarity) of increments of the state vector of the gaseous medium were obtained over the studied intervals for the fire sources of alcohol, cellulose, wood and textiles, corresponding to the value ε = 0.01. The results take into account the real errors of the sensor measurements of dangerous parameters of the gaseous medium in the laboratory chamber. The sensors applied in the experiment are used in existing fire detectors. Therefore, the results obtained can be considered reliable.
VI. Discussion of the results of the study
The research results are explained by the complex nature of the real dynamics of the vector of increments of the state of the gaseous medium in the laboratory chamber for the test fire heat sources. According to the results, the possible values of the empirical cumulative distribution function of the probability dynamics of non-recurrence (dissimilarity) of increments of the state vector of the gaseous medium in the chamber differ between the absence and the presence of a fire heat source. For example, the lower bounds of the non-recurrence probability values differ. For alcohol, this lower bound is determined by probability values in a region centered at 0.5. For cellulose, wood, and textiles, this boundary is defined by regions centered at 0.35, 0.27, and 0.4, respectively. Such a scatter of boundaries is explained by the different quality of restoration of the state of the gaseous medium in the chamber after each study. The cumulative distribution functions have characteristic regions of increase and of constancy. The main feature of the cumulative distribution functions in the case of the appearance of the test fire heat sources is the decrease of the probability value on fixed sections of the functions compared to the case of the absence of heat sources. For example, for alcohol this probability decreases from 0.58 to 0.15, and for cellulose from 0.61 to 0.29. For wood this probability decreases from 0.71 to 0.28, and for textiles from 0.68 to 0.44. Following the well-known property of cumulative distribution functions, for intervals of constant values the probability that the random variable under study falls into these intervals is equal to zero. This means that non-recurrence probability values for alcohol and cellulose in the ranges from 0.53 to 0.9 and from 0.39 to 0.9, respectively, occur with zero probability.
The non-recurrence probability values for wood and textiles likewise occur with zero probability in the intervals from 0.33 to 0.93 and from 0.45 to 0.85, respectively. Regions of increase of the cumulative distribution functions define those intervals of non-recurrence probability values for which the probability is non-zero. This probability is determined by the difference between the values of the corresponding cumulative distribution function at the boundary points of the interval. Thus, the features of the empirical cumulative distribution functions of the probability dynamics of non-recurrence (dissimilarity) of increments of the state vector of the gaseous medium upon the appearance of fire heat sources allow early fire detection. The main sign of this is the decrease in the values of the empirical cumulative distribution function on fixed sections of the functions. For the studied test fire sources, the values of the empirical cumulative distribution function on fixed sections of the functions lie in the range from 0.15 to 0.44. The minimum value of 0.15 is typical of the heat source in the form of alcohol. The maximum value of 0.44 is typical of the fire source in the form of textiles. This is explained by the fact that these fire sources have the maximum and minimum ignition rates of the material, respectively. The limitations of the study include the finite set of test fire sources and the use of experimental data on the parameters of the gaseous environment in a laboratory chamber.
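The detection sign described above, a drop of the empirical CDF at a fixed argument, can be expressed as a simple comparison of the two sample CDFs. The samples and the decision threshold below are hypothetical illustrations, not experimental values:

```python
import numpy as np

def ecdf_at(values, x):
    """Empirical CDF F(x): fraction of sample values <= x."""
    return float(np.mean(np.asarray(values) <= x))

def fire_sign(q_before, q_after, level):
    """Detection sign from the study: at a fixed argument `level`, the
    empirical CDF of non-recurrence probabilities drops when a heat
    source appears (e.g. from 0.58 to 0.15 for alcohol). Returns the drop."""
    return ecdf_at(q_before, level) - ecdf_at(q_after, level)

# Hypothetical non-recurrence samples before/after a source appears:
rng = np.random.default_rng(3)
q_before = rng.uniform(0.2, 0.6, 100)    # lower non-recurrence values
q_after = rng.uniform(0.5, 0.95, 100)    # values shift up after ignition

drop = fire_sign(q_before, q_after, level=0.55)
alarm = drop > 0.2                        # fixed decision threshold (assumed)
```

A real detector would evaluate `fire_sign` on a sliding pair of measurement windows and raise an alarm when the drop exceeds a calibrated threshold.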
Conclusions
A method for studying the empirical cumulative distribution function of the dynamics of increments of the GEP state vector upon the appearance of fire heat sources is substantiated. The technique includes the implementation of seven sequential procedures. Sequential execution of these procedures makes it possible to investigate the sample cumulative distribution function of the probability dynamics of non-recurrence of increments of the state vector of the gaseous medium. This makes it possible to use the sample cumulative distribution function for early detection of fire sources in a room. The empirical cumulative distribution function of the probability dynamics of non-recurrence of GEP state vector increments was studied on two fixed time intervals of equal duration, before and after the appearance of test fire heat sources in the laboratory chamber. It has been established that the features of the empirical cumulative distribution functions of the probability dynamics of non-recurrence (dissimilarity) of increments of the state vector of the gaseous medium allow early detection of a fire. The main sign of this is the decrease in the fixed values of the empirical cumulative distribution function. It is determined that for the test fire heat sources, the values of the empirical cumulative distribution function (on fixed sections) lie in the range from 0.15 to 0.44. These probabilities are determined by the different ignition rates of the test fire heat sources. In general, the research results indicate the possibility of using the features of the empirical cumulative distribution functions of the probability dynamics of non-recurrence (dissimilarity) of increments of the state vector of the gaseous medium on different intervals for early detection of a fire.
Low-cost Fermentation of Polyhydroxy Fatty Acid Esters
With the depletion of traditional fossil energy and growing environmental problems, there is an urgent need for alternative materials. Biosynthesis not only supports a low-carbon green economy but also reduces energy consumption. Polyhydroxy fatty acid ester (PHA) is a material with excellent properties, but its production cost is high, mainly due to the high cost of the fermentation carbon source, the fermentation process, and other factors. Using waste as substrate not only reduces fermentation costs but also solves environmental problems. In addition, mixed strains can be used to reduce the cost of fermentation.
Structure Classification of Polyhydroxy Fatty Acid Esters (PHA)
In 1926, PHB was first identified in microorganisms, the first PHA to be identified. Fifty years later, new members of the PHA family were being found in different microorganisms [1]. Polyhydroxy fatty acid ester (PHA) is a macromolecular biopolyester, chemically an ester polymer. Under the action of microbial PHA synthase (PhaC), hydroxy fatty acids with a certain carbon chain length are linked through ester bonds to form various types of PHA polyesters [2].
According to monomer carbon chain length, PHA can be divided into short-chain PHA and medium/long-chain PHA. The monomer carbon chain length of short-chain PHA is generally 3 to 5, while that of medium- and long-chain PHA is between 6 and 14. According to the arrangement of the monomers, PHAs can be divided into homopolymers, random copolymers, and block copolymers [3].
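The chain-length classification above amounts to a simple rule; a sketch with the boundary values taken from the text:

```python
def classify_pha_monomer(carbon_chain_length):
    """Classify a PHA monomer by carbon chain length, per the scheme above:
    short-chain: 3-5 carbons; medium/long-chain: 6-14 carbons."""
    if 3 <= carbon_chain_length <= 5:
        return "short-chain"
    if 6 <= carbon_chain_length <= 14:
        return "medium/long-chain"
    return "outside the usual PHA range"

# e.g. 3-hydroxybutyrate (C4, the PHB monomer) is a short-chain unit:
kind = classify_pha_monomer(4)
```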
Material Properties of Polyhydroxy Fatty Acid Esters (PHA)

The diversity of PHA material structure arises from the diversity of PHA monomer type, polymerization proportion, polymerization form, and molecular weight, resulting in diverse material properties, such as thermodynamic properties and biodegradability, and giving PHA broad application prospects [4]. PHA is also biodegradable and biocompatible, and can be degraded into water and carbon dioxide by many microorganisms in the natural environment [5]. Different PHAs composed of a single monomer have completely different thermal properties, and the thermal properties of PHAs composed of different kinds of short-chain monomers also differ. When the temperature exceeds the melting temperature, medium- and long-chain PHA exhibits viscosity [6]. Adding other polymers to medium/long-chain PHA can increase the melting point and speed up the crystallization rate, which helps polymer processing. The copolymer formed when a small amount of long-chain PHA monomer is added to PHA shows better flexibility and thermal stability in conventional thermoplastic processing [7].
Reduce Fermentation Cost through Substrate
At present, the main way to produce PHA is single-strain microbial fermentation. Although the yield is high, the cost of the fermentation substrate is relatively high. Moreover, single-strain fermentation requires a strictly aseptic environment during fermentation, which makes the production cost three times higher than that of traditional plastics and deprives PHA of market competitiveness [8]. Among the substrates required for PHA production, the carbon source is the most expensive, accounting for 30% of the total cost; it is the main factor behind the high production cost of PHA and the limit on its large-scale application [9]. To serve as a stable carbon source, the supply quantity should be ensured and the quality should be relatively stable. Biomass waste is the waste generated by human production and consumption in the process of utilizing biomass. If it can be used as a carbon source, it is expected to reduce the cost of large-scale PHA production. Using biomass waste as a carbon source not only reduces the expensive cost of the microbial fermentation carbon source but also makes better use of the waste. The available carbon sources are mainly of the following types: food processing waste, kitchen waste oil, lignin, etc.
Food Processing Waste:
Food processing waste is a nutrient-rich waste of daily life. In most Chinese cities, its treatment not only fails to make full use of the energy but also adds pressure to waste disposal. With increasing awareness of environmental protection, harmless treatment of waste has become very important, and research on resource utilization is ongoing. Anaerobic digestion can pretreat the waste under conditions of high solid content and high water content. The requirement on the concentration of organic matter is not very strict, but the operation control is more complicated. If the proportion of oil in food processing waste is high, de-oiling is applied before anaerobic fermentation, but the oil component is difficult to remove. In anaerobic fermentation, long-chain fatty acids such as oils are organic matter that is difficult to degrade. During fermentation, calcium fatty acid solids are easily produced, forming massive precipitates that affect the fermentation; these solids may also clog the pipes of the fermentation system. In addition, the salt content of kitchen waste is generally relatively high; the enriched microorganisms must synthesize PHA using the hydrolytic acid liquid of kitchen waste containing a certain amount of salt as carbon source, and salt-tolerant bacteria should be screened for synthesis under conditions of high salt content of the garbage leachate. The fermentation by-product, biogas residue, is nutritionally comprehensive and rich, and can be further processed into fertilizer. Therefore, it is urgent to develop the corresponding resource utilization of kitchen waste. After environmentally sound treatment, volatile fatty acids can be obtained from kitchen waste through anaerobic fermentation, hydrolysis, acidification, and other procedures. These products can be used as carbon sources for bacteria to synthesize PHA, which both treats kitchen waste and reduces the production cost of PHA. Rathika R [10] took diluted sugarcane molasses as carbon source; after 48 h fermentation by Bacillus subtilis, the biomass reached 9.5 g/L and the PHA content reached 70.5% of the biomass. Wen QX [11] used food fermentation broth as substrate to produce PHA with a yield of 44.8% of dry weight.
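As a quick consistency check on such reports, the volumetric PHA titer follows from the biomass and the intracellular PHA content; for the molasses fermentation cited above [10]:

```python
# PHA titer from the reported biomass and intracellular content [10]
biomass = 9.5         # g/L dry biomass after 48 h (Bacillus subtilis, molasses)
pha_fraction = 0.705  # PHA as a fraction of dry biomass (70.5%)
pha_titer = biomass * pha_fraction
print(round(pha_titer, 2))  # about 6.7 g PHA per litre of broth
```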
Kitchen Waste Oil
Oil is an indispensable component of the normal diet. It not only provides calories but also contains fatty acids and various fat-soluble vitamins that the human body cannot synthesize yet badly needs. Our country consumes a huge amount of edible oil every year, while the quantity of waste oil is about 6 million t, and the figure keeps growing. At present, oils and fats can be further processed industrially into surfactants and biodiesel. In the production process, large amounts of waste oil and other by-products are discarded, and their continuous accumulation puts great pressure on the environment and on humans. If large quantities of oils and fats are not handled properly, harmful substances enter our living environment and are, to a certain extent, harmful to the environment and the human body. Fats contain fatty acids and phytosterols; fats and oils are hydrocarbons that can be used as carbon sources by microorganisms. Biosynthesis of the biodegradable plastic PHA from waste oil can, on the one hand, solve the environmental pollution caused by improper disposal of waste oil and, on the other, realize the recycling of waste resources. Palm oil is an edible vegetable oil currently produced and consumed in large amounts. After palm oil production, the wastewater still contains a large amount of low-quality sludge palm oil (SPO), which is rich in nutrients. SPO mainly contains long-chain fatty acids and is an ideal substrate for PHA synthesis. The PHA synthesis efficiency of long-chain fatty acids obtained from oil treatment is higher than that of other carbon sources, and more than 1 g PHA can be synthesized per gram of vegetable oil. This is because the long-chain fatty acids contained in oil are easily decomposed into many short-chain fatty acids, which are used through the β-oxidation pathway to directly generate acyl-CoA, the precursor of PHA. Many PHAs are synthesized by the dehydrogenation of acyl-CoA to enoyl-CoA, which is then processed by enoyl-CoA hydratase to generate hydroxyacyl-CoA and further catalyzed to synthesize PHA [12]. Ward PG [13] found that the yield of PHA produced by Pseudomonas using SPO was similar to that obtained using fatty acids, avoiding the use of refined carbon sources and greatly reducing the substrate cost. Mohamad AH [14] used waste glycerol as substrate to produce PHA that reached 80% of dry weight.
Lignocellulose:
About 900 million tons of agricultural and forestry wastes are produced in our country every year. The incineration of agricultural and forestry wastes is an important factor in the formation of haze; it wastes energy that could be used and has a great impact on the environment. Therefore, it is of great significance to properly utilize agricultural and forestry wastes and to develop clean utilization technology. The lignocellulose in agricultural and forestry wastes comes from wood-cutting residues and planting wastes of industrial processing. For a long time, the cheap lignocellulose in these wastes could not be effectively used because no suitable route existed. The compact structure of most lignocellulose prevents enzymes from breaking it down, so good pretreatment is important. Lignin is a polymer of phenylpropane-derived units that must be depolymerized and deoxygenated into monomers before it can be used by microorganisms in normal metabolic activities. In lignin biodepolymerization, wood rot fungi and other microorganisms have been found to be efficient.
White rot fungi among the wood rot fungi mainly produce laccase, lignin peroxidase, and a variety of extracellular oxidoreductases, together with a synthesized free-radical auxiliary enzyme system, to depolymerize lignin [15]. Through these reactions, wood rot fungi complete the biodepolymerization of lignin, in which carbohydrate complexes, lignin side chains, and aromatic ring structures are cleaved. The biodepolymerization process is a redox reaction using the lignin-degrading enzyme system. The degradation rate of lignocellulose is very important when using microorganisms such as wood rot fungi; on the other hand, the cellulose structure after degradation also plays a crucial role in later utilization. Li D [16] used fermented wood fermentation broth as carbon source to produce PHA up to 50.3% of dry weight. Kumar [17] used lignocellulose after microbial fermentation as substrate, and the final yield of PHA reached 11.1 g/L.
Reducing Fermentation Costs through Mixed-Culture Fermentation
Compared with the energy-intensive chemical synthesis of PHA, microbial fermentation offers mild conditions, simple operation, and broad applicability, which better fits the green, low-energy development concept. Microbial synthesis of biodegradable plastics has become a hot research direction with good development prospects and a broad application market. PHA-synthesizing bacteria can be isolated from a variety of environments and can efficiently ferment different carbon sources; examples include Pseudomonas and Alcaligenes [18]. Rhodospirillum rubrum is a purple non-sulfur bacterium belonging to the Alphaproteobacteria. It is known for its metabolic diversity and can grow autotrophically or heterotrophically, by aerobic respiration as well as by anaerobic photosynthesis using light as an energy source. Typically, carbon is supplied in excess while nitrogen, phosphorus, oxygen, potassium, or another essential nutrient is limited; this unbalanced growth condition favors the accumulation of PHA in R. rubrum [19]. At present, however, the high production cost of PHA limits its large-scale application [20].
Many bacteria in nature can synthesize PHA, so PHA can also be produced using mixed microbial communities. Fermentation in an open environment avoids costly sterilization and aseptic conditions, and mixed communities further reduce the production cost of PHA. To produce PHA with a mixed community, the fermentation substrate is first selected (using biomass waste can significantly reduce costs) and pretreated. For enrichment and screening of the mixed community, the treated substrate is used to enrich PHA-synthesizing bacteria, usually by the feast-and-famine method: the carbon source is first supplied in excess so that all microorganisms grow rapidly, and then restricted so that bacteria that have not accumulated PHA cannot survive, successfully enriching those with PHA-synthesizing ability. The enriched community is then used for PHA accumulation, and the biomass of the community determines the achievable PHA production level. Huang [21] proposed an extended-culture method for PHA production: the carbon source and the other nutrient elements were supplied in separate vessels, so that the bacteria acquired carbon and accumulated PHA in one fermentation vessel, then obtained the other growth elements, and grew with PHA already stored in vivo, in another. After repeated cycles, the biomass of the enriched community steadily increased and bacteria with high PHA-synthesizing ability were retained. With this method, microbial biomass was increased 52-fold to a cell concentration of 17.22 g/L, and the PHA content reached 0.49 g/g CDW (grams of PHA per gram of cell dry weight).
Conclusion and Prospect
With the depletion of fossil fuels and mounting environmental problems, it is important to seek reasonable alternatives. Polyhydroxy fatty acid esters (PHA) are widely studied for their favorable properties, and more and more varieties with excellent performance have been developed for ever wider use. Mass production, however, remains a problem, and much research has addressed its high production cost. Using cheap carbon sources such as kitchen waste, waste cooking oil, and lignocellulose can solve a waste-disposal problem while supporting PHA production and reducing its cost. On the other hand, genetic engineering can optimize and transform production strains, and approaches using mixed communities, halophilic bacteria, and the like allow more PHA to be synthesized under non-sterile conditions. As a bioplastic, PHA will save petrochemical resources, reduce environmental pollution, and support the sustainable development of resources. Future research will develop PHA materials with more comprehensive performance and lower cost to replace existing polluting or poorly performing materials, so that they truly play a role in daily life.
Figure 1. General structure of PHA
Irritability is Common and is Related to Poorer Psychosocial Outcomes in Youth with Functional Abdominal Pain Disorders (FAPD)
Functional abdominal pain disorders (FAPD) are associated with increased emotional problems which, in turn, exacerbate functional impairment. However, irritability, which relates both to internalizing and externalizing problems, has not been specifically examined in these youths. Irritability may be common and adversely impact functioning in pediatric FAPD, particularly for males who are more likely to experience such symptoms. The current study examined the relationship between irritability and psychosocial and pain-related impairment in youth with FAPD. Data were gathered as part of a larger study examining a psychological treatment for youth with FAPD and were compared to previously published data on irritability in healthy controls and in youth with severe emotional dysregulation. For the current study, participants (ages 9–14) with FAPD and caregivers completed measures of child irritability, pain-related and psychosocial functioning, and parent functioning. Pearson correlations revealed significant positive associations between irritability and anxiety, depressive symptoms, pain catastrophizing, and caregiver distress. Results also indicated that parents reported significantly greater irritability in males, but males and females reported similar rates of irritability. Gender moderated the relationship between child-report of irritability and anxiety only. Future research may include tailoring of behavioral intervention approaches for pediatric FAPD to specifically target symptoms of irritability.
Introduction
Functional abdominal pain disorders (FAPD) affect up to 12% of children and adolescents between the ages of 4 and 18 [1][2][3]. Research in adult populations indicates that FAPD (e.g., the irritable bowel syndrome subtype) is associated with significant psychosocial impairment and mood problems, including symptoms of anxiety such as increased worry [4], symptoms of depression such as increased sadness and loss of interest/pleasure [5], and difficulty regulating emotions, manifested as increased irritability [4,6,7]. In youth with FAPD, substantial research suggests that symptoms of anxiety and depression (i.e., internalizing symptoms) are frequently observed [2,8,9] and associated with increased pain-related impairment (e.g., decreased physical and academic functioning [2,10]). While the literature generally suggests comparable rates of externalizing problems between youth with FAPD and healthy controls [11][12][13][14], it is unknown whether symptoms that may manifest in both internalizing (i.e., mood) and externalizing (i.e., behavioral) disorders, such as irritability (evident through increased anger, becoming easily annoyed, losing one's temper easily, etc. [15]), uniquely impact youth with pediatric FAPD, or whether differences emerge by gender. Irritability, in particular, may be an important construct to examine because (1) it is a hallmark symptom of both internalizing disorders, such as Major Depressive Disorder (MDD), and externalizing disorders, such as Oppositional Defiant Disorder (ODD), and (2) it has more recently been demarcated as an important symptom in relation to the newer mood diagnosis of Disruptive Mood Dysregulation Disorder (DMDD; characterized by significantly irritable or angry mood and frequent temper displays; Diagnostic and Statistical Manual of Mental Disorders, 5th edition (DSM-5 [16])).
Further examination of the prevalence of irritability symptoms as a proxy for both externalizing symptoms and mood dysregulation may be acutely relevant to youth with FAPD, given the high rates of psychological comorbidities that may involve or be associated with mood symptoms that may manifest as irritability, including anxiety and depressive symptoms [9,17], in this population. Recent findings suggest that the presence of psychological problems that are related to irritability such as anxiety [18] can significantly and negatively impact psychosocial treatment outcomes for pediatric FAPD [19], though no one has systematically investigated the unique role of irritability in relation to functioning in youth with FAPD. Further, youth with FAPD and other recurrent pain syndromes may experience increased parent/caregiver stress within the family system [20,21], in addition to the added stress of coping with their medical condition, which may in turn increase irritability and magnify pain-related disability.
Although a risk categorization system incorporating child reports of anxiety, pain levels, and disability has been developed to identify youth with FAPD who are at risk for persistent disability [8], the psychosocial and emotional factors like irritability that may underlie the relationship between FAPD and increased pain-related and psychosocial impairment remain poorly understood. It may be that presence of irritability, which has been found to be associated with a variety of issues including anxiety, depression, and emotion regulation difficulties [4,17,22,23], accounts for clinical impairment in a subset of youth with FAPD and additional risk factors such as high levels of anxiety.
Further, it may be important to understand irritability from both patient and parent perspectives, given that youth may underreport such symptoms due to perceived stigma (social desirability response bias) or other social factors [24,25], similar to what is observed when children are asked to report on their own externalizing symptoms, and often underreport such symptoms as compared with their caregiver [26]. Furthermore, research on irritability in non-pain populations also indicates that there are gender differences in self-report. Specifically, males may report higher rates of irritability and associated externalizing symptoms [27] as opposed to anxiety or depression, when compared to females [22,28]. However, gender differences in rates of irritability have not been explicitly examined in FAPD. Learning more about the incidence and associated characteristics of increased irritability in youth with FAPD, in addition to specific variations based on gender, may serve to enhance understanding of which youth may be at increased risk for poor outcomes and may benefit from a tailored psychosocial intervention.
The current study aimed to (1) examine rates of irritability in youth with FAPD and (2) investigate how increased irritability may relate to psychosocial and pain-related outcomes. It was hypothesized that (1) increased irritability would be common in youth with FAPD and (2) increased irritability would be significantly associated with greater psychological, family-related, and pain-related impairment in functioning. (3) Based on the adult literature, it was also hypothesized that males would experience higher rates of irritability than females.
Participants
Participants included youth with FAPD between the ages of 9 and 14 presenting for treatment at one of several pediatric gastroenterology clinics at a children's hospital. During the screening process for study eligibility, a trained research coordinator had the referring physician complete a checklist based on Rome IV criteria [29]. Furthermore, at each patient's baseline visit, they were administered a comprehensive functional gastrointestinal disorders (FGID) interview based on Rome III criteria [30] by a trained postdoctoral fellow or clinical research coordinator. Based on this interview, it was confirmed that all participants met criteria for FAPD. These data were also compared to previously published data on irritability in healthy controls and in youth with severe emotional dysregulation [15], because this measure has not been used in other pediatric pain populations. The current study was approved by the Cincinnati Children's Hospital Medical Center (CCHMC) IRB (IRB # 2015-1388; Date of Approval: 9 March 2015).
Procedures
Data were gathered as part of a larger study examining the effect of a psychological intervention to target pain and co-occurring anxiety in youth with FAPD. Data were collected (between 2015 and 2017) in person by a trained clinical research coordinator during a pediatric gastroenterology office visit at Cincinnati Children's Hospital Medical Center in Cincinnati, OH. All study procedures were approved by the hospital Institutional Review Board. After receipt of informed consent and assent for participation of both children and a primary caregiver (to complete questionnaires about their own/child's functioning and engage in the psychological intervention), youth were asked to complete screening questionnaires (i.e., the Functional Disability Inventory; FDI) to determine eligibility for the primary study. If eligible for the primary study (a score of >7 on the FDI for two weeks or greater and a physician-confirmed diagnosis of FAPD), participants and their caregivers were invited to complete a baseline assessment, where measures of parent- and child-reported child irritability, pain-related impairment, and psychosocial impairment were obtained. Parents also completed a measure of their own distress.

Affective Reactivity Index (ARI, Parent and Child Report)

The Affective Reactivity Index is a validated measure of irritability for ages 5-17 [15]. For both parent and child report, respondents are asked to rate the child's level of irritability on six items (e.g., "gets angry easily", "often loses his/her temper") over the past week. Items are rated on a 0-2 scale (0 = "not true at all"; 1 = "somewhat true"; 2 = "certainly true"). The total score is the sum of the six items, with higher scores indicating greater irritability. The internal consistency of the ARI for the current sample was excellent for both child-report (0.90) and parent-report (0.92).
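The ARI scoring rule just described (sum of six 0-2 items) and the Cronbach's alpha statistic used here to report internal consistency can be sketched in a few lines of Python. This is only an illustration: the function names and respondent data below are hypothetical, not taken from the study.

```python
from statistics import pvariance

def ari_total(item_scores):
    """Sum the six ARI items (each rated 0-2); higher = more irritability."""
    assert len(item_scores) == 6 and all(0 <= s <= 2 for s in item_scores)
    return sum(item_scores)

def cronbach_alpha(respondents):
    """Cronbach's alpha for a list of per-respondent item-score lists."""
    k = len(respondents[0])
    item_vars = [pvariance([r[i] for r in respondents]) for i in range(k)]
    total_var = pvariance([sum(r) for r in respondents])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical respondents (the study used real parent/child ARI data)
sample = [[2, 2, 1, 2, 2, 1], [0, 0, 1, 0, 0, 0],
          [1, 1, 1, 1, 2, 1], [0, 1, 0, 0, 0, 0]]
print(ari_total(sample[0]))
print(round(cronbach_alpha(sample), 2))
```

Alpha values near 0.9, as reported for the ARI in this sample, indicate that the six items covary strongly across respondents.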
Measures of Psychological, Family-Related and Pain-Related Functioning
Functional Disability Inventory (FDI, Parent and Child Report)

The FDI, a self-report questionnaire, has been validated for use in pediatric chronic pain populations between the ages of 8 and 17 and is used to assess difficulty in completing various activities due to health symptoms [31,32]. Responses for each of the 15 items range from 0 (no trouble) to 4 (impossible). Item responses are summed to create a total disability score (range = 0-60), interpreted as follows: no/minimal disability = 0-12; moderate disability = 13-29; severe disability = 30+. The internal consistency of the FDI for the current sample was 0.83, which is considered very good.
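The FDI total score and the disability cut-offs described above can be sketched as a small scoring function. The function name and example responses are hypothetical; the study itself used scored questionnaires, not this code.

```python
def fdi_score(responses):
    """Sum 15 FDI items (0-4 each) and map the total to a disability category."""
    assert len(responses) == 15 and all(0 <= r <= 4 for r in responses)
    total = sum(responses)
    if total <= 12:
        category = "no/minimal"
    elif total <= 29:
        category = "moderate"
    else:
        category = "severe"
    return total, category

# A child answering "a little trouble" (1) to every item lands in the
# moderate-disability band, matching the 13-29 cut-off above.
print(fdi_score([1] * 15))
```

Note that the sample mean FDI of 18.3 reported later falls in this moderate band.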
Pain Catastrophizing Scale (PCS, Child Report)

The child version of the PCS contains 13 items about thoughts and feelings related to the child's pain experience. Response options range from not at all (0) through mildly (1), moderately (2), and severely (3) to extremely (4). Total scores range from 0 to 52, with higher scores reflecting greater catastrophizing. Total catastrophizing scores were used for analyses in this study. The PCS-C has been validated in pediatric pain samples between the ages of 8 and 16 [33]. The internal consistency of the PCS for the current sample was excellent (α = 0.93).
Child Depression Inventory (CDI-2, Child Report)
The Child Depression Inventory (CDI-2) [34] is a 28-item self-report questionnaire that assesses symptoms of depression in children and adolescents. It has been consistently validated for use in children/adolescents between the ages of 7 and 17. Items, scored on a 3-point scale, are summed to derive a total score, with higher scores indicating greater severity of depressive symptoms (range 0-56). Of note, for the purposes of this study, the two overlapping irritability items on the CDI-2 (e.g., "I feel cranky . . . ") were removed in order to minimize overlap with the irritability measure. The internal consistency of the CDI-2 for the current sample was 0.90, which is considered excellent.
Screen for Child Anxiety-Related Disorders (SCARED, Parent and Child Report)
The SCARED is a widely used screening instrument for clinically significant anxiety symptoms in youth [35]. It has 41 items, is based on the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV-TR), and has been validated for use in children ages 8-18 [36]. The SCARED has been validated in a pediatric pain sample [37] and in clinical samples of youth with abdominal pain conditions [1,38,39]. Youth are asked to report frequency of anxiety symptoms over the past three months. Responses include: "not true", "sometimes true", and "often true". Total scores range from 0 to 82, with higher scores reflecting greater levels of anxiety. The internal consistency of the SCARED for the current sample was excellent (α = 0.93).
Depression Anxiety Stress Scales (DASS-21, Parent Report)
The DASS-21 assesses symptoms of depression, anxiety and stress in adults (parents) using a 21-item questionnaire [40,41]. Each item is rated on a 4-point Likert scale ranging from 0 (did not apply to me) to 3 (applied to me very much or most of the time). The internal consistency of the DASS for the current sample was excellent, at 0.93.
Statistical Analyses
Data were analyzed using SPSS v.23 [42]. Measures of central tendency and variability were examined for all study measures via visual inspection. Internal consistency reliability for each study questionnaire was examined using the reliability analysis function in SPSS. Irritability rates were then compared to the prior validation samples detailed in the Participants section above using two one-way ANOVAs (for child- and parent-report) with post hoc testing (Tukey HSD) to examine individual group differences. To examine the relationship between parent- and child-report of irritability and pain-related (e.g., functional disability), psychosocial (e.g., anxiety and depressive symptoms, pain catastrophizing), and family-related (e.g., parent functioning) outcomes, Pearson product-moment correlations were performed.
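For reference, the Pearson product-moment correlation used throughout these analyses can be computed as below. This is a plain-Python sketch for illustration; the study's analyses were run in SPSS.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length samples.

    Assumes neither sample is constant (otherwise the denominator is zero).
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Perfectly linear (hypothetical) scores correlate at r = 1.0
print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))
```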
Next, gender differences in irritability levels were explored. First, independent samples t-tests were performed with gender as the grouping variable and irritability (separately for parent-and child-report) as the dependent variables. Following this, data were separated by gender and Pearson product moment correlations were performed again to examine the association between irritability in each of the above identified outcomes. Lastly, for any significant associations that were found among one gender but not the other, gender was explored as a moderator (separately for parent and child report) of the relationship between irritability and the identified clinical outcome using hierarchical linear regression. In the first step of each regression, gender and either the parent-report or child-report of irritability were included as the independent variables (IVs), with the interaction term (gender x parent-report/child-report of irritability) included in the second step. A False Discovery Rate (FDR; [43]) Type 1 error control was used for all analyses. Specifically, three separate sets of analyses were conducted to obtain Benjamini-Hochberg values for the full sample, and then for males and females separately [43]. All p values cited in the current study are Benjamini-Hochberg p values.
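The Benjamini-Hochberg step-up procedure cited above [43] converts a family of raw p-values into FDR-adjusted p-values, which is what the study reports. A minimal self-contained sketch follows; the input p-values in the example are made up, and in practice this is available in standard packages (e.g., statsmodels' `multipletests`).

```python
def benjamini_hochberg(p_values):
    """Benjamini-Hochberg adjusted p-values (step-up FDR procedure)."""
    m = len(p_values)
    # Indices of p-values sorted from smallest to largest
    order = sorted(range(m), key=lambda i: p_values[i])
    adjusted = [0.0] * m
    prev = 1.0
    # Walk from the largest p-value down, enforcing monotonicity
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        prev = min(prev, p_values[i] * m / rank)
        adjusted[i] = prev
    return adjusted

# Hypothetical raw p-values from four tests
print(benjamini_hochberg([0.01, 0.04, 0.03, 0.005]))
```

An adjusted value below the chosen FDR level (e.g., 0.05) marks the corresponding test as significant after multiplicity control.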
Participant Characteristics
Participants included 69 youth (26 males, 43 females) between the ages of 9 and 14 (mean age = 11.5). The sample was predominantly Caucasian (89.9%), which aligns with previous research in pediatric chronic pain samples [38]. A minority of the sample was male (37.7%), which is consistent with previous studies in similar populations [13,44]. At baseline, participants reported a mean pain intensity of 3.4 (on a 0-10 scale) and a mean FDI score of 18.3 (moderate disability). Visual inspection indicated no violations of assumptions regarding central tendency or variability for any variables of interest. Please see Table 1 for additional information on the sociodemographic characteristics of the sample.
Irritability in Relation to other Psychosocial and Pain-Related Outcomes
Pearson product-moment correlations were performed to examine the overall association between child- and parent-report of youth irritability and pain-related (i.e., functional disability), psychosocial (i.e., anxiety, depressive symptoms), and family-related (i.e., parent functioning) outcomes. These analyses revealed a moderate correlation between parent- and child-report of irritability (r pearson = 0.484, p < 0.001); however, parent- and child-report of irritability related differently to clinical outcomes. Child-reported irritability showed significant positive associations with child-report of their own anxiety, (child) pain catastrophizing, and (child) depressive symptoms. No significant associations were found between child-report of irritability and functional disability (either parent- or child-report) or any caregiver distress outcomes (i.e., stress, anxiety, depression). Parent-reported irritability showed significant positive associations with child-report of their own anxiety, child depressive symptoms, and parent/caregiver stress, anxiety, and depressive symptoms. No significant associations were found between parent-report of irritability and functional disability (either parent- or child-report), parent-report of child anxiety, or child pain catastrophizing. Please see Table 2 for complete details.
Irritability by Gender
Significant gender differences in irritability were revealed for parent-report of irritability (t (64) = −2.168, p = 0.036), with males displaying higher levels of irritability as compared to females (M males = 5.12; M females = 2.85). No significant differences were found between genders on child-report of irritability (M males = 5.35; M females = 5.00).
When the relationships between irritability and pain-related and psychosocial outcomes were examined separately by gender, in males, child-report of irritability was significantly associated with higher levels of (child) anxiety (r pearson = 0.593, p = 0.006), (child) depressive symptoms (r pearson = 0.627, p = 0.006), and pain catastrophizing (r pearson = 0.482, p = 0.044). No significant correlations were found between child-report of irritability and any parent-reported outcomes (i.e., parent-report of child anxiety, parent-report of functional disability, parent functioning items) or child functional disability. Parent-report of irritability in males was significantly associated with parent-report of child anxiety (r pearson = 0.533, p = 0.021), parent/caregiver stress (r pearson = 0.587, p = 0.011), and parent/caregiver depressive symptoms (r pearson = 0.517, p = 0.025); no other significant correlations were found between parent-report of irritability and any child-reported (i.e., child anxiety, child depressive symptoms, pain catastrophizing, functional disability) or parent-reported (child anxiety, functional disability, parent/caregiver distress) outcomes. Please see Table 3 for complete details on these analyses.

Table 2. Association between parent- and child-reported irritability and psychosocial and pain-related outcomes in the overall sample.

In females, significant correlations were found between both child- and parent-report of irritability and (child) depressive symptoms (child-report: r pearson = 0.574, p < 0.001; parent-report: r pearson = 0.524, p = 0.009). No significant correlations were found between irritability and child anxiety (either parent- or child-report) or any pain-related or family-related outcomes.
Moderator Analyses
Outcomes found to be associated with parent- and child-report of irritability in one gender but not the other (i.e., child-report of anxiety) were examined in a separate model with each outcome as the dependent variable (DV). The results of the hierarchical linear regression model with child anxiety as the outcome indicated that the inclusion of the interaction term (gender x child-report of irritability) accounted for a significant amount of the variance in child anxiety (∆R 2 = 0.059, ∆F (3, 65) = 5.069, p < 0.001, t (68) = 2.251, p = 0.028). Full details of these analyses are included in Table 4 (gender is coded dichotomously, 0 = females, 1 = males; child-report of irritability was measured by the Affective Reactivity Index (ARI)).
To further examine this effect, separate post hoc linear regression analyses were employed for males vs. females. Results of these analyses revealed that as child-reported irritability increases, (child-report of) child anxiety increases in males only (R 2 = 0.35, F (1, 24) = 13.013, p = 0.001, t (25) = 3.607). Please see Figure 2 for a graphical representation of these results.
Discussion
This is the first study to our knowledge to examine the incidence of increased irritability and its association with psychosocial and pain-related impairment in a pediatric chronic pain population. Irritability has previously been found to be increased in adults with FAPD and is associated with poorer psychosocial and pain-related outcomes, such as increased mood/anxiety problems and greater disability [6,7]. The current study's preliminary results expand upon these findings by examining rates of irritability in pediatric FAPD and exploring the relationship between heightened irritability and psychosocial and pain-related outcomes. Study findings suggest that individuals (and perhaps males in particular) with FAPD may struggle with increased irritability that corresponds to poorer global functioning. This is important because, while the majority of youth with FAPD are female, a subset are male [13,44]. These results imply that males who are at increased risk for both pain-related and psychosocial impairment may have unique clinical profiles characterized by increased irritability and, as such, may have specific treatment needs geared towards targeting such symptoms. Interestingly, comparison to validation samples also indicates that youth with FAPD report levels of irritability comparable to those of youth diagnosed with significant mood dysregulation issues such as bipolar disorder. This is of particular relevance because evidence suggests that youth with FAPD may struggle significantly in several areas of daily life, including social/interpersonal and academic functioning, as observed in youth with severe mood regulation issues [45,46]. Consistent with our study hypothesis, parent-report of child irritability revealed that males with FAPD experience significantly higher rates of irritability than females, which underscores the importance of obtaining a parent-report of such symptoms in addition to a child-report, on which such symptoms may be minimized.
Results from the current study also suggest that, unlike child anxiety, which tends to be better captured with a child-report [39], the parent-reported measure of irritability may be more sensitive/clinically meaningful for males [47]. This is similar to studies reporting increased rates of externalizing behaviors (such as higher rates of reported irritability in males than females when assessing for oppositional defiant disorder) via parent-report than child-report [26]. Given that irritability may be a component of both mood (i.e., anxiety) and behavioral problems, it may be that both parent- and child-report of symptoms are important to gather in order to obtain a more comprehensive picture of psychological functioning [39,48]. These findings also suggest that parents experience increased distress when male children with FAPD display increased irritability, in contrast to females, for whom internalizing issues such as anxiety or depressive symptoms are more commonly reported [49]. More research is needed on variations in parental response to youth distress in pediatric FAPD and other chronic pain conditions. Further, correlational analyses revealed that child-reported irritability in males was significantly associated with greater psychosocial impairment, including higher rates of anxiety, depressive symptoms, and pain catastrophizing, while irritability in females was associated only with increased depressive symptoms. Caregiver distress (e.g., anxiety, depressive symptoms, stress) was also notably associated with increased irritability in males only (no significant associations were found in females). To expand upon these findings, gender was specifically examined as a moderator of the relationship between irritability and psychosocial impairment in youth and parent outcomes. Significant moderation was found between child-reported irritability and anxiety, indicating that as irritability rates in males with FAPD increase, rates of anxiety increase as well.
This is particularly notable given how detrimental the presence of anxiety can be on general pain-related and psychosocial functioning [38,39,50] as well as on psychological treatment outcomes [19] in youth with chronic pain. It is also consistent with research in other youth populations indicating a strong connection between irritability and anxiety symptoms [51].
In addition to anxiety, results generally suggest that the presence of increased irritability in males is significantly more impactful on child well-being (e.g., depression, pain catastrophizing) than when females report or exhibit elevated irritability. Interestingly, while females tend to have higher rates of anxiety and depression than males in general populations (across multiple cultures) [49], males' psychosocial distress in conjunction with FAPD (and more broadly) may be better expressed by constructs such as irritability, which is also a core feature of clinical externalizing disorders, such as oppositional defiant disorder [16], and of major mood disturbances such as disruptive mood dysregulation disorder [16]. Assessing for constructs such as irritability in conjunction with more commonly assessed symptoms of anxiety or depression may also capture a greater number of youth in distress, given that a subset of youth with (and without) chronic pain tend to underreport symptoms of irritability due to perceived stigma [24].
Strengths of the current study include the recruitment and analysis of a fairly heterogeneous sample of youth with FAPD. With almost 38% of these youths identifying as male, the current sample is more representative of community samples than other clinical studies that have over-represented females (e.g., 80% or more female sample) [13,38]. This will likely increase the generalizability of the current results. Further, this study's recruitment method of integrated screening during a child's regularly scheduled gastroenterology visit may have allowed researchers to gain access to a more diverse array of male and female study participants. Future research should examine these recruitment methods in other pediatric pain settings in order to gain a greater understanding of the most effective methods for examining diverse populations.
Despite the significant strengths of this study, limitations are also present which should be considered when interpreting the results. The sample size for the current study was fairly small and from a single geographic area (Midwest region of the United States). Similarly, participants were limited to ages 9-14 because the current study was part of a larger trial examining a new psychological therapy for youth of that age range. Furthermore, we felt it was important to examine psychosocial outcomes for younger individuals to potentially inform efforts at preventing the development of more significant psychopathology as youth age. However, we recognize that this limits the generalizability of study results to other age groups, such as older teens. We plan to examine more diverse age groups in future studies. Further, the current sample consisted of youth who were seeking medical/psychological treatment and were only admitted into the study after meeting a minimum threshold for functional disability. As such, the psychosocial and/or pain-related impairment that they reported may be elevated when compared to non-treatment-seeking community samples. The current study was also limited by using a single parent/child report of irritability. Utilizing other measures of irritability (e.g., behavioral observation) may help enhance future research. Finally, the use of cross-sectional data limits the current study's findings with respect to generalizability to long-term outcomes. As such, future research should include longitudinal data on relevant psychosocial and pain-related outcomes in youth with FAPD.
Due to the significant effect that increased irritability may have on relevant psychosocial outcomes, exploring this phenomenon in future research in other chronic pain populations may be of particular relevance. It may also be particularly important to examine differences in irritability between males and females with varying chronic pain conditions in order to confirm or provide greater insight into the associations between irritability and psychosocial impairment in males specifically that were found in the current study. Finally, given that irritability has been examined in the literature as relating directly to the manifestation of certain psychological issues (e.g., anxiety, depression) [18,52], and that males in this study specifically experienced concomitantly higher rates of irritability and anxiety, incorporating its consideration into treatment may significantly aid in achieving positive outcomes [53]. Specifically, the development and testing of behavioral interventions that address emotional problems such as irritability, which may manifest in broader internalizing/externalizing issues, and also foster greater parent efficacy (e.g., parent-child interaction therapy; PCIT [54]) as part of a pain coping skills program, particularly for males with FAPD, may bolster outcomes in these youths. To inform such research, we plan to explore the role of irritability in predicting treatment outcomes for youth who completed a tailored cognitive behavioral intervention to target pain and anxiety in our future work.
Conclusions
The results of this study are the first to examine the rates and correlates of issues with irritability both in a general population of youth with FAPD and by gender. Increased irritability was associated with greater psychosocial impairment. Further, increased irritability in males with FAPD was associated with greater psychosocial impairment when compared to females with FAPD. Future research should continue to examine these constructs in larger populations with varying types of chronic pain conditions.
A nonrandomized open-label phase 2 trial of nonischemic heart preservation for human heart transplantation
Pre-clinical heart transplantation studies have shown that ex vivo non-ischemic heart preservation (NIHP) can be safely used for 24 h. Here we perform a prospective, open-label, non-randomized phase II study comparing NIHP to static cold preservation (SCS), the current standard for adult heart transplantation. All adult recipients on waiting lists for heart transplantation were included in the study, unless they met any exclusion criteria. The same standard acceptance criteria for donor hearts were used in both study arms. NIHP was scheduled in advance based on availability of device and trained team members. The primary endpoint was a composite of survival free of severe primary graft dysfunction, free of ECMO use within 7 days, and free of acute cellular rejection ≥2R within 180 days. Secondary endpoints were I/R-tissue injury, immediate graft function, and adverse events. Of the 31 eligible patients, six were assigned to NIHP and 25 to SCS. The median preservation time was 223 min (IQR, 202–263) for NIHP and 194 min (IQR, 164–223) for SCS. Over the first six months, all of the patients assigned to NIHP achieved event-free survival, compared with 18 of those assigned to SCS (Kaplan-Meier estimate of event free survival 72.0% [95% CI 50.0–86.0%]). CK-MB assessed 6 ± 2 h after ending perfusion was 76 (IQR, 50–101) ng/mL for NIHP compared with 138 (IQR, 72–198) ng/mL for SCS. Four deaths within six months after transplantation and three cardiac-related adverse events were reported in the SCS group compared with no deaths or cardiac-related adverse events in the NIHP group. This first-in-human study shows the feasibility and safety of NIHP for clinical use in heart transplantation. ClinicalTrial.gov, number NCT03150147
Survival after heart transplantation (HT) has improved markedly over the past three decades, but graft dysfunction still remains the leading cause of early mortality. Only one-third of all donated hearts are used because of the risk of early and late graft dysfunction or logistical problems due to the limitations of acceptable allograft ischemic time 1,2 . Donors are generally older and have more comorbidities now than before 3 . Allograft ischemia lasting more than 4 h increases the risk of mortality, and marginal donors are less tolerant to ischemia 4,5 .
Despite a general improvement in most aspects of HT, donor hearts are still preserved prior to transplantation with ischemic static cold storage (SCS). Ischemia and reperfusion (I/R) damage contributes to early dysfunction of the donor heart and death of the recipient. Ischemia results in tissue hypoxia and microvascular dysfunction [6][7][8] . The subsequent reperfusion increases the activation of innate and adaptive immune responses, resulting in a cell death program 7,9 . The injured endothelium increases the risk of acute cellular rejection (ACR) and cardiac allograft vasculopathy (CAV) 10 . Together, these factors affect early and late survival 11,12 .
With the SCS method, the heart is flushed with cold crystalloid solutions and transported on ice. The nonischemic heart-preservation (NIHP) system is instead a portable device approved for ground and airborne transportation (Fig. 1) 13 . The heart is continuously perfused with a cold (8°C) oxygenated cardioplegic nutrition-hormone solution containing erythrocytes from the blood bank. This is in contrast to the organ care system, which uses a warm, noncardioplegic preservation solution containing donor blood 14 .
Preclinical studies, using the NIHP system, have shown that the pig donor heart can be safely preserved for 24 h and that the endothelium contractile function can be preserved for at least 8 h 8,13,15 . In a recently published study of life-supporting porcine cardiac xenotransplantation using the same system, NIHP was one of two keys to the success 16 . Therefore, an NIHP system might allow the procurement of distant donor hearts and possibly enable resuscitation of marginal donor hearts, thereby expanding the donor pool. However, this state-of-the-art technology has never been applied to humans.
Here we report the first-in-human use of the NIHP method in adult HT. In this nonrandomized phase II study, we investigate event-free survival and immediate graft function. We show a decrease of cardiac injury markers, less ACR, and no death or cardiac-related serious adverse events among recipients transplanted using the NIHP method. Our results show that NIHP is safe and feasible, encouraging further clinical investigations.
Results
Recruitment. Between April 2, 2017 and September 25, 2018, 42 patients underwent HT; 11 patients were excluded because they met one of the exclusion criteria (4 patients), did not provide written informed consent (4 patients), or required an urgent transplantation (3 patients). Transplantation was planned in advance when the NIHP method could be used because the device and team members trained to use the system must be available. This resulted in the NIHP system being assigned to 6 patients out of the total 31 eligible patients (Fig. 2). Donor and recipient characteristics did not determine assignment to the NIHP group; patients were excluded only if they met an exclusion criterion. Following organ retrieval, all organs were used. All patients were followed up for 6 months or until death, and no data on outcomes were missing. The latest follow-up occurred on March 25, 2019.
Donor, recipient, and preservation characteristics. Table 1 shows the baseline characteristics of the donors and recipients in the two study groups. Overall, eight (26%) recipients and nine (29%) donors were women. The median age was 54 years (interquartile range [IQR], 43-60) for the donors and 56 years (IQR, 46-64) for the recipients. Baseline characteristics, except for body size, were similar for those in the two groups. The donor size was similar in the two groups, but the NIHP recipients were larger, with a median body mass index (BMI) of 30 kg/m² (IQR, 29-32) compared with a median BMI of 26 kg/m² (IQR, 23-28) in the SCS group. This resulted in a larger donor-recipient size mismatch in the NIHP group (Supplementary Fig. 1).
Fig. 1 The nonischemic heart-preservation method (NIHP). Shown is a drawing of the NIHP method (a). The equipment consists of a reservoir, a pressure-controlled roller pump, an oxygenator, an arterial-leukocyte filter, a heater-cooler unit, oxygen and carbon dioxide containers, a gas mixer, sensors, and a programmable control system. The reservoir is filled with 2.5 L of the perfusion solution (b) plus ~500 mL of compatible irradiated and leukocyte-reduced blood cells from the hospital blood bank, providing a hematocrit of ~15%. Perfusion is provided through the aortic cannula to the coronary vessels. The picture (c) shows the first human heart transplantation using the NIHP method. The heart is mounted and submerged in the heart-preservation solution, which is actively regulated to maintain a pH of ~7.4 and a temperature of 8°C. The device software is adjusted to maintain a mean blood pressure of 20 mmHg in the aortic root, providing a coronary flow between 150 and 250 mL/min.
Ex-vivo perfusion data. We arrested the donor hearts in the NIHP group with the heart-preservation solution without erythrocytes. Then, we harvested the hearts in the same way as performed for the SCS group. We cannulated the distal ascending aorta from the device and submerged the heart in the preservation medium (Fig. 1c). The median preperfusion organ mounting time (ischemic time) was 24 min (IQR, 20-28 min) (Supplementary Fig. 2). The organ was perfused for a median of 140 min (IQR, 109-162 min) with a pressure of 20 mmHg (IQR, 19-21 mmHg), resulting in a coronary blood flow of 178 mL/min (IQR, 160-221 mL/min). The temperature was stable at 8°C during the entire perfusion time (Supplementary Fig. 3). The median aB-lactate was 1.5 mmol/L preperfusion (IQR, 1.2-1.5) and 1.4 mmol/L (IQR, 1.3-1.5) after continuous perfusion (Supplementary Table 1).
Event-free graft survival (primary outcome). During the first 6 months, all of the patients assigned to the NIHP group met the primary composite outcome of event-free survival (survival free of severe primary graft dysfunction (PGD) at 24 h, free of extracorporeal mechanical support use within 7 days, and free of ACR ≥ 2R within 180 days); however, only 18 (72%) of those assigned to the SCS group achieved event-free survival (Kaplan-Meier estimate of event-free survival 72%; 95% confidence interval (CI), 50-86%) (Table 2 and Fig. 3). All patients survived the first 30 days after transplantation. No deaths or cardiac-related serious adverse events were reported within 6 months after transplantation in the NIHP group; however, four (16%) deaths and three (12%) cardiac-related serious adverse events occurred in the SCS group (Table 3).
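The 72% figure is a Kaplan-Meier estimate; when no patient is censored before day 180 it reduces to the simple proportion 18/25. A minimal sketch of the estimator follows, run on hypothetical event times that are merely consistent with the reported counts (7 events among 25 SCS patients, no earlier censoring), not the trial data:

```python
def kaplan_meier(times, events, t_query):
    """Kaplan-Meier survival estimate at time t_query.
    times: follow-up times; events: 1 = event occurred, 0 = censored."""
    surv = 1.0
    at_risk = len(times)
    for t, e in sorted(zip(times, events)):
        if t > t_query:
            break
        if e:  # each event multiplies survival by (at_risk - 1) / at_risk
            surv *= (at_risk - 1) / at_risk
        at_risk -= 1  # subject leaves the risk set (event or censoring)
    return surv

# Hypothetical SCS-arm data: 7 events within 180 days, 18 patients
# event-free at day 180 -> estimate reduces to 18/25 = 0.72.
times = [10, 20, 30, 40, 50, 60, 70] + [180] * 18
events = [1] * 7 + [0] * 18
km = kaplan_meier(times, events, 180)
```

With censoring before day 180 the product-limit form would differ from the raw proportion, which is why trials report the Kaplan-Meier estimate rather than a simple fraction.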
Secondary outcomes of NIHP and SCS group. Although the NIHP group had a longer duration of preservation (out of body) and the recipients were matched with smaller donors compared with the SCS group, we did not observe any difference in terms of early organ dysfunction or the need for inotropic support. As shown in Table 2, the immediate graft function was similar for both groups. However, there was a difference in cardiac injury markers. One patient (20%) in the NIHP group had a pathological cardiac troponin I (cTnI) > 0.02 ng/mL at the end of preservation compared with all patients in the SCS group (Table 2). Furthermore, the median creatine kinase-muscle/brain (CK-MB) levels, assessed 6 ± 2 h after ending perfusion, were 76 ng/mL (IQR, 50-101) for the NIHP group and 138 ng/mL (IQR, 72-198) for the SCS group (Fig. 4). All patients followed a predefined protocol for surveillance and monitoring. During the first 6 months after transplantation, 2 patients (33%) in the NIHP group had an ACR ≥ 1R; however, 15 patients (60%) in the SCS group did (Supplementary Fig. 4). None of the patients in the NIHP group had an ACR ≥ 2R; however, 4 patients (16%) in the SCS group did.
The NIHP group showed a tendency for reduced postoperative renal function compared with the SCS group (Table 2). The minimum creatinine clearance levels within 7 days after transplantation were 33 mmol/L (IQR, 31-40) for the NIHP group and 44 mmol/L (IQR, 34-59) for the SCS group. Half of the patients in the NIHP group needed continuous renal replacement therapy (CRRT) within the first 7 days after transplantation; however, only six (24%) patients in the SCS group needed CRRT. None of the patients required dialysis treatment at the last follow-up date. The median aspartate aminotransferase (ASAT) on postoperative day 1 was 1.6 (IQR, 1.4-2.0) for the NIHP group; however, it was 2.6 (IQR, 2.2-3.6) for the SCS group (Table 2). None of the patients developed severe liver failure.
Serious adverse events. The proportion of patients who had serious adverse events (cardiac, renal, pulmonary failure, bleeding complication, or the need for a permanent pacemaker) leading to an extended length of stay in the intensive care unit (ICU) was comparable for the two groups (Table 3). The most common adverse events in the two groups were acute renal failure (22 patients; 71%) and respiratory failure (defined as need for a ventilator for more than 48 h) (10 patients; 32%). The length of stay in the ICU was similar for both groups: 7.0 days (IQR, 5.4-17 days) and 6.0 days (IQR, 5.1-11 days) for the NIHP and SCS groups, respectively (Table 2).
Discussion
This first-in-human study shows the feasibility and safety of the NIHP method for HT. All patients in the NIHP group had event-free survival at 6 months; however, only 72% of the patients in the SCS group had event-free survival at 6 months. Among NIHP patients, we did not observe any early mortality or cardiac-related serious complications. However, in the SCS group, three patients received extracorporeal membrane oxygenation and four patients had a moderate ACR. The pathogenesis of PGD is still unclear, but ischemia/reperfusion injury has been identified as a contributing risk factor 5,17,18 . In the present study, we found a decrease in the cardiac injury marker cTnI obtained immediately after preservation and in CK-MB levels after 6 h in the NIHP group. Troponin and CK-MB are sensitive markers of cardiac ischemia and myocardial damage 6,19 . Preclinical studies have shown that CK-MB levels correlate with ischemia/reperfusion tissue damage in HT. Schecter et al. reported that an increased level of cTnI in the preservation solution is associated with development of PGD 17,20,21 . These findings might indicate that the NIHP method reduces myocardial damage better than the SCS method.
During the first 6 months after transplantation, we also observed less ACR in the NIHP group than in the SCS group. Decreased allograft rejection may suggest that the endothelium was less damaged in the NIHP group; this has been demonstrated in preclinical studies and is attributable to less ischemia/reperfusion injury 8,15,22 . According to the latest International Society for Heart and Lung Transplantation registry report, treatment for rejection within the first year after transplantation was associated with an increased risk of CAV development and an increased mortality risk of up to 50% at 5 years 12 . Furthermore, ischemia/reperfusion tissue injury may enhance the activation of innate and adaptive immune responses, resulting in the initiation of a cell death program 7,9 . We noted more undersized and older donors in the NIHP group than in the SCS group. Unfavorable body size mismatch and older donors are well-known risk factors for PGD 18 . This observation may indicate that using nonischemic preservation for marginal donor hearts can make it possible to expand the donor pool in the future, which has been suggested by others 23,24 . However, a larger study is needed to confirm this observation.
The NIHP method is a new type of technology for clinical use; therefore, a learning effect should be expected. However, all accepted donors were utilized and there were no device-related complications. Both groups had similar proportions of patients with serious adverse events leading to an extended length of stay in the hospital. The simplicity of the NIHP system is probably a significant contributor to these observations. An additional advantage of the method is its hypothermic environment for the heart. The hypothermic preservation provides increased safety and protection against external impacts on the system such as power failure. With normothermic preservation, an interruption in ex-vivo perfusion can result in irreversible damage to the heart. During the only randomized controlled trial evaluating normothermic preservation, five donor hearts were considered unacceptable for transplantation after the use of that preservation system 14 . Because these hearts were considered acceptable initially, it cannot be ruled out that something happened in transit that rendered these hearts unusable. Creating an artificial environment similar to the physiological state in which a warm beating heart is supposed to work is both complicated and risky. Moreover, it involves additional surgical and technical support and appropriate transport, inevitably resulting in more expensive management compared with what is needed for SCS. The future commercial NIHP system will not require extra personnel support.
The potential benefits of the NIHP system are an improved postoperative course and a reduced total cost of the transplantation. Complications directly connected to the transplant result in increased costs; for example, if the recipient develops PGD requiring mechanical circulatory support, then the ICU stay will be prolonged. An extension of the allograft preservation time will make it possible to schedule transplantation during the day, when the highest competence will be available for these complex, high-risk cases. Furthermore, NIHP may make it possible to increase the donor pool by utilizing more marginal donors and enabling organ sharing across long distances (perhaps even between continents) 25 . Finally, a preservation system that can decrease the activation of innate and adaptive immune responses, resulting in a downregulation of the immune system, might provide further benefits for organ transplantation 7,9,24 . This would most likely reduce the need for immunosuppression and decrease the occurrence of complications (for example, toxicity, infection, and malignancies).
Our study has some limitations. Because it was a nonrandomized trial, bias in the selection of both donors and recipients could have affected the results. Another limitation of this study was its unblinded nature. Personnel involved in patient care could have favored the innovative NIHP treatment or favored the established SCS technique, thus leaving the direction of the potential bias open to speculation.
In conclusion, this first-in-human study describes the clinical evaluation of a new technology for HT. It represents a first, necessary step in demonstrating that NIHP is feasible, safe, and effective in clinical practice. Because all patients in the NIHP group had an event-free survival at 6 months, further clinical investigations on the efficacy of machine perfusion in HT are warranted 26 . To confirm and extend the results of this study, a randomized trial is required and has been initiated.
Methods
Study design. This investigator-led prospective, open-label, nonrandomized trial of NIHP treatment of donor hearts for HT was performed at Skane University Hospital, Karolinska University Hospital, Linköping University Hospital, and Uppsala University Hospital, which cover two-thirds of the counties in Sweden. Six patients were permitted to be transplanted with donor hearts preserved with the nonischemic method. These were compared with contemporary control patients transplanted with hearts preserved according to standard procedures of SCS. The Swedish research ethics committee approved the trial (2016/603). Patients at the aforementioned centers underwent transplantation at Skane University Hospital; then, they returned to their initial centers for care after transplantation. Transplant candidates were discussed by transplant board members and cardiologists from the participating clinics, and patients accepted for transplantation were screened for the study. Patients accepted for transplantation, who did not fulfill any exclusion criteria, were included in this study after they signed written informed consent.
Fig. 3 The probability of event-free survival. The Kaplan-Meier plot shows the probability of event-free survival (primary end point) defined as survival free of severe primary graft dysfunction at 24 h, survival free of extracorporeal mechanical support use at 7 days, and survival free of acute cellular rejection ≥2R at 180 days (cyan: NIHP group; red: SCS group). The Kaplan-Meier estimate of event-free survival was 72% [95% CI 50-86%] for the SCS group. NIHP (n = 6) nonischemic heart preservation; SCS (n = 25) static cold storage.
Furthermore, patients on the waiting list were screened for the study (starting April 1, 2017) and those eligible were contacted and included after they signed written informed consent. Due to the delay in the trial registration, a control patient underwent transplantation before the clinical trial registration (ClinicalTrial.gov, number NCT03150147) was completed. The transplantation procedure and perioperative care were completed according to standard practices 27 . All patients were treated with antithymocyte globulin as induction therapy and triple immunosuppression (tacrolimus, mycophenolate mofetil, and glucocorticoids) as maintenance therapy. All participating hospitals followed the national protocol for surveillance and monitoring, which normally includes 14 visits for endomyocardial biopsies during the first year. The biopsies obtained (normally 3-5 biopsies) are sent for histologic evaluation. The grading of ACR (0R, 1R, 2R, or 3R) is done on the basis of an overall assessment of the biopsies according to the ISHLT guidelines 28 . No major amendments were made to the trial design after the start of recruitment.
Eligibility and consent. Organ donors had to be 70 years or younger. Donors were excluded if any of the following criteria were fulfilled: insulin-treated diabetes; significant coronary artery disease; hepatitis B-positive or hepatitis C-positive serology; human immunodeficiency virus-positive serology; tuberculosis; malignancy; and abnormal ventricular function < 45%. All adult (aged 18 years or older) recipients on our waiting list for HT were eligible; however, we excluded those who previously underwent solid organ or bone marrow transplantation, had grown-up congenital heart disease, had undergone four or more sternotomies, had known malignancy, had kidney failure (Iohexol plasma clearance < 30 at listing), had liver failure (ASAT, alanine transaminase, or total bilirubin more than five-times the upper limit of normal, or international normalized ratio > 2.0), had ongoing septicemia, required urgent transplantation, and/or had systemic inflammatory disorders treated with corticosteroids. Potential participants provided consent while on the waiting list, and that consent was affirmed on the day of transplantation. Consent included allowing the recording of anonymized data for trial purposes and the collection of biological samples for storage in the trial biobank.
Study logistics. Donor hearts were offered to our heart transplant program through Scandiatransplant (http://www.scandiatransplant.org). Assessment of potential donor hearts was based on the usual constellation of clinical factors, including history, coronary angiography, echo assessment, and direct examination of the heart during procurement. The same standard criteria for donor hearts were used for the NIHP group. Transplantation was scheduled in advance when the NIHP method could be used, because the device and team members trained to use the system must be available. Donors and recipients were excluded from the NIHP method only if they met any of the exclusion criteria. Furthermore, initially, we could only use ground transportation, which limited the pool of potential donors who could be assigned to the NIHP method. After the NIHP method was approved for air transport in April 2018, the system could be used without this restriction, which resulted in the NIHP method being used for 5 of the 11 transplantations performed over the next 5 months. In addition, as mandated by the local research ethics committees, safety and logistic feasibility were assessed after the first and third patients were subjected to the NIHP method. All patients eligible for transplantation, but not assigned to the NIHP method, who had signed the written informed consent and did not fulfill any exclusion criteria, were included as controls during the study period.
Nonischemic heart-preservation device. The device used during this study was made in-house and accepted for use by the Department of Medical Technology of Skane University Hospital in Lund, Sweden. XVIVO Perfusion AB (Göteborg, Sweden) bought the patent to the device and will continue its development with the aim of making it a commercially available device. The device comprises a miniaturized and fully automated heart-lung machine, housed in a portable apparatus (height, 455 mm; length with handles, 695 mm; width, 415 mm; weight, 32 kg), that enables transportation between hospitals (Supplementary Fig. 5). The equipment consists of a reservoir, a pressure-controlled roller pump, an oxygenator, an arterial-leukocyte filter, a heater-cooler unit, oxygen and carbon dioxide containers, a gas mixer, sensors, and a programmable control system. The reservoir (not shown in the figure) is filled with 2.5 L of the heart perfusion solution plus ~500 mL of compatible irradiated and leukocyte-reduced blood cells from the hospital blood bank, providing a hematocrit level of ~15% (Fig. 1b). The NIHP device software is adjusted to maintain a mean blood pressure of 20 mmHg in the aortic root, providing a coronary flow between 150 and 250 mL/min (Fig. 1a).
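The paper states only the set point (a mean aortic-root pressure of 20 mmHg) and the resulting flow range; the device's actual control algorithm is not described. Purely as an illustration of how a pressure-controlled roller pump can hold such a set point, the sketch below uses a simple proportional controller (the gain, units, and function name are hypothetical, not from the device):

```python
def pump_speed_update(speed, pressure_mmhg, target_mmhg=20.0, gain=0.5):
    """One step of a hypothetical proportional controller:
    nudge the roller-pump speed toward the aortic-root pressure
    set point; speed cannot go negative."""
    return max(0.0, speed + gain * (target_mmhg - pressure_mmhg))

# If measured pressure is above target, the pump slows down:
slower = pump_speed_update(100.0, 25.0)   # 100 + 0.5 * (20 - 25) = 97.5
# The speed is clamped at zero rather than reversing:
clamped = pump_speed_update(1.0, 25.0, gain=1.0)
```

A real device would likely use a more sophisticated (e.g., PI/PID) loop with sensor filtering and safety limits; this sketch only conveys the idea of pressure-regulated perfusion.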
Nonischemic heart-preservation group. The donor heart was arrested with the heart-preservation solution without erythrocytes (1200 mL) (Fig. 1b). The donor heart was then harvested using the same procedure as that used for the SCS group. Thereafter, the distal ascending aorta was cannulated from the device with a special double-lumen cannula for easy deairing (Fig. 2a); the heart was fixed in a vertical position and completely submerged in the preservation medium (Fig. 1c). Throughout the perfusion process with the NIHP device, the temperature, perfusion flow, and aortic root pressure were continuously monitored with the built-in sensors. During perfusion of the donor heart (NIHP group), blood samples were retrieved from the reservoir every 30 ± 10 min. After explantation of the recipient heart, the continuous perfusion was switched to intermittent perfusion. During the implantation of the heart, the aortic cannula was kept in the aortic root, thereby facilitating stability of the heart. Intermittent perfusion with 200-300 mL of the preservation solution was administered through the cannula every 15 min during the implantation procedure to avoid ischemia. The cannula was withdrawn before the aortic anastomosis was performed. Blood samples were retrieved from the coronary sinus in the right atrium. When the NIHP device was used, a research fellow and a research engineer participated in the procedure. The research fellow and research engineer transported the machine-perfusion device to the donor hospital and assisted donor surgeons with connecting the heart to the machine. One of the senior staff surgeons performed the transplantation, and an attending surgeon performed the donor harvesting. No changes were made to the existing rules for organ allocation or transportation protocols.
Static cold storage group. For the SCS group, the donor heart was arrested with a crystalloid cardioplegic solution (1-2 L) (Plegisol; Pfizer, New York, NY). The heart was then stored on ice slush at a temperature of ~4°C. On arrival to the hospital, 500-800 mL of blood cardioplegia was administered to the donor heart, and blood samples from the coronary sinus were obtained and analyzed as described previously.
Study outcomes. The primary end point was a composite of survival free of severe PGD at 24 h, free of extracorporeal mechanical support use within 7 days, and free of ACR ≥ 2R within 180 days 5,28 . Secondary endpoints included the following: (1) ischemia/reperfusion tissue injury-differences in cTnI and CK-MB collected at end of preservation and 6 ± 2, 12 ± 4, and 24 ± 6 h after the end of preservation (Triage CARDIO3, Alere with Biosite Triage ® MeterPro); (2) immediate graft function as indicated by any one of the following clinical indicators: (i) the need for inotropic support (as judged by inotrope score 5 ) in the first 6 h after arrival to the ICU, (ii) reperfusion time (time from aortic cross-clamp release in the recipient to termination of cardiopulmonary bypass), (iii) left ventricular ejection fraction (EF) < 40% on day 1 postoperatively, (iv) right ventricular EF < 40% on day 1 postoperatively; (3) postoperative renal function (difference in estimated minimum creatinine clearance within 7 days post transplant and need for CRRT within 7 days after transplantation); (4) postoperative liver function, peak ASAT and peak alanine transaminase within 24 h after transplantation; (5) postoperative pulmonary function and ventilator requirement (number of hours); (6) ACR ≥ 1R within 6 months after transplantation; (7) length of stay in the ICU; (8) graft and patient survival at 6 months.
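The composite primary end point can be read as a simple predicate over three per-patient flags: event-free survival means none of the component events occurred within its window. The sketch below illustrates this logic with hypothetical field names (not the trial's database schema):

```python
def event_free(patient):
    """True if the primary composite endpoint is met: no severe PGD
    at 24 h, no extracorporeal mechanical support within 7 days, and
    no ACR >= 2R within 180 days."""
    return not (
        patient["severe_pgd_24h"]
        or patient["mech_support_within_7d"]
        or patient["max_acr_grade_180d"] >= 2
    )

# Hypothetical records: one patient needed mechanical support (event),
# one had only a mild (1R) rejection episode (event-free).
with_event = {"severe_pgd_24h": False, "mech_support_within_7d": True,
              "max_acr_grade_180d": 1}
no_event = {"severe_pgd_24h": False, "mech_support_within_7d": False,
            "max_acr_grade_180d": 1}
```

A composite of this kind counts a patient as an "event" the first time any component occurs, which is what the Kaplan-Meier analysis of the primary outcome then models.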
During the study, we monitored recipient and donor demographics, medical history, vital signs, laboratory assessments, echocardiography, and right-sided cardiac catheterization. The volumes of the cardioplegic and preservation solutions were registered, as were total preservation and ischemic times. We defined the total preservation time as the donor heart's out-of-body time (i.e., from application of the cross-clamp on the donor aorta at the donor hospital until release of the cross-clamp on the donor aorta at the transplant center). Cold ischemia time refers to the length of time that the donor heart was kept cold without any continuous perfusion. The main end points, PGD and ACR, were assessed in a blinded fashion.
All end points described were included in the current trial registration and were prespecified in the study protocol, except for CK-MB and cTnI collected at the end of preservation; these were added to the protocol on September 10, 2017. The timeframe for the primary end point was extended from 30 to 180 days because no events were observed in the NIHP group (study protocol update December 31, 2018). The biological samples collected from the donor hearts for storage in the trial biobank have not yet been analyzed. Measurements of troponin postoperatively and CK-MB at two extra timepoints, cell-free donor DNA, and EQ-5D were added to the trial registration (NCT03150147) after completion of the nonrandomized part of this study.
Serious adverse events. (1) Acute cardiac-related events were defined as the need for an intra-aortic balloon pump and/or mechanical circulatory support within 7 days post transplantation; (2) acute bleeding was defined according to the Bleeding Academic Research Consortium (BARC) type IV criteria (>2000 mL/24 h, and/or requiring re-operation for bleeding, and/or intracranial bleeding, and/or transfusion of >5 red blood cell concentrates/48 h) 29 ; (3) respiratory failure was defined as impairment of respiratory function requiring re-intubation or tracheostomy, or the inability to discontinue invasive ventilator support within 48 h after cardiopulmonary bypass due to respiratory issues and not due to sedation issues; (4) acute kidney failure was defined according to the Kidney Disease Improving Global Outcomes (KDIGO) criteria as an increase in serum creatinine of >27 μmol/L within 48 h or 1.5 times baseline within 7 days 30 ; (5) acute liver failure was defined as the rapid development of hepatocellular dysfunction, specifically coagulopathy and mental status changes (encephalopathy), in a patient without prior known liver disease; (6) permanent stroke was defined as an episode of computed tomography-verified acute neurological dysfunction caused by ischemia or hemorrhage that persisted ≥24 h or until death; (7) permanent pacemaker was defined as the need for permanent pacemaker implantation 2 weeks after transplantation.
Statistical analysis. The primary outcome (actuarial survival free of event) was analyzed using the Kaplan-Meier method. The Kaplan-Meier estimate is presented with 95% CIs. For patients who had more than one event during follow-up that resulted in failure to reach the primary end point, the event that occurred first is the one included in the analysis. The relative risk and 95% CI were calculated for the outcome variables. The effect size was used to compare mean values. Data were assumed to have unequal variances, and the approximate degrees of freedom were obtained from Welch's formula. Furthermore, continuous variables were log-transformed to fulfill normality assumptions. The baseline value was defined as the last assessment prior to transplantation. Continuous variables were summarized using the median and IQR, and categorical variables were summarized using frequency and percentage. Missing values were not imputed. Because of the small sample size in both groups, only descriptive statistics were performed.
Reporting summary. Further information on research design is available in the Nature Research Reporting Summary linked to this article.
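The two calculations named above can be made concrete. This sketch implements the Welch-Satterthwaite approximation for degrees of freedom and a log-scale CI for the relative risk (the Katz method); it is a generic illustration, not the trial's analysis code:

```python
import math

def welch_df(sd1, n1, sd2, n2):
    """Welch-Satterthwaite approximate degrees of freedom for comparing
    two means when the group variances are unequal."""
    v1, v2 = sd1 ** 2 / n1, sd2 ** 2 / n2
    return (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))

def relative_risk_ci(events1, n1, events2, n2, z=1.96):
    """Relative risk with an approximate 95% CI computed on the log scale.
    Requires at least one event in each group."""
    rr = (events1 / n1) / (events2 / n2)
    se = math.sqrt(1 / events1 - 1 / n1 + 1 / events2 - 1 / n2)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi
```

With equal SDs and equal group sizes n, the Welch formula reduces to the pooled value 2(n − 1), which is a quick sanity check on the implementation.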
Data availability
Raw data cannot be shared publicly because of legal and ethical restrictions associated with patient confidentiality. Raw data are available to all interested researchers upon request addressed to the corresponding author J.N. Instructions on how to apply and criteria for access to confidential data are available on the Swedish Ethical Review Authority website (http://etikprovning.se).
Mps1 promotes poleward chromosome movements in meiotic prometaphase
In prophase of meiosis I, homologous chromosomes pair and become connected by crossovers. Chiasmata, the connections formed by crossovers, enable the chromosome pair, called a bivalent, to attach to the spindle as a single unit. When the meiotic spindle forms in prometaphase, most bivalents are associated with one spindle pole and then go through a series of oscillations on the spindle, attaching to and detaching from microtubules until the partners of the bivalent become bioriented, that is, attached to microtubules from opposite sides of the spindle. The conserved kinase Mps1 is essential for the bivalents to be pulled by microtubules across the spindle in prometaphase. Here we show that MPS1 is needed for efficient triggering of the migration of microtubule-attached kinetochores toward the poles and promotes microtubule depolymerization. Our data support the model that Mps1 acts at the kinetochore to coordinate the successful attachment of a microtubule and the triggering of microtubule depolymerization that then moves the chromosome.
INTRODUCTION
In many organisms, cells enter prometaphase of meiosis with improper kinetochore-microtubule attachments that would lead to segregation errors if they were not corrected (Nicklas, 1997; Meyer et al., 2013; Chmátal et al., 2015). In budding yeast, each partner chromosome in the homologue pair (called a bivalent) can attach one microtubule to its kinetochore (Sarangapani et al., 2014). The bivalents begin meiosis mono-oriented (both partners at one pole) and, through a series of steps, become bioriented and prepared to separate away from each other at anaphase I (Figure 1A). The microtubule-organizing centers, called spindle pole bodies (SPBs) in yeast, are duplicated in premeiotic S-phase, resulting in an older SPB and a newly formed SPB. In late prophase, the bivalents cluster at the side-by-side SPBs in a microtubule-dependent manner (Figure 1A). The end of prophase and entry into prometaphase is marked by the formation of a spindle between the SPBs, forcing them apart with the bivalents attached mainly to the older SPB. The bivalents are released from this monopolar attachment in an Aurora B-dependent manner (Monje-Casas et al., 2007; Meyer et al., 2013), as was previously demonstrated in mitotic cells (Biggins et al., 1999; Cheeseman et al., 2002; Tanaka et al., 2002). Then, following a series of migrations back and forth across the spindle that include a series of microtubule releases (via Aurora B) and reattachments, the partners of the bivalents become attached to microtubules from opposite SPBs (Meyer et al., 2013). During this process, the spindle assembly checkpoint senses the state of kinetochore-microtubule attachments and delays cell cycle progression into anaphase until all chromosome pairs are bioriented (Shonn et al., 2000; Cheslock et al., 2005).
The process of attaching kinetochores to microtubules appears to be controlled at several levels (reviewed in Tanaka, 2010). First, the kinetochore typically attaches to the lateral surface of a microtubule (Rieder and Alexander, 1990; Tanaka et al., 2005; Franco et al., 2007; Gachet et al., 2008; Magidson et al., 2011). Second, the microtubule depolymerizes to bring the microtubule plus end to the kinetochore (Tanaka et al., 2007). The kinetochore and microtubule plus end can then have any of several fates (Figure 1B). The microtubule can repolymerize, the kinetochore can release the microtubule, or the kinetochore can form an end-on attachment that can move the kinetochore poleward as the microtubule depolymerizes. In this process, the protein composition at the kinetochore-microtubule interface, and modifications of those proteins, change, which promotes the ability of the kinetochore to track the shortening microtubule (Asbury et al., 2006; Westermann et al., 2006; Grishchuk et al., 2008; Daum et al., 2009; Gaitanos et al., 2009; Powers et al., 2009; Welburn et al., 2009; Lampert et al., 2010; Schmidt et al., 2012; Volkov et al., 2013; Umbreit et al., 2014).
Mps1 is a conserved kinase with a central role in the spindle assembly checkpoint (Hardwick et al., 1996; Weiss and Winey, 1996; Abrieu et al., 2001). In budding yeast, Mps1 also has an essential role in meiotic chromosome segregation (Straight et al., 2000). An analysis of the role of Mps1 in meiosis revealed that it was needed for the efficient poleward migration of centromeres during the biorientation process (Figure 1A) (Meyer et al., 2013). In addition, Mps1 is needed for an efficient spindle checkpoint in meiosis I. In MPS1 mutants, following anaphase I, most chromosomes end up associated with the spindle pole with which they were initially associated when the spindle first formed (Meyer et al., 2013). This is because they cannot move across the spindle to the opposite pole in prometaphase. Because most chromosomes connect to the older SPB just before prometaphase, even in wild-type cells, MPS1 mutants exhibit more than 80% nondisjunction, nearly all to the older SPB at anaphase I. The Ipl1 kinase, but not Mps1, is critical for releasing these monopolar attachments and for controlling the restructuring of kinetochores in meiotic prophase, but is not critical for poleward migration during prometaphase of meiosis I (Miller et al., 2012; Meyer et al., 2013, 2015; Chen et al., 2020).
This role of Mps1 in promoting force-generating kinetochore-microtubule attachments is critical for meiosis but less so in mitosis (Meyer et al., 2013). In budding yeast, as in many other organisms, MPS1 is an essential gene, but separation-of-function alleles have been identified that result in severe defects in meiotic biorientation but very mild defects in mitosis (Meyer et al., 2013). This suggests either that meiosis is particularly sensitive to defects in the biorientation machinery or, alternatively, that the meiotic sensitivity to MPS1 mutations reflects a meiosis-specific process. Interestingly, similar meiosis-specific mutant alleles of MPS1 have also been isolated in Drosophila and zebrafish (Poss et al., 2004; Gilliland et al., 2005).
The manner in which Mps1 promotes the formation of force-generating attachments between kinetochores and microtubule plus ends is unclear. Does Mps1 promote the movement of kinetochores toward the spindle midzone so they can encounter microtubules from the opposite pole, convert lateral attachments to end-on attachments, stabilize end-on kinetochore-microtubule attachments, or trigger microtubule depolymerization to drag kinetochores poleward (Figure 1B)? Because Mps1 kinase is known to have many targets, it could be involved in coordinating multiple steps in the biorientation process. Here we use live-cell imaging experiments to explore the meiotic roles of Mps1. The results of these experiments suggest that MPS1 mutants can form end-on kinetochore-microtubule attachments but are defective in the subsequent microtubule depolymerization that pulls kinetochores poleward.
Mps1 is necessary for chromosome movements across the meiotic spindle
Previous work has shown that Mps1 is needed for the efficient establishment of force-generating attachments of kinetochores to microtubules. This is a multistep process ( Figure 1B), and the step, or steps, at which MPS1 mutants are defective is unknown. Therefore, we used live cell imaging to track chromosome movements at various stages of the meiotic biorientation process in order to identify the deficiencies that occur when MPS1 is inactive.
We focused on the mps1-R170S mutation because this separation-of-function allele has only mild mitotic defects and severe meiotic defects, thus providing clues as to the critical roles that Mps1 plays in meiosis. As a control, we used an analogue-sensitive allele that allowed us to inactivate the Mps1 kinase activity with an ATP analogue (mps1-as1) (Jones et al., 2005). Prior studies revealed that both mutations result in high levels of meiosis I nondisjunction (Meyer et al., 2013). To track chromosome movement, one chromosome (chromosome I) was tagged adjacent to its centromere with an array of lac operator repeats, and the cells expressed lacI-GFP, which binds to the repeats, from a meiotic promoter (Straight et al., 1996). The movement of this GFP-tagged centromere was tracked in cells with a deletion of SPO11. In this background, homologous partner chromosomes do not become connected by recombination events to form bivalents (Figure 2A) (Klapholz et al., 1985; Loidl et al., 1994). The resulting partnerless univalents, each with only one kinetochore, can never biorient on the spindle and thus go through repeated cycles of microtubule attachment, migration on the spindle, and microtubule detachment (Figure 2B) (Meyer et al., 2013). Using this assay, both mps1-as1 and mps1-R170S mutants exhibit a considerable loss in the ability of chromosomes to traverse across the spindle, while in wild-type cells the GFP-tagged chromosome crosses the spindle, on average, about once every 6 min during prometaphase (Figure 2, C and D).
FIGURE 1: Kinetochore-microtubule interactions in budding yeast meiosis. (A) In prophase I, chromosomes have released their attachments to microtubules. At the exit from prophase I, centromeres cluster at the side-by-side SPBs. When SPBs separate to form a spindle, most centromeres are attached to the older SPB. Following a period of oscillations on the spindle, including microtubule releases and reattachments, the homologous partners become bioriented. (B) Studies in mitotic cells suggest that most initial attachments are lateral (adapted from Tanaka, 2010). Microtubules depolymerize until they meet the kinetochore. In some organisms, kinetochores can glide toward the microtubule plus end. When the microtubule plus end meets the kinetochore, the illustrated outcomes have been observed.
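The traverse scoring behind this comparison (counting pole-to-pole crossings of the tagged centromere) can be sketched in a few lines; the fractional-position input and the near-pole threshold are illustrative assumptions, not the scoring criteria used in the paper:

```python
def count_traverses(rel_pos, near=0.25):
    """Count pole-to-pole traverses from a trace of the centromere's
    fractional position along the spindle axis (0 = one SPB, 1 = the
    other). A traverse is scored when the centromere moves from within
    `near` of one pole to within `near` of the opposite pole. The
    threshold is an assumed placeholder."""
    traverses = 0
    side = None  # which pole the centromere was last near
    for p in rel_pos:
        here = 0 if p <= near else (1 if p >= 1 - near else None)
        if here is not None:
            if side is not None and here != side:
                traverses += 1
            side = here
    return traverses
```

Dividing the count by the duration of the scored window (here, the first 20 min after spindle formation) gives the traverses-per-minute statistic plotted in Figure 2D.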
The coupling of kinetochores to the plus ends of depolymerizing microtubules is presumably the major driving force for the poleward movements that occur on bipolar spindles. However, in assays with bipolar spindles (as in Figure 2C) it is difficult to know exactly how the kinetochore of a particular chromosome is attached to a microtubule. The rapid and processive migrations across the midzone and to the opposite pole are most consistent with the kinetochore being dragged by a depolymerizing plus end-attached microtubule toward the spindle pole where its minus end is attached. However, it is formally possible that these movements could be gliding of the centromere along the side of a microtubule in the opposite direction, away from the SPB and toward the plus end of the microtubule it is tracking (Figure 1B) (Kapoor et al., 2006; Windecker et al., 2009; Akera et al., 2015).
To examine the directionality of chromosome movements on microtubules in meiosis, we assayed the movements of a univalent chromosome (spo11 background) on the monopolar microtubule array that emanates from the side-by-side SPBs as cells exit pachytene ( Figures 1A and 3A). On these monopolar arrays, all poleward movements of chromosomes are minus end directed and all movements away from the pole are toward the microtubule plus ends. In this experiment, cells were released from a prophase arrest and chromosome movements on the monopolar array were monitored ( Figure 3, A-C). In cells expressing the wild-type MPS1 gene the univalents migrated toward the side-by-side SPBs (clustering) in consecutive cycles (Figure 3, B and C) and as cells approached the time of spindle assembly, GFP-tagged centromeres were more and more likely to have become positioned against the SPBs (Supplemental Figure S1). The beginning of clustering, about 30 min before spindle assembly, may correspond to the time at which new Ndc80 complexes, capable of interacting with microtubule plus ends, are added to the meiotic kinetochore (Miller et al., 2012;Meyer et al., 2015;Chen et al., 2020). This clustering does not occur in ndc80-md mutants that cannot produce new outer kinetochores after exiting prophase, arguing that the movements depend on kinetochore-microtubule interactions (Supplemental Figure S1). The majority of wild-type cells cluster the GFP-tagged centromere 5-10 min before spindle assembly, while clustering is significantly delayed in the MPS1 mutants ( Figure 3D). Further, the length of time centromeres spent at the SPBs during the consecutive cycles of clustering is shorter in MPS1 mutants ( Figure 3E). Similar observations were obtained by monitoring bivalent pairs (SPO11) (Supplemental Figure S2). The trend in these experiments is for centromeres to migrate toward the minus ends of microtubules in an MPS1 and NDC80-dependent manner. 
Although we cannot visualize individual kinetochore-microtubule attachments in these experiments, the data are consistent with the model that Mps1 is needed to promote minus end-directed locomotion, via Ndc80-mediated attachments to the plus ends of microtubules to get the centromeres to the SPBs. They do not eliminate the possibility that there is also a plus-ended gliding process in budding yeast meiosis. Indeed, this could be one of the forces that moves the centromeres away from the SPBs in the repeated cycles of clustering.
mps1-R170S mutants exhibit pausing defects during the biorientation process
The imaging experiments above (and a prior characterization of Mps1 in meiosis; Meyer et al., 2013) employ relatively long frame intervals (from 45 s to 2 min) to allow acquisition of data for cells proceeding from prometaphase through anaphase I without photobleaching or toxicity. At this frame rate, a traverse of a centromere across the entire spindle can occur in the interval between sequential frames, and details about pauses, restarts, and reversals of direction that occur as the kinetochore interacts with a microtubule are not detected. Understanding these details might clarify at which steps in the biorientation process Mps1 plays a critical function. To identify smaller-scale chromosome movements that occur within a single traverse, we imaged chromosome behavior at much faster acquisition rates (2 s intervals) over the course of 5 min, again using a spo11 mutant background so the resulting green fluorescent protein (GFP)-tagged chromosome I univalent could not become bioriented. Images were acquired using a Thru-focus method in which a single image is collected as the objective lens focuses through the cell (Conrad et al., 2008). Deconvolution of the acquired data then produces a two-dimensional projection of the image. To reduce acquisition times, the SPBs and the centromere of chromosome I were both tagged with GFP.
FIGURE 2: Mps1 promotes migration across the meiotic spindle. (A) Cartoon illustrating the process of reorientation in the absence of links between homologues (spo11Δ background). As the univalent does not have the ability to biorient, it will reorient indefinitely. (B) The reorientation process in the spo11 background can be evaluated by quantifying the traverses of a GFP-tagged centromere across the spindle. (C, D) spo11Δ diploid cells, with the indicated genotypes, with one GFP-tagged CEN1 and the SPB marker (SPC42-DsRed) were sporulated and released from a pachytene arrest (P GAL1 -NDT80 GAL4-ER) at 6 h after meiotic induction by the addition of 5 μM β-estradiol. The experiment was performed in three biological replicates, and 20 cells were scored in each replicate. The pooled data from the three replicates (60 cells for each genotype) are presented. Images were collected at 45 s (one replicate) or 2 min (two replicates) intervals for 75 min. Representative kymographs from wild-type and mps1-R170S cells are shown. Scale bar: 2 μm. (D) For each cell, the number of traverses was recorded for the first 20 min after the spindle formed. The data (as traverses/minute) for each cell are plotted. Error bars are the average and SD for each set of 60 cells; ****p < 0.0001 (ordinary one-way analysis of variance [ANOVA], multiple comparisons).
Chromosome behavior was quantified in cells with bipolar spindles. In control cells expressing wild-type MPS1, chromosomes exhibited several behaviors during the 5 min "snapshots" of prometaphase. We assigned these behaviors to five categories (Figure 4A). These included 1) clustering at one SPB, 2) maintaining a position between the poles (nonpolar), 3) low-mobility half spindle (small movements within one half spindle), 4) high mobility (directed movements, toward or away from the SPB, but not moving across the entire spindle), and 5) traverses across the spindle. In most wild-type cells the centromere exhibited at least one traverse or half spindle-length migration in a 5 min window of prometaphase (Figure 4A, iv and v). These high-mobility movements were greatly reduced in mps1-R170S mutants (Figure 4A, iv and v). In contrast, it was uncommon in the wild-type control strain for centromeres to linger in a nonpolar position (Figure 4A, ii), but this occurred significantly more frequently in mps1-R170S mutants, where it was the predominant category. Furthermore, the centromeres scored as "nonpolar" in mps1-R170S cells appeared more stationary than those in wild-type cells (Figure 4B). To quantify this, we plotted the positions of the GFP-tagged centromere relative to the SPBs in every frame of the 5 min movie (150 frames) (Figure 4C). Representative traces of the GFP-tagged centromeres in a wild-type cell, a dam1-md mutant (which is defective in maintaining end-on kinetochore attachments; Meyer et al., 2018), and three mps1-R170S cells show that in the mps1-R170S mutants the centromeres appear locked in place (Figure 4C). We quantified all of the movements of centromeres in the nonpolar category (Figure 4A, ii) by determining the median position of each centromere over the 5 min movie and then determining the distance of the centromere from that position in each of the 150 frames (Figure 4D, cartoon). The data for wild-type cells, mps1-R170S mutant cells, and ndc80-md mutant cells (in which centromeres are left at the spindle midzone, consistent with a failure to form productive end-on kinetochore-microtubule attachments) are shown in Figure 4D. This analysis reveals that in mps1-R170S cells the centromere stays within a smaller area during prometaphase than is observed in wild-type cells (Figure 4D).
FIGURE 3: Mps1 promotes minus end-directed migration to the base of monopolar microtubule arrays. (A) Schematic representation of centromere clustering on a monopolar microtubule array. (B) Images of a representative wild-type cell exhibiting cycles of clustering of GFP-tagged CEN1 (green) at the side-by-side SPBs (red) before spindle formation (the last image). Scale bar: 2 μm. (C) The pulling of the chromosome can be separated into two alternating phases in which CEN1 is either moving toward the SPBs (clustering) or at a relatively constant distance from the SPBs. (D) Clustering of GFP-tagged CEN1 was monitored using live-cell imaging of spo11Δ diploid cells with MPS1, mps1-R170S, or mps1-as1 alleles. Experiments were performed in three biological replicates imaged at 45 s (once) or 2 min (twice) frame rates. The graph shows the timing of the final clustering of CEN1 (within 0.5 μm) relative to the time of SPB separation for each individual cell. The red dotted line represents the time at which the SPBs separated. *p < 0.05, ****p < 0.0001 (ordinary one-way ANOVA, multiple comparisons). (E) MPS1 and mps1-R170S cells from the 45 s frame rate replicate were evaluated to determine the amount of time that CEN1 was positioned at the side-by-side SPBs (within 0.5 μm) in individual cells in the 45 min preceding SPB separation to make the prometaphase spindle (duration of clustering). The total time that CEN1 was at the SPBs in each cell is shown (n = 25 cells for the wild-type control and n = 16 cells for mps1-R170S; unpaired t test, *p < 0.05).
Furthermore, mps1-R170S centromeres exhibit significantly more very short movements (less than 100 nm; note that the spindle length in these experiments is about 2 μm) (Figure 4E).
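The distance-from-median quantification described above (median position over 150 frames, per-frame distances, fraction of movements under 100 nm) can be sketched as follows; the input format (paired x, y coordinates in nanometers) is an assumption for illustration:

```python
import math
import statistics

def dispersion_from_median(xs, ys):
    """For a track of centromere (x, y) positions (nm, one pair per
    frame), return each frame's distance from the track's median
    position, mirroring the Figure 4D quantification."""
    mx, my = statistics.median(xs), statistics.median(ys)
    return [math.hypot(x - mx, y - my) for x, y in zip(xs, ys)]

def fraction_below(dists, cutoff_nm=100.0):
    """Fraction of per-frame distances below the cutoff (Figure 4E
    uses 100 nm)."""
    return sum(d < cutoff_nm for d in dists) / len(dists)
```

A locked-in-place centromere (as in mps1-R170S) yields a high `fraction_below`, while a mobile wild-type centromere spreads its distances into the larger bins.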
mps1-R170S mutants exhibit reduced processivity during poleward centromere migrations
The static behavior of the nonpolar centromeres in mps1-R170S mutants is consistent with the model that they represent kinetochores that are attached to the ends of microtubules that are not depolymerizing. This could be analogous to the "paused" kinetochore-microtubule attachments observed in mitotic budding yeast cells by the Tanaka laboratory (Tanaka et al., 2005; Tanaka, 2010), which sometimes occur when a microtubule depolymerizes until it meets a laterally attached kinetochore (Figure 1B). The elevated numbers of static nonpolar centromeres in mps1-R170S cells are consistent with the model that one role of Mps1 is to phosphorylate targets at the end-on attached kinetochore-microtubule interface to help convert paused kinetochores to moving kinetochores.
FIGURE 4: Mps1 promotes chromosome mobility on the meiotic prometaphase spindle. (A-C) spo11Δ diploid cells, with the indicated genotypes, with one CEN1-GFP-tagged chromosome and a SPB marker (SPC42-GFP) were sporulated and released from a pachytene arrest (P GAL1-NDT80 GAL4-ER) at 6 h after introduction to sporulation medium by the addition of 5 μM β-estradiol. Subsequently, cells were harvested and observed by time-lapse imaging during meiosis at 2 s intervals for 5 min. Chromosomes were scored in cells with 1.5-3.5-μm-long spindles (cells in prometaphase-metaphase; Meyer et al., 2013). The experiment was performed as three biological replicates per genotype with 40 cells scored per replicate. (A) Cells were placed in one of five categories according to the primary behavior of the GFP-tagged centromere during the 5 min interval: clustered (remaining close to one SPB), nonpolar (positioned away from the poles and not migrating toward a pole), low-mobility half spindle (making small movements within one half spindle), high mobility (moving poleward or toward the midzone, covering a distance of approximately
To investigate this model, we characterized the behavior of centromeres making poleward migrations in wild-type and mps1-R170S cells. We identified centromeres that in the course of our 5 min snapshot of prometaphase moved from a position that was about 1 micron (0.9-1.2 µm) away from a spindle pole toward that pole ( Figure 5A). Such cells are rare in the mps1-R170S population due to the preponderance of locked-in-place centromeres. These poleward migrations could come from either pushing or pulling forces, but because the migrations occur within a half spindle (the average spindle length was more than 2 microns) they are presumably mediated most often by minus end-directed movements along a microtubule that emanates from the destination pole ( Figure 5A). The chart of the movements of each tracked centromere as it moves poleward ( Figure 5B) reveals first, that all centromeres exhibit some reversals and pauses during the journey. Some of these might be artifactual as 1) the measurements are taken from two-dimensional projections of three-dimensional spindles so spindle rotations in the Z-dimension could distort the true kinetochore-SPB distance, and 2) the movements are relatively small compared with the sizes of the centromere GFP and SPB foci-distances measured are from the center of each focus. Measuring protocols were used to minimize these issues (see Materials and Methods). Tracking the individual centromeres showed that poleward migrations took significantly less time in wild-type cells than in mps1-R170S mutants ( Figure 5, B and C). To determine whether this was because centromeres reach higher velocities in wild-type cells, we measured the velocities of both poleward and anti-poleward centromere movements over the course of migrations to the pole ( Figure 5D). Measurements were obtained as a sliding three-frame window (4 s) in which the centromere moved in the same direction between frames 1 and 2 and between frames 2 and 3. 
There was no obvious difference in the average speeds of either poleward or anti-poleward movements of the GFP-tagged centromere in wild-type and mps1-R170S strains; the velocities exhibited by the GFP-tagged centromere as it made poleward migrations were indistinguishable ( Figure 5D; average forward velocity, WT 76.23 nm/s, n = 13, mps1-R170S 58.80 nm/s, n = 15; p = 0.0758; average reverse velocity, WT 38.58 nm/s, n = 12, mps1-R170S 41.33 nm/s, n = 15; p = 0.65, unpaired t tests). If the centromere movements during poleward migration are driven mainly by microtubule depolymerization, then kinetochore microtubule depolymerization occurs at indistinguishable rates in wild-type cells and mps1-R170S mutants.
Because migration to the pole takes much longer in mps1-R170S mutants than in wild-type cells but the velocities of poleward movements are indistinguishable, this argues that the mps1-R170S mutants must pause or reverse more often. To test this, we measured the frequency with which the GFP-tagged centromere paused or reversed direction in its poleward migration ( Figure 5E). The MPS1 mutants exhibited significantly more pauses, or reversals of direction, in their journeys to the pole ( Figure 5F), and the distance traveled between pauses or reversals was significantly shorter ( Figure 5G). Because the velocities of movement in wild-type cells and mps1-R170S mutants are indistinguishable, the higher numbers of pauses in MPS1 mutants results in longer times for poleward journeys of centromeres in these cells.
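The velocity and pause/reversal measurements described above can be sketched as follows; the 20 nm pause cutoff is an assumed placeholder, since the paper's exact pause criterion is not restated here:

```python
def windowed_velocities(dist_to_pole, dt=2.0):
    """Velocity estimates (nm/s) from a sliding three-frame window (4 s
    at 2 s frame intervals) in which the centromere moved in the same
    direction between frames 1-2 and frames 2-3; other windows are
    discarded. Negative values are poleward (distance decreasing)."""
    out = []
    for i in range(len(dist_to_pole) - 2):
        d1 = dist_to_pole[i + 1] - dist_to_pole[i]
        d2 = dist_to_pole[i + 2] - dist_to_pole[i + 1]
        if d1 * d2 > 0:  # same direction in both intervals
            out.append((d1 + d2) / (2 * dt))
    return out

def pauses_and_reversals(dist_to_pole, step_thresh=20.0):
    """Count interruptions of a poleward run: a frame-to-frame step
    smaller than step_thresh (nm; an assumed cutoff) is scored as a
    pause, and a step away from the pole as a reversal."""
    pauses = reversals = 0
    for a, b in zip(dist_to_pole, dist_to_pole[1:]):
        step = a - b  # positive = moved toward the pole
        if abs(step) < step_thresh:
            pauses += 1
        elif step < 0:
            reversals += 1
    return pauses, reversals
```

Equal velocities but more pauses and reversals, as in mps1-R170S, lengthen the total poleward journey without changing the speed of the individual runs.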
If Mps1 acts during prometaphase to promote depolymerization of kinetochore microtubules, and kinetochore microtubules are stabilized in MPS1 mutants, then microtubule turnover should be reduced in prometaphase in MPS1 mutants ( Figure 6A). To test this, we measured microtubule turnover in cells expressing a photoconvertible mEos2-tagged α-tubulin subunit (Markus et al., 2015). mEos2-Tub1 has properties of a GFP until it is pulsed with 405 nm light, at which point it switches to a red fluorescent protein (RFP) ( Figure 6B). To measure turnover of kinetochore microtubules, we pulsed half of the spindle of cells expressing mEos2-Tub1 with 405 nm light and then measured turnover of the red fluorescent signal (Table 1). Previous measurements of microtubule turnover in budding yeast have been in mitotic cells, but the majority of defects we have examined with MPS1 mutants have been in meiotic cells. Therefore, we first compared microtubule turnover in metaphase spindles of yeast meiotic and mitotic cells and found them to be indistinguishable ( Figure 6C). To confirm that our methods could detect variations in microtubule turnover rates in meiosis, we measured turnover in cells expressing an auxin-degradable version of the microtubule plus-end protein Stu2 (Stu2-AID*), which helps to regulate microtubule dynamics in mitotic metaphase (Wolyniak et al., 2006;Podolski et al., 2014;Miller et al., 2016;Humphrey et al., 2018;Miller et al., 2019). Cells were induced to enter meiosis, and microtubule turnover was measured in the presence or absence of auxin. As observed previously in mitotic cells, (Kosco et al., 2001;Pearson et al., 2003), inactivating Stu2 in meiotic cells reduced microtubule turnover ( Figure 6D). If Mps1 is, like Stu2, promoting microtubule turnover in metaphase cells, then inactivating Mps1 should give a similar outcome. 
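Turnover of the photoconverted signal is typically summarized by fitting a decay curve to the red fluorescence over time. A minimal sketch, assuming a single-exponential model (published spindle analyses often fit double exponentials to separate the fast and slow microtubule populations):

```python
import math

def fit_decay_halflife(times, intensities):
    """Log-linear least-squares fit of I(t) = I0 * exp(-k * t) to the
    photoconverted (red) mEos2-Tub1 signal; returns the half-life
    ln(2)/k. Intensities must be positive and background-corrected."""
    logs = [math.log(i) for i in intensities]
    n = len(times)
    tbar = sum(times) / n
    lbar = sum(logs) / n
    slope = (sum((t - tbar) * (l - lbar) for t, l in zip(times, logs))
             / sum((t - tbar) ** 2 for t in times))
    k = -slope  # decay rate constant
    return math.log(2) / k
```

A slower-turnover condition (e.g., Stu2 depletion, or mps1 mutants in prometaphase) shows up directly as a longer fitted half-life.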
To test this, we compared microtubule turnover in metaphase meiotic wild-type cells and mps1-as1 cells (both in the presence of the Mps1-as1 inhibitor 1-NMPP1). Microtubule turnover rates in metaphase, with or without Mps1 activity, were indistinguishable. This finding is consistent with the reduction in Mps1 levels at kinetochores as they become bioriented and the spindle checkpoint is satisfied (Dou et al., 2003; Howell et al., 2004; Aravamudhan et al., 2015; Koch et al., 2019). Our failure to detect a role for Mps1 in metaphase microtubule dynamics could suggest that it is simply not involved in that function. The meiotic defects we have observed in MPS1 mutants were in prometaphase, before chromosomes are bioriented, raising the question of whether microtubule dynamics are discernibly different in prometaphase and metaphase cells using our microtubule turnover assay. In wild-type yeast meiosis, most of the chromosomes are bioriented within a few minutes after spindle formation (Meyer et al., 2013). Therefore, we used the spo11 mutation to obtain a population of cells in which none of the chromosomes are bioriented. Consistent with the higher rates of turnover for unattached versus stably attached kinetochore microtubules (Gorbsky and Borisy, 1989; Zhai et al., 1995), the spindles in the spo11 cells exhibited higher rates of microtubule turnover than were seen in metaphase cells (Figure 6, B and F). If Mps1 promotes depolymerization of the kinetochore microtubules of nonbioriented chromosomes in prometaphase, then this higher rate of turnover seen in prometaphase should be reduced in MPS1 mutants. For both mps1-as1 and mps1-R170S this proved to be the case (Figure 6, G and H). Both mutations reduce the rate of turnover to levels like those seen in metaphase cells, where inactivating Mps1 has no discernible effect on microtubule turnover.

FIGURE 5 (legend, continued): ...one-fourth to three-fourths of a spindle length, and traverse (moving pole-to-pole across the entire spindle). Examples of each classification are shown. Scale bar: 2 μm. *p < 0.05 (unpaired t tests). (B) Traces of the position of CEN1 relative to the SPBs from representative wild-type and mps1-R170S cells that were classified as "non-polar" in panel A. (C) The top left panel is a schematic of the relative positions of the GFP-tagged CEN1 in two sequential imaging frames (SPBs are shown in red). The spindle-centered reference system has three key parameters: the position of SPB1 is constant at x = 0 and y = 0, the position of SPB2 depends on the spindle length (variable over time), and the coordinates x and y (in microns) define the distance of CEN1 from SPB1 at that imaging frame. Shown are traces of the location of CEN1, relative to the SPBs, in 150 sequential time points (every 2 s for 5 min) in five representative cells from the nonpolar category. (D, E) Detailed analysis of centromeres exhibiting nonpolar behavior. (D) We calculated the median position of CEN1 over the course of the 5 min imaging period and then determined the distance of CEN1 from that median position (nanometers) for each frame (150 total) of the acquisition (see cartoon). The graph shows the distribution of distances (in 100 nm bins) from the mean centromere position per cell. The error bars represent the average and SD. n = 8 cells for WT, 17 cells for mps1-R170S, and 13 cells for ndc80-md. (E) The proportion of individual CEN1 movements (in D) that were less than 100 nm from the median position was calculated for each indicated genotype. Mutant genotypes were compared with the wild-type control. **p < 0.01 (ordinary one-way ANOVA).
DISCUSSION
Previous work has shown that Mps1 is essential for proper chromosome segregation in meiosis in a variety of organisms (Straight et al., 2000; Poss et al., 2004; Gilliland et al., 2005). We have found that, in budding yeast meiosis, Mps1 impacts at least three steps in the biorientation process (Meyer et al., 2013, 2018). First, Mps1 promotes the migration of bivalents to the side-by-side SPBs at the base of a monopolar microtubule array following the exit from meiotic prophase (clustering). Second, Mps1 promotes the processive poleward movements on the prometaphase meiosis I spindle that occur before bivalents become bioriented. Third, through phosphorylation of Dam1, and possibly other targets, Mps1 helps to stabilize end-on attachments of the prometaphase kinetochores to microtubules.
The failure of MPS1 mutants to phosphorylate Dam1 does not explain the massive defects in meiotic chromosome segregation exhibited by MPS1 mutants. Despite their defects in kinetochore-microtubule interactions, dam1-2A mutants that cannot be phosphorylated by Mps1 exhibit rather mild meiotic chromosome segregation defects (Shimogawa et al., 2006; Meyer et al., 2018). Thus, there must be another role (or roles) of Mps1 that explains its essentiality for meiotic chromosome segregation. Our experiments have not revealed a critical meiotic substrate but have refined our understanding of the ways in which Mps1 affects chromosome dynamics in meiosis I.

FIGURE 6: Mps1 promotes microtubule turnover in meiotic prometaphase. (A) In wild-type cells the shortening kinetochore microtubules of actively biorienting chromosomes are predicted to cause a high microtubule turnover. MPS1 mutants exhibit a locked-in-place phenotype that might represent a defect in the depolymerization of kinetochore microtubules. (B) Cells that were unable to form bipolar attachments (spo11), and thus in a prolonged prometaphase-like state, were used to measure microtubule turnover. Half spindles of meiotic cells were pulsed with 405 nm light to photoconvert mEos2-Tub1 (from green to red). Images were acquired every 15 s, and the intensity of the red signal was measured (see Materials and Methods). Scale bar: 2 μm. (C) Microtubule turnover on metaphase spindles was measured in a diploid strain undergoing either meiosis or mitosis. (D) Microtubule turnover was measured in cells expressing STU2-AID* in the presence or absence of auxin and CuSO4 (copper was used to induce expression of the P_CUP1-AFB2 F-box protein construct). (E) Microtubule turnover was measured on meiotic metaphase spindles of wild-type or mps1-as1 cells in the presence of the Mps1-as1 inhibitor 1-NMPP1. (F) Microtubule turnover was measured on meiotic metaphase and prometaphase spindles of wild-type cells. (G) Microtubule turnover was measured on prometaphase spindles (spo11) in cells with or without the inactivation of Mps1 by 1-NMPP1. (H) Microtubule turnover was measured on prometaphase spindles (spo11) in wild-type or mps1-R170S cells. All experiments show the averages and SEM of three or more biological replicates with three or more cells per replicate (see Table 1).
Our results suggest that the major defect in MPS1 mutants is in regulating microtubule dynamics at the kinetochore interface. A number of observations point to this conclusion. First, when kinetochores are moving poleward in MPS1 mutants, the average velocity is indistinguishable from that of wild-type cells (Figure 5D). This suggests that Mps1 is not essential for kinetochores to track depolymerizing microtubules. In addition, it demonstrates that once a kinetochore microtubule begins depolymerizing, its rate of depolymerization is not affected by Mps1. However, the distances traveled between pauses by poleward-migrating centromeres in MPS1 mutants are shorter than in wild-type cells (Figure 5G) and the pauses are more frequent (Figure 5F). The pauses during poleward migration of the centromeres could represent losses of kinetochore-microtubule plus-end attachment or pauses in microtubule depolymerization, or both. Given that phosphorylation of Dam1 by Mps1 strengthens kinetochore attachments to plus ends (Shimogawa et al., 2006; Meyer et al., 2018), some of the pauses in MPS1 mutants are probably due to failures in maintaining the kinetochore-plus-end connection. However, other results suggest that this is not the major defect. First, MPS1 mutants exhibit low levels of the lagging chromosomes that are an indicator of a defect in attaching kinetochores to microtubules (Meyer et al., 2013, 2018). Second, MPS1 mutants exhibit a stuck-in-the-middle phenotype in which kinetochores maintain a very stable position in midspindle. This is unlike DAM1 mutants, in which kinetochores and plus ends become uncoupled, or NDC80 mutants, in which kinetochores do not attach to microtubules (Meyer et al., 2018); in these two mutants the apparently unconnected centromeres move much more freely than in MPS1 mutants.
One explanation for the stuck-in-the-middle phenotype is that MPS1 mutants may be defective in promoting the initiation of depolymerization of kinetochore-coupled MT plus ends. We propose that when a microtubule plus end attaches to a kinetochore, the proximity of the microtubule plus end-associated proteins to Mps1 allows Mps1 to phosphorylate key substrates associated with the plus end, changing their activity or localization in a way that favors microtubule catastrophe over rescue (Figure 6A). The identity of these Mps1 substrates and how their phosphorylation biases microtubule dynamics remain important unanswered questions.
The above model leaves another question unresolved: why is meiotic chromosome segregation more vulnerable to defects in Mps1 activity than mitosis? We offer three possible explanations. First, when mitosis begins, kinetochores are already attached to microtubules. In contrast, the chromosome pairing process of meiotic prophase demands that kinetochores be released from microtubules for an extended time period. When meiotic prometaphase begins, the kinetochores are dispersed across the nucleus and are then gathered into the microtubule-dense region around the SPBs (clustering) just before the SPBs separate to form a spindle. Mps1 is required for this clustering (Meyer et al., 2013). It may be that in the absence of clustering the formation of initial kinetochore-microtubule attachments on the nascent bipolar spindle is highly inefficient, leading to biorientation defects. A phenomenon similar to clustering, referred to as kinetochore retrieval, has been reported in Schizosaccharomyces pombe meiosis (Kakui et al., 2013; Cojoc et al., 2016). Here, mutations that lead to defects in meiotic kinetochore retrieval also result in subsequent biorientation defects, but it is difficult to know whether the segregation defects are purely due to the failure to cluster the dispersed meiotic kinetochores before spindle formation, or to other effects of the mutations.
Second, the vulnerability of meiotic cells to MPS1 defects might lie in differences between meiotic and mitotic spindles. When yeast meiotic spindles form, most chromosomes are monooriented, with most chromosomes clustered near the older SPB (Meyer et al., 2013). Mitosis starts in a similar way (Marco et al., 2013). Thus, in both meiosis and mitosis, chromosomes that become bioriented have made their way to the spindle midzone from the pole. But yeast meiotic spindles are longer, possibly making them more dependent on processes that get them from the poles to the midzone (Meyer et al., 2013). Movement from the pole to the midzone could be accomplished by pulling of the kinetochore by a long microtubule extending across the spindle from the opposite pole, a process that our results show is defective in MPS1 mutants (Meyer et al., 2013), both because failure to phosphorylate Dam1 results in defective end-on attachments and because processive poleward movements are defective in MPS1 mutants. An alternate means to get to the midzone from the pole is by movement of chromosomes along microtubules from that pole toward their plus ends. This chromosome gliding mechanism has been reported in S. pombe and animal cells but not budding yeast (Kapoor et al., 2006; Windecker et al., 2009; Akera et al., 2015). In S. pombe the process involves proteins (Bub1, Bub3, Mad1, kinesin-5) whose kinetochore localization depends on Mps1 (Windecker et al., 2009; Akera et al., 2015) and is especially critical for chromosome biorientation in cells with long spindles. There is as yet no evidence that this mechanism is important in budding yeast. However, consistent with this model is the recent demonstration that BUB1 and BUB3 mutants, like MPS1 mutants, both exhibit much higher levels of meiotic than mitotic segregation defects and missegregate homologous chromosomes to the older SPB in meiosis I, though not at the high levels seen in MPS1 mutants (Cairo et al., 2020).
Finally, the flexibility of the connections between homologous meiotic centromeres could make them vulnerable to deficiencies in Mps1. This is true of meiotic chromosomes across species and may explain the shared dependence on Mps1 in yeast, Drosophila, and zebrafish meioses. Mitotic sister kinetochores are arranged back-to-back and tightly cohered. Bioriented attachments of sister chromatids are thus probably very quickly under tension and stabilized. In contrast, homologous meiotic kinetochores are connected by chiasmata and therefore a longer tether. This predicts that greater microtubule depolymerization is required in meiosis to separate the homologous kinetochores sufficiently that they are under tension. It may be that in the time interval between the formation of an initial bipolar attachment and the generation of stabilizing tension, one or both of the kinetochore-microtubule connections is lost, and the process must restart. This more challenging meiotic attachment process may render the cell vulnerable to any defects that diminish the efficiency of establishing kinetochore-microtubule attachments. The observation that in budding yeast, meiotic cells are much more sensitive to defects in the spindle checkpoint than are mitotic cells reinforces the idea that biorientation in meiosis faces greater hurdles than in mitosis (Shonn et al., 2000; Cheslock et al., 2005). But work remains to reveal the greatest vulnerabilities of the meiotic biorientation process and how the cell deals with them.
MATERIALS AND METHODS
Request a protocol through Bio-protocol.
Yeast strains and culture conditions
All strains are derivatives of two strains termed X and Y described previously (Dresser et al., 1994). Strain genotypes are listed in Supplemental Tables S1 and S2. We used standard yeast culture methods (Amberg et al., 2005). To induce meiosis, cells were grown in YP (yeast peptone) acetate to 4-4.5 × 10^7 cells per ml and then shifted to 1% potassium acetate at 10^8 cells per ml. Mitotic cells were grown in SD-TRP (complete synthetic defined medium missing tryptophan) media (Sunrise Science).
Genome modifications
Heterozygous and homozygous CEN1-GFP dots: An array of 256 lac operon operator sites on plasmid pJN2 was integrated near the CEN1 locus (coordinates 153583-154854). lacI-GFP fusions under the control of P_CYC1 and P_DMC1 were also expressed in this strain to visualize the location of the lacO operator sites during meiosis as described in Meyer et al. (2013).
Fluorescence microscopy
Long-term live cell imaging experiments (every 45-120 s for 3-4 h) were performed with CellAsic microfluidic flow chambers (www.emdmillipore.com) using Y04D plates with a flow rate of 5 psi. Images were collected with a Nikon Eclipse TE2000-E equipped with the Perfect Focus system, a Roper CoolSNAP HQ2 camera, an automated stage, an X-cite series 120 illuminator (EXFO), and NIS software. Images were processed and analyzed using NIS software. For the time-lapse imaging of CEN1 movement, two different exposure programs were defined, depending on the presence (SPO11) or absence (spo11Δ) of chiasmata. In the presence of chiasmata, the intervals were every 2 min for 2 h and later every 5 min for 2 h (Supplemental Figure S2). Without chiasmata, images were acquired every 45 s or 2 min for 75 min followed by every 10 min for 3 h (Figures 2 and 3).
For monitoring movements of CEN1-GFP on monopolar spindles (side-by-side SPBs), following the release from prophase, centromeres were considered as unattached if they did not remain at a constant distance from the SPBs for at least four consecutive frames. Centromeres were considered to be attached if they stayed at a constant distance from the SPBs for at least three consecutive frames or moved incrementally in one direction. The beginning of clustering was defined when CEN1-GFP first reached a position within 0.5 µm of the SPB and remained within this distance for three consecutive frames. Traverses (CEN1 crossing the spindle from one pole to the other) were counted only when the CEN1-GFP signal was overlapping with the SPB signal for at least one frame. Homologues were considered to be bioriented when the homologous CEN1-GFP signals were distinctly separated in two foci.
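The frame-counting rules in this paragraph can be sketched in code. This is an illustrative reading only: the numeric tolerance used to call a distance "constant" is our assumption (the text does not give one), and all function names are ours.

```python
TOL = 0.1  # µm; hypothetical tolerance for calling a distance "constant"

def longest_constant_run(dists, tol=TOL):
    """Longest run of consecutive frames whose SPB-CEN1 distance stays
    within `tol` of the run's starting distance."""
    best = 1
    for i in range(len(dists)):
        j = i
        while j + 1 < len(dists) and abs(dists[j + 1] - dists[i]) <= tol:
            j += 1
        best = max(best, j - i + 1)
    return best

def classify(dists, tol=TOL):
    """'attached' if the distance is constant for >=3 consecutive frames
    or the centromere moves incrementally in one direction; otherwise
    'unattached' (it never holds a constant distance for >=4 frames)."""
    monotone = all(b < a for a, b in zip(dists, dists[1:])) or \
               all(b > a for a, b in zip(dists, dists[1:]))
    if longest_constant_run(dists, tol) >= 3 or monotone:
        return "attached"
    return "unattached"
```

A steadily shrinking SPB-CEN1 distance therefore scores as attached even though no three frames share the same distance, matching the "moved incrementally in one direction" clause.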
For high-speed live cell imaging, images were collected every 2 s for 5 min using a Roper CoolSNAP HQ2 camera on a Zeiss Axio Imager 7.1 microscope fitted with a 100×, NA1.4 plan-Apo objective (Carl Zeiss MicroImaging), an X-cite series 120 illuminator (EXFO), and a BNC555 pulse generator (Berkeley Nucleonics) to synchronize camera exposure with focusing movements and illumination. Cells from sporulating cultures were concentrated, spread across polyethyleneimine-treated coverslips, and then covered with a thin 1% agarose pad to anchor the cells to the coverslip. The coverslip was then inverted over a silicone rubber gasket attached to a glass slide. Thru-focus images were acquired as described previously and then deconvolved to provide a two-dimensional projected image for each acquisition (Conrad et al., 2008). For the analysis of centromere movements on bipolar spindles, the coordinates of the two SPBs (labeled by SPC42-GFP) and the centromeres (marked by CEN1-GFP) were defined for each interval. To separate the movement inherent to spindle rotation inside the cells and the movement of CEN1 on the spindle, a relative position for CEN1 and the two SPBs was assigned for each interval. For one SPB (SPB1) this position was defined as being constant as x = 0 and y = 0. For the other SPB (SPB2), the position was defined as x = distance between the SPBs in each frame and y = 0. Finally, the relative position of CEN1 was determined by the distance between CEN1 and SPB1 and the angle formed between the axis SPB1-SPB2 and SPB1-CEN1. As the acquisitions were done in two dimensions, the impact of the spindle rotating in three dimensions was corrected by assuming that the spindle length remains the same or increases over time. Therefore, for instances in which the SPB1-SPB2 distances decreased in sequential frames, the value was corrected by replacing the SPB1-SPB2 distance with the prior maximum spindle length (dMax SPB1-SPB2). 
The magnitude of this correction was also then applied to correct the SPB1-CEN1 distance; the following formula was applied for each interval: Distance SPB1-CEN1 = Observed distance SPB1-CEN1 × dMax SPB1-SPB2 / Observed distance SPB1-SPB2. The velocity of CEN1 movement on the spindle was calculated for each interval by adding the distances traveled from interval n − 1 to n + 1 and dividing by the time interval (4 s). The median position for CEN1 was determined in 5 min intervals for each cell by calculating the average position. The dispersion distance was determined for each interval by calculating the distance between CEN1 and this average position. Cells with the following characteristics were selected to monitor poleward migration (Figure 5): the CEN1 exhibited a migration of 0.9-1.2 µm to its final destination within 0.25 µm of one SPB; the angle of approach had to be within 15° of the pole-to-pole spindle axis; and the migrations started within the same half spindle as the destination SPB. During this 0.9-1.2 µm migration, the intermediate steps were considered poleward movement when the distance between SPB and CEN1 was decreasing from one interval to the next and anti-poleward movement when it was increasing. The pauses and reversals of direction were determined as follows. First, the distance (D) between the final SPB destination and CEN1 was calculated for each interval (frame). Second, the average distance for each sequential pair of steps was determined. Third, sequential positions in this sliding average were compared. If the distance between the SPB and CEN1 was not decreasing (ΔD ≥ 0), the movement was considered to be paused/reversed. The number of consecutive poleward steps was determined as the number of consecutive steps showing continued decreasing distance (ΔD < 0).
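The coordinate assignment, spindle-length correction, velocity estimate, and pause/reversal count described above can be sketched as follows. This is a minimal illustration under stated assumptions: the pause rule on the smoothed trace is our reading of the text, and the function names are ours.

```python
import math

def spindle_frame(spb1, spb2, cen1):
    """Spindle-centered reference system: SPB1 fixed at (0, 0), SPB2 on the
    +x axis at (spindle length, 0); returns CEN1's (x, y) and the length."""
    ax, ay = spb2[0] - spb1[0], spb2[1] - spb1[1]
    cx, cy = cen1[0] - spb1[0], cen1[1] - spb1[1]
    length = math.hypot(ax, ay)
    ux, uy = ax / length, ay / length
    # project CEN1 onto the spindle axis (x) and its normal (y)
    return (cx * ux + cy * uy, -cx * uy + cy * ux), length

def correct_distances(spb_spb, spb1_cen1):
    """Rotation correction: if the apparent spindle length shrinks, rescale
    by the prior maximum (Distance = observed × dMax / observed spindle)."""
    dmax, out = 0.0, []
    for spindle, cen in zip(spb_spb, spb1_cen1):
        dmax = max(dmax, spindle)
        out.append(cen * dmax / spindle)
    return out

def velocity(dists, n, dt=2.0):
    """Distance traveled from frame n-1 to n+1 divided by 4 s (2-s frames)."""
    return (abs(dists[n] - dists[n - 1]) + abs(dists[n + 1] - dists[n])) / (2 * dt)

def count_pauses(dist_to_pole):
    """Smooth sequential pairs; a non-decreasing step in the smoothed
    pole-CEN1 distance counts as one pause/reversal."""
    smooth = [(a + b) / 2 for a, b in zip(dist_to_pole, dist_to_pole[1:])]
    return sum(1 for a, b in zip(smooth, smooth[1:]) if b - a >= 0)
```

With the trace [1.0, 0.8, 0.8, 0.9, 0.6] µm, the smoothed pair averages are [0.9, 0.8, 0.85, 0.75], so exactly one smoothed step fails to decrease and one pause is counted.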
Measuring microtubule turnover
Microtubule turnover was evaluated in yeast cells expressing mEos2-Tub1, harvested from either log-phase vegetative cultures (in YPAD [yeast peptone adenine dextrose] medium [Amberg et al., 2005]) or meiotic cultures. For meiotic experiments, cells in a pachytene arrest were induced to exit prophase by the addition of estradiol to the medium, using previously published methods (Meyer et al., 2013). Where indicated, auxin (2 mM; Sigma Aldrich I5148-10G), CuSO4 (200 µM; Sigma Aldrich 451657-10G), or 1-NMPP1 (5 µM; Calbiochem; 5 mM stock in dimethyl sulfoxide) were added to the medium at the time of prophase exit. One hour after prophase exit was induced, cells were concentrated, spread across polyethyleneimine-treated coverslips, and then covered with a thin 1% agarose pad to anchor the cells to the coverslip. The coverslip was then inverted over a silicone rubber gasket attached to a glass slide. Cells synchronously entering prometaphase were then subjected to imaging to measure microtubule turnover.
Cells were imaged using a 100×, NA 1.4 objective on a Zeiss Axio Observer inverted microscope equipped with a Yokogawa CSU-22 (Yokogawa) spinning disk, Mosaic (digital mirror device; Photonic Instruments/Andor), a Hamamatsu ORCA-Flash4.0LT (Hamamatsu Photonics), and Slidebook software (Intelligent Imaging Innovations). Photoconversion was achieved by targeting a selected area in half the spindle with filtered light from the HBO 100 via the Mosaic, and confocal GFP and RFP images were acquired at 15 s intervals for ∼5 min. At each acquisition, we acquired seven images in the Z-dimension with 0.5 µm spacing. To quantify fluorescence dissipation after photoconversion, we measured pixel intensities within an area surrounding the region of highest fluorescence intensity and background subtracted using an area from the nonconverted half spindle using MetaMorph software. Fluorescence values were normalized to the first time point after photoconversion for each cell, and the average intensity at each time point was fitted to a single exponential decay curve F = A × exp(-k × t), using SigmaPlot (SYSTAT Software), where A represents the amplitude of the microtubule population with decay rate k, and t is the time after photoconversion. For each experiment, we performed at least three biological replicates with at least three cells imaged per experiment. Cell numbers for each experiment are given in Table 1. Sample identity for scoring fluorescent signals was blinded. The half-life for the microtubule population was calculated as ln2/k. Graphs were prepared using GraphPad Prism. Graphs represent the averages and SEM for combined replicates.
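The normalization, decay fit, and half-life arithmetic just described can be illustrated with a minimal stand-in. The paper fit F = A·exp(-k·t) nonlinearly in SigmaPlot; this sketch instead uses a log-linear least-squares fit (exact for noise-free data), and the function names are ours.

```python
import math

def normalize(intensities):
    """Normalize a fluorescence trace to its first post-conversion value."""
    return [f / intensities[0] for f in intensities]

def fit_decay(times, intensities):
    """Fit F = A * exp(-k * t) by linear regression on ln(F).
    Returns (A, k). A simple stand-in for a nonlinear fit."""
    ys = [math.log(f) for f in intensities]
    n = len(times)
    mx, my = sum(times) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(times, ys)) /
             sum((x - mx) ** 2 for x in times))
    return math.exp(my - slope * mx), -slope

def half_life(k):
    """Half-life of the decaying population: ln2 / k."""
    return math.log(2) / k
```

For a synthetic trace sampled every 15 s with k = 0.02 s⁻¹, the fit recovers A = 1 and k = 0.02, giving a half-life of ln2/0.02 ≈ 34.7 s.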
Effect of Nordic Sensi® Chair on Behavioral and Psychological Symptoms of Dementia in Nursing Homes Residents: A Randomized Controlled Trial
Background: Behavioral and psychological symptoms of dementia (BPSD) are present in most people with dementia (PwD), including those with Alzheimer's disease. There is consensus that non-pharmacological therapies represent the first line of treatment to address BPSD. Objective: We explore the efficacy of a rocking chair (Nordic Sensi® Chair, NSC) in the treatment of BPSD in nursing home residents with moderate and severe dementia. Methods: We carried out a 16-week randomized, single-blind, controlled clinical trial with PwD admitted to nursing homes. Participants were assigned to a treatment group (n = 40) that received one 20-minute session in the NSC three times a week or to a control group (n = 37). The Neuropsychiatric Inventory-Nursing Home (NPI-NH) was used as the primary efficacy outcome. Occupational distress for the staff was evaluated using the NPI-NH Occupational Disruptiveness subscale (NPI-NH-OD). Statistical analyses were conducted by means of a Mixed Effects Model Analysis. Results: Treatment with the NSC was associated with a beneficial effect on most BPSD, as reflected by differences between the treatment and control groups on the NPI-NH total score (mean change score –18.87±5.56 versus –1.74±0.67, p = 0.004), agitation (mean change score –2.32±2.02 versus –0.78±1.44, p = 0.003) and irritability (mean change score –3.35±2.93 versus –1.42±1.31, p = 0.004). The NPI-NH-OD total score also improved the most in the treatment group (mean change score –9.67±7.67 versus –7.66±6.08, p = 0.003). Conclusions: The reduction in overall BPSD along with decreased caregiver occupational disruptiveness represent encouraging findings, adding to the potential of non-pharmacological interventions for nursing home residents living with dementia.
INTRODUCTION
Dementia is a syndrome characterized by a progressive impairment of cognitive and functional abilities with important implications for individuals and society. The number of people with dementia (PwD) is expected to increase to 82 million by 2030 and almost double by 2050 [1].
In addition to the cognitive and functional deficits, behavioral and psychological symptoms of dementia (BPSD) are one of the most important challenges that both PwD and their caregivers face throughout the course of the disease [2]. BPSD comprise a heterogeneous group of symptoms such as depression, delusions, hallucinations, irritability, disinhibition, agitation, apathy, and sleep and eating problems [3]. BPSD result in decreased well-being and impaired quality of life for PwD and place a heavy burden on caregivers, often leading them to the decision to institutionalize [4,5]. In nursing homes, BPSD can be a major source of stress for both the care staff and the residents themselves [6].
Medication is often used, and many PwD are treated with psychotropic drugs, although in many cases these achieve only modest benefits in controlling symptoms while exposing patients to the risk of possible adverse events [8]. By contrast, non-pharmacological interventions are considered to have fewer undesirable effects, making them safer options with at least the same efficacy as medication in most cases [9,10]. In fact, there is currently a consensus that non-pharmacological therapies are the first-line treatment for BPSD, with the exception of emergency situations [11].
A wide range of non-pharmacological approaches have shown positive results for the management of BPSD, including physical exercise, music therapy, multisensory stimulation, psycho-educational interventions for caregivers, and care staff training [12]. However, the need to develop and apply new non-pharmacological therapies remains [11,13].
Within this context, modern rocking chairs may be suitable for long-term care because rocking, a rhythmically repeated movement, can contribute to psychosocial wellbeing [14]. However, only a few studies have evaluated the use of rocking chairs for PwD. A 6-week study in nursing homes showed that the use of a rocking chair produced improvements in anxiety and depression as well as reductions in pain medication [14]. The results of a repeated-measures study revealed that the use of a glider significantly improved emotions and relaxation in people with severe dementia admitted to nursing homes [15]. Along the same lines, findings from a study using a rocking chair showed a decrease in BPSD and an increase in quality of life in PwD in a nursing home [16]. In a multicenter survey of long-term care facilities, staff reported that the use of a rocking chair improved quality of care and contributed to a calmer environment for PwD [17].
In this regard, it is of interest to consider the therapeutic role of the Nordic Sensi® Chair (NSC) in the treatment of BPSD, based on its ability to offer PwD, especially those in nursing homes, a sensory experience that brings together the benefits of music, therapeutic tactile stimulation, vestibular stimulation, and relaxation in an integrated way.
Music-based interventions were originally developed with the aim of accomplishing individualized goals and offer a promising option if targeted and evaluated effectively [18]. The use of music in PwD is based on the ancestral link between sounds and the human being and its potential to evoke emotions experienced throughout life. Music can become a way for PwD to express their emotions in daily life, thus preventing the onset of anxious or agitated behaviors [19,20].
If a PwD is hyperaroused, tactile and vestibular stimulation are powerful tools to help regulate arousal levels, enabling self-calming and focused attention, especially when the PwD is agitated [21]. Linear movement activities (e.g., forward-back rocking and swinging) coupled with low-frequency sounds are calming and serve to inhibit the reticular activating system via the vestibular system [21].
The main objective of this study was to evaluate the effectiveness of the NSC in the management of BPSD in real clinical practice in PwD admitted to nursing homes.The secondary objective was to assess the benefits of the NSC on cognitive functioning and quality of life of PwD as well as its potential benefits on the occupational disruptiveness of care staff.
Participants
Study participants had a diagnosis of dementia according to the criteria of the 11th edition of the International Classification of Diseases of the World Health Organization [22] and/or probable Alzheimer's disease (AD) according to the criteria of the National Institute on Aging-Alzheimer's Association workgroups (NIA/AA) [23]. PwD were recruited from two nursing homes specialized in dementia care: Centro Residencial Almudena (Rincón de la Victoria, Málaga, Spain) and Residencia DomusVi Fuentesol (Alhaurín de la Torre, Málaga, Spain). Dementia severity was assessed with the Reisberg Global Deterioration Scale (GDS) [24] by the clinician in charge. PwD included in the study were clinically defined in stages 4 to 7 of the GDS.
Centro Residencial Almudena has a capacity of 50 users and offers specialized services for Alzheimer's and other dementias. The healthcare team is made up of internists and psychologists, in addition to medical advisors in each specialty. DomusVi Fuentesol Residence has a total of 146 beds, and the center has an interdisciplinary team that offers specialized services for dementia and neurocognitive disorders.
Exclusion criteria included PwD who had any evidence of focal vascular lesions (such as hematomas), stroke, normal pressure hydrocephalus; those with serious systemic diseases such as hypothyroidism or chronic renal failure; those with a chronic sensory disorder (e.g., severe vision and hearing impairment) or severe psychiatric disorder.
Considering the variability reported in the literature for the clinical assessment instruments used in the present study, it was anticipated that a sample of 70 PwD (35 in each of the two groups) would allow detection, with 80% power and an anticipated effect size of 0.5, of a statistically significant difference of 2.5 points or more between the two study groups in the mean of the primary efficacy variable (NPI-NH total score), assuming a standard deviation of 1.5 points [25].
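For illustration, a textbook normal-approximation formula for the per-group sample size of a two-sample comparison of means is sketched below. This is a generic formula, not the trial's own calculation, which may have incorporated additional assumptions (design, sidedness, attrition), so its reported n of 35 per group need not be reproduced here; the z-quantiles and function name are ours.

```python
import math

# Standard normal quantiles for a two-sided alpha of 0.05 and 80% power
Z_975 = 1.959964  # z at 0.975
Z_80 = 0.841621   # z at 0.80

def n_per_group(effect_size):
    """Per-group n to detect a standardized mean difference `effect_size`
    with 80% power at two-sided alpha = 0.05: n = 2*((z_a + z_b)/d)^2."""
    return math.ceil(2 * ((Z_975 + Z_80) / effect_size) ** 2)
```

For a standardized effect size of 1.0 this gives 16 per group, and for 0.5 it gives 63 per group; larger effect sizes shrink the required n quadratically.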
The study also included an evaluation of the care staff from both nursing homes who participated in the direct provision of care to the participating PwD. The degree to which the presence of BPSD disturbed the normal work of the care staff was assessed as well (see Outcomes).
The Nordic Sensi® Chair
The NSC (Wellness Nordic A/S, Espergaerde, Denmark, Fig. 1) is an electrically operated rocking chair with built-in music, MusiCure®, composed by Niels Eje [26]. It is equipped with an integrated audio system with recorded music. MusiCure® has been used in a wide variety of treatment and research settings, including cardiac patients, surgery and recovery, and psychiatric patients suffering from anxiety, depression, delirium, or sleep problems [26]. Recently, MusiCure® has also been used for the treatment of PwD [16].
This framework requires a person-centered approach that focuses on interventions with a greater likelihood of positively influencing quality of life. As research demonstrates, person-centered interventions can be effective in reducing BPSD in PwD, and healthcare service providers should be encouraged to use person-centered care as an essential part of treatment when attempting to reduce BPSD [27].
The NSC has three different programs: Relax for deep relaxation, Refresh for recovery, and Comfort for gentle relaxation. A 3.7 kg fiber blanket increases the feeling of security and relaxation, while helping users to perceive their own body. In addition to the musical programming, the NSC provides predefined tactile stimulation and a rocking motion, for a relaxing multi-sensory experience. This approach could facilitate a balance between stimulation and sensory calm that would contribute to the effective management of BPSD. All settings can be easily customized at the touch of a button.
For the purposes of this study, the NSC Relax for deep relaxation program (Relax Program) was used. The Relax Program lasts 20 min, during which the backrest descends to a semi-reclined position that is maintained throughout the program. The chair also has a footrest that can be raised and lowered. At the end of the program, the backrest returns to the sitting position. While the chair rocks in a linear direction, the PwD perceives the automatic relaxation music along with tactile stimulation on the back.
Study design
This was a 16-week randomized, parallel, single-blind, controlled clinical trial (RCT). After assessment for eligibility, PwD were randomly assigned to two groups of equal size: a treatment group that received one 20-min session per day, three times a week, in the Relax Program of the NSC, and a control group that did not participate in this activity but received, at the same time and for the same duration, the care and activities that were part of the daily routines of the center, including group sessions of cognitive stimulation, training in activities of daily living, and communication training.
Based on the methodology used in previous studies, we considered that a frequency of three sessions per week would be adequate to study the effect of the NSC on BPSD [14][15][16][17]. A research team from the Instituto Andaluz de Neurociencia (four neuropsychologists and one psychiatrist) completed the study outcome measures. They were blinded to the group assigned to the patients. An anonymized database was generated. The safety of the intervention was closely monitored through continuous supervision by skilled nurse assistants. During each treatment session, the nurse assistant remained next to the PwD, ensuring that the user was safe, relaxed, and comfortable while seated in the NSC.
The 16-week study period comprised a first 2-week pre-intervention phase, followed by a second 12-week intervention phase with the use of the NSC, and a third 2-week post-intervention phase without the NSC. Given the duration of the study, we chose assessment time points according to a reasonable sequence: at the pre-intervention phase (baseline, Time 0), at mid-intervention (week 8, Time 1), at the end of the intervention phase (week 14, Time 2), and two weeks after completion of the intervention phase, to check whether the NSC effect persisted (week 16, Time 3). A schematic chart of the assessment schedule is shown in Fig. 2.
Upon entry into the study, PwD who met the inclusion criteria were randomized to the treatment group or the control group. Randomization was carried out in blocks, generating random numbers with repetition, one per block. Randomization numbers were assigned sequentially for all study participants.
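A block-randomization scheme of this kind can be sketched as follows. This is an illustrative sketch only; the authors do not report their block size or software, so the block size of 4 and the seed below are assumptions.

```python
import random

def block_randomize(n_participants, block_size=4, seed=1):
    """Assign participants to two equal-size arms in balanced blocks."""
    rng = random.Random(seed)
    assignments = []
    while len(assignments) < n_participants:
        # Each block contains equal numbers of each arm, shuffled within the block
        block = ["treatment"] * (block_size // 2) + ["control"] * (block_size // 2)
        rng.shuffle(block)
        assignments.extend(block)
    return assignments[:n_participants]

groups = block_randomize(88)  # 88 PwD were randomized in this study
print(groups.count("treatment"), groups.count("control"))  # 44 44
```

Because every block is internally balanced and 88 is a multiple of the block size, the two arms end up with exactly 44 participants each regardless of the shuffle.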
Intervention
In both nursing homes, the treatment was carried out on weekdays, during the day shift. The chairs were placed in a room intended exclusively for the treatment sessions. To build confidence and adherence to the intervention, PwD were always introduced to the chair by the same nurse assistant. To ease adaptation to the therapeutic process, participants were introduced to the chair gradually, e.g., simply being invited to sit down the first time, rocking gently the second time, and starting the full program session the third time. Each PwD had their own schedule of rocking chair use throughout the study.
Ethics approval and consent to participate
The study protocol was approved by the Málaga Research Ethics Committee (approval number: 03/2022ICPS3). The study was registered on ClinicalTrials.gov with the identifier NCT05706792 on January 31, 2023. Written informed consent was signed by PwD who were able to give it, or by their legal representative. Informed consent was also requested from the care staff who participated in the study. The study followed the ethical standards adopted by the Declaration of Helsinki in its latest version (Fortaleza, Brazil, 2013) and was conducted in accordance with the standards of Good Clinical Practice, as described in the Tripartite Harmonized Standards of the International Conference on Harmonisation for Good Clinical Practice (1996).
Outcomes
The primary efficacy measure was the Neuropsychiatric Inventory-Nursing Home (NPI-NH) [28], an instrument used by nursing staff to evaluate neuropsychiatric symptoms in PwD in the nursing home setting. The NPI-NH is composed of 12 domains that rate the most frequent BPSD in dementia patients (delusions, hallucinations, agitation, depression, anxiety, euphoria, apathy, disinhibition, irritability, aberrant motor behavior, sleep disturbances, and appetite changes). If a symptom was present during the previous month, each item was scored for frequency (range 0-4) and severity (range 0-3) and transformed into a composite score (frequency × severity, range 0-12). We calculated the total NPI-NH score as the sum of the composite scores (range 0-144). Higher scores indicate more severe BPSD. For the purposes of this study, the total NPI-NH score and the 12 domain scores were considered primary efficacy measures. In this study the NPI-NH had an internal consistency (Cronbach's α) of 0.67. The internal consistency of the NPI-NH domains ranged between 0.41 and 0.87; the agitation and apathy domains had the highest internal consistency (0.87), and sleep disturbances had the lowest (0.41).
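The NPI-NH scoring rule described above (composite = frequency × severity per domain, summed over the 12 domains) can be expressed as a minimal sketch; the rating values below are made up for illustration, not study data.

```python
# The 12 NPI-NH domains listed in the text
DOMAINS = ["delusions", "hallucinations", "agitation", "depression", "anxiety",
           "euphoria", "apathy", "disinhibition", "irritability",
           "aberrant_motor_behavior", "sleep_disturbances", "appetite_changes"]

def npi_nh_total(ratings):
    """ratings: {domain: (frequency 0-4, severity 0-3)}; returns total score 0-144."""
    total = 0
    for domain in DOMAINS:
        freq, sev = ratings.get(domain, (0, 0))  # absent symptom scores 0
        total += freq * sev                      # composite score per domain, 0-12
    return total

example = {"agitation": (4, 3), "apathy": (2, 2), "irritability": (1, 1)}
print(npi_nh_total(example))  # 12 + 4 + 1 = 17
```

The maximum of 144 follows from 12 domains each capped at 4 × 3 = 12.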
Secondary efficacy measures were the Cohen-Mansfield Agitation Inventory (CMAI) [29] and the Cornell Scale for Depression in Dementia (CSDD) [30]. The CMAI is composed of 30 items that form four subscales: physically aggressive behaviors, physically non-aggressive behaviors, verbally aggressive behaviors, and verbally non-aggressive behaviors. The CMAI also records the frequency and severity of agitation-related behaviors and quantifies agitated behavior as a continuous measure that is sensitive to change. Cronbach's α for the CMAI was 0.86 in this study.
The CSDD is a 19-item semi-structured interview designed to assess depression in PwD, with scores above 10 indicating possible depression and scores above 18 suggesting definite depression. In this study the CSDD had a high internal consistency of 0.84.
Likewise, an assessment of cognitive function, functional capacity, and quality of life (QoL) of PwD was carried out using the Severe Mini-Mental State Examination (S-MMSE) [31], the Bedford Alzheimer Nursing Severity Scale (BANS-S) [32], and the Quality of Life in Late-stage Dementia (QUALID) scale [33]. The S-MMSE assesses cognitive deterioration in advanced dementia. It is composed of 10 items, and the score can reach 30 points. The S-MMSE had a high reliability, with Cronbach's α = 0.88 in this study. The BANS-S consists of 7 items with 4 categories each, which enables discrimination of changes in the advanced phases of dementia. The score ranges from 7 (no impairment) to 28 (total impairment). It assesses the PwD's ability to perform three daily activities (dressing, eating, and mobility), their ability to speak, their ability to maintain visual contact, the regularity of their sleep-wake cycle, and the state of their muscles. The BANS-S had a Cronbach's α = 0.81.
The QUALID is rated by a member of the care staff who has had significant contact with the patient over the previous week; it consists of 11 items and evaluates three domains: affective state, comfort, and basic activities of life. The score ranges from 11 to 55, with lower scores indicating higher quality of life. The scale had a high internal consistency, with a coefficient alpha of 0.80 in this study.
Finally, the assessment of occupational distress for the care staff was carried out by means of the Occupational Disruptiveness subscale of the Neuropsychiatric Inventory-Nursing Home (NPI-NH-OD). It assesses the degree of self-reported professional care staff burden. Care staff rate the extent to which each of the 12 behaviors disrupts them and/or generates more work. Each item is scored from 0 to 5 points (from not at all to very severely). We calculated the total NPI-NH-OD score (range 0-60). The NPI-NH-OD subscale had an internal consistency of α = 0.68.
Statistical analysis
Demographic variables were reported using the mean and standard deviation for quantitative variables, and number and percentage for qualitative variables. Baseline differences between the two treatment groups were assessed by analysis of variance (ANOVA) or nonparametric tests, as appropriate.
A Mixed Effects Model for Repeated Measurements (MMRM) analysis was carried out to evaluate changes in neuropsychiatric, cognitive, and functional scores and to handle missing values in some of the follow-up assessments. The effects of time (between the mean baseline measurements and each time point), treatment, and the interaction between time and treatment were evaluated. The change scores at Time 1 and Time 2 and the mean change-score differences within and between groups were calculated from the MMRM. All analyses were controlled for demographic and clinical characteristics that approached significance in the univariate analysis. Post hoc analyses for multiple comparisons were conducted using Bonferroni's correction. Cohen's d standardized effect sizes were calculated and defined as small (d = 0.20), medium (d = 0.50), and large (d = 0.80) [34]. The main efficacy analysis was based on the Modified Intent-to-Treat (mITT) population using Last Observation Carried Forward (LOCF) imputation. This mITT-LOCF population was pre-defined as all randomized PwD who received at least one week of the Nordic Sensi® Chair treatment and had a baseline and at least one post-baseline assessment for the primary efficacy variable on treatment. The MMRM analysis did not include the two weeks of post-intervention data. After completion of the 12-week intervention period, Student's t-test was used to compare within-group mean score differences at Time 2 and Time 3.
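The LOCF imputation step can be sketched in pure Python; this is a generic illustration of the rule (carry the last observed value forward over missed visits), and the example values are made up, not study data.

```python
def locf(scores):
    """Last Observation Carried Forward: None marks a missed assessment."""
    filled, last = [], None
    for s in scores:
        if s is not None:
            last = s          # update the most recent observed score
        filled.append(last)   # missed visits inherit the last observed score
    return filled

# e.g., NPI-NH totals at Time 0, Time 1, Time 2 with a missed Time 2 visit
print(locf([38, 24, None]))  # [38, 24, 24]
```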
Statistical analyses were carried out using the Statistical Package for the Social Sciences software (SPSS 25.0, IBM Corporation, Armonk, NY, USA), and the significance level was set at 0.05. For MMRM analyses of the primary efficacy measure, a significance level of p ≤ 0.004 (0.05/12, Bonferroni-corrected for the 12 NPI-NH domains) was considered significant.
Demographic and baseline scores
Eighty-eight PwD entered the study and were randomized. In the first week of the intervention phase, 2 participants dropped out because of occasional dizziness that worsened when sitting in the chair, and 3 refused to continue with the study because they did not enjoy sitting in the NSC. Six other participants died before the first post-baseline assessment. The mITT-LOCF population comprised 77 PwD, all of whom completed the 12-week intervention phase (Fig. 3). Of these 77 PwD (65 female, 12 male), 40 (52%) were in the treatment group (37 female, 92.5%) and 37 (48%) were in the control group (28 female, 75.7%). PwD had a mean age of 81.77 ± 8.69 years (range 47-102), with a median of 82 years, and mean years of education of 7.31 ± 3.24 (range 6-18). All participants were Caucasian. The main efficacy analysis did not include the 2-week post-intervention period.
At baseline, there were no statistically significant differences between the two groups in demographic and clinical variables except for sex (women, 84.4%, p = 0.042) and anxiolytic use (benzodiazepines; higher in the control group, 67.6%, than in the treatment group, 32.5%, p = 0.002) (Table 1). No deaths or serious adverse events occurred during the study. Care staff reported that the NSC was well accepted and tolerated, with no differences across the GDS stages.
Concerning the within-group changes, the NPI-NH total mean change score and the mean change scores for delusions, hallucinations, agitation, anxiety, euphoria, apathy, disinhibition, and irritability showed statistically significant differences at the end of the intervention period (Time 2) relative to baseline. There were no statistically significant differences in any of these variables when comparing mean scores at Time 2 and Time 3 (Table 3).
Secondary efficacy measures
Regarding the CMAI, the MMRM analysis showed a statistically significant interaction effect between time and treatment (p = 0.021). The NSC group performed better than the control group at Time 2 (mean change score -17.72 ± 8.23 versus -6.70 ± 3.02, p = 0.002) and already showed an improvement at Time 1 (Fig. 4). The NSC group showed a statistically significant difference at the end of the intervention period (Time 2) relative to baseline. There was no significant difference in the CMAI in the NSC group when comparing mean scores at Time 2 and Time 3 (Table 3).
The MMRM analysis showed no statistically significant interaction effect between time and treatment for the CSDD (F = 1.071, p = 0.304), and there were no significant between-group differences. The NSC group showed a statistically significant difference at the end of the intervention period relative to baseline. There was no significant difference in the CSDD in the NSC group when comparing mean scores at Time 2 and Time 3 (Table 3).
Cognitive performance, functional status, and quality of life
With regard to the S-MMSE, neither the NSC group (mean change score 5.59 ± 5.66 versus 7.43 ± 8.17, p = 0.812) nor the control group (mean change score 6.67 ± 9.28 versus 10.61 ± 7.34, p = 0.443) showed statistically significant differences at the end of the treatment period relative to baseline. Concerning the BANS-S, the NSC group showed a statistically significant improvement at the end of the intervention period relative to baseline (mean change score 15.62 ± 6.01 versus 17.97 ± 4.92, p = 0.05). There was no significant difference in the BANS-S in the NSC group when comparing mean scores at Time 2 and Time 3 (p = 0.263). Regarding the QUALID, the NSC group showed a statistically significant improvement at the end of the treatment period relative to baseline (mean change score 19.59 ± 8.77 versus 25.83 ± 8.12, p = 0.003). There was no significant difference in the QUALID in the NSC group when comparing mean scores at Time 2 and Time 3 (p = 0.467).
Occupational disruptiveness
Concerning the NPI-NH-OD, the MMRM analysis showed a statistically significant interaction effect between time and treatment (p = 0.042). The NSC group performed better than the control group at Time 2 (mean change score -9.67 ± 7.67 versus -7.66 ± 6.08, p = 0.003) and already showed an improvement at Time 1 (Fig. 4). The NSC group showed a statistically significant difference at the end of the intervention period relative to baseline (Table 3). There was no significant difference in the NPI-NH-OD in the NSC group when comparing mean scores at Time 2 and Time 3 (Table 3).
DISCUSSION
This study was performed to explore the efficacy of the NSC in the treatment of BPSD in nursing home residents with moderate and severe dementia, primarily of the Alzheimer's type. Treatment with the NSC was well tolerated and was associated with a beneficial effect on overall BPSD. PwD treated with the NSC showed statistically significant superiority on the NPI-NH over PwD in the control group.
The NSC showed benefits for most BPSD. Notably, its use yielded significant benefit regarding agitation, apathy, irritability, disinhibition, aberrant motor activity, and euphoria over the 12 weeks of treatment. Consistent with the significant reduction in the NPI-NH agitation domain score, we also found a significant decrease in the CMAI score. Importantly, the NPI-NH-OD total score improved significantly in the treatment group. In addition, our findings showed significant improvement in the residents' functional status and quality of life over the 12 weeks.
Previous studies have highlighted that the use of a rocking chair has potential benefits in the treatment of BPSD, with a very low risk of harm [14][15][16]. The efficacy of a platform rocking chair in improving psychological well-being and balance was examined in 25 PwD admitted to nursing homes for 6 weeks [14]. PwD showed small improvements in depression, anxiety, and pain medication use, but no improvement in agitation. In a quasi-experimental, repeated-measures study [15], the effects of a glider swing on emotions, relaxation, and aggressive behaviors in a group of 30 nursing home residents with dementia were evaluated over 10 days. The glider intervention significantly improved emotional state, and aggressive symptoms decreased from the beginning of treatment until after its end. More recently, in a single-case study [16] using a mixed-methods approach, six PwD in a nursing home setting used the NSC a mean of five times per week for eight weeks in total. The results indicated a decrease in BPSD and increased quality of life with use of the NSC.
Therefore, based on our findings and those of earlier studies, a potential therapeutic role should be considered for the NSC in the treatment of BPSD in dementia, given its ability to offer patients, in an integrated way, a sensory experience that combines the benefits of music therapy, therapeutic tactile stimulation, vestibular stimulation, multi-sensory stimulation, and relaxation. Although less and less frequently, PwD, especially those who are institutionalized, may sometimes be in a situation of sensory deprivation or, conversely, exposed to excessive environmental stimulation, which may contribute to a sense of intrapsychic discomfort and thus favor the onset of BPSD such as agitation, anxiety, or irritability [35,36]. Therefore, interventions with residents should facilitate a balance between stimulation and sensory calm [12]. In this sense, procedures that favor multisensory stimulation, such as the NSC, are a very appropriate option for these patients and can contribute to an environment that is relaxing and stimulating at the same time [37,38].
Care staff considered the NSC an affordable, easy-to-use, non-labour-intensive intervention in the care of PwD. When care staff were asked to what extent the use of the NSC helped them, most believed that it improved the quality of care, freed up staff time, and contributed to a calmer and more pleasant environment for everyone. Care staff in the treatment group benefited from the behavioral improvement that patients experienced and reported less occupational disruptiveness on NPI-NH-OD scores than staff in the control group. This is consistent with research showing the relevant role that the presence of BPSD plays in the burden of nursing home staff [39,40].
Along the same lines, a multicenter survey evaluating caregiver opinion was conducted among long-term care facilities in several European countries, in which care staff reported their opinion of the utility of the NSC in the management of BPSD. Most respondents believed that the quality of care improved, that they had more time for care, and that the NSC helped create a calmer, more patient-friendly environment [17].
Interestingly, PwD in the NSC group showed significant improvement in functional capacity and quality of life, as measured by the BANS-S and QUALID, at the 14-week follow-up. Quality of life is difficult to measure in PwD, but it is believed to be influenced, among other factors, by the presence of BPSD [41]. These benefits could be related to the long-term intervention and the person-centered therapeutic process developed in this study, which allowed better matching of specific interventions to specific individuals. Applying this multisensory approach, based on the specific stimuli offered by the NSC and accompanied by support from a dedicated nurse assistant, enhances the personalization of interventions and increases the chance of efficacious findings. The specific stimuli offered by the NSC, delivered through individualized care in feasible and structured sessions, were well accepted, with no negative effects. This is in line with other results showing that person-centered interventions reduce BPSD and improve quality of life in dementia care [42,43].
When considering the results of an intervention, it is important to consider not only statistically significant improvements but also clinically significant efficacy. For this purpose, it is necessary to compare the observed changes with differences that are considered clinically relevant. For clinical trials in dementia of the Alzheimer's type, differences as small as 4.5 points on the NPI-NH [44] have been assumed to be clinically relevant. In our study, the average differences observed on the NPI-NH over the 12-week study period exceeded this clinically relevant difference (mean within-group difference = -19.92 points; mean between-group difference = -16.28 points). In addition, the magnitude of the observed effect on BPSD can be contextualized using standardized effect sizes. In line with this, the clinical relevance of the significant effect of the NSC on behavioral test scores is supported by the magnitude of the between-group effect sizes, with a median Cohen's d of 0.51.
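As an illustration of the standardized effect sizes mentioned above, a pooled-SD Cohen's d can be computed directly from the CMAI change scores and group sizes reported earlier. This is a sketch; the authors' effect sizes were derived from the MMRM and may differ from this simple pooled computation.

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d for two groups using the pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# CMAI mean change scores at Time 2 and arm sizes reported in the text
d = cohens_d(-17.72, 8.23, 40, -6.70, 3.02, 37)
print(round(abs(d), 2))  # 1.75 under this pooled-SD computation
```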
A strength of this work compared to previous studies was the higher number of PwD evaluated and its design as an RCT. In addition, the participants underwent a comprehensive behavioral and functional evaluation with widely used outcome measures, which reduced observation bias. However, there are also some limitations that should be considered. This study was carried out at two sites; therefore, the generalization of its findings to other settings and PwD should be made with caution. Although the research team was blinded to the group assigned to the PwD, the outcome measures were obtained from care staff who were not completely blinded to the intervention. As a result, although much effort was put into mitigating bias, possible bias from the care staff when reporting BPSD must be considered. It was not feasible for staff members to be completely unaware of the experimental condition of the PwD, and this could have biased the ratings on the evaluation instruments. Although care staff were not present during treatment delivery, they could have received anecdotal information from other staff members.
Another potential source of bias is that all outcome measures were answered by the care staff. A critical point about person-centered approaches is that interventions targeting BPSD should be tailored to each individual and their circumstances, which change and possibly reflect different unmet needs over time. In fact, tailoring music to individual preferences seems to be more effective at reducing BPSD (e.g., agitation) than applying generic classical or relaxation music. It should also be noted that the application of this approach may provide more specific, but less generalizable, results. Finally, we acknowledge the potential for assessment bias, as PwD (and even staff members) received more attention throughout the intervention period, i.e., a Hawthorne effect.
Conclusions
These results may have clinical significance for choosing non-pharmacological therapies for BPSD in PwD. The reduction in overall BPSD, along with the decrease in caregiver occupational disruptiveness and the improved quality of life for residents, suggests that the NSC represents an encouraging new non-pharmacological approach to improving BPSD in nursing home residents with dementia. The results suggest that the NSC is an intervention of potential interest for use in nursing homes as part of the PwD care plan. This study should inspire the design of future long-term randomized controlled trials to further support the use of the NSC as a non-pharmacological, person-centered intervention for improving BPSD in PwD in nursing homes.
Fig. 3. Flowchart of patients participating in the study.
Table 1
Demographic and clinical characteristics of PwD at baseline. Values are mean ± SD or number (%). Independent-samples t-tests were used for continuous data and χ² tests for categorical data. PwD, people with dementia; NSC, Nordic Sensi® Chair; GDS, Global Deterioration Scale; AChEI, acetylcholinesterase inhibitors. Values are mean ± SD; independent-samples t-tests were used for continuous data. NSC, Nordic Sensi® Chair; NPI-NH, Neuropsychiatric Inventory-Nursing Home; NPI-NH-OD, Neuropsychiatric Inventory-Nursing Home Occupational Disruptiveness; CSDD, Cornell Scale for Depression in Dementia; CMAI, Cohen-Mansfield Agitation Inventory; BANS-S, Bedford Alzheimer Nursing Severity Scale; QUALID, Quality of Life in Dementia Scale; S-MMSE, Severe Mini-Mental State Examination; PwD, people with dementia.
Table 3
Results from the mixed effects model analysis for repeated measurements. The results displayed are adjusted for sex and benzodiazepine use. Values are mean (standard deviation); d, Cohen's d effect size; #d, effect size from T0 to T2. NSC, Nordic Sensi® Chair; NPI-NH, Neuropsychiatric Inventory-Nursing Home; CSDD, Cornell Scale for Depression in Dementia; CMAI, Cohen-Mansfield Agitation Inventory; NPI-NH-OD, Neuropsychiatric Inventory-Nursing Home Occupational Disruptiveness.
Unified Theory of Elementary Particles -- in Search of Extra Dimensions
Even though the unified theory of electroweak interactions is very successful at low energies, there remains one part to be confirmed: the sector involving Higgs particles, which are yet to be discovered. It has been shown recently that Higgs particles can be viewed as gauge fields in higher dimensional gauge theory. The mass of the Higgs particle and its couplings to other particles are then constrained by the gauge principle. In this scenario the mass of the Higgs particle is predicted to be in the range 120 GeV - 290 GeV, exactly the range explored at the LHC, provided that the extra dimension is curved and warped. Thus the physics of extra dimensions can manifest itself in collider experiments at LHC energies.
1 Unification in extra dimensions
At the most fundamental level quarks and leptons interact with each other by exchanging gauge bosons. Strong interactions are described by SU(3)_C color gauge interactions, whereas electroweak interactions are described by SU(2)_L × U(1)_Y gauge interactions. The associated gauge bosons are gluons, W bosons, Z bosons, and photons. There is one more field necessary to make the standard model of elementary particles work. It is the Higgs field, which not only spontaneously breaks the electroweak symmetry SU(2)_L × U(1)_Y to the electromagnetic symmetry U(1)_EM, but also gives fermions finite masses. There appear many parameters whose values are chosen to fit the observed data. There is no principle regulating the Higgs sector of the standard model.
This seemingly awkward dilemma is resolved in higher dimensional gauge theory. A long time ago Kaluza and Klein proposed an intriguing scenario in which we are living in five-dimensional spacetime. [1] They assumed that our spacetime is close to the product of four-dimensional Minkowski spacetime (M_4) and a circle (S^1) with a radius R. The metric in the five-dimensional space, g_MN (M, N = 0, ..., 4), decomposes into the four-dimensional metric g_μν (μ, ν = 0, ..., 3), the off-diagonal components g_μ4, and g_44. The general coordinate invariance in the fifth dimension implies that the g_μ4 components behave as the four-dimensional electromagnetic gauge potential A_μ. In this manner four-dimensional gravity and electromagnetism are unified in five-dimensional gravity.
Motivated by Kaluza and Klein's idea, we consider non-Abelian gauge theory in five-dimensional spacetime. The gauge potential decomposes into two parts,

A_M = (A_μ, A_y) (μ = 0, ..., 3). (1)

On M_4 × S^1, for instance, fields are expanded in Fourier series in the fifth coordinate y,

A_y(x, y) = Σ_n A_y^(n)(x) e^{iny/R}, (2)

and the zero mode A_y^(0)(x) transforms as a four-dimensional scalar. It is our contention that A_y^(0)(x) contains the four-dimensional Higgs scalar field. Thus the Higgs field is a part of the gauge fields, and the unification of gauge fields and Higgs fields is achieved. The scenario is called the gauge-Higgs unification. [2,3]

2 Dynamical gauge-Higgs unification

To apply the gauge-Higgs unification scenario to the electroweak interactions, several ingredients must be implemented. (i) In the electroweak theory SU(2)_L × U(1)_Y breaks down to U(1)_EM, and the Higgs fields transform as a doublet of the SU(2)_L group. On the other hand, the extra-dimensional component of the gauge fields in the decomposition in (1) belongs to the adjoint representation of the gauge group. This means that one must begin with a larger group to achieve gauge-Higgs unification, as first clarified by Fairlie and by Manton. [2] (ii) The electroweak symmetry is spontaneously broken by the 4-d Higgs fields, which are a part of the 5-d gauge fields. Dynamical electroweak symmetry breaking is induced by the Hosotani mechanism. [3,4] When the extra-dimensional space is non-simply connected, there appear non-Abelian generalizations of the Aharonov-Bohm phases. Those non-Abelian Aharonov-Bohm phases, {θ_j}, become dynamical degrees of freedom, even though they give vanishing field strengths at the classical level. At the quantum level those θ_j, in general, develop nonvanishing expectation values, thus breaking the gauge symmetry. (iii) Quarks and leptons are chiral in the electroweak theory. Left-handed and right-handed fermions interact with other particles differently.
The most natural and powerful way of incorporating chiral fermions in higher dimensional theory is to have an orbifold in extra dimensions. [5] As a typical example, consider M_4 × S^1. An orbifold S^1/Z_2 is obtained by identifying two points on S^1: (x^μ, y) ~ (x^μ, -y). (iv) As is seen below, phenomenology emerging from dynamical gauge-Higgs unification in flat space contradicts the observations. To have realistic phenomenology of Higgs particles, quarks, and leptons, the extra-dimensional space should be curved. In particular, dynamical gauge-Higgs unification in the Randall-Sundrum warped space yields intriguing consequences which can be tested in experiments at the LHC. [6]

3 Extra dimensions: flat or curved?
Although the symmetry is dynamically broken in flat space, there are two major problems. The W boson mass is predicted to be 0.135/R, where R is the radius of the extra dimension. This implies that the Kaluza-Klein mass scale M_KK = 1/R is about 600 GeV, which is too low. The mass of the Higgs particle is estimated from the curvature of the effective potential at its global minimum. One finds that m_H is of order √α_W m_W, where α_W is the weak fine structure constant. This leads to m_H ∼ 10 GeV, contradicting the experimental data.
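These two flat-space estimates can be checked with a few lines of arithmetic. This is a sketch only: m_W = 80.4 GeV and α_W ≈ 0.03 are taken from later sections, and the scaling m_H ∼ √α_W m_W is an assumed reading of the flat-space result, not a formula quoted verbatim here.

```python
import math

m_W = 80.4        # GeV, measured W boson mass
alpha_W = 0.03    # weak fine structure constant, g_4^2 / (4*pi)

# Flat space: m_W = 0.135 / R, so the Kaluza-Klein scale is M_KK = 1/R
M_KK_flat = m_W / 0.135
print(f"M_KK (flat) = {M_KK_flat:.0f} GeV")   # about 600 GeV, too low

# Assumed flat-space scaling m_H ~ sqrt(alpha_W) * m_W
m_H_flat = math.sqrt(alpha_W) * m_W
print(f"m_H (flat) ~ {m_H_flat:.0f} GeV")     # O(10) GeV, excluded
```

Both numbers reproduce the quoted flat-space difficulties: a Kaluza-Klein scale near 600 GeV and a Higgs mass of order 10 GeV.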
These two features are generic in flat space. The observational fact that the Higgs mass must be much bigger than m_W indicates that the extra-dimensional space, if it exists, must be curved.
The most promising spacetime in the context of dynamical gauge-Higgs unification is the Randall-Sundrum (RS) warped spacetime. [8,9] It has the same topology as M^4 × (S^1/Z_2). The metric is given by

ds² = e^(−2k|y|) η_μν dx^μ dx^ν + dy² ,   (3)

where, as in M^4 × (S^1/Z_2), (x^μ, y) and (x^μ, −y) are identified. It is a slice of anti-de Sitter space with curvature radius 1/k, sandwiched by two branes at y = 0 and y = πR. It has been speculated that five-dimensional anti-de Sitter space naturally emerges from a more fundamental theory such as superstring theory. Further reduction to four dimensions yields an approximately conformal theory with gauge fields and light fermions. The RS spacetime is specified by two parameters, k and R. It is natural to suppose that the structure of spacetime is determined at the Planck scale M_pl = 1.2 × 10^19 GeV; as a consequence it is expected that k = O(M_pl). The size R is then determined such that the theory predicts m_W = 80.4 GeV. As shown below, this implies kR = 12 ± 0.3.
W bosons and the Kaluza-Klein mass
Consider the SU(3) gauge group, which contains SU(2)_L × U(1)_Y. Boundary conditions for the gauge fields at the two branes of the RS spacetime (3) determine which components have zero modes (massless modes) in four dimensions. The zero modes of A_μ are the W bosons, Z bosons, and photons, whereas those of A_y constitute the Higgs doublet. In particular, the zero mode of A_y gives rise to the non-Abelian Aharonov-Bohm phase θ_W. When θ_W ≠ 0 (mod 2π), the SU(2)_L symmetry breaks down and the W bosons acquire a mass m_W given by (6). In the RS warped space the Kaluza-Klein mass scale M_KK, characterizing the mass spectrum m_n ∼ n M_KK, is given by (7). In a typical model θ_W takes a value in the range (0.2π ∼ 0.5π). To yield k = M_pl in (7), kR must be (11.75 ∼ 12.0). Note that kR = 6 and 24 yield k = 10^11 GeV and 10^36 GeV, respectively. Thus the value of kR is determined to be 12 ± 0.3.
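The quoted sensitivity of k to kR can be checked numerically. With m_W held fixed, relation (7) makes k scale roughly as e^(πkR); the sketch below assumes this exponential scaling, normalized so that k = M_pl at kR = 12.

```python
import math

M_pl = 1.2e19   # GeV, Planck scale; k = M_pl is assumed at kR = 12

def k_of_kR(kR, kR_ref=12.0, k_ref=M_pl):
    """Assumed scaling k ∝ exp(pi * kR) at fixed m_W, normalized at kR = 12."""
    return k_ref * math.exp(math.pi * (kR - kR_ref))

for kR in (6, 12, 24):
    print(f"kR = {kR:2d}  ->  k ~ {k_of_kR(kR):.1e} GeV")
```

The assumed scaling reproduces the quoted values of k ∼ 10^11 GeV at kR = 6 and ∼ 10^36 GeV at kR = 24 to within an order of magnitude, illustrating why kR is pinned so sharply near 12 once k = O(M_pl) is demanded.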
Combining (7) and (8), one obtains (9). This should be compared with the formula in flat space, M_KK ∼ (2π/θ_W) m_W. There appears an enhancement factor ½πkR ∼ 20 in the RS warped space. Inserting the value of kR, one finds the estimate in (10). At the LHC, Kaluza-Klein excited states can be produced in intermediate processes, so that their existence can be checked indirectly for the value in (10).
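The size of the enhancement factor, and the flat-space baseline it multiplies, can be evaluated directly. This sketch computes only the factors stated in the text; the warped-space value quoted in (10) is not reproduced here.

```python
import math

m_W = 80.4   # GeV
kR = 12.0

# Enhancement factor distinguishing the warped formulas from flat space
enh = 0.5 * math.pi * kR
print(f"(1/2)*pi*kR = {enh:.1f}")   # close to the quoted ~20

# Flat-space baseline M_KK ~ (2*pi/theta_W) * m_W over the typical theta_W range
M_KK_lo = 2 * math.pi / (0.5 * math.pi) * m_W   # theta_W = 0.5*pi
M_KK_hi = 2 * math.pi / (0.2 * math.pi) * m_W   # theta_W = 0.2*pi
print(f"M_KK(flat) ~ {M_KK_lo:.0f} - {M_KK_hi:.0f} GeV")
```

The flat-space baseline of roughly 320-800 GeV is consistent with the ∼600 GeV flat-space estimate obtained earlier from m_W = 0.135/R.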
Higgs particles
The Higgs field φ in four dimensions corresponds to fluctuations of the non-Abelian Aharonov-Bohm phase θ_W. More explicitly, θ_W and φ are related as in (11), where g_4 is the four-dimensional gauge coupling constant. At the quantum level the effective potential for θ_W becomes nontrivial. Expanding it around its global minimum, one finds (12). The effective potential is determined once the mass spectrum m_n(θ_W) is found for each field. It is shown to take the form (13), where f(θ_W + 2π) = f(θ_W) is a periodic function with an amplitude of O(1). It follows from (12) and (13) that (14) and (15) hold, where α_W = g_4²/4π ∼ 0.03. Notice the appearance of the enhancement factor ½πkR in (14) and (15), which distinguishes the formulas in the warped space from those in flat space. In a typical model we have found that f^(2)(θ_W) and f^(4)(θ_W) are about 4. For θ_W = (0.2 ∼ 0.5)π, one finds m_H = (125 ∼ 286) GeV! (The experimental bound is m_H > 116 GeV. [10]) It is remarkable that dynamical gauge-Higgs unification predicts the mass of the Higgs particle exactly in the range that experiments at the LHC will explore. The quartic coupling constant λ is predicted to be ∼ 0.09, though there is ambiguity in the value of f^(4)(θ_W).
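As a rough order-of-magnitude check, the quoted ingredients α_W ∼ 0.03, f^(2)(θ_W) ≈ 4, and the enhancement factor ½πkR ≈ 19 can be combined with m_W. The combination below is an assumed reading of (14), not the exact θ_W-dependent formula, so only the overall scale should be trusted.

```python
import math

m_W = 80.4                   # GeV
alpha_W = 0.03               # weak fine structure constant
f2 = 4.0                     # typical size of f^(2)(theta_W) quoted in the text
enh = 0.5 * math.pi * 12.0   # enhancement factor (1/2)*pi*kR at kR = 12

# Assumed combination of the stated ingredients (not the exact formula (14)):
m_H_est = math.sqrt(alpha_W * f2 * enh) * m_W
print(f"m_H ~ {m_H_est:.0f} GeV")
```

The result, about 120 GeV, lands at the lower edge of the quoted 125-286 GeV range, confirming that the ½πkR enhancement lifts the Higgs mass from the flat-space O(10 GeV) into the electroweak range.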
We summarize the predictions for M_KK, m_H, and λ in Table 1.
Quarks and leptons
Another magic of the RS warped space appears in the fermion sector. [11] Each multiplet of fermions enters as a 5-d Dirac fermion in a triplet representation of SU(3). For instance, a lepton multiplet ψ in the first generation is given by (16). The components ν_L, e_L, and e_R have zero modes, which appear as the 4-d ν_L, e_L, and e_R. On the other hand, ν̃_R, ẽ_R, and ẽ_L have no zero modes, so they drop from the particle spectrum in four dimensions at low energies. The Lagrangian density for a fermion multiplet in the RS space is given by (17). The covariant derivative D_M is dictated by general coordinate invariance and gauge invariance. The mass term c k ψ̄ψ, whose sign flips across the orbifold fixed points, is called the bulk kink mass term. [12] Here c is a dimensionless parameter. Although c is called a bulk mass parameter, quarks and leptons remain massless even with c ≠ 0 unless the electroweak symmetry breaks down. Their wave functions in the fifth dimension, however, depend on the value of c. When the electroweak symmetry breaks down with nonvanishing θ_W, quarks and leptons acquire finite masses given by (18), where z_1 = e^(πkR). A fermion mass is determined by c, and vice versa (see Fig. 2). The value c = ±1/2 corresponds to m_f = m_W. Except for the top quark, all fermions have |c| > 1/2. As shown in Table 2, the top quark mass corresponds to c = 0.43, whereas the electron mass corresponds to c = 0.87. Although m_t/m_e ∼ 10^5, no hierarchical structure appears in c space. This gives a good hint for understanding the hierarchy in the quark-lepton mass spectrum.

Table 2. The bulk kink mass parameter c for each quark or lepton, following from (18)
Outlook
The results obtained from dynamical gauge-Higgs unification in the Randall-Sundrum warped spacetime are surprising. The mass of the Higgs particle is predicted to lie in the range 125 GeV to 286 GeV. We have determined the fermion wave functions in terms of their masses, with which the couplings of quarks and leptons to the KK excited states of the W bosons, etc., can be determined. In the LHC experiments we may be able to see traces of the extra dimension, directly or indirectly.
Acknowledgement
This work was financially supported by the Japanese Ministry of Education and the 21st Century COE Program at Osaka University, "Towards a New Basic Science: Depth and Synthesis".
"year": 2005,
"sha1": "3ea60a2568ab3bcbe3f1d8a3f5570384ea3e55f4",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "5cb9564b27a8763de8ad376d02d6760e3a9a97ad",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Physics"
]
} |
Fresh-Cut Eruca Sativa Treated with Plasma Activated Water (PAW): Evaluation of Antioxidant Capacity, Polyphenolic Profile and Redox Status in Caco2 Cells
Plasma Activated Water (PAW) has recently emerged as a promising non-chemical and non-thermal technology for the microbial decontamination of food. However, its use as a replacement for conventional disinfection solutions needs further investigation, as the impact of reactive species generated by PAW on nutritional food quality, toxicology, and safety is still unclear. The purpose of this study is to investigate how treatment with PAW affects the health-promoting properties of fresh-cut rocket salad (Eruca sativa). Therefore, the polyphenolic profile and antioxidant activity were evaluated by a combination of UHPLC-MS/MS and in vitro assays. Moreover, the effects of polyphenolic extracts on cell viability and oxidative status in Caco2 cells were assessed. PAW caused a slight reduction in the radical scavenging activity of the amphiphilic fraction over time but produced a positive effect on the total phenolic content, of about 70% in PAW-20, and an increase in the relative percentage (about 44–50%) of glucosinolate. Interestingly, the PAW polyphenol extract did not cause any cytotoxic effect and caused a lower imbalance in the redox status compared to an untreated sample. The obtained results support the use of PAW technology for fresh-cut vegetables to preserve their nutritional properties.
Introduction
Rocket, also known as arugula, refers to a group of plant species distinguished by their pungent-tasting leaves. Eruca sativa L. is the species most commonly used for human consumption (rocket salad). The main phytochemicals found in the different parts of rocket tissue that contribute to its antioxidant properties are phenolic compounds and flavonoids. Glucosinolates, which are sulfur-containing plant secondary metabolites, are responsible for the bitter taste of rocket salad and have shown antibacterial, anticarcinogenic, and antioxidant properties [1].
Many phenolic compounds are antioxidants that may aid in the prevention of human diseases (e.g., cancer and heart diseases), and they also possess immunomodulatory activity. The health-promoting effects of a vegetable-rich diet have partly been attributed to an increased intake of phenolic compounds with a high antioxidant capacity [2,3].
The processing of fresh leafy vegetables (e.g., salad) for the preparation of fresh-cut products has different effects on the antioxidant properties of the tissue, depending on the species used. On an industrial scale, it is common practice to sanitize fresh-cut vegetables with a chemical disinfectant (usually sodium hypochlorite) to remove pathogenic and spoilage microorganisms. However, concerns about human health and environmental pollution have led to the search for alternatives to chemical treatments that preserve the nutrient density of the materials [4].
As the demand for fresher, safer, and nutritionally dense foods has increased, nonthermal treatment technologies, such as hydrostatic pressure, pulsed electric fields, ultrasound, and plasma, have been considered [5]. Plasma is a partially ionized gas composed of electrons, ions, uncharged neutral particles (e.g., atoms, molecules, radicals), and ultraviolet photons [6].
Plasma Activated Water (PAW) is widely regarded as a promising method for the microbial disinfection of food [6]. To produce PAW, water is subjected to a cold plasma discharge above or below the water's surface. The reactive species generated by the plasma interact with the water molecules to initiate a variety of chemical reactions, resulting in a unique mixture of biochemically reactive chemicals. In the absence of other chemicals, a distinctive transfer of energy and chemical reactivity occurs from the gaseous plasma to the water, leading to a product characterized by a remarkable, transient, broad-spectrum biological activity [7,8]. Therefore, PAW is a sustainable potential strategy for a variety of biotechnological applications, including water purification and biomedicine.
Studies have been conducted to evaluate the effectiveness of cold plasmas or PAW technology in inactivating microorganisms [9][10][11][12]. PAW has been shown to be effective in inactivating both natural microbiota and intentionally contaminated pathogens; however, in addition to the microbial quality, the effects of the treatment on other parameters need to be carefully evaluated.
The bioactive components of fresh fruits and vegetables have a significant impact on human health, mainly because they possess antioxidant abilities. As a result, for product quality, the preservation of these components is critical.
After washing with PAW, a significant increase in antioxidant activity was observed in fresh-cut apples [13], pears [14], and mushrooms [15]. Previous research has shown that exposure to reactive oxygen and nitrogen species (RONS) generated by cold atmospheric plasma treatment (CAP) can lead to the oxidation of some phenolic compounds in leafy vegetables [16]. An increase in some specific phenolic compounds after PAW treatment has been observed in different products such as blueberries [17], apples [18], and mung bean sprouts [19]. The authors suggested that this effect was due to a physiological response of the tissue to the stress caused by the reactive species. However, an increase in exposure time triggered oxidative reactions and a progressive reduction in the phenolic content and antioxidant activity.
In a recent study [16], we investigated the effects of PAW, generated by a high-power atmospheric pressure corona discharge plasma source, on the microbial flora of arugula. We found that PAW was able to decontaminate this product while causing only minor changes in the quality parameters. PAW treatments were found to be more effective against the targeted background microbiota compared to hypochlorite, selected as the reference sanitizer due to its widespread use in the food industry. Specifically, shorter immersion times were required to significantly reduce the populations of Enterobacteriaceae and psychrotrophic bacteria, and all the groups of spoilage microorganisms were inactivated after 2 min of arugula dipping in PAW.
Considering these previous results, the aim of the present study was to investigate the effect of the same PAW treatments on some of the antioxidant properties of rocket salad. Specifically, in rocket salad washed in PAW, we explored (i) the antioxidant activity measured with an in vitro multimodal approach; (ii) the quali-quantitative content of polyphenolic compounds using the standard UHPLC-MS/MS technique; and (iii) the role exerted by polyphenolic extracts on cell viability and oxidative status in Caco2 cells by comparison to the untreated sample.
Polyphenols Extract Preparation for UHPLC-MS/MS Analysis and Cell Line Experiments
We chose to focus our analysis on the PAW-20 extract because the antioxidant activity assays showed that this washing time increased the TPC of the amphiphilic fraction. Moreover, previous data showed that this treatment time significantly reduced the microbial load of E. sativa and significantly increased the total flavonoid content of its extract [16].
Three grams of treated (PAW) or untreated (UT) freeze-dried rocket leaves powder were mixed with 20 mL of 60% methanol, and the suspension was vortexed vigorously for 2 min. The sample was centrifuged at 10,000 × g for 10 min at 10 °C; the supernatant was collected, and the pellet was extracted a second time. The supernatants of the two extractions were combined, and the solvent was removed using a rotary evaporator (mod. Laborota 4001, Heidolph Instruments, Schwabach, Germany) at 35 °C. A cell-culture medium containing 0.5% DMSO (pH 7.1) was used to dissolve the dry residue, which was stored at −80 °C (stock solution containing 500 mg of freeze-dried rocket leaves powder/mL) for further analysis. Two independent extractions were performed for each sample.
UHPLC-ESI-MS/MS Analysis
An ultra-high-performance liquid chromatography (UHPLC) system combined with a negative electrospray ionization (H-ESI II) triple-quadrupole mass spectrometer (Thermo Scientific TSQ Vantage, Waltham, MA, USA) was employed for the quali-quantitative determination of phenolic compounds in the rocket salad extracts. For these experiments, a SUNSHELL C18 column (2.1 mm i.d. × 100 mm, 2.6 µm particle size; Chromanik, Osaka, Japan) was used.
The sample (500 mg of lyophilized rocket leaf powder/mL) was diluted with acidified water (0.2% formic acid) to a final concentration of 3 mg/mL. The mobile phase (flow rate 0.35 mL/min) consisted of water + 0.2% formic acid (eluent A) and acetonitrile + 0.2% formic acid (eluent B). For gradient elution, a 9-min linear gradient from 2 to 20% acetonitrile in 0.2% aqueous formic acid was used. The capillary temperature was set at 270 °C; the sheath and auxiliary gases were 40 and 5 arbitrary units, respectively; and the source voltage was 3 kV. For the MS/MS analysis, the vaporizer temperature was 200 °C, and argon was used as the collision gas at a collision pressure of 1.0.
For compound identification, a full-scan analysis over the range m/z 100 to 1500 was employed, while a product ion scan experiment was performed for the ions not fully identified by the full-scan method. The mass spectra were then compared with literature data [21] and MS spectral databases [22,23]. Flavonol glycosides were quantified by the calibration curve method, using rutin (external standard) calibration solutions at five concentration levels over the range 0.1-10 µg/mL.
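The external-standard quantification described above amounts to a linear fit of detector response against the rutin concentrations, followed by inversion of that fit for the samples. A minimal sketch follows; the five rutin levels match the stated 0.1-10 µg/mL range, but the peak areas are made-up illustrative numbers, not data from the paper.

```python
import numpy as np

# Rutin calibration levels (µg/mL) spanning the stated 0.1-10 µg/mL range
conc = np.array([0.1, 0.5, 1.0, 5.0, 10.0])
# Hypothetical integrated peak areas for the calibration standards
area = np.array([1.2e4, 6.1e4, 1.19e5, 6.05e5, 1.21e6])

# Least-squares calibration line: area = slope * conc + intercept
slope, intercept = np.polyfit(conc, area, 1)

def quantify(sample_area):
    """Convert a sample peak area to rutin-equivalent concentration (µg/mL)."""
    return (sample_area - intercept) / slope

print(f"slope = {slope:.3e}, intercept = {intercept:.3e}")
print(f"sample at area 3.0e5 -> {quantify(3.0e5):.2f} ug/mL (rutin equivalents)")
```

Because all flavonol glycosides are read off the rutin line, the results are "rutin equivalents" rather than absolute concentrations of each glycoside, which is the usual caveat of external-standard quantification.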
Cell Culture and Treatments
Caco2 cells were purchased from ATCC and grown in a 1:1 mixture of Ham's F12:DMEM medium, supplemented with 10% fetal bovine serum (Lonza, Basel, Switzerland), 2 mM L-glutamine, 100 U/mL penicillin, and 100 µg/mL streptomycin, at 37 °C under a 5% CO2 atmosphere. A trypsin/EDTA (Sigma-Aldrich, Steinheim, Germany) treatment was used for cell harvesting. For the reactive oxygen species (ROS) and nitric oxide (NO) determinations, the Caco2 cells were grown in a 1:1 mixture of Ham's F12:DMEM medium without phenol red (Sigma-Aldrich, Steinheim, Germany). The polyphenol extracts from the PAW, CL, and UT rocket leaves were diluted in complete cell medium to the final concentration required for each experiment (0.1% maximum concentration of DMSO). Concentrations refer to the extract stock solution containing 500 mg of freeze-dried rocket leaves powder/mL. Medium containing 0.1% DMSO was used for the control cells.
Assessment of Cell Viability
Caco2 cells were seeded at a density of 4 × 10^4 cells/well in a white, clear-bottomed 96-well microplate and allowed to attach overnight. Increasing concentrations of polyphenol extracts (corresponding to 0.01-100 mg of freeze-dried rocket leaf powder/mL) from rocket leaves exposed to PAW washing for 20 min (PAW-20) or from the UT sample were used to treat the cells. After 5 h of incubation, cell viability was determined with the CellTiter-Glo® Luminescent Cell Viability Assay (Promega, Madison, WI, USA), in accordance with the manufacturer's protocol. The luminescence intensity was assessed with an EnSpire® multimode plate reader (PerkinElmer, Waltham, MA, USA). The samples were derived from two independent extraction procedures. In each experiment, each sample was analyzed in quadruplicate, and the data are reported as the mean ± standard deviation (SD) of the two independent experiments.
Assessment of Reactive Oxygen Species (ROS)
Caco2 cells were seeded at a density of 4 × 10^4 cells/well in a black, clear-bottomed 96-well microplate and allowed to adhere overnight. ROS production was assessed using the DCFDA Cellular ROS Detection Assay Kit (Abcam, Cambridge, UK), following the manufacturer's protocol. Intracellular esterases deacetylate the 2′,7′-dichlorofluorescein diacetate (DCFDA) into a non-fluorescent compound, which is subsequently oxidized into the fluorogenic dye 2′,7′-dichlorofluorescein (DCF) by reactive oxygen species. Briefly, the cells were loaded with 20 µM DCFDA for 45 min at 37 °C and washed twice with PBS. The Caco2 cells were then treated with increasing concentrations of polyphenol extracts from the PAW-20 or UT samples for 5 h. The fluorescence intensity of the DCF (excitation 485 nm; emission 535 nm) was assessed with the EnSpire multimode plate reader (PerkinElmer, Waltham, MA, USA). tert-Butyl hydroperoxide (TBHP), at a concentration of 100 or 150 µM, was used as the positive control. The data are reported as a percentage of the control after subtraction of the background (blank wells with no cells and with the compounds at the same concentrations used for treatment), followed by normalization to the total protein content quantified by the Bio-Rad DC Protein assay (Bio-Rad Laboratories, Hercules, CA, USA). The samples were derived from two independent extraction procedures. In each experiment, each sample was analyzed in quadruplicate, and the data are reported as the mean ± standard deviation (SD) of the two independent experiments.
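The normalization described above (background subtraction, expression as a percentage of control, then scaling by protein content) can be sketched as follows. The fluorescence and protein values are hypothetical, and the function name is illustrative, not from the kit's documentation.

```python
import numpy as np

def normalize_ros(sample_rfu, control_rfu, blank_rfu, sample_protein, control_protein):
    """DCF fluorescence -> percent of control, background-subtracted, per protein.

    sample_rfu / control_rfu : raw DCF readings (arbitrary fluorescence units)
    blank_rfu                : wells with no cells (background)
    *_protein                : total protein (µg) from the DC protein assay
    """
    sample = (sample_rfu - blank_rfu) / sample_protein
    control = (control_rfu - blank_rfu) / control_protein
    return 100.0 * sample / control

# Hypothetical quadruplicate readings for one treatment concentration
sample = np.array([5200.0, 5100.0, 5350.0, 5250.0])
percent = normalize_ros(sample, control_rfu=4800.0, blank_rfu=300.0,
                        sample_protein=21.0, control_protein=20.0)
print(percent.mean(), percent.std(ddof=1))   # mean ± SD across replicates
```

Dividing by protein before taking the ratio guards against apparent ROS changes that merely reflect differences in cell number between wells.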
Assessment of NO
Caco2 cells were seeded at a density of 6 × 10^5 cells/well in 6-well plates and allowed to adhere overnight. Increasing concentrations of polyphenol extracts from the PAW-20 or UT rocket leaves were used to treat the cells. After 5 h of incubation, the total intracellular nitrite/nitrate concentration was assessed using the Nitric Oxide Assay Kit (Abcam, Cambridge, UK), in agreement with the manufacturer's protocol. Because NO is rapidly converted to nitrite and nitrate, their total concentration is used as a measure of NO production. The experiments were performed using 30 µL of cell lysate, and the samples were incubated for 4 h with the enzyme nitrate reductase to allow the conversion of nitrate to nitrite. The fluorescence intensity of the DAN probe (excitation 360 nm; emission 450 nm) was assessed with the EnSpire multimode plate reader (PerkinElmer, Waltham, MA, USA). The samples were derived from two independent extraction procedures. In each experiment, each sample was analyzed in quadruplicate, and the data are reported as the mean ± standard deviation (SD) of the two independent experiments.
Statistical Analysis
SPSS statistical software (version 21.0, SPSS, Inc., Chicago, IL, USA) was used to perform the statistical analyses. The data from the in vitro experiments were analyzed by one-way Analysis of Variance (ANOVA) to evaluate the effect of the treatments on the measured variables, and the two-tailed Student's t-test and/or Tukey's HSD post hoc test were carried out to compare the groups of interest. The data from the cell line experiments were analyzed by pairwise multiple comparisons for one-way ANOVA, followed by Tukey's HSD post hoc test to detect differences between the groups of interest (treatment vs. control and PAW-20 vs. UT at different concentrations). Statistical significance was set at p ≤ 0.05.
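The analysis pipeline above (one-way ANOVA followed by pairwise comparisons) can be reproduced in a few lines. SciPy is used here as a stand-in for SPSS, the triplicate values are made-up illustrative data, and the Tukey HSD step is replaced by the two-tailed Student's t-test (which the paper also uses) for brevity.

```python
from scipy import stats

# Hypothetical antioxidant readings (µmol TE per g DM) for three treatment groups
ut  = [46.0, 44.5, 47.2]     # untreated control
cl  = [45.1, 46.3, 44.8]     # hypochlorite wash
paw = [64.0, 66.5, 65.2]     # PAW, 2-min dip

# One-way ANOVA across the three treatment groups
f_stat, p_anova = stats.f_oneway(ut, cl, paw)
print(f"ANOVA: F = {f_stat:.1f}, p = {p_anova:.2g}")

# Two-tailed Student's t-test for one pair of interest
t_stat, p_pair = stats.ttest_ind(ut, paw)
print(f"UT vs PAW: t = {t_stat:.1f}, p = {p_pair:.2g}")

# Significance threshold used in the paper
print("significant" if p_pair <= 0.05 else "not significant")
```

The ANOVA asks whether any group differs; the pairwise test (or, in the paper, Tukey's HSD, which controls the family-wise error rate across all pairs) then localizes which comparisons drive the difference.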
Antioxidant Activity of Rocket Salad upon Exposure to PAW
Phenolic compounds are among the most important phytochemicals with antioxidant activity, owing to a chemical structure that confers redox properties and radical scavenging activity [24]. Therefore, in this study we first investigated the antioxidant activity of rocket salad upon exposure to PAW for different times (2, 5, 10, and 20 min) or to washing with a hypochlorite solution (CL), the latter being a common reference method in the food industry. We also analyzed untreated rocket salad (UT) as a control sample. Antioxidant activity was investigated by a multimodal in vitro approach, according to our previous studies [20,25], evaluating both the radical scavenging activity (RSA), i.e., the DPPH and ABTS assays, and the reducing power, i.e., the total phenolic content (TPC) and FRAP. The results are reported in Table 1. The ABTS assay (expressed as Trolox equivalents, TE) was performed on the hydrophilic and amphiphilic extracts, both of which showed comparable RSA for the UT and CL samples, whereas significant differences appeared, especially between the PAW washing times. Specifically, the RSA of the hydrophilic fraction was significantly higher (p ≤ 0.05) after the 10- and 20-min immersion of the arugula in PAW than after the 2-min immersion, although the values remained about 30-40% lower than those of the controls (UT and CL, respectively). In contrast to the TPC results (see below), increasing the treatment time seemed to decrease the RSA of the amphiphilic fraction. In fact, the RSA showed the highest value after dipping the rocket salad for 2 min in PAW (65 ± 3 µmol TE g−1 DM), a mean increase of 40% compared to the controls (UT and CL), while it significantly decreased by 40% after 20 min compared to the 2-min treatment (p ≤ 0.05).
These results on the RSA of the amphiphilic extracts were also observed for the DPPH assay, which evidenced a significant increase (p ≤ 0.05) in RSA after dipping the rocket salad in PAW for 2 min compared to the other treatment times (especially at 20 min) and the controls (both CL and UT).
Then, the evaluation of TPC in the rocket salad was performed by measuring the ability of both the hydrophilic and amphiphilic fractions to reduce the Folin-Ciocalteu reagent. The results showed that the shorter PAW washing time (2 min) significantly decreased the TPC in the hydrophilic extract (p ≤ 0.05), while the extension of the exposure time (from 5 to 20 min) did not significantly affect the TPC compared to the controls (both UT and CL). Regarding the amphiphilic fraction, which generally showed a higher reducing power than the hydrophilic fraction, the highest TPC value was observed for the CL sample. The PAW resulted in a significant increase in TPC value of about 70% after washing for 20 min compared to the UT sample. A previous study reported that PAW washing did not result in significant differences in the TPC compared to the untreated sample, except after 5 min of treatment, which induced a slight reduction [16]. It is noteworthy that the authors evaluated the TPC on the total ethanol/formic acid extracts, whereas, in this study, we analyzed both the hydrophilic and amphiphilic fractions separately. The TPC and FRAP assays performed with the amphiphilic fraction were in agreement, indicating a positive relationship between the reducing power and the exposure time to the PAW.
Overall, antioxidant activity in terms of RSA was positively affected by dipping the samples for a short time in PAW, while increasing of the exposure time (20 min) seemed to increase the reducing power, due to an increase in TPC compared to the UT sample. This effect has also been previously observed by other authors and several explanations have been put forward. On the one hand, an increase in polyphenols can be attributed to the activation of key enzymes involved in the phenolic pathway after a PAW long-time exposure, as reported for fresh-cut rocket as a response to processing stress (e.g., cutting) [26]. On the other hand, the cell wall modifications caused by ozone treatment are believed to be responsible for the release of the conjugated phenolic compounds in the cell wall of fruits such as bananas and pineapples [27]; therefore, this might also happen in the case of PAW since ozone is one of its reactive species. It is generally believed, however, that the changes are related not only to the total amount but also to the type of phenolic compounds [28].
Qualitative and Quantitative Analysis of E. sativa Extracts
To increase the knowledge about PAW technology in the food matrix, we evaluated its effect on the qualitative and quantitative polyphenolic profile in the rocket salad samples. To this aim, a UHPLC-MS/MS analysis was performed on the methanolic extracts obtained after exposing the rocket salad to PAW for 20 min (PAW-20) or the UT sample.
The base peak chromatograms acquired in the full-scan mode of the analyzed samples are shown in Figure 1, and the MS data for the identified compounds are listed in Table 2. The chromatographic peak detection and sample data analysis are reported in more detail in Supplementary File S1. Table 2. Chromatographic peak detection for [M-H] − at m/z 436 (peak 1), 420 (peak 2), 787 (peak 8), and 993 (peak 11) are reported in Supplementary File S1. Most of the compounds detected in both polyphenol extracts correspond to glycosylated flavonols, especially kaempferol, isorhamnetin, and quercetin, in agreement with the previous literature data [21]. As shown in Figure 1 The results, reported in Table 3, showed similar profiles in terms of concentrations (µmol/L) of the total flavonol glycosides in both the analyzed samples, and differences were found in only two compounds, namely isorhamnetin-O-hexoside and quercetin-Odihexoside, which were significantly higher in the UT sample than in the PAW-20 extract (p ≤ 0.05). In addition, in these analytical conditions, glucosinolates were detected, such as glucoraphanin (1) and glucoerucin (2), which are mainly found in cruciferous vegetables as secondary metabolites and are responsible for the pungent aroma of rocket salad [30]. For comparison, their relative abundances were obtained as a signal ratio to the total chromatographic area, and it was found that both [M-H] − glucoraphanin (1) and [M-H] − glucoerucin (2) were significantly higher (p ≤ 0.05) in the PAW-20 (26 ± 2 and 24 ± 3%, Nutrients 2022, 14, 5337 9 of 14 respectively) than the UT sample (17.9 ± 0.8 and 15 ± 1%, respectively), corresponding to increases of about 44 and 50%, respectively. Interestingly, glucosinolates are the precursors of bioactive compounds, such as sulforaphene and erucin, which are extensively studied for their health-promoting effects [31][32][33]. 
Changes in the hydrolysis products of glucosinolates, such as isothiocyanates and nitriles, have previously been observed in rocket as a consequence of PAW treatment [34]. These compounds are obtained by the enzymatic hydrolysis of the respective glucosinolate. The authors observed a modification of the relative amounts of the hydrolysis products, which, however, was not time dependent.
Table 2. MS data for the identified compounds: peak; compound; RT (min); [M-H]− (m/z); MS/MS ions (m/z); references.
The increase in glucosinolate compounds could be related to the physiological response of the tissue to abiotic stress represented by washing in PAW. Other authors [35] observed an increase in the endogenous production of these metabolites in response to environmental stress. Indeed, abiotic stress can induce specific responses at a cellular level, which have the aim of counteracting the stressful conditions. Although the mechanisms have not been fully elucidated, often these responses can involve the de novo synthesis of secondary metabolites such as glucosinolates, and we can therefore speculate that a similar effect occurred in the present research. Moreover, for a better understanding of the health properties of the treated products, it would be interesting to investigate how PAW affects the further enzymatic hydrolysis of glucosinolates into their corresponding products.
In summary, the UHPLC-MS/MS analysis showed only minor differences between the polyphenolic profiles of the PAW-20 and UT samples, while washing in PAW significantly affected the spectrophotometric determination of TPC. This discrepancy could arise because the TPC assay depends on an oxidation/reduction reaction. Moreover, the two determinations were performed on E. sativa extracts obtained according to two different extraction protocols, as specifically reported in the Materials and Methods section. Of note, the extracts characterized by the UHPLC-MS/MS analysis were those used for the assays performed in the Caco2 cell line.
Effect of PAW-Rocket Salad Extract on Cell Viability
We have previously reported that polyphenol extract from apples exposed to atmospheric dielectric barrier discharge (DBD) plasma technology did not affect cell viability [25]. To evaluate whether washing in PAW induces the generation of compounds that may be dangerous to human cells, we first assessed whether the PAW-20 extract affects cell viability using the CellTiter-Glo® Luminescent Cell Viability Assay (Promega, Madison, WI, USA). Caco2 cells were treated for 5 h with different concentrations of the PAW-20 or UT sample, ranging from 0.01 to 100 mg of freeze-dried rocket leaf powder/mL. Based on the UHPLC-MS/MS analysis, these concentrations correspond to a total polyphenol content of about 0.02-200 µM. The PAW-20 extract induced a significant proliferative effect compared with the control cells when used at a concentration of 50 mg/mL (100 µM), whereas, at a concentration of 100 mg/mL (200 µM), it caused a slight decrease in cell viability. In contrast, the UT extract at the highest concentration tested had a significant cytotoxic effect compared with the control. Comparing the two, the UT extract was therefore more cytotoxic than the PAW-20 extract (see Figure 2).
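The dose-to-concentration correspondence quoted above is linear; as a minimal sketch (the 2 µM-per-mg/mL factor below is an assumption inferred from the stated 100 mg/mL ≈ 200 µM endpoint, not given explicitly in the text):

```python
# Extract doses are expressed as mg of freeze-dried rocket powder per mL,
# and the text maps 0.01-100 mg/mL onto ~0.02-200 uM total polyphenols.
# The linear factor (2 uM per mg/mL) is inferred from those endpoints.
UM_PER_MG_PER_ML = 200.0 / 100.0  # 200 uM at 100 mg/mL -> 2.0

def powder_dose_to_polyphenol_um(dose_mg_per_ml: float) -> float:
    """Convert a powder dose (mg/mL) to approximate total polyphenols (uM)."""
    return dose_mg_per_ml * UM_PER_MG_PER_ML

# The doses quoted in the text:
for dose in (0.01, 5, 50, 100):
    print(f"{dose} mg/mL -> ~{powder_dose_to_polyphenol_um(dose):g} uM")
```

The same factor reproduces the other doses mentioned in the text (5 mg/mL ≈ 10 µM, 50 mg/mL ≈ 100 µM).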
This opposite effect on cell proliferation and cytotoxicity of extracts, derived from both Diplotaxis tenuifolia (L.) DC. and E. sativa, has been reported in the literature, albeit the assays were performed under different experimental conditions [36][37][38]. Interestingly, concentration-dependent activities were also reported in the literature for individual compounds such as erucin and sulforaphene (derived from the reaction of glucosinolates with the enzyme myrosinase) [39][40][41]. Furthermore, we cannot rule out a synergistic effect of the various bioactive compounds detected in the extracts.
Figure 2. Effect of PAW-rocket salad extract on Caco2 cell viability. Caco2 cells were treated for 5 h with increasing concentrations of polyphenols (0.01-100 mg/mL) extracted from PAW-20 (white bar) or UT (gray bar) rocket salad. Cell viability was determined by the CellTiter-Glo® Luminescent Cell Viability Assay (Promega). Control represents Caco2 cells incubated with culture medium containing 0.1% DMSO. Data are presented as the mean ± SD of the relative percentage of the control sample (set at 100%) obtained from two independent experiments, each carried out in quadruplicate. Statistical significance was calculated by pairwise multiple comparisons for one-way ANOVA followed by Tukey's HSD post hoc test. **: significant difference versus control, p < 0.001; different letters, when reported, mean significant difference between groups (PAW-20 versus UT at different concentrations), p < 0.05.
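The omnibus test named in the caption can be sketched in a few lines; a minimal pure-Python illustration of the one-way ANOVA F statistic (the Tukey HSD post hoc step is omitted, and the viability readings below are invented for illustration, not taken from the study):

```python
def one_way_anova_f(*groups):
    """Return the one-way ANOVA F statistic and its degrees of freedom.

    Sketch of the omnibus test named in the figure caption; the
    Tukey HSD post hoc step (pairwise comparisons) is not shown.
    """
    k = len(groups)                      # number of groups
    n = sum(len(g) for g in groups)      # total observations
    grand = sum(x for g in groups for x in g) / n
    # Between-group and within-group sums of squares
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_between, df_within = k - 1, n - k
    f = (ss_between / df_between) / (ss_within / df_within)
    return f, df_between, df_within

# Illustrative (made-up) viability readings for a control and two doses:
control = [100, 98, 102, 101]
low     = [104, 107, 103, 106]
high    = [ 90,  88,  92,  91]
f, dfb, dfw = one_way_anova_f(control, low, high)
print(f"F({dfb},{dfw}) = {f:.2f}")
```

In practice a statistics package would be used (e.g., `scipy.stats.f_oneway` plus a Tukey HSD routine); this sketch only shows the between/within variance decomposition behind the F value.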
Effect of PAW-Rocket Salad Extract on Cellular Redox Homeostasis
The results on cell viability prompted us to investigate the effect of the extracts on cellular redox homeostasis. In a healthy state, both reactive oxygen species (ROS) and reactive nitrogen species (RNS) are generated in a well-regulated manner, controlling cellular functions by modulating signaling pathways and playing an important role as second messengers. Oxidative stress is characterized by an imbalance between increased levels of ROS/RNS and low activity of cellular radical scavenging mechanisms. Oxidative stress contributes to aging and plays a role in the development of different human diseases (for example diabetes, cancer, and Alzheimer's disease) [42,43].
The production of ROS was evaluated using the DCFDA Cellular ROS Detection Assay Kit (Abcam). Caco2 cells were incubated for 5 h in the presence of the extracts at the concentrations tested for the cell viability assays. We observed a significant increase in intracellular ROS production in a concentration-dependent manner for both the PAW-20 and UT extracts compared to the control cells (Figure 3a). However, ROS generation was significantly lower when the cells were treated with the PAW-20 rather than the UT extract, starting at a concentration of 5 mg/mL (10 µM) (Figure 3a). Interestingly, this ROS production only partially affected cell viability. These results are in good agreement with our previous study, in which we reported that intracellular ROS production was lower in Caco2 cells receiving polyphenol extract derived from apples exposed to DBD plasma technology than in those receiving extract from untreated apples [25]. We then determined the modulation of intracellular NO levels following treatment of the cells under the same experimental conditions reported above. NO was assessed by measuring its stable intracellular oxidation products, nitrite and nitrate, using the Nitric Oxide Assay Kit (Abcam). Besides its role in triggering redox imbalance, NO is a ubiquitous mediator of many different biological processes, such as vasodilation, neurotransmission, and immune response [44]. Moreover, NO modulates intestinal epithelial cell tight junctions and plays a role in gastrointestinal motility under both physiological and pathological conditions [45]. As with ROS production, we demonstrated a concentration-dependent increase in NO generation for both the PAW-20 and UT samples compared to the control cells. Moreover, intracellular NO production was, in every case, lower in the Caco2 cells loaded with the PAW-20 than with the UT extract (Figure 3b).

Figure 3. Data are presented as the mean ± SD of the relative percentage of the control sample (set at 100%) obtained from two independent experiments, each carried out in quadruplicate. Statistical significance was calculated by pairwise multiple comparisons for one-way ANOVA followed by Tukey's HSD post hoc test. **: significant difference versus control, p < 0.01; different letters, when reported, mean significant difference between groups (PAW-20 versus UT at different concentrations), p < 0.05.
The modulation of oxidative stress by E. sativa extracts has been poorly investigated, and the reported results sometimes appear conflicting. Treatment of human peripheral blood mononuclear cells with an E. sativa extract or glucosinolate fraction did not induce a significant modulation of ROS production, while it was able to reduce the cytotoxicity and ROS production induced by H2O2 treatment [46]. Other authors also reported an antigenotoxic effect of both a glucosinolate-rich extract of E. sativa cv. Sky against H2O2 [47] and an E. sativa extract against benzo[a]pyrene-induced DNA damage [48]. However, it is documented that bioactive compounds derived from different species of the Brassicaceae family can induce oxidative stress in cancer cells [49]. Indeed, whether ROS/RNS are 'bad' or 'good' in this context needs to be further elucidated.
According to the obtained results, PAW technology led to an immediate slight increase in the RSA of the amphiphilic fraction compared to the UT and CL samples, the latter being used as an industrial process reference. On the other hand, PAW yielded higher reducing power values with increasing treatment time. It is worth mentioning that our results revealed a significantly greater relative abundance (p ≤ 0.05) of glucosinolates in the PAW-20 sample compared to the untreated one. Since the literature suggests that these unique phytochemicals, and their related isothiocyanates, make an important contribution to human health, further studies are needed to clarify the effects of PAW technology on the key enzymes involved in the glucosinolate pathway. Furthermore, given the key role of oxidative and nitrosative stress in the development and progression of various human diseases, these preliminary results should be pursued.
Conclusions
The results obtained in this study indicate that immersion of rocket in PAW reduced the levels of only a few phenolic compounds compared to the untreated sample (UT), while PAW technology seemed to positively affect the relative percentage of glucosinolates. The opposite effects on cell viability observed for the PAW-20 and UT extracts could be explained by these differences in the polyphenol profile. The data obtained in human cultured colonocytes showed that the polyphenol extract from PAW-exposed rocket leaves did not induce a significant change in cell viability, in contrast to the extract obtained from the UT sample, which induced a cytotoxic effect at the highest concentration tested. On the other hand, both extracts induced an imbalance in the Caco2 cell redox status, albeit the PAW extract exhibited a lower effect.
In conclusion, the in vitro and cell-based results provide new insight into the effects of PAW technology on food matrices and its potential application as a novel and safe strategy in the food industry.
Implementing Chief Resident Immersion Training (CRIT) in the Care of Older Adults: Overcoming Barriers and Promoting Facilitators
The Chief Resident Immersion Training (CRIT) in the Care of Older Adults curriculum was developed at Boston University School of Medicine to improve the care of older adults through an educational intervention. The curriculum targeted chief residents (CRs) because their role as mediators between learners and faculty provides the greatest potential impact for transmitting knowledge. The goals of CRIT are to: (1) provide education on geriatric principles and on teaching/leadership skills, (2) foster interdisciplinary collaboration, and (3) complete an action project. This study demonstrates successful implementation of CRIT at a different academic institution in a rural state. The CRs indicated that their confidence in their ability to apply and teach geriatrics improved after CRIT. In addition, the CRs indicated that CRIT improved their confidence in their overall skills as CRs. The barriers and facilitators to implementation are addressed in order to promote successful adoption of CRIT at other institutions, including those in rural states.
Introduction
In studying and implementing educational innovations, the goal is to generalize effective programs so that they may be reproduced readily in other institutions, achieving similar results and widespread adoption. Our implementation of the Chief Resident Immersion Training (CRIT) in the Care of Older Adults at the University of Louisville, located in a rural state, demonstrates that institutions in other rural states can also implement CRIT successfully.
CRIT was developed by Geriatrics, General Internal Medicine, and the Department of Family Medicine at Boston University School of Medicine, after being awarded a Donald W. Reynolds Foundation grant in 2003. This funding was provided to develop the Boston University Medical Center (BMC) Comprehensive Geriatric Education project, with the intention of improving the care of older adults through education across the continuum. Chief residents were targeted because their role as mediators between medical residents/students and clinical faculty could provide the greatest potential impact for transmitting knowledge in the care of older adults [1]. To date, research has not been completed to prove the improvement of clinical outcomes after this educational intervention.
With additional funding and support, the CRIT curriculum was implemented at 33 other institutions across the country, training over 1000 chief residents from 2007 to 2013 [2]. The goals of the CRIT conference remained the same for the additional sites: (1) to provide education on geriatric principles, (2) to provide tutelage on teaching and leadership skills, (3) to foster interdisciplinary collaboration, and (4) to create and complete an action project during the participants' chief resident year [1]. Data from these 33 other institutions showed successful implementation nationwide.
Reporting of CRIT implementation beyond the BMC is limited in the medical literature [2][3][4][5][6][7]. There is only one published paper about CRIT, and it reports on incorporating interprofessional education into the curriculum. Only one of the presentations began to touch upon potential barriers and facilitators of successful implementation, presenting data on action project characteristics and completion rate [7]. The literature does not report on implementation, barriers, and facilitators in a rural state.
Implementation science is the study of translating evidence-based interventions perfected in the lab under ideal conditions into practice in real world environments, identifying explicit barriers and promoters of success [8]. Implementation science shifts the focus from collection and reporting of data to analysis and application. It can explore whether a particular intervention was successfully applied and if the barriers and the facilitators for success were addressed. Without barrier and facilitator identification, analysis, and subsequent distribution of contextual findings, similar errors in implementation will be repeated elsewhere, resulting in wasted resources. The frustration and cost, actual and opportunity, associated with suboptimal implementation may prevent incorporation of the intervention, resulting in the envisioned behavioral change never being realized. Although each context should expect to have unique barriers, a study of the barriers to and promoters of success from a different but similar institution will provide information that can be utilized in development of strategies to enhance the probability of success. This study will show that CRIT can still be implemented successfully at an institution in a rural state and will highlight the barriers and facilitators at an institution with limited resources.
This article not only presents data on CRIT implementation at the University of Louisville School of Medicine (ULSOM), but also provides a detailed discussion of barriers to and facilitators of implementation. The ULSOM differs from the BMC in many ways; therefore, other institutions seeking to implement CRIT can use this information in their own implementation. The BMC is an urban medical school with 1685 medical students and 12 geriatricians. The ULSOM is located in a rural state and has only 630 medical students and five geriatricians. Even with this smaller faculty, the ULSOM was able to implement CRIT successfully. Other institutions may benefit from our experience and attempt to address barriers and implement facilitators proactively, avoiding the costs associated with suboptimal implementation.
Methods
The ULSOM was selected to receive a two-year grant to participate in CRIT. The grant was funded by the Donald W. Reynolds Foundation. As part of the grant agreement, the ULSOM agreed to use the curriculum previously developed by a multidisciplinary team of faculty from the BMC [1].
The content for the program was developed around a case presentation of an older female who presented to the emergency department with an acute abdomen and undergoes surgery. The case evolved over three 2-h interactive modules designed to foster interdisciplinary collaboration in the management of older adults.
During both 2014 and 2015, chief residents were invited from all residency programs at the ULSOM, with the goal that this two-day program would enhance their knowledge of geriatric medicine, strengthen their leadership and teaching skills, and allow them opportunities to network with other chief residents. Residency program directors were also invited to attend, in the hope of fostering collaboration among faculty and encouraging the program director to serve as a mentor to the chief resident on his/her action project. As part of the cost-sharing agreement of the grant, each program was charged $500 (USD) for the program director, two chief residents, and their families to attend.
Participants
A total of 18 chief residents participated in 2014 and 10 chief residents in 2015. In 2014, most of the chief residents were Caucasian (94%), with the remainder being Asian Americans. In 2015, 60% of the participants were Caucasian, 10% were Asian American, 10% were African American, and 20% classified themselves as "Other" (Table 1). In 2014, the majority of attendees were family medicine (FM) residents. In 2015, the majority of attendees were FM and internal medicine residents. Emergency medicine, general surgery, podiatry, medicine/pediatrics, radiation oncology, and other residents attended in both years (Table 2). In 2014, 11% of chief residents indicated that they had attended a medical school outside of the United States. In 2015, none of the chief residents attended a medical school outside of the United States.
Procedures
During both years, CRIT occurred at a hotel sixty miles from Louisville during early June. An off-site location was selected to minimize resident distractions. The program began with a live patient/family interview regarding navigating the healthcare system when the patient became ill. This interview was followed by two modules on Saturday and one module on Sunday. Each module consisted of the case presentation, small group discussion, and two-three mini-lectures. Small groups consisted of chief residents and program directors, and were facilitated by the faculty. Mini-lectures included decision-making capacity, care of the hospitalized older adult, opioid use in older adults, delirium, functional assessment, polypharmacy, and discharge planning. Details of the case presentation are located in the Appendix A.
Between the main modules, mini-lectures were given on facilitation skills, techniques for giving feedback, working with the reluctant learner, and conflict resolution to enhance chief residents' leadership and teaching skills.
During both years, participants had time in the afternoon to spend with each other, their families and guests, and participate in a variety of recreational activities. Participants and their guests attended a reception, followed by dinner, on Saturday.
Each chief resident was expected to develop an action project that focused on management of older patients and aligned with the chief resident's interests and his/her residency program's needs. During the CRIT program, two working meetings were held where each chief resident met one-on-one with faculty to develop their action project. Near the end of the conference, chief residents shared their action project with a larger group of faculty for feedback. At the end of CRIT, chief residents turned in a copy of their action project.
Chief residents completed a comprehensive survey that was sent out a month before CRIT, completed a post-CRIT survey, and completed a survey six months after the completion of CRIT. This survey was developed by the interdisciplinary team at BMC with expert consultation from a research consulting group. BMC strictly enforced that the same surveys should be used at all participating institutions. The survey asked for demographic information, about background and interest in geriatrics, about clinical practice and teaching related to the care of older patients, about skills of the chief resident, about skills in geriatrics, about the action project, and about feedback on the CRIT program. Each chief resident signed a consent form allowing the use of their data. The University of Louisville Institutional Review Board determined that this study did not meet criteria for human subjects research.
Results
The total number of residents employed in the participating departments (programs) varied, with a mean of 21 residents/department (program). The mean number of medical students rotating through the departments (programs) per year was 73 students (with a range of 0 to 200 students). These numbers represent the impact the chief residents will have on learners in their departments (programs) after attending CRIT. Only chief residents were invited to participate in CRIT.
Participants were asked how many hours over the last two years they had participated in a variety of venues where geriatrics was the focus or topic. In 2014, the highest mean hours were spent in attending rounds (M = 18.00; SD = 36.48), while in 2015, geriatric electives accounted for the highest mean hours (M = 30.40; SD = 67.31). The chief residents stated they had approximately 80-102 h of geriatrics exposure during their residency.
In 2014, only about one-third of chief residents had a rotation in geriatrics during their medical school (28%) and over half during their residencies (56%). In 2015, 80% had a rotation in geriatrics during medical school, but only 50% had a rotation during their residency.
On a scale of 1-7, with 1 being "not at all" and 7 being "very much", the mean rating was 3.89 for the degree to which training about geriatrics is addressed in their program in 2014. In 2015, the mean rating was 2.8.
The chief residents were slightly more interested in the geriatric age group relative to other age groups: an average of 3.22 at baseline (scale: 1 = a lot less, 3 = about the same, 5 = a lot more) compared to 3.42 at the 6-month follow-up in the 2014 cohort. In 2015, the baseline data showed an average of 2.8, leaning towards less interest in geriatric issues. However, at the 6-month follow-up of the 2015 cohort, this increased to an average of 3.44, indicating slightly more interest in the geriatric age group.
Data analysis was conducted using non-parametric statistics because the sample size for both years of the program was small (2014: N = 18; 2015: N = 10) and preliminary analysis showed that the collected data were not normally distributed. Table 3 shows the reported confidence in clinical practice and teaching as it relates to the care of older patients. Participants were asked about their confidence in applying clinical problem solving, their ability to teach others clinical problem-solving skills related to the care of older adults, and their ability to incorporate geriatric issues into formal and informal teaching. They were also asked to rate the degree to which CRIT contributed to this improvement. For the 2014 cohort, the ability to apply clinical problem-solving skills to the care of older patients increased over time.
A Wilcoxon signed ranks test indicated that the median post-test (post-CRIT) ranks were statistically significantly higher than the median pre-test (pre-CRIT) ranks (z = −2.60; p = 0.014). The 2014 cohort also showed change from pre-CRIT to six months after CRIT in their perceived confidence to teach others clinical problem-solving skills related to the care of older patients. A Wilcoxon signed ranks test indicated that the median post-test ranks were statistically significantly higher than the median pre-test ranks (z = −2.89; p = 0.004). Furthermore, the 2014 cohort showed change in confidence to incorporate geriatric issues into teaching. A Wilcoxon signed ranks test indicated that the median post-test ranks were statistically significantly higher than the median pre-test ranks (z = −2.71; p = 0.005). For the 2015 cohort, perceived confidence in the ability to apply clinical problem-solving skills to the care of older patients showed a significant change from pre-CRIT to six months after CRIT (z = −2.67; p = 0.007), as did confidence in the ability to incorporate geriatrics issues into formal and informal teaching (z = −2.46; p = 0.14). After attending CRIT, the 2015 chief resident cohort taught more geriatric medicine topics than prior to attending CRIT. Six months after CRIT, 88% of the chief residents taught about caring for hospitalized older adults (compared to 30% prior to CRIT). In addition, 88% of the chief residents taught about recognizing delirium after CRIT (compared to only 20% prior to CRIT).
The 2014 chief resident cohort also improved their geriatric medicine teaching after attending CRIT. They taught topics related to the care of older adults in bedside teaching, other small group teaching, and other conferences/lectures after CRIT. For the 2014 cohort, analysis using a Wilcoxon signed ranks test indicated that the median post-test ranks at six months were statistically significantly higher than the median pre-test ranks in all these categories (bedside teaching: z = −2.13, p = 0.03; other small group teaching opportunities: z = −2.04, p = 0.04; other conferences and lectures: z = −2.81, p = 0.005). For the 2015 cohort, for each of these variables, the difference between pre-CRIT and the six-month time interval was not statistically significant.
For the 2014 and 2015 cohorts, analysis using a Wilcoxon signed ranks test indicated that the median post-test ranks at six months were statistically significantly higher than the median pre-test ranks regarding the chief residents' perception about the extent to which medical students, house staff and/or faculty come to them as a resource on geriatrics (for 2014: z = −3.23; p = 0.05; for 2015: z = −2.77; p = 0.03).
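The paired pre/post comparisons above rely on the Wilcoxon signed-rank statistic; as a minimal pure-Python sketch of its normal-approximation z (sign conventions and zero/tie corrections vary across statistics packages, and the 1-7 ratings below are invented for illustration, not taken from the study data):

```python
import math

def wilcoxon_signed_rank_z(pre, post):
    """Normal-approximation z for the Wilcoxon signed-rank test.

    Sketch of the paired test used above, without the tie/zero
    continuity corrections a statistics package would apply.
    """
    diffs = [b - a for a, b in zip(pre, post) if b != a]  # drop zero diffs
    n = len(diffs)
    # Rank the absolute differences, averaging ranks over ties
    abs_sorted = sorted(abs(d) for d in diffs)
    def avg_rank(v):
        idxs = [i + 1 for i, x in enumerate(abs_sorted) if x == v]
        return sum(idxs) / len(idxs)
    w_plus = sum(avg_rank(abs(d)) for d in diffs if d > 0)
    mu = n * (n + 1) / 4
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    return (w_plus - mu) / sigma

# Illustrative (made-up) 1-7 confidence ratings for 10 chief residents:
pre  = [3, 4, 2, 5, 3, 4, 3, 2, 4, 3]
post = [5, 6, 4, 5, 5, 6, 4, 4, 6, 5]
print(f"z = {wilcoxon_signed_rank_z(pre, post):.2f}")
```

In practice a package routine such as `scipy.stats.wilcoxon` would be used; this sketch only shows how the signed ranks are summed and standardized into the z values reported in the text.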
The chief residents' surveys indicate that the instruction about giving feedback, dealing with the reluctant learner, small group facilitation skills, and conflict resolution was effective in increasing their leadership and teaching abilities ( Table 4). The chief residents indicated that CRIT played a role in improving these skills (seen in "Contributed by CRIT data" in Table 4). It appears that the most effective components of the CRIT curriculum were those that provided practical skills about teaching and leadership. The least successful part of the CRIT curriculum involved the action projects (Table 5). No residents completed their action project in 2015 and only one completed their action project in 2014. During both years, the majority of the chief resident action projects addressed resident education.
Analyses
The ULSOM successfully implemented CRIT through a two-year grant. The surveys demonstrate an improvement in chief residents' confidence in clinically managing older adults. In addition, teaching related to the care of older adults improved after CRIT. After CRIT, chief residents were considered a resource for medical students, house staff, and/or faculty regarding geriatrics issues. CRIT also improved the teaching and leadership skills of chief residents. During both years, chief residents appreciated the opportunity to meet and interact with chiefs from other disciplines and stated that this was a highlight of the conference; they had never had this collaborative opportunity before. Older adults receive better care when their providers know each other and communicate with each other.
The chief residents had prior exposure to geriatrics during their residency programs, but it was not consistent. If the chief residents had not attended CRIT, they would have graduated without being taught geriatrics. Geriatrics was also not consistently taught during medical school: only 27.8% of the 2014 chief residents had a rotation in geriatrics during their medical school training, whereas 80% of the 2015 chief residents did. The increase to 80% in 2015 is impressive and encouraging; medical schools are starting to realize the importance of teaching geriatrics in the context of an aging population. This increase in geriatric education during medical school might explain why Tables 3 and 4 show a larger improvement in the 2015 data than in the 2014 data for clinical practice and teaching related to the care of older patients.
Exposure to geriatric training for medical students and residents at the ULSOM is limited, as it is at most U.S. medical schools and training programs. A survey of medical schools in the United States shows that less than half of the responding schools have a structured geriatrics curriculum and that a quarter require a geriatrics clerkship [9]. Barriers commonly cited as preventing full integration of geriatrics into health science programs include time, a scarcity of educators (and lack of advocates), and geriatric stereotyping (including lack of exposure to healthy aging). Some learners state that it is difficult to find locations with sufficient exposure to older adult patients [10].
With regard to the chief resident action projects, only one chief resident over the two-year period completed the project. The participants met with faculty mentors during CRIT to develop initial implementation plans for the action projects. However, even with assigned mentors, residents and mentors did not move their projects to completion. Some chief residents cited barriers including lack of time to complete their planned project and limited availability of expert geriatric mentors (Table 6). One faculty mentor reported that her chief resident was located at a physically different training site (the Veterans Affairs (VA) Medical Center rather than the ULSOM) and was not responsive to email updates, which suggests that an on-site geriatric faculty mentor, available to the mentee face-to-face, might have made a difference in persistence with the CRIT intervention. One way to overcome this barrier would be to designate a geriatrics mentor point-person at each training site to assist with CRIT dissemination and check on the progress of action projects throughout the year (Table 6). Outcomes were also less than robust for other measures, such as the chief residents' ability to teach others clinical problem-solving skills related to the care of older adults, the amount of responsibility they feel for teaching geriatric issues, and their enjoyment of teaching geriatrics. Having geriatrics faculty experts serve as advocates and periodically check each chief resident's progress throughout the year might improve these outcomes, especially if time is the most significant barrier. Chief residents' own time is limited and their expertise in geriatrics is nascent, making leadership in an area where they are not expert more time consuming and difficult.
More frequent check-ins with the chief residents and their residency units, along with monitoring of desired outcome implementation, can help create a culture that empowers the chief residents (Table 6).
Discussion
CRIT was successfully implemented over a two-year period at the University of Louisville, located in a rural state. The chief residents who attended CRIT had a positive impact on their departments (programs) by teaching geriatrics to residents and students. Had they not attended CRIT, they would not have had sufficient exposure to geriatrics during residency. After attending CRIT, the chief residents were more interested in the geriatric age group, had increased confidence in their clinical practice and teaching related to the care of older adults, and gave more presentations on geriatric medicine topics. They also perceived themselves as a resource on geriatrics for medical students, other residents, and faculty, and reported increased confidence in their overall leadership skills after CRIT. CRIT implementation in a rural state has not previously been reported in the literature.
One limitation of this research is the small number of participants. We plan to offer this program to nursing graduate students, social work graduate students, and community workers in the field of geriatrics in the future. In this way, the program will be more interdisciplinary and have more participants. Opening CRIT to all residents was also discussed; this might help recruit residents to the geriatric medicine fellowship and increase CRIT attendance.
Chief residents' confidence in their leadership and teaching skills declined at six months, even though immediately after CRIT they had stated they were confident in these skills. Therefore, a reunion/refresher course at six months is needed. It would be beneficial for the chief residents to bring difficult geriatric medicine cases they have encountered in the past six months in order to learn from their recent experiences.
Cost was a barrier to attendance for some departments, which could not afford the $500 (USD) fee. In the future, scholarships will be offered to ensure that everyone who would like to attend is able to do so (Table 6).
A positive unintended consequence of CRIT was that the Geriatric Medicine service was consulted more frequently in the hospital after CRIT. The chief residents had met the Geriatric Medicine faculty at CRIT and therefore felt more comfortable asking for assistance in caring for older adults.
The findings in this study are applicable to other institutions that might want to implement the curriculum. Geriatric medicine is usually a small specialty at academic medical centers. If the ULSOM, located in a rural state, could implement this curriculum with limited faculty and resources, other institutions can do so as well. In addition, the barriers of time and cost are universal; awareness that these barriers need to be addressed proactively will ensure the success of CRIT. Other institutions can also adopt the items that facilitate a successful CRIT: hosting CRIT at a resort, allowing families to attend, protecting free time, face-to-face meetings with mentors, frequent progress checks, refresher courses at six months, and scholarships to attend CRIT are all possible with pre-planning.
In order to improve the completion rate of the action projects, another suggestion is to have a six-month follow-up meeting with all CRIT participants to report on the progress of the action projects. The faculty mentor and CRIT learner are more likely to complete the action project if they have a deadline to present the action project in front of their peers.
CRIT did accomplish some of the goals that the planners intended, but was not successful in all areas. However, the overall impact of CRIT was positive. There is a clear indication that CRIT training should continue. The chief residents have a positive impact on the faculty, residents, and students with whom they interact (some chief residents interact with up to 200 medical students per year). Geriatric medicine will now be taught to all these faculty members, residents, and students. CRIT will continue at the ULSOM and it will expand to include more of the interdisciplinary team. The hope is that older adults will receive better care because their providers are trained in geriatric medicine through CRIT. This clinical outcome will need to be studied after future CRIT conferences. Academic medical centers in rural states will now have the tools to successfully implement CRIT because they will be aware of the barriers and facilitators to implementation.
Author Contributions: C.D.F. made substantial contributions to conception and design, acquisition of data, analysis and interpretation of data, drafting of the article, critical review for important intellectual content, and final approval of the version to be published. L.W., R.G., B.F.P., and M.A.S. made substantial contributions to conception and design, analysis and interpretation of data, drafting of the article, critical review for important intellectual content, and final approval of the version to be published. J.G. made substantial contributions to acquisition of data, critical review for important intellectual content, and final approval of the version to be published. L.M., and D.A. made substantial contributions to acquisition of data, drafting of the article, critical review for important intellectual content, and final approval of the version to be published. D.N. made substantial contributions to acquisition of data, analysis and interpretation of data, critical review for important intellectual content, and final approval of the version to be published. S.M., and R.M.-G. made substantial contributions to analysis and interpretation of data, critical review for important intellectual content, and final approval of the version to be published. S.C. made substantial contributions to analysis and interpretation of data, drafting of the article, and final approval of the version to be published.
Funding:
The Donald W. Reynolds Foundation provided a grant to the University of Louisville for hotel rooms and the conference space, faculty and staff time, and educational materials for the conference.
Acknowledgments:
The authors would like to thank Margaret Feldman for her help in preparation of the manuscript for publication.
Conflicts of Interest:
The authors declare no conflict of interest.
Appendix CRIT Case
Module I: Mrs. HH is an 84-year-old African-American woman with the following: Plan: Surgery was consulted. They recommend that she go to the operating room. When the surgeons inform the patient about their assessment and recommendations, she repeatedly states, "I can't understand you". The surgeons then try to get the daughter to sign consent. The daughter says, "My mother makes her own decisions". The surgeons admit the patient to their service.
Module II:
Mrs. HH is an 84-year-old African-American woman who presented to the emergency room with dizziness and abdominal pain for a couple of days. Her daughter also states that the patient is "not herself".
Diagnosis
Physical exam, laboratory data, and imaging data are concerning for diverticulitis and perforation. Surgery was consulted.
CASE:
You determine that the patient has capacity to make medical decisions. After the patient is given her hearing aids and informed about the risks and benefits of surgery in simple, clear language, she agrees to the operation. A partial colectomy with diverting colostomy is performed. She has no complications. She has been started on intravenous (IV) antibiotics, and IV morphine, as needed (prn), has been ordered for pain control. A nasogastric tube (NGT) and indwelling urinary catheter have been inserted. She was transferred to the general medicine/surgical floor and placed in a room near the nursing station.
It is now post-op day 2. This morning, the night float doctor reports that a nurse called him to get a verbal order for a vest and wrist restraints because the patient had pulled out her NGT sometime around 5 am. The patient also refused blood draws this morning. The night float ordered the restraints. On your morning rounds, the nurse tells you that the patient has repeatedly tried to get out of bed and is now tugging on her IV line. The nurse requests that you reinsert the NGT and renew the restraint order now.
You go to the patient's room. You notice that she is calling out for her husband and her dead mother. You check the nursing chart notes and find out that the patient was lethargic on post-op day 1. The nursing notes also document that she had moderate pain yesterday for which she received morphine IV × 5, has had no bowel movements (BMs) since the surgery, and that her left heel had nonblanchable erythema. You also check the medication list in the electronic medical record and notice that, compared to her pre-admission list, there are new and substituted medications.

Based on the hospital team's recommendation and her preference, Mrs. HH is discharged home with skilled services from nursing and physical therapy and surgery clinic follow-up. While writing the discharge summary, you review the medication list in the electronic medical record and notice that, compared to her pre-admission list, there are new and substituted medications.
The nurse reviews the discharge summary and discharge medications with the patient and her daughter. You send her new prescription medications to the pharmacy electronically. Mrs. HH arrives home on a Saturday.
CASE PART 2:
A visiting nurse goes to see the patient at home on Sunday, the day following her hospital discharge. She finds Mrs. HH crying in pain. Her daughter reports that the acetaminophen is not helping. The visiting nurse calls the covering physician and asks for something stronger for pain. The covering physician (who does not know the patient) tells the nurse to send the patient back to the emergency room for evaluation and pain control.
Diversity of yeast-like fungi and their selected properties in the bioaerosol of utility premises
A total of 69 isolates of yeasts were recorded in the indoor air of the school buildings: 43 in heated rooms and 26 in unheated rooms. Perfect stages prevailed. The fungi isolated in our study belonged to 39 species. These were mostly monospecific isolates, although five two-species isolates were noted. Differences in the physiological characters of fungi isolated in the two study seasons were observed. As indoor and outdoor air does not mix during the heating season, a specific substrate for prototrophic, non-fermenting yeast-like fungi forms. Acid production allows fungi to dissolve inorganic compounds in building structures and to release needed microcomponents. The ability to produce carotenoid pigments is clearly promoted in yeast-like fungi living indoors. This may be related to the accumulation of compounds that are intermediate stages in the carotenoid biosynthesis cycle or to a surplus of oxidizing compounds.
INTRODUCTION
While fungi of various taxonomic groups constitute a considerable part of the biocoenosis in buildings, they are difficult to detect. Our earlier studies show that yeast-like fungi co-occurring with moulds are notoriously difficult in this respect (Ejdys 2011). Challenges to isolating yeast-like fungi from the indoor bioaerosol can also arise from issues of methodology, such as an inappropriately selected incubation temperature and the varying nutritive preferences of individual fungi. Colonies of yeast-like fungi, especially in moisture-damaged rooms, are usually overgrown with moulds, whose growth rate is higher and nutritive requirements smaller than those of yeast-like fungi. Pure isolation of the latter may be impossible, and consequently a low number of them is reported in studies of various indoor spaces (Awad et al. 2010; Krajewska-Kułak et al. 2002; Meklin et al. 2002). Microbiologists estimate that only 10 to 15% of microorganisms are culturable in vitro at best in some environments (Amann et al. 1995). Insufficient knowledge of the ecophysiology of indoor-air isolates is another reason for failure to culture and isolate yeast-like fungi from the indoor bioaerosol. Therefore, the aim of this study was to investigate some properties of yeast-like fungi that help them to survive indoors.

© The Author(s) 2014 Published by Polish Botanical Society
MATERIAL AND METHODS
Yeast-like fungi isolated from the bioaerosol in partly modernized school buildings were investigated. Samples were collected during the heating season (November) and after the heating was switched off (May). Twenty-six different rooms on all the stories in two school buildings were selected, with two study sites set up in each room. Samples were collected using the Koch sedimentation method. Three substrate types were used for cultures: solid Sabouraud medium with antibiotics, Rose Bengal medium, and Czapek-Dox medium (Ejdys et al. 2011).
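Sedimentation-plate counts are commonly converted to an airborne concentration (cfu/m³) with Omeliansky's formula, N = 5·a·10⁴ / (p·t), which assumes that the microbes contained in 10 L of air settle on 100 cm² of plate surface in 5 minutes. The paper does not state the dish size or exposure time used, so the values below are assumptions for illustration only:

```python
import math

def omeliansky_cfu_per_m3(colonies, dish_area_cm2, exposure_min):
    """Airborne concentration (cfu/m^3) from a sedimentation plate count
    via Omeliansky's formula: N = 5 * a * 10^4 / (p * t), where
    a = colony count, p = dish area (cm^2), t = exposure time (min).
    The formula assumes microbes from 10 L of air settle on 100 cm^2
    of plate surface in 5 minutes."""
    return 5 * colonies * 1e4 / (dish_area_cm2 * exposure_min)

# Assumed for illustration: a standard 90 mm Petri dish (~63.6 cm^2)
# exposed for 10 minutes, with 5 colonies counted afterward.
area = math.pi * (9.0 / 2) ** 2
concentration = omeliansky_cfu_per_m3(5, area, 10)
print(round(concentration), "cfu/m^3")
```

With a 100 cm² plate exposed for 5 minutes, one colony corresponds to exactly 100 cfu/m³, which is the calibration point built into the formula.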
The total number of fungi in the air was determined according to Polish Standard PN-89/Z-04111/03, and yeast-like fungi were isolated and identified according to Kurtzman et al. (2010). The morphology of fungal isolates was assessed, with special attention paid to the properties of vegetative cells, budding type, and the size and shape of blastospores. The developmental stage, the ability to form hyphae and chlamydospores, and selected biochemical traits (fermentation abilities, carbon and nitrogen sources, vitamin requirements, acid production, and the ability to synthesize pigments) were identified. Methods and techniques recommended by Kurtzman and Fell (2000) were used.
RESULTS
The total number of fungi in the indoor bioaerosol was between 314 and 3577 cfu/m³ in spring and between 79 and 1533 cfu/m³ in autumn. Yeast-like fungi were not recorded in the air of one room during the heating season. A total of 1416 cfu/m³ was recorded in one room (the library), while the count did not exceed 393 cfu/m³ in the other rooms. The number of cells of yeast-like fungi in the indoor aerosol ranged from 39.3 to 442 cfu/m³ after the heating had been turned off.
A total of 69 isolates of yeasts were recorded in the indoor air in the school buildings: 43 in heated rooms and 26 in unheated rooms. Perfect stages prevailed. The fungi isolated in our study belonged to 39 species (Tab. 1). These were mostly monospecific isolates, although five two-species isolates were noted, including Magnusiomyces magnusii (syn. …). A vast majority of the isolates did not form pseudohyphae, or formed them only fragmentarily. Differences in the physiological characters of fungi isolated in the two study seasons were observed (Fig. 1). Non-fermenting and pigment-producing isolates dominated in heated rooms (Fig. 1B, D), while the majority of fungi recorded after the heating season had the ability to ferment and to produce pigments (Fig. 1A, C). Two-thirds of the isolates did not require any vitamins for growth. Five isolates belonging to four species exhibited acid-forming abilities: Dekkera anomala, D. bruxellensis, Yarrowia lipolytica, and Candida tropicalis (syn. Candida citrica).
DISCUSSION
Indoor spaces in the temperate climate are a special biotope. Organisms inhabiting it are not influenced by the seasons and atmospheric conditions characteristic of this climatic zone. The indoor environment is mostly influenced by the heating, or its lack, and the resulting fluctuations in humidity and temperature. When the heating is on and the airing frequency decreases, a biotope consisting of a variety of microhabitats with highly varying parameters is formed. It is usually warmer and drier around the heaters than in the remaining part of the room, while contact sites between the ceiling and the walls, especially northern and/or external walls, are the most humid and coldest places. A high diversification of indoor biotopes allows fungi with different physiological requirements to survive. While the diversity of fungal taxa noted in unheated indoor air varies greatly (some species are allogenic), the species composition of the heating season can be considered relatively stable (Ejdys et al. 2009), although difficult to determine. Some two- or even three-species isolates can be undetectable if only biochemical properties are examined: such properties are a sum of the properties of the individual symbionts, and the taxa cannot be determined reliably. In our previous studies, multispecies isolates were recorded relatively frequently, both in aquatic macrobiocoenoses (Biedunkiewicz, Barańska 2011), land biocoenoses (Ejdys et al. 2009), and in organ ontocoenoses (Dynowska, Ejdys 2000; Biedunkiewicz 2001; Dynowska et al. 2008). While this can reflect a frequent natural phenomenon, the high percentage of synergizing yeast-like fungi isolated from the indoor bioaerosol in our study can rather be attributed to the fact that such yeast-like fungi survive on culture media better than fungi adopting a solitary lifestyle. The metabolism of yeast-like fungi can make them more difficult to culture. Only approximately 20% of yeast-like fungi are vitamin prototrophs (Kurtzman, Fell 2000). These can produce biotin (vit. B7/H), inositol (vit. B8), folic acid (vit. B9/11), pantothenic acid (vit. B5), 4-aminobenzoic acid (vit. BX), niacin (vit. B3/PP), pyridoxine (vit. B6), retinol (vit. A), riboflavin (vit. B2), and thiamine (vit. B1). A vast majority of yeast-like fungi require these regulatory compounds for life and must derive them from the environment. It is therefore interesting that 66% of the fungi isolated during the heating season, and as much as 82% of those detected outside the heating period, were prototrophic. Importantly, school rooms (public-use indoor spaces) are not deficient in organic matter. Prototrophy may allow yeast-like fungi to win the nutritive competition, especially with moulds.
While simple sugars can penetrate indoor spaces with the atmospheric air in spring, these substrates are not easily available to indoor microorganisms in autumn. When a high and steady supply of oxygen is available, fermentation abilities are not needed for life in the indoor environment. This explains the low percentage of fermenting species in our studies. On the other hand, substrate acidification encourages the release of ions needed for life, e.g., iron ions. These abilities were observed in 30% of the fungi identified, although acid production is a very rare property in yeast-like fungi: abilities to produce acetic acid or citric acid were confirmed in only eight of the 1414 species listed by Kurtzman and Fell (2000).
Physical and chemical factors and the availability of the nutritive substrate influence carotenogenesis. The highest capacity for carotenogenesis in laboratory tests is obtained at 20–22°C. Pigment formation is photoregulated, although the presence of light is not necessary. This may explain the high percentage of pigmented isolates in autumn, that is, during the "short day". The type and concentration of carbon and nitrogen sources and their mutual ratio are mostly determined by the substrate type. The most recent studies report increased carotenoid production when secondary metabolites of other microorganisms, even as extracts, are present in the substrate (Stachowiak, Czarnecki 2006). If these compounds include enzymes that decompose cell walls, then pigments are a response to the free radicals of the H2O2 produced. This may also be related to the presence of carotenoid precursors, which may favor pigment-forming fungi. This was most probably the case in our study. The high number of pigmented isolates in indoor spaces is indirectly or directly related to the diversity of the biota of the indoor aerosol. Therefore, it seems justified that the occurrence of carotenoid-producing yeast-like fungi may serve as an indicator of mycological, or even microbiological, air purity.
CONCLUSIONS
As indoor and outdoor air does not mix during the heating season, a specific substrate for prototrophic, non-fermenting yeast-like fungi forms.
Acid production allows fungi to dissolve inorganic compounds in building structures and to release needed microcomponents.
The ability to produce carotenoid pigments is clearly promoted in yeast-like fungi living indoors. This may be related to the accumulation of compounds that are intermediate stages in the carotenoid biosynthesis cycle or to a surplus of oxidizing compounds.
Fig. 1. The frequency of isolates during the heating period and after it was turned off, by ability: fermentation (A, B), pigment production (C, D), vitamin requirements (E, F), and acid production (G, H).
Table 1. Yeast-like species isolated from indoor air during the heating season (November) and after the heating was turned off (May).
Development of a Targeted SN-38-Conjugate for the Treatment of Glioblastoma
Glioblastoma (GBM) is the most aggressive and fatal brain tumor, with approximately 10,000 people diagnosed every year in the United States alone. The typical survival period for individuals with glioblastoma ranges from 12 to 18 months, with significant recurrence rates. Common therapeutic modalities for brain tumors are chemotherapy and radiotherapy. The main challenges with chemotherapy for the treatment of glioblastoma are high toxicity, poor selectivity, and limited accumulation of therapeutic anticancer agents in brain tumors as a result of the presence of the blood–brain barrier. To overcome these challenges, researchers have explored strategies involving the combination of targeting peptides possessing a specific affinity for overexpressed cell-surface receptors with conventional chemotherapy agents via the prodrug approach. This approach results in the creation of peptide drug conjugates (PDCs), which facilitate traversal across the blood–brain barrier (BBB), enable preferential accumulation of chemotherapy within the neoplastic microenvironment, and selectively target cancerous cells. This approach increases accumulation in tumors, thereby improving therapeutic efficiency and minimizing toxicity. Leveraging the affinity of the HAIYPRH (T7) peptide for the transferrin receptor (TfR) overexpressed on the blood–brain barrier and glioma cells, a novel T7-SN-38 peptide drug conjugate was developed. The T7-SN-38 peptide drug conjugate demonstrates about a 2-fold reduction in glide score (binding affinity) compared to T7 while maintaining a comparable orientation within the TfR target site using Schrödinger-2022–3 Maestro 13.3 for ligand preparation and Glide SP-Peptide docking. Additionally, SN-38 extends into a solvent-accessible region, enhancing its susceptibility to protease hydrolysis at the cathepsin B (Cat B) cleavable site. 
The SN-38-ether-peptide drug conjugate displayed high stability in buffer at physiological pH, and cleavage of the conjugate to release free cytotoxic SN-38 was observed in the presence of exogenous cathepsin B. The synthesized peptide drug conjugate exhibited potent cytotoxic activities in cellular models of glioblastoma in vitro. In addition, blocking transferrin receptors using the free T7 peptide resulted in a notable inhibition of cytotoxicity of the conjugate, which was reversed when exogenous cathepsin B was added to cells. This work demonstrates the potential for targeted drug delivery to the brain in the treatment of glioblastoma using the transferrin receptor-targeted T7-SN-38 conjugate.
INTRODUCTION
Glioblastoma (GBM) is one of the most challenging forms of cancer to treat, with a median survival period of around 16 months.1−3 The primary clinical approach for GBM involves maximal surgical resection, followed by radiotherapy with concomitant Temozolomide and maintenance over time with Temozolomide (TMZ) chemotherapy.4,5 Nevertheless, complete eradication of the brain tumor is often not feasible due to the inherent resistance of glioblastoma cells, the invasive nature of the tumor cell growth within tissues, and the specific location where the tumor develops.2,4 Unlike peripheral tumors, brain tumors present distinct challenges for treatment, primarily due to the presence of the blood−brain barrier (BBB), which restricts the penetration and accumulation of chemotherapeutic agents at the tumor site.4,6,9 The BBB, serving as both a physical and biological barrier, plays a pivotal role in safeguarding the central nervous system (CNS).10,11 The BBB operates with a tightly controlled transport mechanism.10,12 Although these characteristics are crucial for maintaining the optimal neuronal environment, they limit the effectiveness of most chemotherapeutic agents in treating glioblastoma.4,11 Thus, high drug dosages are required in order to reach therapeutic levels in the tumor. In addition, resistance to chemotherapy and undesired side effects stemming from nonspecific interactions between the anticancer agent and healthy cells are other challenges associated with the treatment of glioblastoma.7 However, in high-grade gliomas (glioblastoma is a grade 4 glioma), the integrity of the BBB is oftentimes compromised by a series of alterations caused by the rapid proliferation of the tumor cells, leading to the emergence of a microenvironment that is distinct from the normal brain tissue, termed the blood−brain tumor barrier (BBTB).
7,8,12 A distinctive characteristic of the BBB is the expression of several receptors, including those associated with angiogenesis, like vascular endothelial growth factor receptor (VEGFR), platelet-derived growth factor receptor (PDGFR), and epidermal growth factor receptor (EGFR), as well as integrin receptors, low-density lipoprotein receptors (LDLR), and transferrin receptors (TfRs).13 The transferrin receptor plays a crucial role in facilitating the transfer of iron into the brain parenchyma, ensuring the maintenance of iron balance, which is significant for metabolic processes, neural conductivity, and, consequently, the normal functioning of the brain.14−16 Additionally, it has been reported that the expression of TfR on GBM cells is up to 100-fold higher compared to healthy cells.17,18 This is attributed to the escalated demand for iron by malignant cells to support their rapid proliferation.9 Thus, the TfR is a promising target for site-specific drug delivery and intratumoral accumulation for the treatment of GBM using receptor-mediated transcytosis (RMT).5,12,17,19 Among the various targeting ligands available for TfR, Transferrin (Tf) has emerged as an appealing option for delivering drugs specifically to glioblastoma.13,20,21 Tf employs RMT to facilitate the transfer of iron-bound transferrin from the luminal side to the basolateral side of the brain. However, the presence of high concentrations of endogenous Tf competitively inhibits Tf-modified delivery systems. Therefore, alternative ligands with similar affinity for TfR were developed.13 The T7 peptide, with the sequence His-Ala-Ile-Tyr-Pro-Arg-His (HAIYPRH), is a heptapeptide that specifically targets TfR on the BBB/BBTB and on brain tumor cells.5,16,20 Its binding affinity is comparable to that of Tf.
13,20 Furthermore, the T7 binding site on TfR is reported to be different from that of Tf, avoiding interference with endogenous Tf. The cellular uptake of T7-conjugated drug delivery systems was also found to be accelerated when endogenous Tf is bound to TfR, highlighting T7 as a promising carrier strategy for drug delivery systems targeting glioblastoma (GBM).13,20−26 Of these strategies, targeted conjugates using the prodrug approach offer distinct advantages, such as controllable and predictable drug release, when compared to vesicular systems that release the drug payload via diffusion. Antibody-drug conjugates (ADCs) have been approved by the US FDA for the treatment of cancers;27 however, their large size and other issues, such as immunogenicity and stability, are challenges associated with their use.27,28 Recently, researchers have reported that peptide drug conjugates (PDCs) are devoid of the challenges associated with ADCs. They are small, nonimmunogenic, and relatively more stable than antibodies.28 Figure 1 illustrates the components of a PDC. A PDC consists of a peptide ligand with affinity for receptors found on the target cells, a linker that connects the drug payload to the targeting peptide, and a toxin or drug payload.29,30 The linker may be cleavable, thereby serving as the mechanism of drug release. This characteristic is important for controlling drug release for specific delivery to the target site while minimizing drug impact on healthy tissues.31 The use of protease-cleavable linkers, frequently taking the form of distinct peptide sequences, is designed to undergo cleavage solely upon recognition by proteases that are upregulated within the boundaries of the tumor microenvironment.31,32 An example of such a protease overexpressed in tumors is cathepsin B (Cat B), a lysosomal protease, which is secreted into the tumor microenvironment and aids the spread of cancer cells by degrading components of the extracellular matrix (ECM).
33 Cathepsin B is overexpressed in various tumor types, including glioblastoma cells.32,34 The valine-alanine (VA) dipeptide is a cathepsin B recognition sequence that has demonstrated stability in serum, and conjugates containing the sequence are stable in plasma until they accumulate in the tumor, where cathepsin B is overexpressed.35 The protease recognizes and binds to the VA sequence, initiating a chemical reaction that leads to the cleavage of the linker after alanine.36,37 The cleavage of the VA linker results in the release of the drug molecule from the conjugate. This allows the drug to become active and exert its therapeutic effects at the intended site of action.
7-Ethyl-10-hydroxycamptothecin, also known as SN-38, is an active metabolite of irinotecan that has been reported to demonstrate 100−1000 times greater potency compared to irinotecan and displays potent inhibitory effects against DNA topoisomerase I.38,39 SN-38 is an effective cytotoxic agent against primary and recurrent glioma cells.40,41 However, it is lipophilic, highly toxic when administered intravenously, and unstable in the physiological environment, which limits its clinical application.42 Therefore, a peptide drug conjugate is ideal for improving SN-38's short circulation half-life, poor solubility, and toxicity profile.
In the present work, the cytotoxic drug SN-38 is coupled to the tumor-targeting T7 peptide via a cathepsin B cleavable VA peptide linker.This ensures that the drug remains covalently bound until it reaches the intended site of action, where Cat B is overexpressed (Figure 2) to release the drug.Within this framework, our research pursuits entail the synthesis and characterization of a T7-SN-38-targeted drug conjugate using strain-promoted azide-alkyne cycloaddition (SPAAC).Our investigation extended to evaluating the cellular uptake and assessing the cytotoxicity of the drug conjugate in U87MG glioblastoma cells.
Peptide Molecular Docking. The protein crystal structure of TfR (PDB code: 3S9N) was used to conduct molecular docking binding predictions.43 The designed peptides were subsequently imported into Schrodinger-2022−3 Maestro 13.3 for ligand preparation and Glide SP-Peptide docking, which has been optimized to enhance peptide sampling and scoring.44,45 Using Maestro LigPrep, the peptide structures were preprocessed for docking: OPLS4 force fields were generated, and ionization states of the peptides were generated at pH 7.4 ± 2.0 using the pKa-predicting module Epik. Ligands were preprocessed with the ConfGen algorithm of Maestro, which generates low-energy conformations for rigid-body docking. Low-energy conformations and stereoisomers of the peptides were generated with the built-in ConfGen algorithm. Following ligand processing, SiteMap was deployed to identify possible peptide binding sites and the druggability of each site. A SiteScore is generated, which takes into consideration factors such as (a) the size of the site, (b) how solvent-exposed the site is, (c) the degree of enclosure by the protein, (d) the hydrophobic and hydrophilic nature of the site, (e) the proximity of the site-point interaction with the protein (i.e., tightness of interaction), and (f) the degree of H-bond donor and acceptor interaction. The larger the SiteScore, the more druggable a site is, so the docking site was selected based on the overall score and the site size, considering the large size of the peptide. In addition, we cross-referenced the selected site with that reported in the T7 docking studies on TfR conducted by Tang et al.
26 Thus, the centroid of the docking grid was positioned between the key binding residues (E244, E266, E533, and E714).The receptor grid for peptide docking was generated using Maestro's Receptor Grid Generation tool.The center of the grid box was supplied as a coordinate ([x, y, z] = [3, −70, 13]) upon adjustment to incorporate the entire peptide within the receptor grid.The ligand was docked into the receptor grid with standard precision (SP-Peptide) with Epik state penalties for pose predictions and glide scoring.
Synthesis of T7-PEG4-N3 (1). N3-PEG4-His-Ala-Ile-Tyr-Pro-Arg-His (1) (Figure 3) was synthesized using a modified published method.46 Solid-phase peptide synthesis (SPPS) was employed by utilizing the standard Fmoc protocol. Briefly, H-His(Trt)-2-Cl-Trt resin (3.25 g, 2.6 mmol) was swollen in DMF for 2 h. The sequence of protected amino acids was introduced in the following order: Fmoc-L-Arg(Pbf)-OH, Fmoc-L-Pro-OH, Fmoc-L-Tyr(tBu)-OH, Fmoc-L-Ile-OH, Fmoc-L-Ala-OH, and Fmoc-L-His(tBu)-OH. All couplings were conducted in DMF using 2.5 equiv of each amino acid and 5 equiv of DIPEA. HATU (2.45 equiv) was used as a coupling agent, with each coupling cycle lasting for 1 h (Supporting Information, Scheme 1). After each coupling, the Fmoc protecting group was removed using a solution of 20% piperidine in DMF. Following the coupling of the final amino acid, the N-terminus of the peptide was capped using an azide-terminated polyethylene glycol linker (N3-PEG4-acid). Following the reaction, the peptidyl resin was washed sequentially with DMF, DCM, and then methanol. The peptidyl resin was dried under vacuum overnight. T7-PEG4-N3 (1) was cleaved from the polymeric support using a TFA/TIPS/H2O cocktail (95:2.5:2.5) and concentrated under vacuum. The concentrate was precipitated in ether, and the product was obtained as a yellow crystalline solid (3.5 g, 86%). Compound 1 was characterized by analytical HPLC and ESI-MS. The compound was used without further purification. ESI m/z 1166.99; ([C52H79N17O14+H]+ calcd. 1166.61) (Supporting Information, Figure 1). Analytical HPLC revealed a single peak with a retention time (RT) of 4.97 min, showing that the compound is pure (Supporting Information, Figure 2).
Synthesis of Boc-Ala-PAB-SN-38 (4). To a solution of Boc-Ala-PABA (3) (3.5 g, 11.8 mmol) in dry THF (20 mL) on an ice bath was slowly added phosphorus tribromide (PBr3) (3.2 g, 11.8 mmol) in anhydrous DCM (20 mL).47 The progress of the reaction was monitored by TLC. After 3 h, the reaction was complete, and the solution was transferred into cold water (250 mL). The dichloromethane layer was collected through multiple extractions and dried using anhydrous sodium sulfate (Scheme 1). After evaporation of the solvent under vacuum, the crude solid product, compound 3b, was obtained (1.45 g, 41%). Compound 3b was used without further purification.
In Vitro Cleavage Studies of T7-SN-38 by Cathepsin B. The following protocols were modified to evaluate the cleavage profile of T7-SN-38 (7):50−52 11 μL of cathepsin B (human liver, 0.47 mg/mL, 324 U/mg) (Merck Millipore, Merck GA) stock was added to 27.2 μL of activation buffer (30 mM DTT, 15 mM EDTA in H2O) and incubated for 15 min at room temperature. The mixture was diluted with 4.9 mL of reaction buffer (150 mM sodium acetate buffer, 12 mM EDTA, 24 mM DTT, in H2O; pH 6.0), and the reaction was initiated with the addition of 70 μL of a stock solution of the T7-SN-38 drug conjugate (7) in DMSO (final conjugate concentration: 183 μM). The mixture was incubated at 37 °C. Aliquots were withdrawn at predetermined time intervals, and cleavage was assessed by RP-HPLC as a function of time from 0−24 h. The HPLC chromatogram peak areas at the characteristic RT of the pure drug and conjugate were used to determine both the disappearance of the T7-SN-38 conjugate and the appearance of free SN-38. As a control, the study was performed in the absence of cathepsin B under otherwise identical conditions.
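As a rough illustration of this peak-area bookkeeping, the fraction of conjugate consumed at each time point can be estimated from the decay of the conjugate peak relative to its initial area. The sketch below is not the authors' analysis, and the areas are invented placeholders:

```python
# Hypothetical sketch: estimating conjugate cleavage from RP-HPLC peak areas,
# assuming peak area is proportional to the amount of conjugate remaining.

def percent_cleaved(area_t, area_0):
    """Percent of T7-SN-38 consumed, from the decay of its HPLC peak area."""
    return 100.0 * (1.0 - area_t / area_0)

# Illustrative peak areas (arbitrary units), NOT measured data:
time_h = [0, 1, 4, 24]
conjugate_area = [1000.0, 620.0, 350.0, 200.0]

for t, a in zip(time_h, conjugate_area):
    print(f"{t:>2} h: {percent_cleaved(a, conjugate_area[0]):5.1f} % cleaved")
```

The same arithmetic applied to the appearance of the free-drug peak (normalized to a calibration standard) gives the complementary release curve.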
Evaluation of the Stability of the T7-Targeted SN-38 Peptide Drug Conjugate. The T7-SN-38 conjugate (7) was dissolved in 2.5 mL of phosphate-buffered saline (PBS) (pH 7.4) containing 2% DMSO at a final conjugate concentration of 196 μM and incubated at 37 °C per modified published procedures.50,53 Aliquots of the mixture were withdrawn at different time points (0−24 h) and analyzed by HPLC.
Cell Culture. The human glioma cell line U87MG was purchased from the American Type Culture Collection (ATCC). The cells were grown and maintained in Eagle's minimum essential medium (EMEM) supplemented with 10% FBS (Corning, Manassas, VA) and 1% penicillin−streptomycin (Sigma-Aldrich) at 37 °C in an incubator with a 5% CO2 atmosphere.
Cytotoxicity Assay. U87MG cells were seeded in 96-well plates at a density of 1 × 10⁴ cells/well. At 24 h, the cultured cells were treated with SN-38 or the T7-SN-38 conjugate at various concentrations (from 5 to 160 nM) and incubated at 37 °C. At 24 and 72 h, 70 μL of 2,3-bis-(2-methoxy-4-nitro-5-sulfophenyl)-2H-tetrazolium-5-carboxanilide (XTT reagent mixed with an electron coupling reagent) solution was added to each well. After 3 h of incubation, the absorbance was measured at 450 nm using a BioTek ELx808 absorbance microplate reader (Lonza, Walkersville, MD). The XTT assay is a colorimetric method used to assess cell viability and proliferation by measuring the metabolic activity of cells based on their ability to reduce a tetrazolium salt (XTT) to a soluble formazan dye. The absorbance values were normalized to the control group (cells without treatment), and percent cell viability was plotted using GraphPad Prism 10 (GraphPad Software, Inc.). The results are represented as the mean ± standard deviation of four replicates (n = 4).
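The normalization step above amounts to expressing each treated well's absorbance as a percentage of the untreated control, then summarizing replicates as mean ± SD. A minimal sketch (not the authors' Prism workflow; absorbance values are invented):

```python
# Hypothetical sketch: percent viability from XTT absorbance at 450 nm,
# normalized to the untreated-control absorbance.

def percent_viability(abs_treated, abs_control, abs_blank=0.0):
    """Viability (%) of a treated well relative to the untreated control."""
    return 100.0 * (abs_treated - abs_blank) / (abs_control - abs_blank)

treated = [0.62, 0.58, 0.60, 0.64]   # n = 4 replicate wells (made-up values)
control = 1.20                        # mean absorbance of untreated cells

vals = [percent_viability(a, control) for a in treated]
mean = sum(vals) / len(vals)
sd = (sum((v - mean) ** 2 for v in vals) / (len(vals) - 1)) ** 0.5
print(f"viability: {mean:.1f} ± {sd:.1f} % (mean ± SD, n = {len(vals)})")
```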
T7 Competition Assay: Transferrin Receptor Blocking Studies. U87MG cells (1 × 10⁴) were seeded in 96-well plates. After 24 h, the media was replaced with media containing a 10-fold molar excess of T7-PEG4-N3 relative to the T7-SN-38 conjugate and incubated for 1 h. After the 1 h incubation, the cells were treated with SN-38 or T7-SN-38 at various concentrations (from 5 to 160 nM). Cells treated with DMSO (0.05%) and media-only treated cells served as controls. The experiment was repeated, and an XTT assay was performed at 72 h for the 72 h viability evaluations. The percent cell viability based on absorbance at 450 nm was calculated relative to controls. The results are represented as the mean ± SD of four replicates (n = 4).
T7 Competition Assay: Exogenous Cathepsin B Studies. U87MG cells at a density of 1 × 10⁴ were initially plated into 96-well plates and allowed to incubate for 24 h to facilitate cell adhesion. Following this incubation period, the culture medium was replaced with media containing a 10-fold molar excess of T7-PEG4-N3 relative to the conjugate, and the cells were incubated for 1 h. Subsequently, the cells were treated either with 80 nM SN-38 or with T7-SN-38 conjugate containing the equivalent of 80 nM SN-38 and were coincubated with exogenous cathepsin B. In a parallel experiment, cells that were initially exposed to excess T7-PEG4-N3 were then treated with 80 nM SN-38 or T7-SN-38 without the addition of exogenous cathepsin B. The XTT assay was performed at 72 h per the manufacturer's protocol, and the data were used for viability calculations. The percent cell viability data were plotted using GraphPad Prism 10. The results are represented as the mean ± SD of four replicates (n = 4).
Statistical Analysis. Data are presented as the mean ± standard deviation (SD) unless otherwise indicated. The IC50 values were calculated by fitting a concentration−response curve using Microsoft Excel 16.77 (Microsoft Corporation, CA). Differences between groups were determined by ANOVA. P < 0.05 was considered statistically significant.
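IC50 estimation from a concentration−response curve can be done in several ways; the authors used Excel curve fitting. As an alternative sketch only, the snippet below estimates IC50 by log-linear interpolation between the two concentrations bracketing 50 % viability (a common spreadsheet approach; all data are invented):

```python
import math

# Hypothetical sketch: IC50 by log-linear interpolation between the two
# tested concentrations that bracket 50 % viability. Not the authors' fit.

def ic50_interp(conc_nM, viability_pct, target=50.0):
    pts = list(zip(conc_nM, viability_pct))
    for (c1, v1), (c2, v2) in zip(pts, pts[1:]):
        if (v1 - target) * (v2 - target) <= 0:      # bracket found
            f = (v1 - target) / (v1 - v2)           # linear in % viability
            lc = math.log10(c1) + f * (math.log10(c2) - math.log10(c1))
            return 10 ** lc
    raise ValueError("50 % viability not bracketed by the data")

conc = [5, 10, 20, 40, 80, 160]   # nM, matching the tested range above
viab = [95, 85, 70, 55, 35, 20]   # % of control (made-up values)
print(f"IC50 ~ {ic50_interp(conc, viab):.1f} nM")
```

A four-parameter logistic fit would use all points rather than just the bracketing pair, but the interpolation illustrates the arithmetic.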
■ RESULTS
Design of the T7 Peptide Drug Conjugate. The synthesized peptide drug conjugate consists of five distinct components: (i) a transferrin receptor-targeting peptide (HAIYPRH); (ii) a spacer; (iii) a VA peptide sequence susceptible to cleavage by cathepsin B; (iv) a self-immolative linker (PABA); and (v) the cytotoxic agent (SN-38) (Figure 7). By design, T7-SN-38 is expected to selectively bind to transferrin receptors expressed on the surface of the BBB because of the T7 peptide's affinity for TfR. Once bound, it is transported via receptor-mediated transcytosis across the BBB into the brain parenchyma, where it encounters and binds to glioblastoma cells, which have been reported to overexpress the TfR about 100-fold relative to healthy cells. Binding to glioblastoma cells leads to cellular uptake and Cat B-mediated cleavage, followed by self-immolation by 1,6-elimination to release the free, active drug, resulting in specific cytotoxicity to cancer cells.
Molecular Docking Study. We report the predicted pose and binding affinity of the T7 peptide compared with those of T7-SN-38, from which we can infer the possibility of binding to TfR by both the free peptide and the conjugate. The molecular docking study demonstrates superior binding of the T7 peptide, with predicted Glide scores (i.e., binding affinity) ranging from −10.234 to −9.950 kcal/mol (Figure 8a). T7 makes multiple favorable H-bonds within the TfR binding site, as well as a salt-bridge interaction with Glu244. On the other hand, T7-SN-38 exhibited lower predicted binding affinity due to its larger size and increased torsional flexibility, with a best score of −5.543 kcal/mol, which is less than a 2-fold difference in score. However, the T7 sequences of both the peptide and the conjugate are oriented similarly in the target site of TfR, with the penultimate Arg-His of both oriented toward Glu533 and the remainder of the peptide elongated along the helical domain. In Figure 8b, SN-38 extends into the solvent-accessible region and does not interact substantially with the protein, thereby providing easy access for the protease to hydrolyze and cleave at the Cat B-cleavable peptide site.
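The "less than 2-fold" comparison of the best Glide scores can be checked directly (scores taken from the paragraph above; more negative means stronger predicted binding):

```python
# Ratio of the best Glide scores quoted above for T7 vs. T7-SN-38.
t7_best = -10.234        # kcal/mol, best T7 score
conjugate_best = -5.543  # kcal/mol, best T7-SN-38 score

fold = t7_best / conjugate_best
print(f"{fold:.2f}-fold difference in score")  # below 2-fold
```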
In Vitro Cleavage Studies of T7-SN-38 by Cathepsin B. The release of the parent drug (SN-38) from the T7-SN-38 conjugate is essential to eliciting cytotoxicity in glioblastoma cells. We hypothesize that T7-SN-38 would be delivered to the glioblastoma cells and cleaved by cathepsin B to release free SN-38 (Figure 9). To investigate the release of SN-38 by enzymatic cleavage, exogenous cathepsin B was added to solutions of the conjugate, and aliquots of the resulting solution were withdrawn at different times and analyzed by analytical HPLC over 24 h. Results obtained by HPLC show that approximately 80% of the T7-SN-38 conjugate was cleaved by Cat B to release free SN-38 (Figure 10), with the appearance of peaks consistent with the retention times of the parent drug (RT = 4.8 min) and the drug conjugate (RT = 5.2 min) (Supporting Information, Figure S18). Over time, the area of the peak at the RT of the conjugate decreased while that of the parent drug increased. Subsequently, the identification of the cleavage products was achieved by mass spectrometry. The results showed that SN-38 (m/z = 393.47) was released from the T7-SN-38 conjugate (m/z = 2454.40; m/2 = 1226.61) in the presence of Cat B (Supporting Information, Figure S19). The T7-SN-38 conjugate was almost fully degraded within 24 h. No free SN-38 was observed when the conjugate was incubated in enzyme-free buffer at pH 6.0 for 24 h (Figure 11). A significant decrease (p < 0.05) in cell viability and an increase in cell death were observed, regardless of whether the receptors were blocked or not blocked with N3-PEG4-T7.
T7 Competition Assay: Validation of Importance of the Transferrin Receptor for Observed Cytotoxicity.To validate whether the cytotoxicity of the synthesized conjugate relied on receptor-mediated endocytosis and enzymatic cleavage for drug release, the transferrin receptors on U87MG cells were blocked with excess targeting peptide, as described above.The cells with blocked transferrin receptors were treated with culture media containing 80 nM SN-38 and T7-SN-38 containing the equivalent of 80 nM SN-38, and culture media containing exogenous cathepsin B in addition to 80 nM SN-38 and T7-SN-38 containing the equivalent of 80 nM SN-38.As depicted in Figure 14, cells with blocked transferrin receptors treated with the conjugate did not exhibit an appreciable decrease in cell viability at 72 h.In contrast, SN-38 demonstrated potent cytotoxicity in free drug-treated cells.Interestingly, when targeted conjugate-treated cells with blocked transferrin receptors were exposed to media containing exogenous cathepsin B, a notable increase in cell death was observed.Collectively, these outcomes establish that the effectiveness of our developed drug conjugate indeed hinges on its binding with transferrin receptors, followed by internalization and exposure to intracellular cathepsin B and subsequent enzymatic cleavage of the conjugate to release the active drug, which then prompts a cytotoxic response.
■ DISCUSSION
Glioblastoma (GBM) is classified as an incurable and malignant form of cancer.55 As our understanding of GBM's molecular biology continues to expand, emerging areas such as tumor proliferation, angiogenesis, cell migration, and the ability to penetrate the blood−brain barrier are providing novel prospects for the advancement of GBM treatments.56 Transferrin receptors (TfR) play a vital role in glioblastoma by facilitating iron uptake, supporting cell growth, and potentially serving as targets for therapeutic interventions and diagnostic applications. The expression of TfR on the BBB and its overexpression on glioblastoma cells make them important molecular targets for advancing glioblastoma research and treatment strategies.57,58 Several researchers have reported that targeting transferrin receptors allows for the selective delivery of therapeutic agents to glioblastoma cells while sparing healthy brain tissue. In agreement with this observation, the T7 peptide has been researched for brain targeting, to improve biodistribution, and to improve the efficacy of drugs in the treatment of glioblastoma.
13,55 T7 peptide targeting of the TfR can potentially minimize the adverse effects associated with conventional glioblastoma chemotherapy treatments that affect both cancerous and normal cells; thus, the T7 peptide was selected for use as a targeting ligand for site-specific SN-38 delivery in this work. To evaluate the suitability of the peptide for targeting and as proof of the strategy, molecular docking analysis54 of the T7-SN-38 chemical structure reveals that the conjugate fits into the T7 binding site on the TfR; however, its larger size reduces its binding affinity despite adopting a similar orientation within the target site as the free targeting ligand. Moreover, the coupled SN-38 does not exhibit substantial interaction with the binding region and extends into a solvent-accessible region, thereby allowing protease accessibility to hydrolyze and cleave at the Cat B-cleavable peptide site. To confirm this, the conjugate was designed, and T7-SN-38 was successfully synthesized and characterized.
In this study, we have successfully showcased a strategy involving the conjugation of SN-38 to the T7 peptide through a valine-alanine Cat B recognition sequence. This approach holds the potential to significantly enhance the delivery of SN-38 to glioblastoma by leveraging the binding affinity of the T7 peptide to the TfR expressed on the BBB and overexpressed on glioma cells. Subsequently, the conjugate undergoes intracellular Cat B-mediated cleavage, enabling site-specific payload delivery to glioblastoma cells. Cat B is overexpressed in glioblastoma cells, making it a suitable target for selectively directing therapeutic interventions.32,59 Using this strategy, the 10-OH position of SN-38 was modified to establish a stable ether linkage between SN-38 and the T7 peptide, with the Cat B-labile linker serving as the mechanism of drug release. The hydrolytic stability of the conjugate in circulation is a crucial requirement and is pivotal to this drug delivery strategy.
By conjugation of SN-38 to the Cat B linker, the SN-38 payload can be released specifically within tumor cells. This approach holds the potential to minimize off-target effects, leading to better treatment outcomes and reduced side effects. In addition, reports have shown that in SN-38, the 10-OH position is available for enzymatic glucuronidation in vivo. Thus, coupling at the 10-OH position inhibits glucuronidation, thereby protecting the 10-OH and ensuring T7-SN-38 stability under physiological conditions until intracellular localization in tumor cells and release of free SN-38.60,61 In the context of an SN-38 ether-based drug delivery system, data from stability studies at pH 7.4 show that the structural integrity of the conjugate is maintained within the period evaluated. This is similar to literature reports.47,62 These data suggest that the conjugate could be stable to degradation while in circulation. This stability is essential to prevent premature drug release, which could lead to unintended toxic effects on healthy cells. Moreover, as confirmed in Figure 12, the conjugation of SN-38 to the hydroxyl group delays activity until cleavage by Cat B, followed by 1,6-elimination to release the free drug. Data from in vitro cleavage studies reveal that incubation of the conjugate with Cat B led to a significant decrease in the peak area of the conjugate within 1 h, with complete cleavage observed by 24 h when analyzed by HPLC. This observation underscores the pivotal role of the cathepsin B-sensitive linker as the mechanism of SN-38 release. Furthermore, in vitro cleavage studies not only validated the enzymatic cleavage of T7-SN-38 but also demonstrated the lack of cleavage and stability of the conjugate in the absence of cathepsin B in control experiments at pH 6. Our strategy has great translational potential and may present an innovative therapeutic approach.
Biological evaluation of the synthesized targeted conjugate was carried out in vitro using a cell-based model of glioblastoma. U87MG is a malignant glioma cell line that endogenously overexpresses the TfR.63 The cytotoxicity of the SN-38-containing conjugate, free SN-38, and other controls in U87MG cells was studied using the XTT assay. The data show the concentration- and time-dependent effects of the drug and conjugate on the viability of U87MG cells. As the concentrations of both SN-38 and the conjugate increase, viability decreases, consistent with the potent cytotoxic nature of SN-38. In addition, as the duration of treatment increases from 24 to 72 h, cell viability decreases. These data are consistent with several published reports on the time-dependent cytotoxicity of SN-38 in anticancer efficacy studies in different cell lines.36,64,65 To assess the specificity of the drug conjugate for the transferrin receptor, to relate receptor-mediated uptake to cytotoxic efficacy, and to determine the differences (if any) in the uptake mechanisms of free SN-38 and the targeted SN-38 conjugate, transferrin receptor blocking studies were carried out. Competition for binding to the transferrin receptor was ensured by pretreatment of the cells with a 10-fold excess of the targeting ligand for 1 h before treatment with the targeted SN-38 conjugate and pure SN-38 as the control, followed by assessment of cytotoxicity. Our data reveal that the cytotoxicity of the T7-targeted conjugate was significantly inhibited when compared to the cytotoxicity of the free drug, which was largely uninhibited (Figure 11). The difference in the observed cytotoxicity could be explained by the mechanism of internalization of the two agents. It is expected that free SN-38 enters the cell by passive diffusion to elicit its effects; thus, blockade of the transferrin receptor is not expected to inhibit the cytotoxicity of free SN-38. However, with the transferrin receptor-targeted SN-38
conjugate, competition between excess free T7 targeting ligand and the T7-targeted SN-38 conjugate may explain the inhibition of uptake by receptor-mediated endocytosis with the attendant lack of toxicity since uptake via the transferrin receptor must precede intracellular cleavage and release of the free cytotoxic SN-38.
Another series of experiments was carried out to assess the specificity and importance of enzymatic conjugate cleavage to cytotoxicity, confirm the role of transferrin receptor−ligand interactions in uptake and, thereby, therapeutic efficacy, and validate the importance of the transferrin receptor for targeted therapy.In these studies, U87MG cells pretreated with the free T7 targeting ligand for 1 h were treated with T7-SN-38 containing 80 nM equivalent of SN-38 and 80 nM of SN-38, to which exogenous Cat B was added.Control experiments were carried out without exogenous Cat B (Figure 12).Despite the blockade of the TfR as a result of preincubation with the free targeting ligand, cells treated with T7-SN-38 in the presence of exogenous cathepsin B exhibited considerable cytotoxicity after 72 h, similar to the cytotoxic effect observed with free drug (Figure 12).These results validate the desirability and importance of the critical design attributes built into the design and construction of the targeted SN-38 delivery system.It validates the importance of intracellular localization for cathepsin B cleavage and efficient drug release, which is assured by T7-mediated binding to overexpressed transferrin receptors on the cell surface.The results validate the proposed approach for T7 peptide-targeted therapy in the treatment of glioblastoma.
■ CONCLUSIONS
A transferrin receptor-targeted SN-38 conjugate (T7-SN-38) was successfully synthesized to deliver SN-38 across the blood−brain barrier and selectively into glioblastoma cells using transferrin receptors expressed on the BBB surface and overexpressed on the surface of glioblastoma cells.This innovative strategy shows demonstrable cellular uptake in U87MG cells, as evaluated in several biological experiments.Furthermore, the protease activity of intracellular Cat B cleaves the peptide drug conjugate, releasing the active drug to elicit toxicity in cancer cells.We postulate that after the initial interaction between T7-SN-38 and the TfR located on the cell surface, the receptor−ligand complex undergoes internalization via the process of receptor-mediated endocytosis.This internalization facilitates the enzymatic conjugate cleavage and release of the free drug within glioblastoma cells mediated by intracellular cathepsin B, leading to site-specific cytotoxicity.Our results highlight the promising translational potential of the delivery strategy as a site-specific therapeutic approach for individuals with brain cancer.Our findings strongly suggest that T7-SN-38 holds considerable promise as a chemotherapeutic strategy for treating glioblastoma and warrants further in-depth investigation.Work is ongoing to evaluate the reported strategy for the T7-targeted combination delivery of a PARP inhibitor and SN-38 to the brain for the treatment of glioblastoma.These conjugates were evaluated in vivo in subsequent studies.
Additional experimental details, such as synthetic schemes of peptides via SPPS, ESI-MS spectra of all synthesized compounds, 1H NMR spectra for select compounds, analytical HPLC spectra for synthesized peptides and conjugates, and FT-IR spectra of select compounds (PDF)
Figure 1 .
Figure 1. General structure of a targeted peptide drug conjugate. Created with BioRender.com.
Figure 2 .
Figure 2. Receptor-mediated transcytosis for drug delivery across the BBB. (1) The transferrin receptor-targeting peptide (T7) binds to transferrin receptors on the BBB. (2) T7-SN-38 forms a receptor−ligand complex on the cell membrane, and the cell membrane begins to invaginate, creating a small region called the clathrin-coated pit. (3) The clathrin-coated pit eventually pinches off from the cell membrane, forming a vesicle known as a clathrin-coated vesicle or endosome, which contains the T7-SN-38 conjugate. (4) Release of T7-SN-38 at the tumor site followed by binding to transferrin receptors overexpressed on glioblastoma cells. (5) Cat B cleavage of the VA linker leads to the liberation of SN-38 from the complex. This activates the cytotoxic drug in the tumor microenvironment. Created with BioRender.com.
Figure 8 .
Figure 8. Results of Glide docking: predicted protein−peptide poses. (a) T7 peptide in gray, bound to TfR, the cyan protein structure, with a Glide score of −9.950 kcal/mol. (b) T7-SN-38 peptide conjugate in gray, bound to TfR, the cyan protein structure, with a Glide score of −5.543 kcal/mol.
Figure 10 .
Figure 10. Cleavage studies data showing the area plot of the conjugate solution in the presence of cathepsin B (T7-SN-38; red line), the appearance of the parent drug (SN-38; black line), the control conjugate solution in the absence of exogenous cathepsin B (T7-SN-38; blue line), and the appearance of free SN-38 in the absence of cathepsin B (gray line) (n = 3).
Figure 13 .
Figure 13. Percent cell viability data showing the effect of SN-38 and T7-SN-38 on U87MG cell lines after (a) 24 h of exposure to treatment in TfR-blocked cells and (b) 72 h of exposure to treatment in TfR-blocked cells. Media, 0.05% DMSO solution in medium, and cells blocked with a 10-fold molar excess of the T7 peptide served as controls. The results are represented as the mean ± SD (n = 4).
Figure 14 .
Figure 14. Percent cell viability data in U87MG cells preincubated with excess T7 targeting ligand and treated with culture media containing 80 nM SN-38 and T7-SN-38 containing 80 nM SN-38, or culture media containing exogenous cathepsin B in addition to 80 nM SN-38 and T7-SN-38 containing 80 nM SN-38, after 72 h of exposure to treatment. Control wells are media only, 0.05% DMSO in medium, and a 10-fold molar excess of the T7 peptide. The results are represented as the mean ± SD (n = 4).
Relationship between dietary patterns and physical performance in the very old population: a cross-sectional study from the Kawasaki Aging and Wellbeing Project
Objectives: As the world's population is ageing, improving the physical performance (PP) of the older population is becoming important. Although diets are fundamental to maintaining and improving PP, few studies have addressed the role of these factors in adults aged ≥ 85 years, and none have been conducted in Asia. This study aimed to determine the dietary patterns (DP) of this population and examine their relationship with PP. Design: This cross-sectional study (Kawasaki Aging and Wellbeing Project) estimated food consumption using a brief-type self-administered diet history questionnaire. Food items were aggregated into thirty-three groups and energy-adjusted, after excluding participants with possible over- or underestimation of intake. Principal component analysis was used to identify DP, and outcomes included hand grip strength (HGS), the timed up-and-go test, and usual walking speed. Setting: This study was conducted at several hospitals in Kawasaki city. Participants: In total, 1026 community-dwelling older adults (85–89 years) were enrolled. Results: Data from 1000 participants (median age: 86·9 years, men: 49·9 %) were included in the analysis. Three major DP (DP1: various foods; DP2: red meats and coffee; DP3: bread and processed meats) were identified. Multiple regression analysis showed that the trend of DP2 was negatively associated with HGS (B = –0·35; 95 % CI: –0·64, –0·06). Conclusions: This study suggests a negative association between HGS and a DP characterised by red meats and coffee in older adults aged ≥ 85 years in Japan.
identifying DP: a priori DP, which are based on established hypotheses, and a posteriori DP, which are data-driven (14). An example of an a priori DP is the Mediterranean diet score, which has been reported to be associated with PP. However, it is unclear whether this score reflects daily diets in Japan because of the characteristics of the Mediterranean diet and regional differences (9). Other studies have assessed adherence to the Japanese Dietary Balance Guide and examined its association with mortality; however, these studies were sensitive to the accuracy of the dietary survey (16). A posteriori DP are derived using techniques such as principal component analysis, factor analysis or cluster analysis. However, the naming of the DP is left to the investigator in each study. In Japan, 285 unique DP have been identified by factor analysis or principal component analysis. After examining their similarity, six major DP were finally identified, including a 'healthy' DP and a 'western' DP. A healthy DP is defined differently in different countries but is characterised by a high intake of plant foods (such as legumes and vegetables) and of vitamins and minerals at the nutrient level (17). A trend towards such a DP is positively associated with PP tests (18).
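As a schematic illustration of how an a posteriori DP emerges from intake data, the sketch below extracts the first principal component of standardized food-group intakes by power iteration on their correlation matrix. This is illustrative only (synthetic data, hypothetical food groups), not the study's analysis:

```python
import random

# Hypothetical sketch: first principal component ("dietary pattern 1") of
# standardized food-group intakes, via power iteration on the correlation
# matrix. Food groups and intakes below are synthetic.

def standardize(col):
    m = sum(col) / len(col)
    sd = (sum((x - m) ** 2 for x in col) / (len(col) - 1)) ** 0.5
    return [(x - m) / sd for x in col]

def first_component(cols, iters=200):
    z = [standardize(c) for c in cols]
    p, n = len(z), len(z[0])
    # p x p correlation matrix of the standardized columns
    R = [[sum(z[i][k] * z[j][k] for k in range(n)) / (n - 1)
          for j in range(p)] for i in range(p)]
    v = [1.0] * p
    for _ in range(iters):                      # power iteration
        w = [sum(R[i][j] * v[j] for j in range(p)) for i in range(p)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v                                    # loadings (unit vector)

random.seed(0)
vegetables = [random.gauss(0, 1) for _ in range(200)]
legumes = [x + random.gauss(0, 0.5) for x in vegetables]   # correlated pair
coffee = [random.gauss(0, 1) for _ in range(200)]          # independent
loadings = first_component([vegetables, legumes, coffee])
print([round(abs(x), 2) for x in loadings])
```

In this toy example, the correlated vegetable/legume pair dominates the first component, mirroring how a 'healthy' plant-food pattern would load in a real analysis; each participant's pattern score is then the loading-weighted sum of their standardized intakes.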
With the expected and current increase in the number of adults aged ≥ 85 years, an important point to consider is that while previous studies have examined various DP, most have focused on adults and younger older adults (about 65 years), and very few studies have focused on adults aged ≥ 85 years.
The association between DP and PP in this age group was examined in the UK. The most frequent DP was characterised by red meats, which longitudinally suggested a negative association with HGS (10) . Because of age and generational differences in diet, it is difficult to apply evidence from the younger older population to populations aged ≥ 85 years, and due to possible regional differences, there is an immediate need in public health nutrition to identify DP specific to Asian regions (e.g. Japan and China) and their relationship to PP (19,20) . Thus, there is little evidence to indicate what DP that age group may have; in other words, little is known about their eating habits at that age. Therefore, this study used data from a population aged ≥ 85 years living in Japan to identify the major DP in the older population in this age group and to examine the association between DP and PP.
Study population
This cross-sectional study used data from the Kawasaki Aging and Wellbeing Project (KAWP) conducted in Kawasaki city (Kanagawa Prefecture, Japan) (21) . The inclusion criteria of KAWP were as follows: (1) resident of Kawasaki city (population of 1·5 million), located in the Greater Tokyo Area; (2) age between 85 and 89 years; (3) no need for long-term care or up to support level 1 (no limitations in performing basic activities of daily living); and (4) ability to independently visit the study site (several hospitals in Kawasaki city). Using the basic registration of residents and the long-term care insurance database, 12 906 participants were screened as potential participants. An invitation letter for this study was mailed to 9978 individuals, and 1464 eligible residents expressed their willingness to participate. Between March 2017 and December 2018, 1026 community-dwelling older adults were enrolled in KAWP (Fig. 1). A comprehensive baseline assessment was conducted, which included the assessment of physical, mental, cognitive performance and social participation. The study excluded individuals with deficits in the dietary survey (n 11) and those who were deemed to have large reporting errors in the dietary survey (estimated energy intake > 16 736 kJ (4000 kcal) or < 2510·4 kJ (600 kcal), n 15).
Dietary survey
The dietary survey was conducted using the brief-type self-administered diet history questionnaire (BDHQ), which has been validated against 3-d half-weighted food records in adults aged ≥ 80 years (22). The BDHQ was completed by the participant but reviewed and modified as needed by trained researchers. The questionnaire estimates energy and nutrient intake based on the type, amount and frequency of foods consumed in a typical meal during the past month. The BDHQ was sent to the participants' homes with other questionnaires 2 to 3 weeks before the survey and completed by the participants themselves; family members were allowed to assist in special cases, such as when the participant could not hold a pen, read or understand the items. The completed questionnaires were brought in on the day of the survey, and a trained investigator checked them with the participants on site and made corrections as necessary.
Test of physical performance
The PP assessment included three tests: HGS, the timed up-and-go (TUG) test and walking speed; the latter two were performed at the usual speed. HGS was measured with a digital dynamometer (Grip D, T.K.K. 5401, Takei Scientific Instruments) in the standing position, with the elbow extended and the wrist in the intermediate position. Only the dominant hand was measured, twice, and the maximum value was used. The TUG test measures the time taken to rise from sitting, walk to a marker 3 m in front of the subject, turn around, walk back and sit down again. Walking speed was the average of two 5-m walking tests at a comfortable speed, with 2-m acceleration and deceleration sections provided before and after the 5-m gait path.
Assessment of covariates
The study assessed demographic and socio-economic variables such as sex, age, education, economic status and employment status, as well as lifestyle habits such as smoking habits and physical activity. Physical activity was assessed using the modified Zutphen Physical Activity Questionnaire. This questionnaire was validated with the same age population (23) . Participants were asked if they had conducted any of the activities (walking and exercise/ sports) in the previous week. The number of metabolic equivalents (MET)×hours per week was calculated by multiplying the activity intensity, duration (hours/week) and frequency (number of times per week) of each physical activity. Medical data included medical history (heart disease, kidney disease, hypertension, diabetes, dyslipidaemia and cancer) and long-term care needs. Cognitive performance was assessed by a clinical psychologist in a private room using the Mini-Mental State Examination (MMSE).
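The MET×hours computation described above is a simple product of intensity, duration and frequency. The following sketch uses made-up activity values (walking at 3.0 MET for half an hour, seven times a week), not data from the study:

```python
# MET x hours per week = activity intensity (MET) x duration (h/session) x frequency (sessions/week)
def met_hours_per_week(met, hours_per_session, sessions_per_week):
    return met * hours_per_session * sessions_per_week

# Hypothetical example: walking at 3.0 MET, 0.5 h per session, 7 sessions per week
walking = met_hours_per_week(3.0, 0.5, 7)  # 10.5 MET-h per week
```

Summing this quantity over all reported activities gives the weekly physical activity level used as a covariate.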
Statistical analysis
Since it is beneficial to identify the characteristics of the population, this study used principal component analysis to identify DP. Before identifying DP, the estimated fifty-eight foods were categorised into thirty-three foods and food groups based on previous studies of 70- to 90-year-old Japanese (24). They were adjusted for energy using the density method, and principal component analysis was performed. Based on the eigenvalues and scree plots, we examined up to the third principal component for possible interpretation as DP and calculated the principal component scores. The principal component score is a continuous variable ranging from -1 to 1; a higher score indicates higher adherence to the DP. In this study, the median principal component score was used to divide each DP into two groups (low-trend group v. high-trend group) to indicate the trend towards the DP. A linear regression model was used to examine the relationship between DP and PP. In addition to the principal component scores (quantitative variables) for each DP, Model 1 was adjusted for sex, age, BMI and MMSE. Model 2 was adjusted for activities of daily living, years of education, economic status, smoking habits, physical activity level, living conditions, medical history (CVD, renal disease, hypertension, diabetes, dyslipidaemia and cancer) and long-term care needs in addition to the variables in Model 1. Statistical analysis was performed using SPSS version 26.0 (IBM Japan, Tokyo, Japan), and statistical significance was set at P < 0·05.
Fig. 1 The inclusion criteria of KAWP (subjects analysed, n 1000; missing in BDHQ, n 11). There are seven categories of long-term care benefits and support in Japan: no certification (no need for long-term care), support levels 1 and 2 for preventive long-term care benefits, and care levels 1 to 5 for long-term care benefits. The higher the level of care, the more advanced the functional decline. KAWP, Kawasaki Aging and Wellbeing Project; BDHQ, brief-type self-administered diet history questionnaire
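The pattern-derivation pipeline described above (energy-adjusted food-group variables, principal component analysis, component scores, median split into low-/high-trend groups) can be sketched as follows. The data here are synthetic placeholders with 5 food groups instead of the study's 33:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic energy-adjusted intakes: 100 subjects x 5 food groups (illustrative, not study data).
# The density-method energy adjustment would divide each intake by total energy first.
X = rng.normal(size=(100, 5))

# Standardise the variables, then run PCA via eigendecomposition of the correlation matrix
Z = (X - X.mean(axis=0)) / X.std(axis=0)
R = np.corrcoef(Z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]          # descending eigenvalues (scree-plot order)
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# First three principal component scores, then a median split per pattern
scores = Z @ eigvecs[:, :3]
high_trend = scores > np.median(scores, axis=0)   # True = high-trend group
```

The proportion of variance explained by each pattern is the corresponding eigenvalue divided by the number of variables, which is how figures such as the 10·7, 6·5 and 5·7 % reported below are obtained.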
Results
Three DP were identified (Tables 1 and 2). The first DP was characterised by the intake of various plant foods. In addition to vegetables (other green and dark yellow vegetables), other plant foods such as mushrooms, seaweed and fruits showed positive loading. Also, foods such as fish and seafood showed positive loadings, and this DP was classified as 'various foods' (DP1). The second DP was characterised by consuming protein-rich foods such as meats, eggs and soya products. Of these, red meats had the highest loading. In addition, coffee had a high loading, and this DP was classified as 'red meats and coffee' (DP2). The third DP was characterised by a low loading of meshi (cooked rice) and miso soup (traditional Japanese soup) and a high loading of bread and processed meats and was classified as 'bread and processed meats' (DP3). Three DP accounted for 10·7, 6·5 and 5·7 % of the variance, respectively (22·9 %). The characteristics of all participants and details of socio-economic variables, lifestyle, and medical information for each DP are shown in Table 2.
The total number of participants was 1000 (49·9 % men), and the median age was 86·9 years. For DP1, the high-trend group (i.e. the group with principal component scores higher than the median) had more women and better MMSE scores, and differences in economic status were observed. In terms of lifestyle, differences were found in smoking habits and physical activity between the two groups (high-trend group v. low-trend group). DP2 had significantly fewer people working in the high-trend group; DP3 had more women and longer education in the high-trend group. The nutritional intake of each DP is shown in Table 3. For macronutrients in DP1, the high-trend group had higher protein and fat intake and lower carbohydrate intake. DP2 was similar to DP1, as protein and fat intake were significantly higher in the high-trend group. Total protein and animal protein intake were high in the high-trend group, while plant protein did not differ significantly between the two groups; in other words, the proportion of animal protein intake was higher in the high-trend group. In DP3, total protein, animal protein and plant protein intake were significantly lower in the high-trend group. Intakes of total fat, SFA, MUFA and PUFA were significantly higher in the high-trend group, and there was no significant difference in carbohydrate intake between the two groups. Next, micronutrients with antioxidant properties, such as vitamins A, E and C, were all higher in the high-trend group in DP1 and DP3. In contrast, vitamin C did not differ significantly between the two groups in DP2. Most of the other nutrients showed higher values in the high-trend group for DP1 and DP2, but many nutrients showed less variability in DP2 than in DP1. DP3 showed different variability for different nutrients. The association between each DP and PP is shown in Table 4. DP1 ('various foods') and DP3 ('bread and processed meats') were not significantly associated with any outcomes.
DP2 ('red meats and coffee') had a significant negative association with HGS even after adjusting for confounders (B: -0·35, 95 % CI -0·63, -0·06). In a sensitivity analysis, participants with possible cognitive decline (MMSE score ≤ 21) were excluded, DP were re-identified and the relationship with PP was re-examined, but the results did not change. In addition, when the outcome was changed to HGS per body weight (kg/kg), the significance remained unchanged.
Discussion
This study identified three DP and examined their association with PP (HGS, TUG test and walking speed) in adults aged ≥ 85 years. After adjusting for various confounders, a DP characterised by red meats and coffee consumption (DP2) was negatively associated with HGS. To the best of our knowledge, this is the first study in Asia to examine the relationship between DP and PP in this age group. The first DP, 'various foods', which had the highest contribution in this study, was characterised by the intake of plant foods. This was similar to the 'healthy' DP in previous studies, but there was no significant association with PP (17,18). A cohort study of adults aged ≥ 85 years in the UK found that a DP characterised by red meat and potatoes ('high red meat') was negatively associated with HGS, similar to the results of this study (10). Although a dramatic westernisation of dietary habits has been pointed out in Japan, the most frequent DP in the 85 years and older group are likely to be similar to the DP of residents in the same country (17,25). However, the similarity of the second DP to the DP of the same age group in the UK indicates that changes in dietary habits may occur in this age group (20). It is scientifically interesting that the association with HGS is consistent with evidence from the same age group, although careful observation of the dietary habits of this age group over a longer period is necessary.
HGS is often used as a measure of muscle strength, and the relationship between DP and HGS is considered complex but may be explained by protein intake and intake of nutrients with anti-inflammatory and antioxidant functions (10) . Protein is an essential muscle-building nutrient. It has been reported that older adults require more protein for muscle protein synthesis than younger adults; thus, it is clear that adequate protein intake is important for older adults (26,27) . The total amount of protein can be broadly divided into plant and animal sources. In terms of lean body mass, animal protein has been suggested to be more beneficial in younger age groups (< 50 years) but has little or no advantage in the middle-aged and older age groups (> 50 years) (28) . For the acid-base balance of the diet, it has been suggested that the consumption of high-protein-rich foods can lead to acidosis. Acidosis has been shown to affect protein metabolism, leading to decreased muscle mass by reducing protein synthesis and accelerating muscle protein degradation (29,30) . A study in Japan indicated that a higher dietary inflammation index was associated with an increased risk of sarcopenia and with a lower intake of nutrients that protect the muscle from inflammation, but not with over-consumption of nutrients that induce an inflammatory response (31) . While the effects of oxidative stress and chronic inflammation of muscle proteins have been discussed, it has also been suggested that micronutrients, with their antioxidant properties, may play a protective role. Vitamin C is particularly known to improve PP (32) . It has also been suggested that several micronutrients (such as Fe, Mg and Ca) are associated with muscle protein synthesis, frailty and sarcopenia in the older population (33,34) . The critical difference between DP1 and DP2 was the percentage and source of protein intake; DP2 consumed more animal protein than DP1. 
In addition, for DP1, the main source of protein was seafood, such as fish, while for DP2 it was red meat and processed meat. Despite the higher proportion of animal protein intake in DP2, vitamin C intake did not differ significantly between the two DP2 groups, and many other micronutrients were clearly less variable than in DP1. This situation may promote chronic inflammation and oxidation of muscle, which may lead to a low-HGS phenotype through a decrease in muscle protein, even if protein intake is sufficient for muscle synthesis. The TUG test and gait speed were not significantly associated with DP. They are measures of comprehensive leg muscle strength; in contrast to HGS, they require not a single but multiple muscle activities as well as spatial awareness, so complex factors other than nutrition are involved. In addition to kinematic factors such as agility and balance, neurosensory functions such as intelligence and sensation may also be influential (35,36). This study has several strengths. First, most studies have focused on a single food or nutrient, whereas the present study evaluated DP, which are more closely related to real-life dietary habits. Second, this study considered many important confounders, including demographic variables, socio-economic status and lifestyle factors. Third, this is the first study in Asia to examine the relationship between DP and PP in a large population of older adults aged ≥ 85 years, which is very rare worldwide, and age-related confounding may be smaller than in other studies because the participants were only aged 85 to 89 years. However, this study had several limitations. First, a cross-sectional design was used to examine the relationship between DP and HGS, so a causal relationship cannot be inferred; the results may instead indicate a trend towards DP characterised by animal foods, such as red meat, in populations with lower HGS.
Second, although the BDHQ used in the dietary survey has been validated in older adults of this age group, it may not accurately capture the dietary habits of this age group because it is not a specialised survey. Finally, since all participants were residents of Kawasaki city, these results may not be representative of other regions. In addition, although the proportion of women should be higher given life expectancy in Japan, the near-equal sex ratio (about 50 % each) leaves open the possibility that the sample is not representative of this age group in the general population. Furthermore, since the sample was limited to older adults requiring at most long-term care support level 1, it is unclear whether it represents this age group. Therefore, prospective studies that consider the region, study design and dietary assessment methods are needed to clarify the association shown in the present study.
Conclusions
A negative relationship was observed between HGS and a DP characterised by red meat and coffee in a population aged ≥ 85 years in Japan. The association between this DP and HGS, which showed a negative trend in the present study, needs to be examined longitudinally.
The calculation of asynchronous motors characteristics based on the T-shaped equivalent circuit according to catalogued data
The paper focuses on the issue of calculating the operational characteristics of low-power asynchronous motors used in the agro-industrial complex. The substantiation of the relevance related to reducing the error while calculating the operating characteristics for the estimation and selection of a rational electric drive is given. The literature analysis presented considers the various methods of calculating the main characteristics of asynchronous motors. The equations have been compiled and solved; they help to calculate the operational characteristics without direct reference to the resistance values in the T-shaped equivalent circuit of the motor. The results of calculating the characteristics of one asynchronous motor and their comparison with the known characteristics given in the theory of electrical machines are presented. A method of engineering calculation for educational and work tasks to improve the efficiency of using electric drives is proposed. The field of further research in terms of clarifying the empirical coefficients and revising the complexity of tasks related to determining the resistance of a T-shaped equivalent circuit considering the capabilities of modern computing environments has been determined.
Introduction
An electric drive based on asynchronous motors is a widespread mechanism for converting electrical energy into mechanical useful work within a wide variety of technological processes for various units.
Asynchronous electric motors of various capacities are used in engineering practice. The calculation of motor characteristics is usually performed on the basis of G- and M-shaped equivalent circuits, which is quite sufficient for medium- and high-power motors but introduces a significant error for the low-power motors widely used in the agro-industrial complex.
Modern computing environments make it possible to use more accurate methods with the minimal time costs; for this reason, it is advisable to return to the analysis of the T-shaped asynchronous motor equivalent circuit and the application of its consequences to calculate the characteristics of the electric drive.
The following main characteristics of motors are distinguished in electric drives: 1) the mechanical characteristic, showing the relationship between the developed force (torque) and the speed of the drive unit (rotor); 2) the electromechanical characteristic, showing the current in the main motor winding (the stator winding for asynchronous motors) as a function of the speed of the drive unit (rotor). Besides, for solving some system engineering tasks it is necessary to know the characteristics that represent the efficiency of using the motor: 1) the power factor characteristic, which makes it possible to assess how rationally the conductor cross-sectional area in the supply wires (cables) is loaded; 2) the efficiency characteristic, which determines the electric motor efficiency and is used for the analysis of thermal processes.
Currently, scientists are following two different lines of research in this sphere: 1) determining the parameters of G- and M-shaped equivalent circuits empirically from design data, from catalogue data, or by combining these methods [1–8]; 2) selecting characteristics that are close to the real ones and suitable for the purposes of engineering calculation [9,10].
Diverse approaches to studying asynchronous motors, their characteristics and the ways to improve their reliability are investigated by a variety of criteria. Some authors consider the problem from the point of view of minimizing engine losses [11], others investigate the torque of the load and carry out high-performance speed control, for which they use an accurate linearization method with nonlinear feedback based on the theory of differential geometry [12]. Recently, new strategies for controlling field attenuation with variable reference voltage for asynchronous motors have been proposed [13], [14]. Scientific studies devoted to the reliability of asynchronous motors and the systems in which they are used are relevant as they help to solve the problem of reducing costs and excessive time spent on maintenance and troubleshooting [15,16].
It should be noted that the joint task of developing a holistic methodology for calculating the characteristics of asynchronous motors, taking into account the resistances of the equivalent circuit, has not been properly studied. In engineering practice, well-known ratios from the theory of electric machines are used, as well as formulas with empirical coefficients. For most of these empirical coefficients, the ratios were determined by examining early motor series, while modern industry continues to improve the designs of asynchronous motors. In general, this inevitably leads to the empirical coefficients differing more and more from the real values, which means that the calculation error also grows. This trend is especially strong for low-power asynchronous motors (less than 2.2 kW), which are widely used in the agro-industrial complex.
Based on the analysis performed, it can be concluded that it is necessary to refine the existing engineering calculation methods to reduce the impact of accumulated errors.
Methods and materials
A T-shaped asynchronous motor equivalent circuit based on standard assumptions within the framework of electric machines theory is used to improve the accuracy of determining the characteristics parameters. We assume that there is such a T-shaped circuit with six linear resistances; and for this scheme it is possible to calculate the characteristics corresponding to the desired asynchronous motor. The traditional transition to the G or M-shaped scheme is not performed. Determination of the total circuit resistance is carried out in two stages: 1) the determination of the equivalent resistance from parallel connection of magnetizing and rotor circuits; 2) the summation of the stator circuit resistance and the equivalent resistance of the magnetizing and rotor circuits.
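The two-stage computation described above maps directly onto complex arithmetic. A minimal sketch with illustrative placeholder resistances (not taken from any real motor):

```python
# T-shaped equivalent circuit: the stator branch (r1 + jx1) in series with the
# parallel combination of the magnetising branch (rm + jxm) and the rotor
# branch (r2/s + jx2) referred to the stator. All values are illustrative.
def total_impedance(s, r1=2.0, x1=3.0, rm=5.0, xm=80.0, r2=1.5, x2=4.0):
    z_m = complex(rm, xm)              # magnetising branch
    z_r = complex(r2 / s, x2)          # rotor branch, slip-dependent
    z_eq = z_m * z_r / (z_m + z_r)     # stage 1: parallel combination
    return complex(r1, x1) + z_eq      # stage 2: series with the stator branch

z = total_impedance(0.05)
magnitude = abs(z)                     # total resistance magnitude, ohms
```

Sweeping the slip s over (0, 1] with this function gives the total impedance needed for the current, power factor and torque characteristics.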
At the first stage, omitting the equivalent transformations, the equivalent resistance is obtained in an integrated form (expression (1)), where Ar1, Ar2, Ax2, Bz1 and Bz2 are dimensionless coefficients determined by the constant resistances of the circuits (expression (2)). It should be noted that the slip coefficients of the first degree are inversely proportional to the first power of the active resistance of the rotor circuit; the same holds for the second-degree slip coefficients and the second power.
At the second stage, again omitting the equivalent transformations, the total resistance in integrated form is determined by formula (3), where r0 = r1 + rm is the active resistance of idling, Ω; x0 = x1 + xm is the reactance of idling, Ω; and Cr1, Cr2, Cx1, Cx2 are dimensionless coefficients determined by the constant resistances of the circuits.
The total resistance can then be written through z0 = √(r0² + x0²), the total resistance of idling, Ω, with Dz1…Dz4 as dimensionless coefficients (expression (5)). Any polynomial of the fourth degree can be expressed through the square of a polynomial of the second degree plus a remainder; expression (5) can thus be transformed into expression (7), where Fz1, Fz2, Ez2, Ez3 are dimensionless coefficients. Assuming that the square-root term of expression (7) differs only slightly from one in the region of small slips (on the working part of the characteristics), the total-resistance function can be written in a simplified form (9). From expressions (3) and (9) the power factor function of the circuit can be determined (expression (10)), where cos φ0 is the idle power factor. Assuming that the circuit is supplied with a sinusoidal voltage of constant frequency, expression (9) can be used to find the electromechanical characteristic of the asynchronous motor (expression (11)), where U is the net phase voltage, V, and I0 = U/z0 is the engine idling current, A. From expressions (10) and (11) the following conclusions can be drawn: 1) the no-load current depends directly on the voltage value and does not depend on the active resistance of the rotor circuit; 2) the no-load power factor depends neither on the net voltage nor on the active resistance of the rotor circuit; 3) in asynchronous motors with a phase rotor, when additional active resistances are introduced into the rotor circuit, the dimensionless coefficients change in inverse proportion to the appropriate power of the total additional resistance rd (expression (12)); 4) using expression (11), the slip coefficients can be found without determining the circuit resistances, from the no-load test data and four points with identified currents, power factors and speeds.
Results and discussion
The verification of the obtained expressions was carried out by numerical calculation for a randomly selected asynchronous motor of conventional design, comparing the calculated characteristics with those known from the theory of electrical machines. The AIR80B2 motor was chosen for the study; the catalogue data were taken from JSC "Mogilev Plant "Elektrodvigatel": 2.2 kW; 2810 rpm; efficiency 79.7 %; cos φn = 0.87; mp = 2.1; mk = 2.6; mmin = 1.8; ip = 6.4. From these we determine the nominal slip Sn = 6.(3) %, the rated phase current In = 4.807 A, the rated motor torque Mn = 7.476 N·m and the synchronous speed ω0 = 314.159 rad/s. Additional data established experimentally are required to calculate characteristics (10) and (11). When an experiment cannot be conducted, it is recommended to take values close to the real ones at specific points of the characteristics determined through empirical ratios: 1) the quantities measured in the idle (no-load) mode; 2) the current and slip at the critical speed (at the maximum torque); 3) the starting current at a slip equal to one (zero speed). From these data, a system of four linear equations with four unknown coefficients can be solved using expression (11); this is not difficult when computing environments are used. The coefficients Fz1 and Fz2 are then transferred to equation (10) to find the remaining unknown coefficients. The mechanical characteristic can be calculated using the Kloss formula (without the simplifications used for high-power motors), where ε is a coefficient determined from the ratio of the equivalent-circuit resistances.
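The catalogue-derived quantities quoted above follow from standard relations. A sketch, assuming a 380 V line supply (the supply voltage is not stated in this excerpt):

```python
import math

P_n = 2200.0     # rated power, W
n_n = 2810.0     # rated speed, rpm
n_0 = 3000.0     # synchronous speed for a 2-pole, 50 Hz motor, rpm
eta = 0.797      # catalogue efficiency
cos_phi = 0.87   # catalogue power factor
U_line = 380.0   # line voltage, V (an assumption)

S_n = (n_0 - n_n) / n_0                              # nominal slip: about 0.0633, i.e. 6.(3) %
I_n = P_n / (math.sqrt(3) * U_line * eta * cos_phi)  # rated current: about 4.82 A
M_n = P_n / (2 * math.pi * n_n / 60)                 # rated torque: about 7.476 N*m
omega_0 = 2 * math.pi * n_0 / 60                     # synchronous speed: about 314.159 rad/s
```

The small difference between the computed 4.82 A and the paper's 4.807 A is consistent with rounding of the catalogue efficiency and power factor.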
In the practice of electric drive application, this coefficient is determined from the condition that the nominal torque is reached at the nominal slip. Based on the known characteristics, it is possible to determine the motor efficiency function at an arbitrary slip (expression (23)). It should be noted that expression (22) is visually simpler than (23), and it is expedient to use it when performing numerical calculations.
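The Kloss-formula mechanical characteristic can be sketched numerically. The critical-slip estimate s_k = s_n(m_k + √(m_k² − 1)) and the extended Kloss form below are common textbook approximations, not necessarily identical to the paper's exact expression (22):

```python
import math

s_n = 0.0633          # nominal slip of the AIR80B2
m_k_ratio = 2.6       # overload (maximum torque) ratio mk
M_nom = 7.476         # rated torque, N*m

# Simplified critical-slip estimate (textbook approximation)
s_k = s_n * (m_k_ratio + math.sqrt(m_k_ratio ** 2 - 1))   # equals 5 * s_n here
M_k = m_k_ratio * M_nom                                   # maximum torque, N*m

def kloss_torque(s, M_k, s_k, eps=0.0):
    # Extended Kloss formula; eps reflects the equivalent-circuit resistance ratio.
    return 2 * M_k * (1 + eps * s_k) / (s / s_k + s_k / s + 2 * eps * s_k)

# With eps = 0 the torque at nominal slip reproduces the rated torque, because
# s_k = 5 * s_n makes s_n/s_k + s_k/s_n = 5.2 = 2 * m_k_ratio.
M_at_nominal = kloss_torque(s_n, M_k, s_k)
```

Evaluating kloss_torque over a slip grid from 0 to 1 reproduces the familiar torque-slip curve, peaking at the critical slip.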
The results of calculations based on the estimated data using AIR80B2 engine are shown in the figures below as an example. ---stator current characteristic in r.u.
--efficiency characteristic ··· power factor characteristic It should be noted that if we assume that the AIR80B2 motor has a phase rotor (but not squirrelcage) and an additional active resistance is introduced into its circuit, when using the coefficients from Table 1 and when recalculating through relations (12), similar artificial characteristics can be obtained.
Conclusion
The use of computing environments for engineering design, both in education and in the detailed design of electric drives, is an important task. At the same time, a number of problems whose solutions were previously coarsened in order to reduce the complexity of calculations can now be solved by automating the calculations.
The method of engineering calculation presented in this paper helps to evaluate the main characteristics of electric motors with a minimum amount of experimental data.
In further studies, it is necessary to refine the empirical coefficients used to estimate the values of current and power factor at different rotor speeds, and to find all six resistances of the T-shaped asynchronous motor equivalent circuit; this will improve the accuracy of calculating the artificial characteristics when the frequency of the supply network current changes.
Dihydromyricetin Induces Apoptosis and Reverses Drug Resistance in Ovarian Cancer Cells by p53-mediated Downregulation of Survivin
Ovarian cancer is one of the leading causes of death among gynecological malignancies, and resistance to chemotherapeutic agents remains a major challenge to successful ovarian cancer chemotherapy. Dihydromyricetin (DHM), a natural flavonoid derived from Ampelopsis grossedentata, has long been widely applied in the food industry and in medicine. However, little is known about the effects of DHM on ovarian cancer and the underlying mechanisms. In this study, we demonstrated that DHM could effectively inhibit the proliferation of ovarian cancer cells and induce cell apoptosis. Survivin, an inhibitor of apoptosis (IAP) family member, exhibited a decreased expression level after DHM treatment, which may be attributed to the activation of p53. Moreover, DHM markedly sensitized paclitaxel (PTX)- and doxorubicin (DOX)-resistant ovarian cancer cells to PTX and DOX by inhibiting survivin expression. Collectively, our findings highlight a previously undiscovered effect of DHM, which induces apoptosis and reverses multi-drug resistance in ovarian cancer cells through downregulation of survivin.
Scientific Reports | 7:46060 | DOI: 10.1038/srep46060
Moreover, survivin is overexpressed approximately 40-fold in tumor tissues and renders cancer cells resistant to radiotherapy and chemotherapy. Therefore, identifying survivin inhibitors represents an important step toward effective cancer treatment.
Dihydromyricetin (DHM) (Fig. 1A) is a flavonoid that can be isolated from the stems and leaves of Ampelopsis grossedentata 12 . It is associated with multiple pharmacological benefits, including anti-inflammatory, antioxidant, antibacterial, antihypertensive and antithrombotic activities [13][14][15] . Moreover, previous studies have demonstrated its potent antitumor activity against a broad range of cancers, including breast cancer, liver cancer, colon cancer, and lung cancer [16][17][18][19][20] . DHM can inhibit cancer cell proliferation, induce cell cycle arrest and apoptosis, and is also known to sensitize cancer cells to chemotherapeutic drugs [21][22][23][24] . However, little is known about its effects on ovarian cancer, and the underlying mechanisms of its anticancer effects still require further investigation. In this study, we aimed to investigate the therapeutic potential of DHM on ovarian cancer and explore its underlying mechanisms of action. Furthermore, the effects of DHM combined with chemotherapeutic agents against resistant ovarian cancer cells were evaluated.
DHM inhibits proliferation and induces G0/G1 and S phase arrest in ovarian cancer cells. The
in vitro anti-proliferative effect of DHM was assessed in A2780 and SKOV3 ovarian cancer cells and in IOSE80 human ovarian epithelial cells. The cells were seeded in 96-well plates at a density of 5 × 10³ cells/well 24 h prior to DHM exposure and were then treated with various DHM concentrations (12.5, 25, 50, 100, 200 and 400 μM). An MTT assay was conducted to detect cell viability after treatment for 24 and 48 h. As shown in Fig. 1B, DHM inhibited proliferation of A2780 and SKOV3 ovarian cancer cells in a concentration- and time-dependent manner. In p53-positive A2780 cells, the IC50 value was 336.0 μM after DHM treatment for 24 h. In p53-null SKOV3 cells, however, the IC50 value was 845.9 μM, 2.5-fold higher than that in A2780 cells. We also tested whether DHM was cytotoxic to normal ovarian cells; interestingly, no significant cytotoxicity was observed in human ovarian surface epithelial IOSE80 cells after DHM treatment. Next, to confirm the suppressive effects of DHM on ovarian cancer cell proliferation, we performed a colony formation assay on A2780 cells. The cells were exposed to 25, 50 and 100 μM DHM for 48 h and then cultured for 2 weeks in fresh medium until colonies formed. Consistent with the results of the MTT assay, colony formation capacity was markedly reduced with increasing concentrations of DHM, demonstrating that cell proliferation was suppressed by DHM (Fig. 1C).
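The paper does not state how the IC50 values were computed. As a hedged illustration, a rough IC50 can be read off a viability series by log-linear interpolation between the two tested concentrations that bracket 50% viability; the viability numbers below are invented for the example, and published values are normally obtained by fitting a four-parameter logistic curve.

```python
import math

def ic50_loglinear(concs, viabilities):
    """Estimate IC50 by log-linear interpolation between the two tested
    concentrations whose viabilities bracket 50%.  A sketch only, not the
    authors' method."""
    pairs = list(zip(concs, viabilities))
    for (c_lo, v_lo), (c_hi, v_hi) in zip(pairs, pairs[1:]):
        if v_lo >= 50.0 >= v_hi:  # bracketing pair found
            frac = (v_lo - 50.0) / (v_lo - v_hi)
            log_ic50 = (math.log10(c_lo)
                        + frac * (math.log10(c_hi) - math.log10(c_lo)))
            return 10 ** log_ic50
    raise ValueError("viability never crosses 50% in the tested range")

# Hypothetical MTT readings (concentration in uM, viability in %):
concs = [12.5, 25, 50, 100, 200, 400]
viab = [95, 90, 82, 70, 58, 35]
```

With these invented readings, the crossing lies between 200 and 400 μM, so the interpolated IC50 falls somewhere in that interval.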
Previous studies have shown that DHM can induce cell cycle arrest in various types of cancer cells 24,25 . In this study, cell cycle progression was examined in A2780 cells by flow cytometry. The cells were treated with different concentrations of DHM (0, 25, 50, 100 μM) for 24 h after starvation. As shown in Fig. 1D, DHM arrested A2780 cells at the G0/G1 and S phases in a concentration-dependent manner. Specifically, after exposure to 100 μM DHM for 24 h, the proportion of cells in G0/G1 increased from 56.18% to 63.44%, with a similar increase observed in the S phase, whereas the percentage of cells in the G2/M phase decreased from 19.25% to 7.67%. These data demonstrate a significant, concentration-dependent arrest of ovarian cancer cells at the G0/G1 and S phases.
DHM induces cell apoptosis and activates the apoptosis-related signaling pathway.
To explore whether the deregulation of the cell cycle was correlated with the induction of apoptosis, cell morphology was observed and Annexin V-FITC/PI staining was performed after DHM treatment for 48 h. As shown in Fig. 2A, whereas untreated cells remained rounded, DHM-treated cells became condensed and the cell population was markedly depleted. Moreover, A2780 cells treated with different concentrations of DHM for 48 h displayed significant levels of apoptosis in a concentration-dependent manner (Fig. 2B). The apoptotic rates of A2780 cells exposed to 25, 50 and 100 μM DHM for 48 h were 12.1%, 21.1%, and 26.9%, respectively (Fig. 2C). These results confirmed that DHM targeted p53-positive A2780 cells and promoted cell apoptosis, consistent with the results of the MTT assay and the cell cycle study.
We further confirmed this result by evaluating caspase 3/7 activity using the Caspase-Glo 3/7 Assay Kit (Promega, Madison, USA). As expected, caspase 3/7 activity in DHM-treated cells was 34.0% (25 μM), 61.9% (50 μM) and 150.7% (100 μM) higher than that of the control group (Fig. 2D), indicating that DHM induced an enhanced, concentration-dependent caspase 3/7 activity. Next, we detected the expression of the apoptosis-related proteins poly ADP-ribose polymerase (PARP), caspase 8 and caspase 9 after DHM stimulation using a Western blotting detection kit. Of note, DHM treatment increased the expression of cleaved PARP and decreased the expression of caspase 3, 8 and 9 in a concentration-dependent fashion (Fig. 2E), demonstrating that PARP function was impaired via enhanced cleavage, which drove cancer cell apoptosis.
DHM induces apoptosis by downregulating the expression of survivin. Survivin, as an
anti-apoptotic gene, was shown to be overexpressed in ovarian cancer 26 . Intrigued by the apoptotic effect of DHM induced in A2780 cells, we therefore sought to explore the effects of DHM on the expression of survivin. Cells were treated with DHM for 48 h, and the expression of survivin was examined by an immunoblot assay. As shown in Fig. 3A, compared with control cells, DHM downregulated the expression of survivin in A2780 cells in a concentration-dependent manner. This finding was further evidenced in the immunofluorescence images (Fig. 3B).
Overexpression of survivin attenuates DHM-mediated apoptosis. To confirm whether DHM triggered apoptosis by decreasing survivin expression, we transfected cells with the plasmid pIRES-survivin, which we constructed based on a previously reported method 27 ; the empty pIRES vector served as the control. The results of transfection were confirmed by Western blotting and flow cytometry. As shown in Fig. 3C, cells transfected with pIRES-survivin showed a higher expression level of survivin with or without DHM treatment, whereas in the empty vector groups survivin was present at lower levels under the same conditions. Annexin V-FITC/PI dual staining showed that cells transfected with the pIRES-survivin plasmid had a decreased apoptotic rate compared to cells carrying the empty plasmid when treated with 100 μM DHM (Fig. 3D). Conversely, the apoptotic rate of cells transfected with survivin siRNA reached 45.3% after DHM treatment for 48 h, 13.5% higher than that with scrambled siRNA, because survivin had been knocked down by the siRNA (Fig. 3E).

Activation of p53 was critical for DHM-induced apoptosis. It has been documented that p53 and survivin can together modulate cell apoptosis 28,29 . As the above data indicated, DHM triggered increased cell apoptosis in a concentration-dependent manner. Accordingly, we sought to verify the effects of p53 on DHM-mediated apoptosis using Western blotting. As shown in Fig. 4A, the levels of both total p53 and phosphorylated p53, including p-p53 (ser15) and p-p53 (ser37), were upregulated in A2780 cells in a concentration-dependent manner after exposure to different concentrations of DHM for 48 h. However, DHM treatment at all tested concentrations did not increase the expression of p-p53 (ser20) or p-p53 (ser46) in A2780 cells, or of p53 in SKOV3 cells. We then conducted an immunofluorescence assay to measure the expression of p53 in A2780 cells.
The fluorescence intensity of p53 was prominently enhanced in the DHM-treated cells (100 μM), while no obvious green fluorescence was observed in the untreated cells (Fig. 4B). Next, we determined the effect of p53 downregulation on the apoptosis of A2780 cells. Notably, knockdown of p53 using p53 siRNA inhibited the expression of p53, leading to a significantly decreased apoptotic rate in response to DHM (Fig. 4C and D).
Activation of p53 by DHM is responsible for DHM-induced survivin downregulation. To investigate the relationship between p53 and survivin in the regulation of DHM-induced cell apoptosis, we further knocked down the p53 protein with p53 siRNA. The cells were seeded in dishes and transfected with 10 μg/mL siRNA (p53 or scrambled) in 10 μL/dish Lipofectamine® diluted in fresh medium. After 24 h, the cells were exposed to various concentrations of DHM, and the expression of survivin was evaluated by Western blotting and apoptosis assays. As expected, the cells transfected with p53 siRNA showed a higher level of survivin than those transfected with scrambled siRNA, indicating the opposing roles of p53 and survivin in modulating DHM-induced apoptosis (Fig. 4C). Taken together, our data demonstrated that DHM might downregulate survivin expression via p53 activation, leading to A2780 cell apoptosis.
DHM sensitizes resistant ovarian cancer cells to paclitaxel and doxorubicin by suppressing survivin expression.
In retrospective studies, patients with high survivin expression show resistance to chemotherapy and an increased recurrence rate 9 . In tumor tissues, the approximately 40-fold overexpression of survivin renders cancer cells resistant to chemotherapy. DHM, which is capable of downregulating survivin expression, might therefore serve as a survivin inhibitor and reverse tumor resistance. Currently, paclitaxel (PTX) and doxorubicin (DOX) are two major chemotherapeutic drugs used clinically to treat ovarian cancer. We therefore evaluated whether a low dose of DHM in combination with PTX or DOX was able to sensitize cancer cells to these chemotherapeutic agents. To this end, PTX-resistant A2780/PTX cells were treated with 50 μM DHM combined with PTX (ranging from 0.01 to 1 μM). Similarly, DOX-resistant A2780/DOX cells were exposed to 25 μM DHM combined with DOX (ranging from 1 to 4 μM). Cell viability was detected by MTT assay. As shown in Figs 5A and 6A, cell viability decreased after combined treatment with DHM and either PTX or DOX. An Annexin V assay was performed to evaluate their anticancer effects. Figure 5B showed that the apoptotic rate of the PTX group was 17.16%, whereas the combination of DHM and PTX increased the apoptotic rate to 29.25%. Meanwhile, the apoptotic rate in the DHM and DOX combination group was 41.27%, 2.4-fold higher than that of the DOX group (17.20%) (Fig. 6B). These observations suggested that DHM was able to sensitize resistant A2780 cells to both PTX and DOX.
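As a quick arithmetic check, the sensitization fold-changes quoted in the discussion follow directly from the apoptotic rates reported here:

```python
# Apoptotic rates (%) reported for the resistant-cell experiments
# (Figs 5B and 6B of the paper):
ptx_alone, ptx_plus_dhm = 17.16, 29.25   # A2780/PTX cells
dox_alone, dox_plus_dhm = 17.20, 41.27   # A2780/DOX cells

fold_ptx = ptx_plus_dhm / ptx_alone      # fold increase with DHM + PTX
fold_dox = dox_plus_dhm / dox_alone      # fold increase with DHM + DOX
```

Both ratios round to the 1.7-fold and 2.4-fold increases cited in the text.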
The expression levels of cleaved PARP, p53 and survivin were visualized by Western blotting. As shown in Figs 5C and 6C, low concentrations of PTX and DOX resulted in low expression levels of cleaved PARP and p53 in resistant A2780 cells, indicating that the resistant cells were relatively unresponsive to these chemotherapeutic agents. However, the combination of DHM with PTX or DOX significantly increased the expression of cleaved PARP and p53, suggesting that the combination therapy enhanced the apoptotic effects to some extent. In contrast, a decreased expression level of survivin was observed after combined treatment with DHM and PTX or DOX. This result confirmed that DHM acts as a survivin inhibitor and contributes to reducing survivin expression, probably via the upregulation of p53, leading to an enhanced pro-apoptotic effect.
Discussion
Ovarian cancer is the sixth most common cancer among women worldwide. The successful treatment of ovarian cancer faces several impediments, including a largely asymptomatic early stage, a high incidence of recurrence and the development of chemoresistance 30,31 . The discovery of promising therapeutic agents for ovarian cancer therapy, in particular agents that improve current responses to chemotherapy, remains a key goal for achieving better outcomes. DHM, a natural flavonoid that induces negligible side effects in mice, has received wide interest as a potential candidate for cancer therapy 32 . Previous studies have revealed the effects of DHM on cell proliferation, colony formation, cell cycle distribution and cell apoptosis in a plethora of cancer cell lines, such as hepatocellular carcinoma, osteosarcoma, and melanoma cells 31,33,34 . Based on the results of our MTT assay, DHM treatment significantly inhibited the proliferation of both A2780 and SKOV3 cells in a concentration- and time-dependent manner but induced no significant cytotoxicity in IOSE80 cells. Notably, A2780 cells (wild-type p53) were more vulnerable than SKOV3 cells (p53 null) at higher concentrations of DHM (Fig. 1B). For instance, treatment with 200 μM DHM for 24 h decreased the viability of A2780 cells to 70.0%, significantly lower than that of SKOV3 cells (84.7%). It was previously documented that the antitumor activity of DHM is at least partially due to activation of the p53-dependent apoptosis pathway 23,32,35 . Our results demonstrated that p53 status might be an important determinant of the sensitivity of ovarian cancer cells to DHM.
Survivin, a nodal protein that belongs to the IAP family, is ubiquitously expressed in various types of cancers. Importantly, it is scarcely detectable in normal cells and tissues, which distinguishes it from other potential targets [36][37][38] . The downregulation of survivin was reported to be responsible for suppressing the viability and colony formation ability of cancer cells 39,40 . Considering its significance in cancer development, survivin has become an important anticancer target 41,42 . YM155, a small-molecule survivin suppressant, has shown therapeutic potential against a range of cancers in clinical trials [43][44][45] . Our results showed that DHM could downregulate the expression of survivin in a concentration-dependent manner. This result was further confirmed by transfecting A2780 cells with pIRES-survivin plasmid, which led to the overexpression of survivin and a lower apoptosis rate induced by DHM compared with the empty vector groups (Fig. 3). These findings supported the fact that the downregulation of survivin is one of the mechanisms involved in DHM-mediated apoptosis.
It was reported that silencing of survivin could sensitize ovarian cancer cells to chemotherapeutic agents 46,47 . Inhibition of survivin by DHM might therefore facilitate sensitization to these agents. Thus, we further evaluated the ability of DHM to overcome drug resistance. The apoptosis assay showed that combining DHM with PTX or DOX increased the apoptotic rate by 1.7-fold and 2.4-fold in PTX- and DOX-resistant ovarian cancer cells, respectively, compared to treatment with PTX or DOX alone. Furthermore, while treatment with PTX or DOX alone had little influence on survivin expression levels, the addition of DHM significantly downregulated the expression of survivin, suggesting that DHM was able to sensitize resistant A2780 cells to both PTX and DOX. Previous studies have shown that wild-type p53 transcriptionally represses survivin gene expression by binding directly to the survivin promoter and activating p21 29 , and DHM was reported to induce cell apoptosis by activating p53 23,32,35 . We found that DHM increased the expression of p53 and phosphorylated p53 (ser15) in a concentration-dependent fashion, demonstrating that p53, the "guardian of the genome", was involved in DHM-triggered apoptosis in A2780 cells. In light of this finding, the association between p53 and survivin was subsequently investigated, and the present study showed changes in survivin levels after knockdown of p53 (Fig. 4). In addition, while the combination of DHM with PTX or DOX decreased the expression level of survivin, the expression level of p53 tended to be higher (Figs 5 and 6). Taken together, the expression level of survivin showed a decreasing trend after DHM treatment, consistent with the opposing roles of p53 and survivin in mediating DHM-triggered apoptosis.
In conclusion, this study demonstrated that DHM may be a promising chemotherapeutic agent for the treatment of ovarian cancer. The suppression of cell proliferation was driven by the activation of p53 and the downregulation of survivin. This study also showed a decreased expression level of survivin after DHM exposure, suggesting that DHM may trigger apoptosis through p53-mediated survivin inhibition. Downregulation of survivin by DHM was also shown to sensitize resistant A2780 cells to both PTX and DOX. These findings may shed new light on an effective strategy for ovarian cancer therapy.
Cell lines and cell culture. Human ovarian cancer A2780 and SKOV3 cells were obtained from Boster
Biotech (Wuhan, Hubei, China) and the American Type Culture Collection (ATCC), respectively. Human ovarian surface epithelial IOSE80 cells were obtained from Shanghai Huiying Biotech (Shanghai, China). PTX- and DOX-resistant A2780 cells (A2780/PTX and A2780/DOX) were selected by stepwise exposure to increasing concentrations of PTX or DOX, respectively, as previously described 27 . Cells were cultured in DMEM medium with 10% (v/v) heat-inactivated FBS and antibiotics (100 U/mL penicillin, 100 μg/mL streptomycin) and maintained at 37 °C in a 5% CO2 atmosphere. Cells were passaged using trypsin/EDTA, and the medium was changed every other day.

Colony formation assay. A2780 cells were seeded in 6-well plates at a density of 50 cells per well. After exposure to 25, 50 and 100 μM DHM for 48 h, the cells were cultured for 2 weeks until colonies formed. Cell colonies were visualized by staining with crystal violet, and colonies containing more than 50 cells were considered surviving colonies.
Cell cycle analysis. A2780 cells (2.0 × 10⁵ cells) were seeded in 6-well plates following 24 h of starvation. Cells were exposed to 25, 50, and 100 μM DHM for 24 h, respectively. After trypsinization, cells were harvested and washed twice with PBS, followed by fixation in 70% ethanol at −20 °C overnight. The collected cells were stained with 100 μL of PI staining solution containing 20 μg/mL PI plus 8 μg/mL RNase for 30 min in the dark. Sample acquisition was performed using a flow cytometer (BD FACSCanto, BD Biosciences, San Jose, USA). The cell distributions in the SubG1, G0/G1, S, and G2/M phases were analyzed using ModFit LT software (version 3.0, Verity, USA). The results were based on three independent replicates.

Assessment of apoptosis. Cells were exposed to 25, 50, and 100 μM DHM for 48 h. After treatment, cell morphology was observed and captured using a microscope (Olympus MVX10, Japan) equipped with a digital camera (ColorView II, Soft Imaging System, Olympus). Moreover, an Annexin V-FITC/PI apoptosis detection kit was used to detect cell apoptosis. After treatment with various concentrations of DHM for 48 h, A2780 cells were collected by centrifugation, washed twice with cold PBS and suspended in binding buffer. The cells were stained with 5 μL of Annexin V-FITC and 5 μL of PI while protected from light, after which they were analyzed using a flow cytometer (BD Biosciences). For drug combination experiments, A2780/PTX cells were cotreated with 50 μM DHM and 0.1 μM PTX, while A2780/DOX cells were cotreated with 25 μM DHM and 4 μM DOX for 48 h, and the same procedures were performed as above. At least three independent experiments were conducted.
Caspase 3/7 activity assay. The caspase 3/7 assay was performed using the Caspase-Glo 3/7 Assay Kit (Promega, Madison, USA). A2780 cells were seeded in white 96-well plates and treated with different concentrations of DHM (25, 50, 100 μM) for 24 h. Then, 100 μL of Caspase-Glo 3/7 reagent was added to each well and mixed using a plate shaker. Caspase 3/7 activity was then determined using a microplate reader (SpectraMax M5) and expressed as fold of the untreated control. At least three independent experiments were performed.
Western blotting. After treatment with DHM for 48 h, A2780 cells were lysed with RIPA lysis buffer containing 1% protease inhibitor and 1% phenylmethanesulfonyl fluoride (PMSF). For drug combination experiments, A2780/PTX cells were cotreated with 50 μM DHM and 0.1 μM PTX, while A2780/DOX cells were cotreated with 25 μM DHM and 4 μM DOX for 48 h, and the same procedures were performed as above. The samples were incubated in lysis buffer on ice for 20 min and centrifuged at 12,000 g for 20 min, followed by determination of the protein concentration with a BCA protein assay kit. Equivalent amounts of each sample were subjected to 10% sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) and transferred onto PVDF membranes. After blocking in a solution of blocking powder for 1 h, the membranes were incubated with primary antibodies (1:1000) at 4 °C overnight, followed by the corresponding secondary antibodies for 1 h at room temperature. The immunoblots were visualized by enhanced chemiluminescence using a Western blotting detection kit (GE Healthcare Life Sciences). Each immunoblot was repeated in triplicate.

siRNA transfection. The cells were seeded in 100-mm dishes, and the transfection groups received transfection reagent containing 20 μM siRNA (final concentration) diluted in fresh medium without antibiotics. After incubation overnight, the medium was removed, and the cells were treated with various concentrations of DHM 24 h later. Western blotting and apoptosis assays were used to evaluate the efficiency of siRNA transfection.
Plasmid and transfection. An efficient survivin vector was constructed based on the previous method 27 .
Immunofluorescence. Cells were seeded in 6-well plates for 24 h before being treated with DHM. After treatment for 48 h, cells were fixed in 4% PFA and hydrated with PBS for 1 h, followed by incubation with 5% BSA in 0.2% Triton X-100/PBS for 30 min at room temperature to block non-specific binding. The samples were then incubated with primary antibodies overnight at 4 °C. After being rinsed twice with PBS, cells were incubated with a fluorochrome-conjugated secondary antibody diluted in antibody dilution buffer for 1 h at room temperature in the dark. Cell nuclei were stained with Hoechst 33342. Observations were made using an IN Cell Analyzer 2000 (GE Healthcare Life Sciences). All experiments were conducted in triplicate.
Statistical analysis.
All of the presented data were obtained from three independent experiments and are expressed as mean ± SD. The results were analyzed with GraphPad Prism 6. Significant differences were assessed using Student's t-test, and p < 0.05 was considered statistically significant in all cases.
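For readers reproducing the analysis, a Student's t-test on triplicate data can be sketched with the standard library alone. This is a hedged illustration, not the authors' workflow (they used GraphPad Prism 6): the data values below are hypothetical, and the constant 2.776 is the two-tailed critical t at p = 0.05 for 4 degrees of freedom (n1 = n2 = 3).

```python
from statistics import mean, stdev

def t_test_triplicates(a, b, t_crit=2.776):
    """Two-sample Student's t-test (pooled variance) for two groups of
    triplicate measurements.  With n1 = n2 = 3 there are 4 degrees of
    freedom, for which the two-tailed critical t at p = 0.05 is 2.776.
    Returns (t, significant)."""
    n1, n2 = len(a), len(b)
    # Pooled variance from the two sample variances:
    sp2 = ((n1 - 1) * stdev(a) ** 2 + (n2 - 1) * stdev(b) ** 2) / (n1 + n2 - 2)
    t = (mean(a) - mean(b)) / (sp2 * (1 / n1 + 1 / n2)) ** 0.5
    return t, abs(t) > t_crit

# Hypothetical apoptotic rates (%) from three independent experiments:
control = [4.8, 5.2, 5.0]
treated = [26.1, 27.5, 27.0]
```

For larger or unbalanced designs the critical value changes with the degrees of freedom, so a statistics package that computes exact p-values is preferable.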
Development report of the next generation of civil engineering
The development of civil engineering technology is a major issue bearing on the national economy and people's livelihood, especially in the modern era in which new technologies are emerging in every industry. Drawing on the current state of civil engineering technology worldwide, the author surveys the new technologies most closely related to its recent development, chiefly prefabricated structures, intelligent manufacturing, 3D printing, and industrial digitization, and considers how existing high technology can be combined with developments in civil engineering. Based on the current technological environment at home and abroad, the author puts forward a forecast for the next generation of civil engineering technology and corresponding countermeasures for enterprises. These predictions and suggestions are significant both for governments formulating policy and for modern enterprises formulating development strategies.
Introduction
Civil engineering is an important part of national construction and development, involving many fields such as bridges and highways. It can be said that the speed and degree of a country's development are closely related to the state of its civil engineering. Since the founding of New China, continuous exploration and reform have steadily raised the socio-economic level, along with the level of cultural, scientific and information technology application. For the civil engineering industry, construction methods are accordingly showing a new development trend.
Regarding the future direction of the next generation of civil engineering, Sun, J [1] pointed out that, as most countries in the world attach importance to Industry 4.0, intelligent manufacturing based on intelligent equipment will be the general trend of the future development of the world's industrial system. Industrial digitization is also one of the hottest issues discussed by engineers and civil engineering experts in recent years. Chen, KH [2] pointed out that, with the advent of the post-epidemic era, the original informatization model will be superseded by digitalization, represented by XR (Extended Reality), digital twins, and real-time design. Xie, JK [3] and others studied the development of prefabricated structures in civil engineering and pointed out that prefabricated structures are conducive to the promotion of intelligent manufacturing and to the digital development of the entire building life cycle. Shrive, NG [4] pointed out that, with the continuous development of science and technology in recent years, factory construction is trending toward the IoT (Internet of Things), service networks, data networks, the inter-factory Internet, and equipment integration.
The author mainly summarizes and analyzes the development trends of the industry in terms of the current state of civil engineering, the application of fabricated structures, the application of intelligent manufacturing, the application of 3D printing technology and the trend toward digitalization.

(1) Prestressing technology continues to advance

In civil engineering structures, the application of prestressing technology can greatly improve the performance and strength of a structure. For this reason, researchers have continually applied and developed new materials that meet the requirements of engineering structures during construction. Researchers have found that the full application of prestressing technology to concrete structures can greatly improve their blast, fire and seismic resistance, while also increasing the durability of the structure. The continuous improvement of prestressing technology has promoted its application in all areas of civil engineering, including high-rise buildings, load-bearing projects and large-span structures.
(2) Building materials are constantly updated and upgraded

With the continuous expansion of civil engineering construction and the development of high-strength prestressed steel technology, engineers have found that the strength of building materials directly limits how far the ultra-high strength of prestressed steel can be exploited. At the same time, a series of research results have been achieved on high-strength concrete materials based on the new ultra-high-performance concrete (UHPC). With the development of science and technology, these materials still need constant updating and upgrading; they are developing mainly in the direction of green environmental protection, high strength and durability, and efficiency and economy. The scale of the global new materials market from 2016 to 2020 is shown in Figure 1.

(3) Underground space is being developed and utilized

According to statistical data from iResearch Consulting, more than 20 cities in our country have now developed and constructed underground spaces. The construction of subway projects has made an outstanding contribution to the development of underground space. As of mid-2020, our country had a number of light rail and subway projects completed or under construction, with a total length of 32 kilometers. Light rail and subway projects are thus an important manifestation of our country's development and construction of underground space, and a sign of progress in civil engineering.
(4) The structural design of civil engineering is improving day by day

Structural design is the most important factor affecting structural quality: to ensure the overall quality of a building, improving the design of its structure is essential. With the rapid development of the social economy and the continuous advancement of science and technology, seismic loads and wind loads are now widely considered in the design of civil engineering buildings. Under these conditions, our country's building structures are developing toward super-high heights and super-flexibility.
Development characteristics of civil engineering in our country
The development of our country's civil engineering industry has made great progress in recent years. Although it started late, it now leads the world in some respects. Generally speaking, it has the following characteristics. High-rise buildings are being built not only in greater numbers but also to greater heights, and structural systems and layouts are becoming increasingly diversified. Highway and railway transportation are advancing by leaps and bounds, and the mileage of high-grade highways is increasing rapidly. At present, the stock of urban buildings in our country, especially housing, cannot meet people's needs; although civil engineering construction is developing relatively quickly, it still falls short of demand.
Development trends of civil engineering at home and abroad
(1) Urban construction will extend both upward and underground

The socio-economic level has improved steadily, the number of large cities and megacities is growing rapidly, and people's demand for housing is growing with it. That an inch of land is worth an inch of gold has become common wisdom among urban residents.
(2) Rail transit will gradually expand underground The maturity of material conditions provides an opportunity for the development of our country's underground space. From the perspective of the development and use of underground space, the main controlling factor is cost. Therefore, to reduce the total cost of constructing underground spaces, new technologies must be developed.
(3) Port and waterway engineering will open up to the ocean and the desert. Making full use of marine resources within the limits of technical conditions is an innovative task that mankind continues to pursue. The ocean is more complicated than the land, but its huge potential is a great temptation for mankind. With the passage of time and the advancement of science and technology, people have made remarkable achievements in ocean civil engineering.
Application of fabricated structure
The prefabricated structure is short for the prefabricated concrete structure: a concrete structure formed by assembling and connecting prefabricated components as the main load-bearing elements. Fabricated reinforced concrete structures are one of the important directions for the development of building structures in our country. They are conducive to the industrialization of our country's construction sector and can improve production efficiency and save energy. The development of prefabricated structures also supports green, environmentally friendly buildings and improves the quality of construction projects.
Development status at home and abroad
In the early 1960s, prefabricated buildings appeared in the former Soviet Union, some countries in Eastern Europe and France, and then gradually spread to the United States, Canada, Japan and other countries. At present, the application density of assembled monolithic concrete structures in civil engineering in developed countries is: 35% in the United States, 50% in Russia, and 35%-40% in Europe.
In 1997, the United States Uniform Building Code allowed the use of precast concrete structures in areas with high seismic intensity. The United States has successfully applied prefabricated buildings to residential, industrial, cultural and sports buildings, such as the Phoenix Convention Center in Arizona and the JL Financial Center in Northern Lorraine.
In Europe, especially in the Nordic countries, prefabricated concrete buildings have a long history and a large accumulation of technical experience. As early as the 1970s, there were more than 450 prefabricated component manufacturers. Among them, the IMC system of the former Yugoslavia withstood the strong earthquakes that struck the Banyalu area of Yugoslavia in 1969 and 1981, showing good seismic performance.
As early as the 1950s to 1970s, the structural system of the French housing industry used building construction techniques based on fully assembled slabs and tool formwork. In the 1970s, there was a transition to a "second-generation construction industrialization" based on the production and use of general-purpose components and equipment. In 1978, the Ministry of Housing proposed to promote the "structural system." In the 1990s, the industrialization of French architecture developed toward the modernization of the housing industry. Japan's prefabricated concrete buildings developed continuously from the Second World War to 1990 and have been widely used in high-rise and super high-rise buildings in earthquake-prone areas. Compared with developed countries, there is a huge gap in the market share of prefabricated construction in our country.
The "Twelfth Five-Year" Green Building and Green Ecological Regional Development Plan issued by the Ministry of Housing and Urban-Rural Development in 2013 clearly stated for the first time that our country should speed up the formation of industrialized building systems such as prefabricated concrete and steel structures.
Advantages of fabricated structure
Table 1. Comparison of features of fabricated and cast-in-place structures.

| Content | Prefabricated concrete structure | Cast-in-place concrete structure |
| --- | --- | --- |
| Productivity | On-site assembly; high production efficiency; 5-6 days/layer; labor reduced by more than 50%, cutting labor costs | Many on-site procedures; low production efficiency; large labor input; 6-7 days/layer; low-cost labor |
| Engineering quality | Errors controlled to the millimeter level; walls free of leakage and cracks; 100% plaster-free interiors achievable | Errors controlled at the centimeter level; space dimensions deform; installation of parts hard to standardize; poor base quality |
| Technology integration | Standardization and assembly enable integrated, refined design, production, and construction | Decoration parts are difficult to standardize and refine; integration of design, construction, and informatization is difficult |
| Resource saving | Saves 60% of water, 20% of materials, and 20% of energy; reduces waste by 80% and scaffolding/support frames by 70% | High water and electricity consumption; serious waste of materials; much garbage; extensive scaffolding and support frames |
| Environmental protection | No dust, waste water, or noise on the construction site | Dust, waste water, garbage, and noise on the construction site |
Application of intelligent manufacturing

What is intelligent manufacturing?
Intelligent manufacturing covers the whole "manufacturing" process of construction projects; it is the general term for project planning, design, and construction considered over the whole life cycle. Intelligent construction is the completion of various technological operations by robots with complementary functions within a predetermined time and space; its ultimate goal is a construction method that deeply integrates artificial intelligence with construction requirements [5,6]. Promoting smart construction should focus on three key points: building a management and control platform for the engineering-construction information model; digital collaborative design; and robot construction.
Basic concepts of intelligent manufacturing
The concept was first proposed by the American researchers Wright and Bourne in 1988. Intelligent manufacturing in the traditional sense was limited to the production process and involved only the intelligence of individual units; owing to the technical conditions of the time, there was no data flow. With the development of a new generation of information technology and its continuous penetration into the manufacturing sector, the defining feature of intelligent manufacturing has become data interconnection. It breaks the bottleneck of traditional intelligent manufacturing being confined to the production process, extends to the entire production process, and then integrates into all production activities.
Application of Intelligent Manufacturing
The characteristics of intelligent manufacturing are: enhanced production flexibility; reduced personnel turnover and less dependence on scarce skilled workers; less waste of raw materials, saving resources and protecting the environment; improved product consistency and quality; reduced costs and expanded production capacity; and improved standardization that meets safety production regulations.
In the era of Industry 4.0, civil engineering production enterprises must transform and upgrade (see Table 2).
Table 2. Characteristics of production in the Industry 4.0 era.

| Aspect | Mass production | Mass customization |
| --- | --- | --- |
| Management philosophy | Focus on structural safety; win the market at low cost | Focus on engineering needs; win the market through quick response |
| Drive method | Production arranged according to market forecasts and structural design (push-type production) | Assembly design and production carried out according to project needs (pull-type production) |
| Core | High efficiency through stability and control | Diverse product and structure types through mechanization and intelligence |
| Strategy | Cost-leadership strategy: competitive advantage by reducing costs and improving production efficiency | Differentiation strategy: competitive advantage through quick response and personalized products |
| Goal | Large-scale production of engineering structures, reducing manpower and materials and saving resources | Diversified structural production methods that can be adjusted quickly |
Existing problems of our country's intelligent manufacturing
(1) The foundation of intelligent manufacturing is still relatively weak; (2) intelligent manufacturing lacks depth in the supply chain; (3) the basic system of intelligent manufacturing is not yet complete.
From a national perspective, the main significance of intelligent manufacturing includes: conforming to the trend of the times and gaining a voice in international standardization; advancing standardization itself; responding to national policies and helping industrial upgrading; and enabling large, complex structures to be produced at scale.
Examples of intelligent manufacturing
(1) Production of the CRTS III high-speed railway track slab. The high-speed railway CRTS III ballastless-track prestressed concrete track slab is a ballastless track structure for which our country holds independent intellectual property rights. It is a new structure developed by drawing on the advantages of the Japanese type I slab, the German type II slab, the domestic turnout slab, and the post-tensioned type III slab. The most mature production methods for CRTS III track slabs are the matrix pedestal method and the unit flow method. Figure 2 shows a schematic diagram of the unit flow method, and Figure 3 shows a schematic diagram of the matrix pedestal method.

(2) The development of intelligent manufacturing robots. With the disappearance of the demographic dividend, China's construction industry faces tremendous pressure on labor costs, as well as problems such as high risks and low production efficiency. Whether in the developed countries of Europe and America or in China during its adjustment and development, construction companies have been stuck in a shortage of first-line skilled workers. Against the background of the transformation and upgrading of modern manufacturing and Internet service industries, the traditional construction industry has gradually lost its appeal to the younger generation [7].

(3) The significance of introducing robots into the construction industry: improving production efficiency; guaranteeing the safety of personnel; and an inevitable response to the labor shortage.

Application of 3D printing technology
Basic concepts of 3D printing technology
3D printing technology is also known as additive manufacturing. A typical application in the field of civil engineering is the concrete 3D-printed arch bridge shown in Figure 5: the prefabricated, concrete 3D-printed Zhaozhou Bridge draws on construction experience from previously built 3D-printed buildings and introduces BIM virtual simulation technology and modern intelligent monitoring methods [8].
Application of 3D printing technology
(1) Color printing technology. Building on its advanced UV inkjet printing technology, Japan's Mimaki launched the 3DUJ-553 full-color 3D printer in 2017, the world's first 3D printer offering more than 10 million colors. A 3D color-printed model is shown in Figure 6.

(2) 3D printing technology in the clothing industry. 3D printing will bring a series of innovations to the textile and apparel industry. Compared with traditional clothing production, it has the following advantages: designs can be produced at will, achieving true personalization; one-step molding and fast manufacturing eliminate the multiple processes of traditional technology; and because it adds material rather than subtracting it, raw materials are saved and essentially no waste is generated. With the rapid development of 3D printing technology and the continuous development of new textile materials, garment processing will achieve automated "single-quantity single-cutting".
Application of 3D printing in civil engineering
(1) A concrete book house printed by a 3D robot. The design and construction of this bookstore show that, as a form of intelligent construction, 3D printing not only saves materials and manpower but also offers high efficiency and fast construction, can realize irregular shapes, and ensures high construction quality [9].

(2) Application of 3D printing in the intelligent manufacturing of assembled monorail transit. The most important component in the intelligent manufacturing of prefabricated monorail transit is the guideway, and its core technology is 3D printing. Working from the three-dimensional design model of the engineering structure, this technology first automatically installs and manufactures the structural skeleton (steel bars, steel beams, steel tubes, etc.) and then uses a 3D printer to spray concrete layer by layer onto the surface or into the interior of the skeleton. The entire printing process is precisely controlled by the computer system, realizing rapid, intelligent three-dimensional manufacturing of the engineering structure.

Digitization officially enters the stage of history
Basic concepts of digitalization
Informatization means spending real money to obtain data; digitization means using data to earn real money. Three enabling technologies of the post-epidemic era will fully reshape the engineering construction industry: XR visual retrieval; digital twins; and real-time design [10]. When formulating the national economic and social development plan, the government pointed out that it is necessary to persist in innovation-driven development and comprehensively shape new development advantages. Accelerating digital development can provide continuous impetus for doing so; we need to rely on the innovation drive of digital technology to continuously cultivate new industries and generate new momentum.
The technological revolution brings opportunities for digital transformation
The new round of scientific and technological revolution will provide resources and platform foundation for scientific and technological innovation. It will not only promote the rapid development of digital technology, but will also promote the development of cross-integration of digital technology and other technologies. The new round of scientific and technological revolution promotes social innovation and development, and reshapes the development momentum and governance models of education, medical care, transportation, environment, and administration.
Digitization will reshape the innovation and development economic system
(1) Digitization will expand capacity and optimize the innovative production factor system (2) Digitization will promote the occurrence and development of innovation (3) Digitization will change the way and motivation of innovation
The internal and external environment of China's digital transformation
(1) External environment. Major countries and organizations around the world have successively advanced digital strategies for innovation and development, treating them as strategic development priorities while also attaching importance to the construction of digital infrastructure as the software and hardware foundation for digital transformation.
(2) Internal environment. From the perspective of technological innovation, our country urgently needs a new scientific research paradigm to promote scientific and technological development. From the perspective of industrial innovation and development, our country's industries face many challenges and urgently need digital transformation. From the perspective of social innovation and development, the problems of fairness, efficiency, and quality in our country's social development can be addressed through digital transformation.
Conclusions and prospects
In a contemporary era in which high technology emerges endlessly, the development of civil engineering technology is entering the fast lane. Its main features are: assembly of structural forms, mechanization of construction modes, intelligence of management methods, and digitization of information management. Since the country put forward the Made in China 2025 national strategic plan, the civil engineering industry has shown a clear trend toward intelligent manufacturing. To consolidate China's international voice and leadership, the development of the civil engineering industry must follow this trend and the national development plan. This article has introduced the development trends of prefabricated engineering structures, intelligent manufacturing, 3D printing technology, and digitalization. Combined with the current domestic and foreign development environment of the civil engineering industry, it concludes that the next generation of civil engineering will inevitably develop toward the assembly of structural forms, the mechanization of construction modes, the intelligence of management methods, and the digitization of information management. These conclusions can provide a useful reference for the national government and for enterprises in formulating future development strategies and plans.
PCR-Based Techniques for Leprosy Diagnosis: From the Laboratory to the Clinic
In leprosy, classic diagnostic tools based on bacillary counts and histopathology have been facing hurdles, especially in distinguishing latent infection from active disease and diagnosing paucibacillary clinical forms. Serological tests and IFN-gamma release assays (IGRA), which employ humoral and cellular immune parameters, respectively, are also being used, but recent results indicate that quantitative PCR (qPCR) is a key technique due to its higher sensitivity and specificity. In fact, advances concerning the structure and function of the Mycobacterium leprae genome led to the development of specific PCR-based gene amplification assays for leprosy diagnosis and monitoring of household contacts. Also, based on the validation of point-of-care technologies for M. tuberculosis DNA detection, it is clear that the same advantages of rapid DNA detection could be observed with respect to leprosy. So far, PCR has proven useful in the determination of transmission routes, M. leprae viability, and drug resistance in leprosy. However, PCR has been ascertained to be especially valuable in diagnosing difficult cases like pure neural leprosy (PNL), paucibacillary (PB) forms, and patients with atypical clinical presentation and histopathological features compatible with leprosy. Also, the detection of M. leprae DNA in different samples from the household contacts of leprosy patients is very promising. Although a positive PCR result is not sufficient to establish a causal relationship with disease outcome, the quantitation provided by qPCR is clearly capable of indicating an increased risk of developing the disease and could alert clinicians to follow these contacts more closely or even define rules for chemoprophylaxis.
Introduction
Leprosy is a chronic infectious disease caused by M. leprae, a slow-growing intracellular mycobacterium with tropism for Schwann cells in nerves and macrophages in the skin. In some patients, the disease is challenging to diagnose, since there is no gold-standard method to differentiate between infection and disease. Leprosy is also a neglected disease, endemic in developing countries, where detection rates show only a slight downward trend in the number of cases in spite of good treatment and the efforts of the World Health Organization (WHO) to improve the quality of leprosy control programs [1]. It is accepted that transmission occurs from human to human through the upper airways, although intermediate hosts like armadillos may play a role in certain places, such as the United States [2]. It is generally held that untreated multibacillary (MB) patients are the most important source of transmission, which occurs when bacilli are spread, usually by airborne droplets from the nose and/or mouth. Hence, leprosy patients, especially those with high bacterial loads, release billions of bacilli that can potentially contaminate their close relatives or household contacts. As a result, the contacts of leprosy patients are known to have a higher risk of illness than the general population. Surveillance of these contacts would be an easy control strategy to block transmission, as suggested by the World Health Organization.
However, the steady number of new cases of leprosy in endemic countries is thought to result from the perpetuating reservoir of M. leprae-infected contacts and/or from the difficulties of early clinical diagnosis. It has been shown that good surveillance of patients' contacts has increased the detection rate of less severe clinical presentations with lower bacteriological indices [3,4].
Immunological tools to detect M. leprae are based on their ability to detect major unique components like phenolic glycolipid-I (PGL-I) and specific proteins by means of monoclonal and polyclonal antibodies [5,6], or the T cell immune response as measured by IFN-γ production [7,8]. Notwithstanding, the development of good diagnostic tests for leprosy is hampered by the diversity in the strength of the cellular and humoral responses, varying from high to low (non)responders. On one hand, a major difficulty concerns paucibacillary (PB) forms, in which bacilli, or antibodies against them, are not easily detected in most cases. These PB patients exhibit cell-mediated immunity, secreting high levels of IFN-γ after in vitro stimulation with specific M. leprae antigens (or a peptide fraction). On the other hand, MB patients do not produce IFN-γ in vitro but have high bacillary loads that are easily identified by PCR or anti-PGL-I detection. Concerning IFN-γ release, one problem for early diagnosis is that most household contacts show a pattern of IFN-γ secretion similar to that of PB patients [9]. Generally, contacts exhibit sustained high production of IFN-γ that is dependent on continuous exposure to an infective source, i.e., an MB or sometimes a PB patient.
Both serological and immunological tests have limitations, and neither one can be considered a reliable diagnostic tool. Nevertheless, it is known that experienced clinicians and wellequipped clinics with histopathological examinations and bacillary counts, along with other clinical tests, available can diagnose most of the cases. However, the lack of a gold standard test for leprosy and the inability to distinguish infected individuals from those exhibiting active disease makes leprosy diagnosis essentially based on clinical features. Given that recognition of the disease is required, late diagnosis is relatively frequent in many patients. In addition, the lack of a specific and sensitive test to determine whether the infection has progressed to active disease makes it difficult to interrupt the transmission chain and impairs leprosy control.
Detection by PCR of M. leprae DNA in difficult-to-diagnose cases favors correct diagnosis and the possibility of early identification. In fact, the development and constant improvement of molecular tests for leprosy diagnosis have revealed that clinical manifestations like pure neural leprosy (PNL) are much more common than originally thought [10][11][12]. Here, we review several studies that discuss the usefulness of PCR in clinical practice: in indeterminate leprosy, in patients who have clinical signs of leprosy but no confirmation through routine tests and histopathology, in difficult-to-diagnose cases, and for early detection in household contacts (Box 1).
Historical Aspects of Biochemical and Genetic Studies of M. leprae
Historically, along with the spectrum of clinical forms of leprosy, one of the problems in developing new diagnostic tests has been an inability to grow M. leprae in vitro. Initial studies of biochemical and molecular features of this mycobacteria species could be achieved only after the development of techniques for growing leprosy bacillus in the mouse footpad [13] and armadillos [14]. These models aided leprosy research, the development of new chemotherapeutic agents, and the confirmation of drug resistance and the antigenic and molecular structure of M. leprae.
The first methods to amplify M. leprae DNA, based on the polymerase chain reaction (PCR), were developed a little more than 20 years ago [15,16]. Later, another wave of significant progress in understanding the molecular biology of M. leprae came after the completion of the genome sequencing of the leprosy bacillus by Cole and colleagues [17], along with other mycobacterial genomes that allowed comparisons [18]. Since then, bioinformatics and next-generation sequencing approaches have provided information capable of supporting studies aimed at a better understanding of M. leprae genetic diversity [19,20]. In fact, it is astonishing that M. leprae has maintained a very stable genome for a very long time: samples recovered from skeletons are genetically conserved compared to modern strains [21]. The information about M. leprae genomes also enabled the isolation and characterization of genes and expression profiles. Recently, DNA microarrays shed light on M. leprae gene function and provided further understanding of the pathogenesis of leprosy [22][23][24][25][26][27]. In addition, these new technologies have proven useful in leprosy diagnosis, in drug resistance detection, and for information about transmission and mycobacterial variability in high- and low-endemic areas [28][29][30][31][32][33]. Furthermore, a detailed review covers pseudogenes, the molecular epidemiology, and the biology of M. leprae [34]; thus, these issues will not be covered here.
PCR as a Detection Tool
In the past 20 years, definitive identification of M. leprae has been possible through the development of methods for the extraction, amplification, and identification of M. leprae DNA in clinical specimens using PCR. This technique has been applied not only to skin biopsy samples, but also to several different types of specimens such as skin smears, nerves, urine, oral or nasal swabs, blood, and ocular lesions [11,[35][36][37][38][39][40][41]. Different sequences were used as targets for PCR, such as genes encoding the 36-kDa antigen [42], 18-kDa antigen [43], 65-kDa antigen [44], complex 85 [37], 16S rDNA [45], and the repetitive sequences [46] among other M. leprae genes. More recently, real-time PCR technology has improved detection, increasing sensitivity and specificity as it appears to be a robust tool for mycobacteria recognition in selected clinical situations, as well as for quantitation in experimental settings [37,45,[47][48][49][50].
One of the first studies based on PCR was carried out in 1990 by Williams and colleagues, who established a procedure for detecting M. leprae DNA in infected tissues [51]. The PCR test was specific and detected M. leprae DNA in biopsies from leprosy patients. The evolution of PCR, judged both by technical issues (time and handling) and by molecular and clinical sensitivity, is remarkable. In the early 1990s, radioactive probes were required to increase PCR sensitivity; to overcome the problems inherent to radioactivity, nonradioactive probes were then developed [43]. Nested PCR was also introduced to increase specificity and sensitivity, avoid the use of radioactive probes, and shorten the time required to obtain a result [44]. Both studies demonstrated the emerging potential of PCR technology for the rapid detection and definitive identification of small numbers of M. leprae in clinical specimens.
The quality and quantity of the isolated nucleic acid, as well as the size of the PCR target product, have tremendous effects on the success of amplification methods. Therefore, several protocols have been described for the purification and amplification of M. leprae DNA, RNA, or both. Extractions that do not involve any purification step, for example, can inhibit the polymerase reaction due to impurities in the extract, as described by de Wit in 1991 [52]. Nevertheless, methods employing commercial kits have been used consistently and seem effective [53,54], although conditions to evaluate repeatability and other parameters needed to further explore the potential of the technique are still lacking. In parallel, extraction methods proved suitable for formalin-fixed samples and further amplification under certain conditions [55]. Samples can also be easily stored in 70% ethanol or on FTA cards for M. leprae DNA detection [56], with similar recovery rates. Furthermore, the size of the amplified PCR fragment has to be taken into account, as the adaptation of conventional [52,57] to real-time PCR assays [47] requires shorter amplicons. An overall assessment of the impact of PCR technology on leprosy diagnosis can be seen in Table 1, using skin biopsy samples as an example. Irrespective of whether the detection method is conventional or real-time PCR, smaller PCR products allow better amplification efficiency from DNA extracted from formalin- or ethanol-fixed as well as fresh tissues. In fact, an important advance has been real-time PCR technology itself. This method allows direct quantitation of the bacterial DNA content in clinical samples and has improved turnaround time and cost effectiveness (Table 1), permitting more reliable results. The procedure follows the general principle of PCR; its key feature is that the amplified DNA or cDNA (complementary DNA) is quantitated as it accumulates in the reaction, in real time, after each amplification cycle.
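The quantitation principle just described, i.e., recording the cycle at which amplified product crosses a detection threshold, is usually operationalized through a log-linear standard curve relating threshold cycle (Ct) to starting copy number. A minimal sketch of that arithmetic in Python (the function names and the example slope/intercept are illustrative assumptions, not values from any cited assay):

```python
def amplification_efficiency(slope: float) -> float:
    """Per-cycle amplification efficiency implied by a standard-curve slope.

    A slope of about -3.32 cycles per log10(copies) corresponds to ~100%
    efficiency, i.e., perfect doubling of template each cycle.
    """
    return 10 ** (-1.0 / slope) - 1.0


def estimate_copies(ct: float, slope: float, intercept: float) -> float:
    """Invert the log-linear standard curve Ct = intercept + slope * log10(copies)."""
    return 10 ** ((ct - intercept) / slope)


# Hypothetical calibration: a single target copy crosses threshold at cycle 38.
slope, intercept = -3.32, 38.0
print(amplification_efficiency(slope))       # ~1.0, i.e., close to 100% efficiency
print(estimate_copies(31.36, slope, intercept))  # ~100 starting copies
```

Lower Ct values thus map exponentially to higher bacillary DNA content, which is why qPCR can yield a molecular analogue of the bacteriological index.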
These real-time methods have slightly but consistently improved analytical and clinical sensitivity when PB patients' skin samples were assessed [37,48]. In addition, analyses using real-time PCR showed that the total DNA content estimated at the molecular level correlates with bacterial load, corroborating the clinical data; this can be used to determine a molecular bacteriological index and help define a patient's clinical form [37,48,50]. Nevertheless, while PCR diagnosis is not needed for lepromatous patients with high bacillary loads and numerous lesions, it is extremely helpful in the situations already mentioned, such as clinical presentations with scarce M. leprae bacilli and difficult-to-diagnose patients.
PCR for Diagnosis of Difficult Cases
Pure neural cases

PCR can aid in defining leprosy diagnosis in suspected patients with clinically suggestive or atypical lesions presenting with negative baciloscopy and inconclusive histopathology. This is true for primary neuritic or PNL patients, who are easily missed and misdirected since they do not exhibit cutaneous lesions [11]. Timely treatment is imperative in these cases because, once nerve fibrosis occurs, damage is permanent and irreversible. Ridley and Jopling (R&J) postulated that PNL might occur across the spectrum from borderline lepromatous (BL) to tuberculoid (TT) forms [58], but, in our experience, PNL cases are indeterminate or borderline tuberculoid (BT) [59]. In fact, these patients cannot be classified according to the R&J system because of the absence of skin lesions and of clear histopathological features in the nerve. Nevertheless, the general WHO classification (paucibacillary) is used, as none of them present bacilli in slit-skin smears. A careful investigation examining skin biopsies (from areas of skin hypoesthesia) described the histopathological features in the cutaneous lesions of PNL cases [59]. The assessment of PNL skin biopsies showed histopathological features consistent with normal skin, although indeterminate or borderline tuberculoid histological alterations were also detected. However, analysis of patients' nerve biopsies often showed detectable bacilli using Wade staining. It is curious that, even in endemic countries, leprosy is assumed to be a dermatologic disorder; therefore, it is quite challenging to diagnose PNL cases [10]. Neurologists are not expecting leprosy as a probable cause of peripheral neuropathy, and thus laboratory techniques (i.e., histological evaluation, PCR from biopsy, and/or PGL-I in the serum) may be used along with pertinent clinical and electroneuromyographical data [12]. In clinical practice, PCR is very useful in detecting M. leprae DNA in nerve specimens that have been shown to be bacteriologically negative by other methods of detection. In fact, Jardim and coworkers [12] demonstrated that M. leprae infection in PNL cases is diagnosed most often by PCR, followed by anti-PGL-I antibodies and direct observation of the bacteria (acid-fast bacilli [AFB]). Hence, PCR is helpful and is being used as a confirmatory and diagnostic routine tool in difficult-to-diagnose cases such as PNL [60][61][62].
Differential diagnosis to other conditions
In an endemic country, leprosy is suspected in patients with anaesthetic lesions, although not exclusively so. PCR could be of immense help for dermatological differential diagnosis in hypochromic or granulomatous lesions, such as pityriasis alba, leishmaniasis, cutaneous tuberculosis (TB) and sarcoidosis, among other skin diseases in which pathological examination is inconclusive. There are few papers evaluating the application of PCR to this problem. Our retrospective analysis testing different gene targets (Ag 85B [37], sodA and 16S rRNA [45], and the repetitive sequence RLEP [50]) on a panel of samples from patients previously diagnosed by pathologists and dermatologists provided interesting information [63]. When a higher proportion of paucibacillary samples (single-lesion indeterminate and tuberculoid forms) was included, rates of PCR positivity decreased, but we were still able to reach a sensitivity of 50%. That is expected, since leprosy diagnosis is challenging in exactly these situations. Also, a group of other dermatological diseases was included as a negative control group, and the results suggest [63] that some PCR-positive samples had been misdiagnosed: these samples were initially classified as other dermatological diseases, but the patients developed leprosy 10 years later, suggesting that PCR for M. leprae DNA could be a very early detection test for leprosy [63].
PCR for treatment monitoring
In 1991, de Wit and coworkers [52] validated a PCR assay based on the selective amplification of a 530-bp fragment of the gene encoding the proline-rich antigen of M. leprae using clinical samples. They were able to detect the presence of M. leprae DNA on frozen biopsy sections from all untreated AFB-positive patients and 56% of the treated AFB-negative patients. The authors believed that PCR positivity reflected the presence of viable bacilli at the time of biopsy since a strong host immune response could result in killing of M. leprae and breakdown or clearance of its DNA in negative PCR samples.
Subsequent studies confirmed that PCR technology could be useful both for diagnosis and for assessment of viable load, as a reduction in signal was found to correlate with loss of viability. A follow-up study using patients' biopsies confirmed that M. leprae is rapidly killed after one month of multidrug therapy (MDT), since initial positivity rates declined by 54.3% in MB cases and 61.8% in PB cases [42]. However, because weak signals persisted, in some cases long after effective treatment [64,65], the authors concluded that DNA-based PCR assays lack the sensitivity to estimate any real impact of treatment on bacterial viability. Similarly, in 2001, Santos and colleagues [66] tested a PCR assay on different samples from leprosy patients who had completed MDT. This assay targeted the previously described RLEP and was able to detect M. leprae in hair bulb, blood, nasal secretion, lymph and skin biopsy samples. Results demonstrated that 54.5% of the individuals were PCR positive in at least one of the samples 8 years after completion of MDT. However, no final conclusions on the clinical significance of PCR positivity could be drawn, since the assays were based on DNA detection and did not reflect viable bacilli.
To overcome this problem, several studies were conducted using reverse transcriptase PCR (RT-PCR)-based assays for M. leprae viability estimation. An RNA-based test is likely to reflect only nucleic acids from living organisms, as the turnover rate of RNA is high, particularly in prokaryotes. Hence, methods based on quantitative estimation of RNA levels in tissues have been useful for monitoring therapeutic responses [67][68][69]. A PCR assay for monitoring bacterial clearance in leprosy patients during chemotherapy, based on M. leprae 16S rRNA expression, was described [67]. After 6 months of MDT, 44% of MB patients and 4% of PB patients tested still showed viable bacilli.
However, this assay was based only on 16S rRNA, a relatively stable RNA species under several conditions, and was unable to detect rapid killing of M. leprae. Moreover, since the 16S rRNA gene was the only target, a major drawback of these earlier works is the lack of a gene target to normalize the template as an indicator of bacterial numbers in the specimen. Thus, Martinez and coworkers [45] proposed a real-time PCR integrated approach based on RNA/DNA ratios for viability determination, i.e., the decrease of M. leprae-specific RNA is evaluated as a function of total M. leprae DNA content. Their results demonstrated that a significant decrease in viability could be seen in vitro in as little as 48 hours post-treatment with rifampin. Also, analysis of human biopsies confirmed the correlation between MDT treatment and declining gene expression levels [45]. This approach may be helpful in the follow-up of leprosy patients on treatment and in the determination of drug resistance [70]. Other researchers have used the same approach to estimate viability in M. ulcerans (Buruli ulcer) and in pathogenic fungi [71,72]. Interestingly, this method has also been applied to estimate M. leprae viability in in vitro assays [73,74].
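The RNA/DNA ratio idea can be sketched numerically. The following is a minimal, purely illustrative calculation (the Ct values are hypothetical and the amplification efficiency of 2.0 is an assumption; this is not the actual assay of ref. [45]):

```python
def viability_index(ct_rna, ct_dna, efficiency=2.0):
    """RNA signal normalized to DNA (a proxy for bacterial load) via the
    delta-Ct method. Lower Ct means more template; efficiency 2.0 assumes
    perfect doubling per cycle."""
    return efficiency ** (ct_dna - ct_rna)

# Hypothetical Ct values before and after treatment: the DNA signal
# (bacterial load) is unchanged while the RNA signal declines.
pre_treatment = viability_index(ct_rna=24.0, ct_dna=26.0)   # RNA abundant -> 4.0
post_treatment = viability_index(ct_rna=31.0, ct_dna=26.0)  # RNA scarce -> 0.03125
```

Because the RNA level is expressed relative to the DNA content of the same specimen, a falling index reflects loss of viability rather than simply a smaller amount of tissue or bacteria sampled.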
A recent and similar approach to monitoring the effectiveness of chemotherapy, using hsp18 as the gene target, was developed by Lini and coworkers [75]. The copy numbers of bacterial DNA and hsp18 mRNA were estimated from 47 leprosy patients during treatment using paraffin-embedded biopsy samples. A reduction in DNA and mRNA during chemotherapy was observed, and hsp18 mRNA could not be detected in patients who had undergone 2 years of MDT. Ten years ago, WHO recommended shortening treatment to 12 months, although no molecular studies have compared the two regimens. In any case, since no clear epidemiological changes in relapse rates, for example, have emerged, it can be suggested that all M. leprae is indeed killed after 12 months of treatment, although a considerable amount of M. leprae DNA remained in the skin after 2 years of MDT. Also, recent molecular epidemiological evidence indicates that reinfection is more common than relapse in second episodes of the disease [76].
PCR for the study of leprosy transmission and household contact surveillance
It is clear that household contact examination and follow-up are determinants of leprosy control [3,4,77]. An arsenal of laboratory exams to screen this population could increase detection and early diagnosis. Several findings indicate that M. leprae transmission mainly occurs by airborne droplet inhalation. Therefore, for purposes of clinical practice, the application of PCR for detection of M. leprae DNA in nasal swab samples from healthy individuals and household contacts has been reported [36,39,78,79]. Results provided evidence that a majority of MB patients carry M. leprae in their nasal mucosa and that carriage of M. leprae occurs among healthy people living in areas where leprosy is endemic [78][79][80]. In household contacts, detection of M. leprae DNA by PCR in nasal swabs does not indicate whether the contact will progress to active disease. DNA detection rates in nasal swabs of contacts vary from 1 to 10% (Table 2), depending in part on the clinical form of the index cases. The data are not conclusive, because prospective studies enrolling large numbers of contacts are still lacking. Moreover, the high positivity rates observed among healthy individuals (Table 2) call into question the feasibility of using PCR at this site to predict the risk of developing the disease. Nevertheless, it has been shown that a positive PGL-I test among contacts is associated with an increased risk of developing leprosy [81,82]. More recently, a very interesting study indicated that the risk of progressing to active disease indeed increases if a contact tests PCR positive in blood [83].
Thus, it is likely that a PGL-I test in combination with PCR could help identify the population at highest risk among household contacts [84].
It is believed that humans are the only significant reservoir of infection in leprosy, but recent investigations reported the presence of M. leprae DNA in wild armadillos and environmental samples. Thus, studies in areas of high prevalence of the disease confirmed the presence of M. leprae in water samples in Indonesia [85] and soil in India [86] as potential sources of continued transmission of the disease. Also, Job and coworkers suggested that skin and nasal epithelia of untreated MB leprosy patients contribute to the shedding of M. leprae into the environment, which in turn increases the risk for household contacts [87]. In addition, Truman and collaborators [2] used whole-genome sequencing to show that wild armadillos and American patients with leprosy in the southern United States are infected with the same strain of M. leprae. They were able to confirm that about a third of the leprosy autochthonous cases that arise each year in the United States almost certainly result from contact with infected armadillos.
Technical Limitations and Future Perspectives of PCR-Mediated Leprosy Diagnosis
Although PCR could be a useful tool for the detection of subclinical infection, only a few investigations have consistently associated the presence of M. leprae DNA with further development of the disease among household contacts [83]. However, PCR results combined with a serological test could improve the predictive value of PCR technology in leprosy diagnosis. In addition, the PCR-integrated approach based on RNA/DNA ratios for viability determination could in the future be useful for assessing the rate of M. leprae infection within a population. Earlier diagnosis of leprosy will be of great value in preventing more severe disease that may lead to disabilities. Chemotherapy at an early stage could preclude leprosy transmission and the consequences of late diagnosis.
In clinical practice, the detection of M. leprae by PCR in patients with negative bacilloscopy or inconclusive histopathology would be of great value to define leprosy diagnosis. Thus, choosing the right target to improve sensitivity is important. The use of a repetitive sequence as a PCR target, for example, provides higher sensitivity than other targets because it is present at multiple sites in genomic DNA [88].
However, the specificity of a repetitive sequence as a PCR target is an issue, since we observed that it is lower than that of other assays. For this reason, although it seems encouraging, the highest sensitivity has to be interpreted with great care. The RLEP target is highly conserved and, as a result, homologous sequences may be present in other Mycobacterium species that have not been thoroughly investigated, generating false-positive results, as reported for the M. tuberculosis IS6110 marker elsewhere [89]. So far, gene targets such as 16S and Ag85B can be considered to offer a good balance between specificity and sensitivity (Table 1) [63]. This also argues against results detecting "M. leprae" DNA in water or soil [85,86].
For routine application of PCR, operational aspects such as the invasive nature of sample collection should be considered. Therefore, comparative studies of different types of clinical samples for leprosy diagnosis have been carried out. Less invasive samples such as blood, urine, nasal swabs, hair bulbs and, most importantly, slit-skin smears were assessed and, although results were encouraging, they were less efficient than those obtained with skin biopsies; skin biopsies would be the best sample for household contact screening were it not for the ethical considerations [35,90]. Amplification of M. leprae in blood samples, for example, gives inferior results in comparison to other types of clinical material [35]. Even though biopsy sampling of the lesion is invasive, it is the choice in most studies as it provides the highest PCR positivity rates. So far, no well-characterized commercial test for detection of M. leprae DNA in patients' samples is available. Therefore, many labs continue to report results using their own definitions of sensitivity and specificity, and, in most cases, the results are not comparable across different clinical applications. Currently, several specific M. leprae genes of interest have been identified, and assays based on existing simple automated platforms, such as the GeneXpert assay for diagnosis of M. tuberculosis infection [91], could be developed for leprosy. The Xpert MTB/RIF assay detects DNA sequences specific for M. tuberculosis and rifampicin resistance by PCR and is a major advance for TB diagnostics, especially for multidrug-resistant (MDR) TB and HIV-associated TB. Additional new technologies such as miniature "lab on a chip" devices [92] and lateral flow assays [93][94][95] are also progressing so quickly that such assays could become feasible at point-of-care to improve clinical management decisions in leprosy diagnosis.
No data exist concerning the relative performance of different laboratories and methods for M. leprae DNA detection. An external quality assurance study on diagnostic proficiency, certifying and publishing the results in a comparative and anonymous manner, would be highly recommended for leprosy diagnosis. Validation of paramount issues such as adequate clinical material, nucleic acid extraction methods, sensitivity, specificity, PCR inhibition and control of contamination will assure a reliable diagnosis of the disease. Thus, comparative testing of characterized samples would be a direct way to identify weaknesses of individual laboratories or of certain methods. Furthermore, the positive predictive value (PPV) is another means of evaluating the usefulness of a diagnostic test, as it reveals the probability that a positive result reflects the underlying condition being tested for. Its value, however, depends on the prevalence of the disease, which may vary. Similarly, the negative predictive value (NPV) gives the proportion of patients with negative results who are correctly diagnosed. Although very useful, these values are difficult to apply to leprosy diagnosis due to the lack of a true gold standard method.
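The prevalence dependence of PPV can be made concrete with Bayes' rule. The sensitivity, specificity and prevalence figures below are purely illustrative and are not estimates from any leprosy study:

```python
def predictive_values(sensitivity, specificity, prevalence):
    """PPV and NPV from test characteristics and disease prevalence (Bayes' rule)."""
    tp = sensitivity * prevalence              # true positives per person tested
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    fn = (1 - sensitivity) * prevalence        # false negatives
    tn = specificity * (1 - prevalence)        # true negatives
    return tp / (tp + fp), tn / (tn + fn)

# Hypothetical test: 90% sensitivity, 95% specificity.
ppv_high, npv_high = predictive_values(0.90, 0.95, 0.10)   # high-prevalence setting
ppv_low, npv_low = predictive_values(0.90, 0.95, 0.001)    # low-prevalence setting
# PPV collapses as prevalence falls, while NPV barely moves.
```

With these numbers, the same test yields a PPV of about 67% at 10% prevalence but under 2% at 0.1% prevalence, which is why a single PPV figure cannot be quoted for a leprosy PCR assay independent of the population screened.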
Conclusions
Overall, extensive evaluation of PCR tests in field studies has shown that DNA-based PCR assays can be 100% specific, while sensitivity ranges from 34 to 80% in patients with PB forms and exceeds 90% in patients with MB forms of the disease (Table 1). Also, since finding M. leprae is crucial in the confirmatory diagnosis of early leprosy, the use of the PCR technique to improve ascertainment of difficult cases such as early PB and PNL is advisable and important in reaching a definitive diagnosis (Box 2). Thus, PCR to detect M. leprae DNA in difficult-to-diagnose cases can be performed on thousands of samples, favoring early identification and early treatment and helping to interrupt the transmission chain. Moreover, definition of M. leprae strains could be very helpful in studying leprosy transmission. Undoubtedly, there is a future for PCR-based methods in leprosy, since they provide options for confirmation of diagnosis, treatment follow-up, detection of resistance and, especially, support for the diagnosis of difficult cases such as PNL and PB.
"year": 2014,
"sha1": "7ce13abd174a68355e4376aaafb669a6c489837e",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1371/journal.pntd.0002655",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e95793f8f0e941c06e50fc1ec365fad943efad44",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Highly Pathogenic Avian Influenza Virus Infection of Mallards with Homo- and Heterosubtypic Immunity Induced by Low Pathogenic Avian Influenza Viruses
The potential role of wild birds as carriers of highly pathogenic avian influenza virus (HPAIV) subtype H5N1 is still a matter of debate. Consecutive or simultaneous infections with different subtypes of influenza viruses of low pathogenicity (LPAIV) are very common in wild duck populations. To better understand the epidemiology and pathogenesis of HPAIV H5N1 infections in natural ecosystems, we investigated the influence of prior infection of mallards with homo- (H5N2) and heterosubtypic (H4N6) LPAIV on exposure to HPAIV H5N1. In mallards with homosubtypic immunity induced by LPAIV infection, clinical disease was absent and shedding of HPAIV from respiratory and intestinal tracts was grossly reduced compared to the heterosubtypic and control groups (mean GEC/100 µl at 3 dpi: 3.0×10² vs. 2.3×10⁴ vs. 8.7×10⁴; p<0.05). Heterosubtypic immunity induced by an H4N6 infection mediated a similar but less pronounced effect. We conclude that the epidemiology of HPAIV H5N1 in mallards, and probably other aquatic wild bird species, is massively influenced by interfering immunity induced by prior homo- and heterosubtypic LPAIV infections.
Introduction
Migratory birds and members of the Anseriformes order in particular, have been suspected as carriers of highly pathogenic avian influenza virus (HPAIV) subtype H5N1 from Southeast Asia into central Asia, Europe and Africa. The primary occurrence of the infection in wild birds in several countries and rapid westward spread of HPAIV H5N1 in 2005 and 2006 have sparked such assumptions [1]. However, the role of wild birds as culprits of H5N1 spread has been heavily debated. Instead, legal and illegal trading practices of poultry, poultry products and captive wild birds were put into focus [2,3,4].
Previous experimental studies with HPAIV H5N1 strains of different origins in various species of water birds including swans and geese [5,6,7], gulls [8,9,10] and ducks [9,11,12] showed that AIV seronegative swans, especially black swans (Cygnus atratus), Canada geese (Branta canadensis) and laughing gulls (Larus atriculla) are highly vulnerable to H5N1 infection. Diving ducks including wood ducks (Aix sponsa) and pochards (Aythya ferina) were also found susceptible, while dabbling ducks including northern pintails (Anas acuta), blue-wing teals (Anas crecca), redheads (Aythya americana) and mallards (Anas platyrhynchos) were less susceptible or tolerant [9]. In these studies the potential role of the latter species which include the most prevalent Eurasian wild duck species, the mallard, for long-distance spread of H5N1 virus was stressed [11,12]. However, there is only a single report of healthy wild duck (common pochard in Switzerland) found in Europe to be naturally infected by HPAIV H5N1 although several clustered outbreaks of symptomatic influenza among wild birds in Europe, some involving mallards, have occurred [13,14]. Sample sizes in crosssectional surveys of wild birds may not have been large enough to exclude a prevalence of approximately less than 1% of HPAIV H5N1. Outbreaks among wild birds, nevertheless, proved to be limited in temporal and geographical extension as well as in numbers of individual birds infected. In Germany, in 2006 only 344 wild birds, mainly swans and geese, were found dying of an HPAIV H5N1 infection despite presence of several hundred thousand individuals of these species in the same area [13]. The reasons for this observation are still not clear, but it is likely that not all infected individual birds develop symptomatic influenza.
It has been hypothesized that a considerable number of these birds may have at least partially been protected by immunity induced by naturally occurring homosubtypic (HA homologous) infection with avian influenza viruses of low pathogenicity (LPAIV), and that cross reactive interference of even heterosubtypic (HA heterologous) LPAIV-induced immunity might have played a silencing role. LPAIV H5 strains are being continuously isolated from Anatidae species including mallards. AIV prevalence in wild ducks along the southern coasts of the North and the Baltic Seas can reach 14% during autumn migration [15]. However, no reliable seroprevalence data from wild Anatidae exist to support this assumption.
We tested the effect of LPAIV-induced immunity by experimental inoculation of captive mallards (seronegative throughout the period before inoculation) with two different LPAIV subtypes, H5N2 and H4N6, and subsequent challenge infection with HPAIV H5N1. An H4 subtype virus was chosen because (i) the HA of this subtype is distantly related, by genetic and antigenic means, to that of the H5 subtype and (ii) subtype H4 viruses show a high prevalence in wild duck populations. Mallards represent the most abundant duck species in Eurasia and migrate over long distances, e.g., along the East Atlantic flyway [16]. In our study, we provide evidence that both LPAIV-induced homosubtypic and heterosubtypic (H4) immunity modulate, to different extents, H5N1 excretion in mallards.
Viruses
The three AIV strains used in this study are maintained in the virus repository of the OIE and National Reference Laboratory for Avian Influenza (NRL AI) at the Friedrich-Loeffler-Institut (FLI). The LPAIV strains A/mallard/Föhr (Germany)/Wv1806-09K/03 (H4N6) and A/duck/Potsdam/1402/86 (H5N2) were used for pre-exposure inoculation of ducks. The HPAIV strain A/duck/Vietnam/TG24-01/05 (H5N1) was used for challenge infection. This clade 1 isolate bears a PQRERRKKR/GLF motif at the HA0 cleavage site and has an intravenous pathogenicity index (IVPI) of 2.9 in specific pathogen free (SPF) chickens; in addition, it has been found to induce clinically overt and lethal neurological disease in adult Pekin ducks (Harder et al., unpublished).
Experimental design
Thirty-two mallards (Anas platyrhynchos) were captive-bred and housed indoors in the quarantine building of the FLI. The birds were handled and cared for in accordance with the Animal Protection guidelines and legal approval (trial approval LVL M-V/TSD/7221.3-1.1-003/07). All experiments with HPAIV were conducted under Biosafety Level 3-agriculture (BSL-3-Ag) conditions. At 12 weeks of age, 24 ducks were transported to Biosafety Level 3 (BSL-3) facilities at the FLI. The ducks were inoculated with LPAI viruses after one week of acclimatization, when they were 13 weeks of age. At this age, juvenile free-ranging mallards show the highest prevalence of LPAIV infections, consistent with pre-migration staging in late summer or early fall [9,17]. Prior to inoculation, oropharyngeal and cloacal swabs were collected from each bird to ensure they were not infected with any subtype of avian influenza virus at the start of the study. In addition, serum samples had been collected regularly since week 4 of age to confirm they were continuously AIV-negative by NP-specific antibody testing with a competitive enzyme-linked immunosorbent assay (ELISA) and haemagglutination inhibition (HI) tests using H4 and H5 subtype antigens. The ducks were randomly assigned to two experimental groups (male and female ducks were included in each group in approximately equal numbers), and each group was housed separately in self-contained isolation units: the H4 group, twelve ducks inoculated via the ocular, nasal and oropharyngeal routes with one millilitre (10⁶ EID50) of the H4N6 strain; the H5 group, twelve ducks inoculated in the same way and at the same dose with the H5N2 strain; and the control group, eight ducks that stayed in quarantine until challenge. All birds were continuously monitored for clinical symptoms, and blood samples were collected at 1, 2, 4 and 7 weeks after LPAIV inoculation. Serum samples were tested by ELISA and HI tests.
Seven weeks after LPAIV inoculation, all 32 birds (including controls) were housed together in a BSL-3-Ag facility at the FLI. Oropharyngeal and cloacal swabs were collected from each bird to exclude active infection with, and shedding of, H4 or H5 LPAIV. Subsequently, all birds were challenged with 10⁵ TCID50 of the HPAIV H5N1 strain via the ocular, nasal and oropharyngeal routes. The birds were then monitored daily for clinical signs of disease. Oropharyngeal and cloacal swabs were collected from all birds at 1, 2, 3, 4, 7, 10, 14 and 21 days post challenge (dpc) and tested by real-time RT-PCR. The experiment was terminated at 24 dpc, when serum samples were collected for serological testing and tissue samples including brain, lungs, liver and pancreas were obtained for virological evaluation.
Hypothesis tests of differences between groups were carried out using the Mann-Whitney U test in R.
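The analysis was run in R; as a rough illustration of what the Mann-Whitney U test computes, a pure-Python sketch of the U statistic (midranks for ties, without the p-value) applied to made-up shedding values (not the study's raw data) might look like:

```python
def mann_whitney_u(a, b):
    """Mann-Whitney U statistics for two independent samples (midranks for ties)."""
    combined = sorted([(v, 0) for v in a] + [(v, 1) for v in b])
    values = [v for v, _ in combined]
    n = len(values)
    ranks = [0.0] * n
    i = 0
    while i < n:  # assign midranks to runs of tied values
        j = i
        while j < n and values[j] == values[i]:
            j += 1
        for k in range(i, j):
            ranks[k] = (i + 1 + j) / 2.0
        i = j
    rank_sum_a = sum(r for (_, g), r in zip(combined, ranks) if g == 0)
    n1, n2 = len(a), len(b)
    u1 = rank_sum_a - n1 * (n1 + 1) / 2.0  # U for sample a from its rank sum
    return u1, n1 * n2 - u1                # U for sample b by symmetry

# Hypothetical per-group shedding values (GEC/100 µl), illustrative only.
control = [8.7e4, 9.1e4, 7.9e4]
h5_group = [3.0e2, 2.5e2, 4.1e2]
u1, u2 = mann_whitney_u(control, h5_group)  # complete separation: (9.0, 0.0)
```

A U of 0 for one sample (complete separation of the two groups' ranks) is the extreme case; in practice the statistic would be referred to its null distribution, as R's `wilcox.test` does, to obtain the p-values reported in the study.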
Real-time reverse transcription-PCR (RT-PCR)
Swab and tissue samples were tested with TaqMan one-step real-time RT-PCR assays targeting the influenza A virus M gene [18] and an H5 subtype gene fragment [19], using the SuperScript III One-Step RT-PCR kit with Platinum Taq DNA polymerase (Invitrogen) on an MX3000P Real-Time PCR System (Stratagene). In all tests, negative RNA preparation controls, negative and positive RT-PCR controls, and an internal transcription and amplification control (IC-2) were included [20]. The number of viral M gene copies, or genome equivalent copies (GEC), in 100 µl of swab sample fluid was determined on the basis of calibration experiments using RNA run-off transcripts of a plasmid carrying the M gene fragment under control of a T7 promoter (Figure 1).
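The conversion of Ct values to genome equivalent copies via such a calibration (standard) curve can be sketched as follows. The dilution series, slope and intercept here are hypothetical textbook values (an ideal slope of -3.32 per log10 corresponds to 100% PCR efficiency), not the actual calibration data of this study:

```python
def fit_standard_curve(log10_copies, ct_values):
    """Least-squares line Ct = slope * log10(copies) + intercept from a dilution series."""
    n = len(log10_copies)
    mx = sum(log10_copies) / n
    my = sum(ct_values) / n
    sxx = sum((x - mx) ** 2 for x in log10_copies)
    sxy = sum((x - mx) * (y - my) for x, y in zip(log10_copies, ct_values))
    slope = sxy / sxx
    return slope, my - slope * mx

def ct_to_copies(ct, slope, intercept):
    """Invert the standard curve to estimate genome equivalent copies."""
    return 10 ** ((ct - intercept) / slope)

# Hypothetical ideal dilution series: 10^2..10^6 copies per reaction.
logs = [2.0, 3.0, 4.0, 5.0, 6.0]
cts = [40.0 - 3.32 * x for x in logs]
slope, intercept = fit_standard_curve(logs, cts)
estimate = ct_to_copies(26.72, slope, intercept)  # ~1e4 copies
```

Each unknown sample's Ct is then mapped through the fitted line; the same scheme underlies the Ct-to-TCID50 extrapolation described under virus titration below, with infectivity rather than transcript copies on the x-axis.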
Competitive ELISA
The serum samples were tested with a competitive ELISA targeting influenza A nucleoprotein antibodies following the
Serum neutralisation test
The serum samples of all ducks were tested by serum neutralization test (SNT) to quantify the serological response, based on a previously described procedure [21].
Virus titration
The titre of HPAIV in the swab samples was extrapolated from Ct values on the basis of calibration experiments using log10 dilution series of A/duck/Vietnam/TG24-01/05 (H5N1) virus. Infectivity is expressed as TCID50 per 100 µl of swab sample fluid (Figure 1).
Immunohistochemistry and pathology
Tissue samples including trachea, lungs, heart, cerebrum, cerebellum, spinal cord, proventriculus, gizzard, small and large intestine, liver, pancreas and kidney of the two birds from the control group which died at 5 and 6 dpc were collected, formalin-fixed and processed for paraffin embedding according to standard procedures, and immunohistochemistry for influenza A virus nucleoprotein (NP) was performed. Briefly, after dewaxing, sections were microwave-irradiated for antigen retrieval (2×5 min, 600 W, 10 mM citrate buffer pH 6.0) and incubated with a rabbit anti-NP serum (1:750). A biotinylated goat anti-rabbit IgG1 (Vector, Burlingame, CA, USA) was applied (1:200) as secondary antibody. By means of the avidin-biotin-peroxidase complex method, a bright red intracytoplasmic and nuclear signal was obtained. Positive control tissues from chickens experimentally infected with HPAI virus (H5N1) and, additionally, a control primary rabbit serum against bovine papillomavirus (BPV, 1:2000) were included.
Status before LPAIV exposure
The cloacal and oropharyngeal swab samples collected from the 32 ducks during the 8 weeks prior to LPAIV inoculation gave negative results by real-time RT-PCR, indicating that the ducks were not shedding AIV before experimental infection. In addition, the ducks were serologically negative for influenza A antigens by ELISA and HI tests (using H4N6, H5N2 and H5N1 antigens), indicating that the birds had not been exposed to AIV before inoculation.
LPAIV infection and status before HPAIV challenge
All birds remained clinically healthy during seven weeks after inoculation of H4 and H5 LPAIV. The results of serological evaluation of ducks by ELISA and HI tests using the homologous antigens at 1, 2, 4 and 7 weeks after inoculation are summarized in Table 1. Serum samples from the control group were serologically negative when tested by ELISA, HI and serum neutralization tests. The cloacal and oropharyngeal swab samples collected from all ducks before challenge, were negative in real-time RT-PCR indicating no virus shedding before HPAIV inoculation.
HPAIV challenge infection
Clinical symptoms. Clinical symptoms varied significantly among the three groups. From day two after inoculation onwards, up to seven ducks in the control group became severely sick, but only one of the control birds died (6 dpc), while the others recovered slowly. One more duck died at 5 dpc; unfortunately, because this bird and another bird from the H4 group lost their wing tags on the same day, it could not be unambiguously assigned to either the H4 or the control group (see also footnote 7 in Table 2). Clinical signs included severe weakness, loss of appetite, mild diarrhea and listlessness. Neurological signs, mainly consisting of neck tremor, were evident in one of the control ducks. Three out of 12 ducks from the H4 group transiently showed mild clinical symptoms consisting of listlessness and loss of appetite. One duck of this group developed a unilateral cloudy eye. No clinical signs were observed in the H5 group.
Respiratory and intestinal viral shedding. The results of the real-time RT-PCR testing of cloacal and oropharyngeal swab samples taken on days 1, 2, 3, 4, 7, 10, 14 and 21 after challenge are summarized in Figure 2 and Table 2. Cloacal and oropharyngeal excretion of HPAIV (H5N1) varied significantly among the three groups. In general, oropharyngeal excretion was much more pronounced. In the control group, viral shedding from the respiratory tract started at 1 dpc and continued with high viral genome loads for four days (on average 2.5×10⁵; Table 2). During these days, peaks of clinical signs were also observed. Shedding continued in seven and four control ducks, respectively, until 7 and 10 dpc (Table 2). Two ducks from the control group continued respiratory viral shedding at low virus genome loads for two weeks, and one duck continued for three weeks post-challenge, even after recovering from clinical disease (Table 2). The ducks of the control group also excreted virus from the intestinal tract during the first week after infection, but at lower average genome loads (mean GEC/100 µl at 3 dpi: 8.7×10⁴ vs. 2.2×10⁴; p<0.05) and for a shorter time compared to oropharyngeal swabs (Figure 2, Table 2). All cloacal samples were negative after one week post challenge. Cloacal shedding was not observed in ducks from the groups with previous LPAIV infection. Clear differences were also seen in the oropharyngeal shedding of the H4 and H5 groups; especially on days 3 and 4, significant differences in tracheal shedding were observed between all three groups (Figure 2). Whereas in the H4 group viral genome loads of 7.5×10⁴, 7.1×10⁴ and 2.3×10⁴ GEC/100 µl (equal to 1.4×10³, 1.3×10³ and 3.1×10² TCID50) were observed during the first three dpc, samples from only three ducks of the H5 group, with loads less than 1.8×10³ GEC/100 µl, were found positive (Table 2). The viral genome loads were higher and lasted for longer periods in oropharyngeal swabs of the control group than in samples of the groups immunized with heterologous (H4) or homologous (H5) LPAIV, respectively (mean GEC/100 µl at 3 dpi: 8.7×10⁴ vs. 2.3×10⁴ vs. 3.0×10²; p<0.05).
Tissue samples comprising brain, lung, liver and pancreas from the one control duck which died at 6 dpc were highly positive in real-time RT-PCR (2.0×10⁷, 5.1×10⁴, 4.4×10⁴ and 1.1×10⁶ GEC/100 mg, respectively). No viral RNA/infectivity was detected in the same tissues from any of the ducks surviving until 24 dpc.
Serological findings. Surviving ducks in all three groups developed high levels of HPAIV H5-specific antibodies post-challenge according to NP-ELISA, H5-specific HI and serum neutralization tests (Table 1). Serum neutralization titres against the challenge virus at 24 dpc ranged around 9 log₂ (1:512) in all three groups with no significant differences among them.
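For readers unfamiliar with log₂ titre arithmetic: a geometric mean titre (GMT) of 9 log₂ corresponds to a reciprocal titre of 2⁹ = 512, i.e. 1:512. A minimal sketch of the computation, using made-up titres rather than the study's raw data:

```python
import math

def gmt_log2(titres):
    """Geometric mean titre of reciprocal serum titres (e.g. 512 for 1:512),
    expressed on the log2 scale, as used for HI/neutralization data here."""
    logs = [math.log2(t) for t in titres]
    return sum(logs) / len(logs)

# Hypothetical titres for illustration only:
titres = [512, 512, 256, 1024, 512]
print(round(gmt_log2(titres), 2))  # → 9.0, i.e. a GMT of ~1:512
```

Averaging on the log₂ scale (rather than averaging the raw titres) is what makes this a geometric rather than arithmetic mean.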
Pathological findings. The control duck which died at 6 dpc showed moderate congestion of the liver and edema of the brain. On histopathology, the cerebrum was severely congested; multifocally there was neuropil degeneration with mild vacuolation (Figure 3A), hemorrhage and glial nodules. Ventricles of the cerebrum were filled with blood, and a mild lymphoplasmacellular meningoencephalitis with few macrophages was present. Within the lungs there was moderate congestion and edema. Besides this, severe heterophilic infiltrates, predominantly adjacent to parabronchi, were observed. The heart and the liver showed mild multifocal parenchymal degeneration accompanied by lymphoplasma-histiocytic infiltrates. Influenza virus nucleoprotein was detected by immunohistochemistry within the brain (neurons and glial cells, Figure 3B), the liver (hepatocytes), the lung (bronchiolar epithelium and alveolar macrophages) and the heart (myocardiocytes). One more duck died at 5 dpc with the above-mentioned clinical signs, but due to loss of its wing tag it could not be unambiguously assigned to either the H4 or control group (see also footnote 7 in Table 2). Ducks surviving until 24 dpc did not reveal any gross lesions.
Discussion
Here we show that pre-existing immunity induced by infection with homo- or heterosubtypic LPAIV modifies the course of an experimental challenge infection with HPAIV H5N1 in mallards. Clinical signs as well as the amplitude and tissue tropism of virus shedding were affected.
Seven (out of eight) control ducks became severely sick. In contrast, only three ducks (out of 12) with previous H4N6 infection showed mild clinical symptoms but recovered quickly, and no clinical symptoms were obvious in ducks with previous H5N2 LPAIV infection. Viral shedding from the respiratory tract was most pronounced in control ducks. Preferential viral shedding via the oropharynx has been consistently demonstrated with HPAIV H5N1 viruses [9,12,22]. Two control ducks even continued viral shedding at low titres for two more weeks after resolution of clinical symptoms. Viral shedding in the H4 group was markedly shortened and at lower titres (3 and 4 dpc). Just a few ducks from the H5 group were shedding the virus at very low titres compared to the control group. Cloacal viral shedding was evident only in ducks of the control group.
Clinical symptoms in ducks of the control group seemed to be more severe than has previously been reported for experimental inoculation of naive mallards with HPAIV H5N1 [9,23]. The observed variability in clinical symptoms and modes of oropharyngeal viral shedding among different studies could be due to virus strain-specific characteristics [22,24,25]. However, low-level pre-existing AIV-specific immunity could explain the attenuation effect seen in some birds tested in previous studies.
Long-distance migration is one of the most demanding physiologic activities in the animal world [26] and, although no overt clinical symptoms have been observed during previous experimental HPAIV infections of mallards, these birds may not have been able to engage in long-distance migration flights at the height of viral infection. Previous experimental studies demonstrated that oropharyngeally excreted HPAIV originated from lung and air sacs [12], implicating a high replication rate of the virus and thus a possible functional impairment of organs important for long-distance flight and migration. From this point of view it seems more likely that long-distance transposition of HPAIV by migrating Anatidae might rather occur during the incubation period. This period may last only a few days. Nevertheless, in this study two control ducks shed virus for more than seven days after resolution of clinical signs of infection, albeit at lower titers. Therefore, these birds may contribute to local transmission of the virus. Also, many ducks of the H4 group were shedding the virus, again at lower titers, in the absence of clinical disease. Spread of virus by such individuals, at least over short to medium distances, can likewise not be excluded. Also, the high intra-species variability in susceptibility to HPAI (H5N1) viruses observed in many wild bird species during regional outbreaks in Europe in 2006 and 2007 may in fact also be explained by different levels of AIV-specific immunity primed by previous LPAIV infections.
In summary, the results of our study show that, in captive mallards, heterosubtypic cross-reactive immunity can mitigate clinical symptoms of an HPAIV H5N1 infection, reduce the amount and duration of viral shedding from the respiratory tract, and prevent viral shedding from the intestinal tract. Homosubtypic immunity may fully abrogate clinical symptoms and viral shedding from the intestinal tract, and drastically reduce viral shedding from the respiratory tract. Therefore, mallards with prior exposure to homologous LPAI viruses may remain healthy and might be suitable for long-distance transposition of HPAIV, but probably only shed very low titers of virus. Mallards with prior exposure to heterosubtypic LPAI viruses might pose a greater risk for transmission and spread of HPAIV, because they can shed higher amounts of virus (but only via the respiratory route) without developing severe clinical disease. Still, the potential role of respiratory shedding compared to intestinal shedding in the efficacy of bird-to-bird transmission of HPAIV in nature needs to be clarified.
AEROSOL VARIABILITY OBSERVED WITH RPAS
To observe the origin, vertical and horizontal distribution and variability of aerosol particles, and especially recently formed ultrafine particles, we plan to employ the remotely piloted aircraft system (RPAS) Carolo-P360 "ALADINA" of TU Braunschweig. The goal of the presented project is to investigate the vertical and horizontal distribution, transport and small-scale variability of aerosol particles in the atmospheric boundary layer using RPAS. Two additional RPAS of type MASC of Tübingen University, equipped with turbulence instrumentation, add the opportunity to study the interaction of the aerosol concentration with turbulent transport and exchange processes of the surface and the atmosphere. The combination of different flight patterns of the three RPAS allows new insights into atmospheric boundary layer processes. Currently, the different aerosol sensors are being miniaturized at the Leibniz Institute for Tropospheric Research, Leipzig, and adapted together with TU Braunschweig to fit into the RPAS. Moreover, an additional meteorological payload for measuring temperature, humidity and turbulence properties is constructed by Tübingen University. Two condensation particle counters determine the total aerosol number with different lower detection thresholds in order to investigate the horizontal and vertical aerosol variability and new particle formation (aerosol particles of some nm diameter). Further, the aerosol size distribution in the range from ~0.3 to ~5 µm is measured by an optical particle counter.
INTRODUCTION

Motivation
The formation of a large number of new nucleation mode particles (size range from ~3 to 15 nm diameter) in the atmospheric boundary layer (ABL) has been observed worldwide at various sites and constitutes a significant source of the atmospheric aerosol. Different nucleation processes lead to the formation and, if sufficient condensable gases are available, subsequent growth of particles to a detectable size above 3 nm and to larger sizes within a few hours. Then they act as cloud condensation nuclei and scatter solar radiation, which influences the regional and global climate. Thus, being able to understand and predict new particle formation is a key issue in understanding and quantifying the aerosol effects on climate (Spracklen et al., 2008). In the atmospheric boundary layer, particle bursts have been reported in the entrainment zone, the residual layer, and throughout the convectively mixed layer. The RPAS has the potential to close the gap in atmospheric aerosol measurements between long-term ground-based observations and long-range aircraft measurements. Complementary to these measurements, RPAS observations allow for investigating the small-scale and short-term aerosol variability in vertical and horizontal direction at low cost and with minimal logistical requirements. The challenge is to modify small handheld instruments to meet the mass, power and size requirements of the RPAS, to obtain acceptable temporal resolution, and to calibrate the miniaturized systems.
Aims
The overall goal of the project is to investigate the vertical and horizontal distribution, transport and small-scale variability of aerosol particles in the ABL using highly flexible RPAS. The new RPAS ALADINA (Application of Light-weight Aircraft for Detecting In situ Aerosol, Fig. 1) is currently being modified at TU Braunschweig and equipped with aerosol instrumentation.

This contribution has been peer-reviewed.
Two additional RPAS of type MASC (Multi-purpose Automatic Sensor Carrier), equipped with turbulence instrumentation by the University of Tübingen, add the opportunity to study the interaction of the aerosol concentration with turbulent transport and exchange processes of the surface and the atmosphere.
The three main aims of the project are:
a) the development, characterisation and integration of a miniaturized aerosol payload;
b) the characterisation and calibration of the aerosol UAV system in the field;
c) the investigation of ABL aerosol and its variability.
Aerosol variability and particle bursts
The earth's surface can act either as a source or a sink for aerosol particles. Aerosol formation, uptake, mixing, growth and gravitational settling proceed throughout the ABL. The vertical aerosol concentration is connected to the thermodynamic structure: it is well mixed in a turbulent ABL and forms layers of different concentration and properties if the atmosphere is stably stratified. Aerosol particles modify the ABL in many ways: especially particles in the accumulation mode (about 100 to 1000 nm) interact with solar and terrestrial radiation and have a direct impact on the radiation budget (IPCC, 2007). Depending on the aerosol properties and the surroundings, the effect can be an additional positive (warming) or negative (cooling) surface forcing (Lohmann and Feichter, 2005), influencing the driving force of convection and the development of the ABL structure (e.g. Yu et al., 2002). Aerosol particles also serve as cloud condensation nuclei (CCN), enable cloud formation in a saturated environment and modify cloud properties (indirect aerosol effect, Kerminen et al., 2005). As the vertical distribution of aerosol in the ABL is strongly correlated with turbulent activity (Boy et al., 2003), simultaneous profile measurements of aerosol concentrations and turbulent parameters serve to identify source altitudes and transport (Buzorius et al., 2001). In a stably stratified boundary layer, complex processes involving aerosol take place at different altitudes, which cannot be monitored by ground-based observation sites (Corrigan et al., 2008). During subsequent vertical and horizontal mixing, the particles are redistributed and reach different locations. The formation of a large number of new nucleation mode particles (size range from ~3 to 15 nm diameter) in the ABL has been observed worldwide at various rural, marine and urban observation sites and constitutes a significant source of the atmospheric aerosol (Kulmala et al., 2004). Different nucleation processes lead to the
formation and, if sufficient condensable gases are available, subsequent growth of particles to a detectable size above 3 nm and further to the Aitken mode size (~15 to 100 nm) within a few hours. Then they act as CCN and scatter solar radiation, influencing the regional and global climate (Spracklen et al., 2008). Wiedensohler et al. (2009) showed that new particle formation (NPF) may enhance the available CCN by an order of magnitude. Being able to understand and predict NPF is a key issue in quantifying the direct and indirect aerosol effects on climate. Above a certain particle diameter, volatile material dominates the particle growth (Wehner et al., 2007). Ground-based observations of particle bursts were connected to intense solar radiation, high vertical wind variance (indicating strong turbulent mixing), downward particle flux, low water vapour concentration and enhanced ozone concentration (Boy et al., 2003). NPF was observed with increased turbulence within the residual layer (RL) (Wehner et al., 2010), while these particles were mixed downwards and detected at ground stations. Ground-based observations revealed a mesoscale horizontal extent of NPF over hundreds of km (Wehner et al., 2007). Airborne measurements investigated the large-scale variability of the particle concentrations along air mass trajectories (O'Dowd et al., 2009). Detailed measurements of the vertical and horizontal variability are recommended for the implementation of NPF in models (Boy et al., 2006).
Carrier platform
ALADINA provides a unique and flexible tool for characterizing the vertical and horizontal variability of the boundary layer aerosol. The Carolo-P360 with a wingspan of 3.6 m was designed at TU Braunschweig (Scholtz, 2009) to carry up to 2.5 kg of payload in the front compartment (Fig. 2). With electrical propulsion, it has an endurance of about 40 minutes. The cruising speed is 25 m/s. It is equipped with an emergency landing system (parachute). An electric motor is used as it reduces vibrations, and the centre of gravity is constant during flight. After changing the battery pack and saving the data, the system is ready for flight again in less than 20 min. With four sets of battery packs and parallel charging of batteries it is possible to cover the daily evolution of the ABL. The system starts on an undercarriage released after take-off without the need of special infrastructure and lands directly on the fuselage on soft and flat terrain (e.g. grass, snow field) with a dimension of 60 m x 25 m. In summer 2011, the new UAV was subject to extensive flight tests demonstrating convincing flight properties and good reliability. The flight altitudes of the new platform cover the range of the ABL (up to 3 km). A typical flight pattern for aerosol detection consists of a vertical profile up to the top of the ABL to identify altitudes of interest. Then longer horizontal flight legs of several km are performed to explore the spatial variability of the aerosol. The same board computer, autopilot system, and meteorological and navigation sensors are implemented as for the MASC systems.
Instrumentation
The aerosol instrumentation consists of an optical particle counter (OPC) and two condensation particle counters (CPC). Commercially available instruments are miniaturized at the Leibniz Institute for Tropospheric Research (TROPOS), Leipzig. The OPC GT-526 (Met-One) provides 6 channels in the particle size range from ~0.3 to ~5.0 µm. More channels are not useful because, on the one hand, the miniaturized OPC shall be fast, i.e., have a relatively high time resolution (~0.5 Hz), which is needed to resolve the small-scale aerosol features in the ABL. On the other hand, counting statistics (how many particles are counted in one channel) limit both the time resolution and the number of channels. The CPCs 3007 (TSI) are handheld instruments. As the CPCs can only be operated in a narrow temperature window (10-35°C), the counters must be well insulated. The temperature difference between the saturator and condenser of the CPCs is adjusted to achieve two different lower threshold diameters (e.g. 6 and 12 nm). This operation mode allows deriving the concentration of freshly formed nucleation mode particles. The response time of the original instruments is improved, as the default value of ~9 s is too slow for resolving the small-scale aerosol features in the ABL. Finally, the two CPCs are calibrated using the TROPOS calibration facilities. The whole aerosol payload can be operated as a stand-alone system. It has its own power supply batteries and can be operated independently of the RPAS. The payload will finally be tested under realistic ambient conditions, i.e. temperatures down to 0°C and operating pressures down to 800 hPa.
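The two-threshold CPC operation described above reduces to a simple difference: particles counted by the lower-cut-off CPC but not by the higher one fall in the nucleation-mode window. A minimal sketch (the 6/12 nm cut-offs and the readings are illustrative values, not instrument data):

```python
def nucleation_mode_conc(n_cpc_6nm, n_cpc_12nm):
    """Concentration of particles between the two lower threshold diameters
    (~6-12 nm here), derived as the difference of the two total CPC counts
    (per cm^3). Negative differences from counting noise are clipped to 0."""
    return max(n_cpc_6nm - n_cpc_12nm, 0.0)

# Illustrative readings in cm^-3 during a hypothetical particle burst:
print(nucleation_mode_conc(25000.0, 9000.0))  # → 16000.0
```

Clipping at zero is a pragmatic choice: with two independent counters, statistical noise can make the raw difference slightly negative even when no nucleation-mode particles are present.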
Current status
A prototype of ALADINA has been extensively tested. The reliability of the emergency parachute and of the start with an undercarriage released after take-off has been demonstrated. The aircraft that will be used for the aerosol measurements is currently being modified for housing the aerosol instruments. Interfaces between the aerosol instruments and the central data acquisition are being exchanged. The aerosol instruments are being miniaturized, tested and calibrated. The turbulence payload and the central data acquisition are ready for implementation. The next step will be to assemble the miniaturized aerosol instruments as well as the data acquisition and turbulence payload in the ALADINA sensor compartment and fuselage and to perform the necessary modifications to improve the weight and balance.
Planned campaigns
Field experiments of the aerosol sensor carrier ALADINA in combination with two meteorologically equipped MASC are planned to study the influence of turbulence on the particle distribution. After the implementation and test phase, a first scientific application is planned for autumn 2013 at the aerosol monitoring station Melpitz near Leipzig, Germany. The aim is to identify the location and altitude of so-called particle bursts, i.e. events of new particle formation, in the atmospheric boundary layer. The events will be analysed in dependence of turbulence properties and in the context of the synoptic situation. For 2014, a direct intercomparison of vertical aerosol profiles with other airborne platforms is planned.
OUTLOOK
To fill the gap between ground-based measurements and cost-intensive long-range airborne observations, the aerosol sensor carrier ALADINA provides a flexible tool for observing horizontal and vertical aerosol variability. The aim is to contribute information about particle formation events and to better understand the underlying mechanisms through airborne observations on small scales. ALADINA can further be used as a pathfinder for providing vertical aerosol profiles and help to decide on the operation of other airborne sensors on manned aircraft. In the future, validation of aerosol properties observed with remote sensing instruments (multi-wavelength lidar) is planned. The combination of three RPAS operating simultaneously provides information on the correlation of various atmospheric parameters, like aerosol homogeneity, turbulence and atmospheric stratification.
Figure 1. ALADINA during take-off.
Changes in muscle ultrasound for the diagnosis of intensive care unit acquired weakness in critically ill patients
To test the diagnostic accuracy of changes in thickness (TH) and cross-sectional area (CSA) on muscle ultrasound for the diagnosis of intensive care unit acquired weakness (ICU-AW), fully conscious patients were subjected to muscle ultrasonography, including measurement of the changes in TH and CSA of the biceps brachii (BB), vastus intermedius (VI), and rectus femoris (RF) muscles over time. 37 patients underwent muscle ultrasonography on the admission day and on days 4, 7, and 10 after ICU admission; among them, 24 were found to have ICU-AW. Changes in muscle TH and CSA of the RF muscle on the right side showed remarkably higher ROC-AUC, ranging from 0.734 to 0.888. Changes in the TH of the VI muscle had fair ROC-AUC values, 0.785 on the left side and 0.779 on the right side, on the 10th day after ICU admission. Additionally, Sequential Organ Failure Assessment (SOFA) and Acute Physiology and Chronic Health Evaluation II (APACHE II) scores also showed good discriminative power on the day of admission (ROC-AUC 0.886 and 0.767, respectively). Ultrasonography of changes in muscles, especially in the TH of the VI muscle on both sides and the CSA of the RF muscle on the right side, presented good diagnostic accuracy. However, SOFA and APACHE II scores are better options for early ICU-AW prediction due to their simplicity and time efficiency.
Muscle ultrasound is a convenient approach to investigate muscle changes over time after ICU admission 14. Some muscle ultrasound studies have detected a decreasing trend in the cross-sectional area (CSA) 15,16, a decreasing pennation angle 17, decreasing muscle thickness (TH) 15,18, and an increase in echo intensity 17,19,20 in critically ill patients. Nevertheless, the relation between those muscle parameters and ICU-AW remains unclear. Witteveen's research tested the accuracy of neuromuscular ultrasound 21 and found that receiver operating characteristic curves with calculated area under the curve (ROC-AUC) of muscle parameters showed promising potential in differentiating patients with and without ICU-AW. The hypothesis of the present study is that changes in muscle ultrasound over time may show better diagnostic efficiency for the occurrence of ICU-AW. Consequently, we carried out the present study at a single center to test the diagnostic accuracy of the changes of muscle ultrasound over time in differentiating patients with and without ICU-AW.
Methods
Population and design. This longitudinal observational study was designed to be carried out at a single center, a general ICU in Shanghai, China, from June 2019 to May 2020. The study was duly approved by the Rui Jin Hospital Ethics Committee and was performed in accordance with relevant guidelines and regulations. Written informed consent (either directly or through an appropriate surrogate) was obtained from all patients. Patients aged ≥ 18 years with an anticipated ICU stay of at least 2 days were eligible for screening after being evaluated daily for awakening and reaction to simple verbal commands. Exclusion criteria comprised individuals with previously diagnosed diseases characterized by generalized or regional weakness, with any diagnosis at the time of admission causing abnormal muscle strength or an inability to follow commands (e.g., cardiac arrest, stroke, spinal injury, traumatic brain injury, or intracerebral infection), or with delirium or dementia during the ICU stay. Additionally, patients experiencing edema of the upper and lower limbs and patients who did not have arms or legs for muscle strength testing or ultrasound, or who had wounds, fractures, lesions, burns, or bleeding at the measurement points, were excluded as well. Finally, patients who received early mobilization or physical therapy during the observation period were removed from the statistics.
Clinical data collection. Baseline data were collected after ICU admission and included age, sex, Body Mass Index (BMI), hand dominance, admission diagnosis, Sequential Organ Failure Assessment (SOFA) score, Acute Physiology and Chronic Health Evaluation II (APACHE II) score, risk factors for polyneuropathy or myopathy (restraints, surgery, nutritional support, mechanical ventilation, peak glucose concentration, glucocorticoids, use of sedatives and analgesics), and comorbidities (cardiac dysfunction, respiratory failure, liver dysfunction, acute kidney injury, hypertension, diabetes mellitus, and Multiple Organ Dysfunction Syndrome (MODS)).
Ultrasound protocol. Two researchers who were trained and qualified measured the muscle parameters immediately on the admission day using a Philips ultrasound machine (IU22, USA) and a linear probe (frequency: 10-13 MHz), which enabled acquiring high-resolution images of clear muscle structures 22. Before the examination, the patient had to be in a supine position with extended elbows, wrists and knees and relaxed muscles, while the palms and toes of the patient were facing or pointing to the ceiling 23. The ultrasonography of muscles included TH and CSA of the biceps brachii (BB), vastus intermedius (VI), and rectus femoris (RF) muscles (Fig. 1). All the muscles were measured bilaterally and scanned in the transversal (cross-sectional) image. The transducer was oriented transversally in relation to the longitudinal axis of the arm or thigh for obtaining a cross-sectional image, thus creating a right angle to the skin surface. Landmarks for ultrasound image acquisition were at standardized anatomical points, including the midpoint between the supraglenoid tubercle and the radial tuberosity for BB 24, the second third of the distance between the anterior inferior iliac spine (AIIS) and the midpoint of the proximal border of the patella for RF, and the midline of the same distance as RF for VI 23. The correlation coefficients of measurement accuracy of the two researchers were 0.88, 0.90, and 0.91 for the BB, RF, and VI muscles, respectively. When performing ultrasonography, the pressure on the skin was kept minimal, and adequate coupling agent was used for obtaining the images 25. To enhance the accuracy of the measurement of the target muscles, all the CSA and TH values were measured three times continuously and the average was calculated as the final value. The whole muscle ultrasound procedure was repeated on day 4, day 7, and day 10 after ICU admission to capture the changes of muscle TH and CSA.
Muscle strength assessment. Another two researchers, who were blinded to the results of the quantitative measurement of muscle parameters, were in charge of assessing the conscious patients for muscle strength using the MRC score on the 10th day after ICU admission 26. The MRC score is extensively utilized for diagnosing ICU-AW, and its good interobserver reliability in critical care settings has been confirmed in a previously published study 12. For the patients mechanically ventilated with sedatives, if the RASS (Richmond Agitation Sedation Scale) fell anywhere between −1 and 1 27 and they showed a positive reaction to 5 verbal commands with facial muscles, we considered them feasible for muscle strength assessment 12. Twelve muscle groups were tested for the calculation of the MRC score, including elbow flexion, wrist extension, and shoulder abduction in the upper limbs, and dorsiflexion of the foot, hip flexion, and knee extension in the lower extremities. Examined subjects whose total MRC score was < 48 were categorized into the ICU-AW group according to the international consensus statement 13.
Sample size. According to the equation for diagnostic studies 28, a two-sided significance level of 0.05 and a test power of 80% were assumed, and the expected sensitivity and specificity of muscle ultrasonography were set to 0.8. Based on previous studies, the prevalence of ICU-AW is about 50% in critically ill patients 4. A sample size of 36 examined subjects was determined after accounting for a loss of 10% of the sample.
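The sample-size reasoning above can be sketched with a Buderer-style calculation for diagnostic accuracy studies, n = Z² · Se(1−Se) / d², inflated for prevalence and dropout. The absolute precision d below is an assumed value (the text does not state it), so the result only approximates the study's figure of 36:

```python
import math

def diagnostic_sample_size(sens, d, prevalence, z=1.96, dropout=0.10):
    """Buderer-style sample size for estimating sensitivity with absolute
    precision d, scaled up by disease prevalence and an expected dropout."""
    n_cases = (z ** 2) * sens * (1 - sens) / d ** 2   # positive cases needed
    n_total = n_cases / prevalence                    # patients to screen
    return math.ceil(n_total / (1 - dropout))         # allow for dropout

# Sensitivity 0.8 and ~50% ICU-AW prevalence as stated; d = 0.2 is an assumption.
print(diagnostic_sample_size(sens=0.8, d=0.2, prevalence=0.5))  # → 35
```

With these assumed inputs the formula yields 35; the published 36 presumably reflects a slightly different precision or rounding convention chosen by the authors.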
Statistical analysis.
The Kolmogorov-Smirnov normality test was employed for evaluating the distribution of continuous variables. Continuous variables with a normal distribution were expressed as mean and standard deviation, and as median with interquartile range (IQR) in case of a non-normal distribution. The Mann-Whitney test, Student's t-test, Fisher's exact test, and chi-squared test were employed to assess the differences between patients with and without an ICU-AW diagnosis, according to the distribution and type of the variable. Additionally, repeated-measures analysis of variance was used for testing the between-group differences in the changes of sonographic TH and CSA of the observed muscles. The discriminative power of changes of muscle ultrasound over time was examined with a 95% confidence interval (CI) using receiver operating characteristic curves with calculated area under the curve (ROC-AUC). The discriminative power of AUC values has been described as < 60 percent failed, 60-70 percent poor, 70-80 percent fair, 80-90 percent good, and 90-100 percent excellent 29. The changes of CSA and TH are represented by ΔCSA and ΔTH, respectively, and were calculated as the percentage decline relative to the admission value. Based on ROC curve analysis, the specificity, sensitivity, and positive and negative predictive values (PPV, NPV) for muscle ΔCSA and ΔTH were calculated. The optimal cutoff value was confirmed by calculating the Youden Index: Youden Index = specificity + sensitivity − 1. When the Youden Index is maximal, the corresponding value is the optimal cutoff 30. A two-sided p-value < 0.05 was considered significant for all analyses. SPSS version 19 was used for all statistical analyses.
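The Youden-based cutoff selection described above can be sketched as follows. The marker values and labels are hypothetical illustrations (not study data), and larger percentage declines are assumed to indicate ICU-AW:

```python
def youden_optimal_cutoff(values, labels):
    """Pick the cutoff on a marker (e.g. percent decline in TH or CSA) that
    maximizes the Youden index J = sensitivity + specificity - 1.
    labels: 1 = ICU-AW, 0 = no ICU-AW; values >= cutoff are called positive."""
    best = (None, -1.0)
    for cut in sorted(set(values)):
        tp = sum(1 for v, y in zip(values, labels) if v >= cut and y == 1)
        fn = sum(1 for v, y in zip(values, labels) if v < cut and y == 1)
        tn = sum(1 for v, y in zip(values, labels) if v < cut and y == 0)
        fp = sum(1 for v, y in zip(values, labels) if v >= cut and y == 0)
        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        j = sens + spec - 1
        if j > best[1]:
            best = (cut, j)
    return best

# Hypothetical percent declines at day 10 and ICU-AW labels:
delta = [18, 22, 9, 15, 25, 7, 30, 11, 20, 5]
icu_aw = [1, 1, 0, 1, 1, 0, 1, 0, 1, 0]
print(youden_optimal_cutoff(delta, icu_aw))  # → (15, 1.0)
```

Scanning the observed values as candidate cutoffs is the same idea the ROC analysis implements: each candidate yields one (sensitivity, specificity) pair, and the Youden index picks the pair farthest from the chance diagonal.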
Results
In total, 106 patients were enrolled and their informed consent was obtained. Among them, 37 patients successfully completed all four muscle ultrasonography measurements, of whom 24 had ICU-AW. The flowchart of screening and inclusion is shown in Fig. 2. Table 1 lists the patient characteristics.
Whether or not patients suffered from ICU-AW, all the groups presented a descending trend of both TH and CSA bilaterally. In the upper limbs, the changes of CSA in BB on the right side showed more statistical differences at different observation time points between groups. Moreover, ICU-AW patients had a greater degree of declination in CSA of RF bilaterally, and a remarkable reduction of TH in VI as well. Many significant differences between groups were found at different points in time (Fig. 3).
The ΔTH of BB on both sides had higher ROC-AUC than the ΔCSA of BB in the upper limbs, with ROC-AUC ranging from 0.702 to 0.792. The ROC-AUC values of ΔCSA of BB on both sides were not significant, except for ΔCSA day4 of BB on the right side. In the lower limbs, most ROC-AUC values of ΔTH and ΔCSA of RF were not significant on the left side, while the ROC-AUC of ΔTH and ΔCSA of RF on the right side were significantly higher, ranging from 0.734 to 0.888, especially ΔCSA day10 of RF (ROC-AUC: 0.888, p < 0.001). Besides, ΔTH day10 of VI had fair ROC-AUC values, 0.785 on the left side and 0.779 on the right side (Table 2; Fig. 4).
Following that, we compared the diagnostic power of SOFA, APACHE II, and certain muscle parameters that showed good diagnostic performance as previously mentioned. The SOFA (ROC-AUC: 0.886, p < 0.001) and APACHE II scores (ROC-AUC: 0.767, p < 0.05) at the time of admission to the ICU showed close diagnostic efficacy compared to the changes in muscle parameters (Fig. 4).
Further, using the cutoff values derived from the Youden Index on the ROC curves (15% for ΔTH day10 of the BB, RF, and VI muscles; 12% for ΔCSA day10 of BB and RF), the sensitivity, specificity, PPV, NPV, and accuracy were determined (Table 3). The diagnostic accuracy of ΔTH day10 and ΔCSA day10 of the right RF and of ΔTH day10 of VI on both sides was high, ranging from 75.7 to 78.4%.
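As a worked illustration of how the metrics in Table 3 follow from applying a fixed cutoff, the sketch below uses an invented 2×2 table (the counts are hypothetical, chosen only to match the cohort sizes of 24 ICU-AW and 13 non-ICU-AW patients; the study's actual values are in Table 3).

```python
# Hypothetical 2x2 confusion table for a fixed cutoff (e.g. >15% ΔTH at day 10).
tp, fn = 19, 5   # ICU-AW patients above / below the cutoff (n = 24)
fp, tn = 4, 9    # non-ICU-AW patients above / below the cutoff (n = 13)

sensitivity = tp / (tp + fn)                 # true positives among ICU-AW
specificity = tn / (tn + fp)                 # true negatives among controls
ppv = tp / (tp + fp)                         # positive predictive value
npv = tn / (tn + fn)                         # negative predictive value
accuracy = (tp + tn) / (tp + fn + fp + tn)   # overall agreement

print(f"sens={sensitivity:.2f} spec={specificity:.2f} "
      f"PPV={ppv:.2f} NPV={npv:.2f} accuracy={accuracy:.1%}")
```

With these hypothetical counts the accuracy is 28/37 ≈ 75.7%, i.e. the lower end of the range reported above.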
Discussion
The present study confirmed that patients with ICU-AW had a greater reduction of muscle TH and CSA than patients without ICU-AW, especially in the lower extremities. Moreover, at a 15% threshold for ΔTH day10 and a 12% threshold for ΔCSA day10 , muscles of the lower extremities, particularly on the right side, showed good accuracy for the diagnosis of ICU-AW. More importantly, changes in ΔTH day10 and ΔCSA day10 of the lower-extremity muscles had diagnostic validity close to that of the SOFA and APACHE II scores at ICU admission. In this study, 64.9% of all evaluated patients had ICU-AW. Regardless of ICU-AW status, all patients showed a bilateral decline in both TH and CSA; however, patients with ICU-AW had a markedly greater decline in CSA of RF bilaterally and a notable reduction of TH in VI on both sides. Turton et al. studied 22 mechanically ventilated ICU patients with ultrasonographic assessment of the elbow flexor compartment, the vastus lateralis muscle, and the medial head of the gastrocnemius on admission and 10 days later; the loss of muscle mass occurred mainly in the lower extremities, with no change in the size of the elbow flexor compartment. These data support further investigation of the lower extremities, as peripheral muscles in critically ill patients are more prone to early disuse atrophy 31 . In particular, a 3-week follow-up study that used ultrasonography to evaluate morphological changes of the RF muscle found severe loss of CSA and muscle diameter in all ICU trauma patients; by day 20, approximately 45% of RF muscle mass was lost 32 . Thus, compared with the upper limbs, lower limb muscles undergo earlier and greater atrophy.
A potential reason was given in an earlier study that assessed RF CSA and the protein/DNA ratio over time: during the first week, muscle mass decreased in virtually all cases. Lower limb muscle atrophy is considered the result of net catabolism, with decreased muscle protein synthesis and a simultaneous increase of protein breakdown relative to synthesis 16 . Few previous studies have tested muscle ultrasonography for the diagnosis of ICU-AW or the prediction of prognosis or related outcomes. One study diagnosed skeletal muscle loss by measuring the CSA of RF with ultrasound and compared it with frailty for predicting the prognosis of critically ill patients; the value of bedside ultrasound diagnosis of skeletal myopenia for predicting adverse discharge was consistent with that of frailty 33 . Moreover, a prospective observational study found that the larger the CSA of RF on the day of admission, the lower the occurrence of muscle fiber necrosis and muscle wasting of RF 34 . In addition, Greening et al. demonstrated that a smaller quadriceps muscle size measured by ultrasound is an independent risk factor for unscheduled readmission or death 35 . These studies indicate the diagnostic potential of ultrasound for ICU-AW. Further, Witteveen et al. 21 measured ultrasonographic TH of the tibialis anterior, biceps brachii, flexor carpi radialis, and rectus femoris muscles and found that the diagnostic accuracy of muscle TH was rather low, with ROC-AUC ranging from 51.3 to 68.0%. However, CSA, a crucial determinant of muscle contraction and strength, had not been fully explored in relation to ICU-AW 14 .
According to our results, changes in quantitative muscle ultrasound performed well against the MRC criteria for the diagnosis of ICU-AW, and the best cutoff reduction in muscle parameters was more than 15% for ΔTH day10 and more than 12% for ΔCSA day10 in the right lower extremity, which supports the use of muscle ultrasound as a supplementary tool for ICU-AW diagnosis.
Although changes in some muscle parameters over 10 days showed good diagnostic efficacy, the comparison indicated a practical advantage for the SOFA and APACHE II scores at ICU admission. Given their time and implementation efficiency, and ROC-AUC values close to those of the changes in muscle parameters, a 10-day muscle observation appears unnecessary for predicting the occurrence of ICU-AW. Many predictors of ICU-AW have been confirmed, and the SOFA and APACHE II scores can be regarded as integrated indicators of multiple high-risk factors 36,37 . However, previous reports found that SOFA and APACHE II scores alone did not provide sufficient diagnostic efficacy 38,39 , so further validation of these results is needed given the limited sample size of the present study.
Some limitations of this study deserve comment. First, owing to the limited availability of biopsy and electroneuromyography in the ICU, we could not classify patients with ICU-AW into the three subcategories: critical illness myopathy (CIM), critical illness polyneuropathy (CIP), and critical illness neuromyopathy (CINM). Second, we did not assess other ultrasonographic characteristics of muscle, such as pennation angle and echo intensity, which may have better diagnostic value. Third, it was impossible for the ultrasound examiners to be completely blinded to the MRC score, because the absence or presence of spontaneous movements gave an impression of muscle strength. Therefore, to improve measurement accuracy, all CSA and TH values were measured three times in a row and averaged to minimize operator bias.
Conclusion
Ultrasound measurement of muscles can be used as a tool to assist in the recognition of ICU-AW, especially in unconscious critically ill patients. Changes in TH and CSA of the right RF and changes in TH of VI on both sides had good accuracy for the diagnosis of ICU-AW. However, considering convenience and time efficiency, the SOFA and APACHE II scores are better options for early prediction of ICU-AW.
[Table 1. Patient characteristics, comparing the no ICU-AW (n = 13) and ICU-AW (n = 24) groups; table content not reproduced here.]

Figure 4. Comparison of ROC curves among the SOFA score, APACHE II score, ΔTH day10 , and ΔCSA day10 of muscles. (a) Changes in TH of the biceps brachii, vastus intermedius, and rectus femoris muscles; (b) changes in CSA of the biceps brachii, vastus intermedius, and rectus femoris muscles; (c) comparison among changes in TH and CSA of muscles and the SOFA and APACHE II scores. BB, biceps brachii; RF, rectus femoris; VI, vastus intermedius; TH, thickness; CSA, cross-sectional area; APACHE II, Acute Physiology and Chronic Health Evaluation II; SOFA, Sequential Organ Failure Assessment.